Sample records for obtained good estimates

  1. An Introduction to Goodness of Fit for PMU Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riepnieks, Artis; Kirkham, Harold

    2017-10-01

    New results of measurements of phasor-like signals are presented based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data, with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.
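
    As a rough illustration of the kind of estimator the abstract describes, the sketch below fits a single-tone phasor model to samples by nonlinear least squares and reports a residual-based Goodness of Fit. The signal values, the 5-samples-per-cycle rate, and the dB-style GoF definition are illustrative assumptions; the paper's exact metric may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    fs, f0 = 300.0, 60.0                  # 5 samples per cycle at 60 Hz, per the abstract
    t = np.arange(30) / fs                # six cycles of synthetic data
    rng = np.random.default_rng(1)
    x = 120.0 * np.cos(2 * np.pi * f0 * t + 0.7) + 0.5 * rng.standard_normal(t.size)

    def model(t, a, f, ph):
        return a * np.cos(2 * np.pi * f * t + ph)

    (a, f, ph), _ = curve_fit(model, t, x, p0=[100.0, 60.0, 0.0])
    resid = x - model(t, a, f, ph)
    gof_db = 20 * np.log10(np.std(x) / np.std(resid))   # higher means a better fit
    print(a, f, ph, gof_db)
    ```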

  2. 12 CFR 1024.7 - Good faith estimate.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 8 2014-01-01 2014-01-01 false Good faith estimate. 1024.7 Section 1024.7... (REGULATION X) Mortgage Settlement and Escrow Accounts § 1024.7 Good faith estimate. (a) Lender to provide. (1..., 2014. For the convenience of the user, the revised text is set forth as follows: § 1024.7 Good faith...

  3. 12 CFR 1024.7 - Good faith estimate.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Good faith estimate. 1024.7 Section 1024.7 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION REAL ESTATE SETTLEMENT PROCEDURES ACT (REGULATION X) § 1024.7 Good faith estimate. (a) Lender to provide. (1) Except as otherwise provided in...

  4. 12 CFR 1024.7 - Good faith estimate.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Good faith estimate. 1024.7 Section 1024.7 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION REAL ESTATE SETTLEMENT PROCEDURES ACT (REGULATION X) § 1024.7 Good faith estimate. (a) Lender to provide. (1) Except as otherwise provided in...

  5. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 5 2013-04-01 2013-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  6. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 5 2011-04-01 2011-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  7. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 5 2014-04-01 2014-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  8. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  9. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
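
    A minimal sketch of the estimation idea, assuming the Anderson-Darling variant: compute the EDF-based A² statistic against a three-parameter Weibull CDF and minimize it over (shape, threshold, scale) with Powell's derivative-free method, as the abstract indicates. The synthetic data, starting point, and penalty for infeasible parameters are illustrative choices.

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(2)
    data = np.sort(stats.weibull_min.rvs(c=2.0, loc=5.0, scale=3.0, size=60, random_state=rng))

    def anderson_darling(params):
        c, loc, scale = params
        if c <= 0 or scale <= 0 or loc >= data[0]:   # threshold must lie below the smallest failure
            return 1e9
        u = np.clip(stats.weibull_min.cdf(data, c, loc=loc, scale=scale), 1e-12, 1 - 1e-12)
        n, i = len(data), np.arange(1, len(data) + 1)
        return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

    res = optimize.minimize(anderson_darling, x0=[1.5, 4.0, 2.5], method="Powell")
    print(res.x)    # estimated (shape, threshold, scale)
    ```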

  10. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
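
    The sketch below captures the mechanics under simplifying assumptions: fit a generalized Pareto distribution to the excesses over each candidate threshold and score each fit with the Anderson-Darling statistic. The paper automates the choice through the goodness-of-fit p-value (with bootstrapping for uncertainty); picking the threshold that minimizes A², as done here, is a cruder stand-in.

    ```python
    import numpy as np
    from scipy import stats

    def select_pot_threshold(x, candidates, min_exceedances=30):
        best = None
        for u in candidates:
            exc = np.sort(x[x > u] - u)
            if exc.size < min_exceedances:
                continue
            c, _, scale = stats.genpareto.fit(exc, floc=0.0)
            F = np.clip(stats.genpareto.cdf(exc, c, loc=0.0, scale=scale), 1e-12, 1 - 1e-12)
            n, i = exc.size, np.arange(1, exc.size + 1)
            a2 = -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))
            if best is None or a2 < best[1]:
                best = (u, a2)
        return best     # (threshold, Anderson-Darling statistic)

    x = stats.genpareto.rvs(0.1, scale=1.0, size=2000, random_state=3)
    print(select_pot_threshold(x, np.quantile(x, np.linspace(0.5, 0.95, 10))))
    ```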

  11. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  12. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines has been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  13. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, design of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two-parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. © 2015, The International Biometric Society.
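
    For readers unfamiliar with the frequentist side of this comparison, here is a small sketch of the classical (unsmoothed) Good-Turing estimators: the probability that the next observation is a new species is N1/n, and a species seen r times gets the adjusted count r* = (r + 1) N_{r+1} / N_r. The toy sample is arbitrary; the paper works with smoothed versions of these quantities.

    ```python
    from collections import Counter

    def good_turing(sample):
        n = len(sample)
        counts = Counter(sample)                    # species -> times observed
        freq_of_freq = Counter(counts.values())     # r -> number of species seen r times
        p_new = freq_of_freq[1] / n                 # discovery probability of an unseen species
        adjusted = {r: (r + 1) * freq_of_freq.get(r + 1, 0) / n_r
                    for r, n_r in freq_of_freq.items()}   # unsmoothed adjusted counts r*
        return p_new, adjusted

    print(good_turing(list("aaabbcdddddeffg")))
    ```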

  14. 49 CFR 375.409 - May household goods brokers provide estimates?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REGULATIONS TRANSPORTATION OF HOUSEHOLD GOODS IN INTERSTATE COMMERCE; CONSUMER PROTECTION REGULATIONS... there is a written agreement between the broker and you, the carrier, adopting the broker's estimate as...

  15. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered include the Shapiro-Wilk and Filliben tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures are examined and example data sets analyzed.
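
    As a sketch of the weighted-order-statistic idea, the snippet below treats the Shapiro-Wilk W of log(x − γ) as the objective and maximizes it over the threshold γ; the remaining two parameters then follow from the log-transformed sample. Using scipy's built-in Shapiro-Wilk statistic, rather than the paper's explicit weighted linear combination of order statistics, is an assumption made for brevity.

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(4)
    x = np.sort(np.exp(rng.normal(1.0, 0.5, 80)) + 10.0)    # three-parameter lognormal sample

    def neg_w(gamma):
        return -stats.shapiro(np.log(x - gamma)).statistic  # maximize W over the threshold

    res = optimize.minimize_scalar(neg_w, bounds=(x[0] - 20.0, x[0] - 1e-6), method="bounded")
    gamma = res.x
    logx = np.log(x - gamma)
    print(gamma, logx.mean(), logx.std())   # threshold, log-mean, log-sd estimates
    ```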

  16. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄  =  −8.34, SE  =  2.39) and white perch (x̄  = 14.48, SE  =  3.99) but not striped bass (x̄  =  3.71, SE  =  2.58) or channel catfish (x̄  = 3.97, SE  =  5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of

  17. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.

    PubMed

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-11-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
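
    A toy version of the LP step, under assumed numbers: choose non-negative ingredient amounts summing to 100 g that minimize the total absolute deviation between the implied nutrient profile and the label values. The three-ingredient composition matrix and the label figures are hypothetical stand-ins, not data from the study.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Rows: kcal, carbohydrate (g), total sugars (g) per gram of each ingredient.
    # Columns: water, sugar, juice concentrate (hypothetical compositions).
    N = np.array([[0.0, 3.87, 1.80],
                  [0.0, 1.00, 0.45],
                  [0.0, 1.00, 0.40]])
    label = np.array([42.0, 10.9, 10.6])        # nutrition-facts values per 100 g (hypothetical)

    n_nut, n_ing = N.shape
    c = np.r_[np.zeros(n_ing), np.ones(n_nut)]  # minimize the sum of deviation slacks e
    A_ub = np.block([[N, -np.eye(n_nut)], [-N, -np.eye(n_nut)]])   # encodes |N x - label| <= e
    b_ub = np.r_[label, -label]
    A_eq = np.array([[1.0] * n_ing + [0.0] * n_nut])               # amounts sum to 100 g
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[100.0],
                  bounds=[(0, None)] * (n_ing + n_nut))
    print(res.x[:n_ing])    # estimated grams of each ingredient per 100 g
    ```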

  18. 24 CFR Appendix C to Part 3500 - Instructions for Completing Good Faith Estimate (GFE) Form

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Instructions for Completing Good Faith Estimate (GFE) Form C Appendix C to Part 3500 Housing and Urban Development Regulations Relating.... 3500, App. C Appendix C to Part 3500—Instructions for Completing Good Faith Estimate (GFE) Form The...

  19. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007–08

    PubMed Central

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-01-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007–08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications. PMID:26273127

  20. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC(2,1)) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC(2,1) > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.
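
    The reliability criterion here is the two-way random-effects intraclass correlation. A minimal sketch of ICC(2,1) computed from ANOVA mean squares is shown below on simulated step counts; the variance components are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    def icc_2_1(data):
        """ICC(2,1) for a subjects-by-days matrix (two-way random effects,
        absolute agreement, single measure)."""
        n, k = data.shape
        grand = data.mean()
        ssr = k * ((data.mean(axis=1) - grand) ** 2).sum()      # between subjects
        ssc = n * ((data.mean(axis=0) - grand) ** 2).sum()      # between days
        sse = ((data - grand) ** 2).sum() - ssr - ssc
        msr, msc = ssr / (n - 1), ssc / (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(5)
    subject = rng.normal(8000, 2500, (92, 1))          # stable individual activity levels
    steps = subject + rng.normal(0, 800, (92, 7))      # seven monitored days
    print(icc_2_1(steps[:, :2]))                       # reliability of a two-day window
    ```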

  1. Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis

    PubMed Central

    2014-01-01

    When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. Electronic Supplementary Material The online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material. PMID:24742008

  2. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special

  3. Stability of individual loudness functions obtained by magnitude estimation and production

    NASA Technical Reports Server (NTRS)

    Hellman, R. P.

    1981-01-01

    A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the .6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.

  4. Cost-benefit analysis involving addictive goods: contingent valuation to estimate willingness-to-pay for smoking cessation.

    PubMed

    Weimer, David L; Vining, Aidan R; Thomas, Randall K

    2009-02-01

    The valuation of changes in consumption of addictive goods resulting from policy interventions presents a challenge for cost-benefit analysts. Consumer surplus losses from reduced consumption of addictive goods that are measured relative to market demand schedules overestimate the social cost of cessation interventions. This article seeks to show that consumer surplus losses measured using a non-addicted demand schedule provide a better assessment of social cost. Specifically, (1) it develops an addiction model that permits an estimate of the smoker's compensating variation for the elimination of addiction; (2) it employs a contingent valuation survey of current smokers to estimate their willingness-to-pay (WTP) for a treatment that would eliminate addiction; (3) it uses the estimate of WTP from the survey to calculate the fraction of consumer surplus that should be viewed as consumer value; and (4) it provides an estimate of this fraction. The exercise suggests that, as a tentative first and rough rule-of-thumb, only about 75% of the loss of the conventionally measured consumer surplus should be counted as social cost for policies that reduce the consumption of cigarettes. Additional research to estimate this important rule-of-thumb is desirable to address the various caveats relevant to this study. Copyright (c) 2008 John Wiley & Sons, Ltd.

  5. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    ... (D90) agreeing within ±3%, on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.

  6. Use of a macroinvertebrate based biotic index to estimate critical metal concentrations for good ecological water quality.

    PubMed

    Van Ael, Evy; De Cooman, Ward; Blust, Ronny; Bervoets, Lieven

    2015-01-01

    Large datasets from total and dissolved metal concentrations in Flemish (Belgium) fresh water systems and the associated macroinvertebrate-based biotic index MMIF (Multimetric Macroinvertebrate Index Flanders) were used to estimate critical metal concentrations for good ecological water quality, as imposed by the European Water Framework Directive (2000). The contribution of different stressors (metals and water characteristics) to the MMIF was studied by constructing generalized linear mixed effect models. Comparison between estimated critical concentrations and the European and Flemish EQS shows that the EQS for As, Cd, Cu and Zn seem to be sufficient to reach a good ecological quality status as expressed by the invertebrate-based biotic index. In contrast, the EQS for Cr, Hg and Pb are higher than the estimated critical concentrations, which suggests that when environmental concentrations are at the same level as the EQS a good quality status might not be reached. The construction of mixed models that included metal concentrations in their structure did not lead to a significant outcome. However, mixed models showed the primary importance of water characteristics (oxygen level, temperature, ammonium concentration and conductivity) for the MMIF. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...

  8. Obtaining Cue Rate Estimates for Some Mysticete Species using Existing Data

    DTIC Science & Technology

    2014-09-30

    ... primary focus is to obtain cue rates for humpback whales (Megaptera novaeangliae) off the California coast and on the PMRF range. To our knowledge, no ... humpback whale cue rates have been calculated for these populations. Once a cue rate is estimated for the populations of humpback whales off the ... rates for humpback whales on breeding grounds, in addition to average cue rates for other species of mysticete whales. Cue rates of several other

  9. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
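
    A toy numerical version of the point being made, with a first-difference operator standing in for GPS double differencing: differencing cancels a common bias (as double differencing cancels clock terms) but leaves colored errors with covariance proportional to D·Dᵀ, so the minimum-variance estimate must weight by the inverse of that non-diagonal covariance. The linear-trend model is an illustrative stand-in for a real GPS regression system.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    m = 10
    tvec = np.arange(m, dtype=float)
    y_raw = 3.0 + 0.5 * tvec + 0.1 * rng.standard_normal(m)   # bias + slope + white noise

    D = np.eye(m - 1, m, k=1) - np.eye(m - 1, m)   # differencing operator (fully sampled)
    A = (D @ tvec).reshape(-1, 1)                  # the constant bias column cancels
    y = D @ y_raw
    C = D @ D.T                                    # non-diagonal covariance of differenced errors
    W = np.linalg.inv(C)                           # least-squares weighting matrix
    slope = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    print(slope)                                   # minimum-variance slope estimate
    ```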

  10. Use of NMR logging to obtain estimates of hydraulic conductivity in the High Plains aquifer, Nebraska, USA

    USGS Publications Warehouse

    Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John

    2013-01-01

    Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at the field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K(KWBF-logging) estimates for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
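
    For reference, the two petroleum-industry estimators named in the abstract have simple closed forms; a sketch with commonly quoted default constants is below (K in millidarcies for fractional porosity and T2 in milliseconds). The constants are generic defaults, which is exactly why the study refits site-specific values for the unconsolidated sediments.

    ```python
    def k_sdr(phi, t2lm, c=4.0):
        """Schlumberger-Doll Research estimator: K = C * phi^4 * T2LM^2,
        with T2LM the log-mean of the T2 distribution."""
        return c * phi ** 4 * t2lm ** 2

    def k_timur_coates(phi, ffi_over_bvi, c=1.0e4):
        """Timur-Coates estimator: K = C * phi^4 * (FFI/BVI)^2, with FFI/BVI
        the free-fluid to bound-fluid ratio from a T2 cut-off."""
        return c * phi ** 4 * ffi_over_bvi ** 2

    print(k_sdr(0.30, 200.0), k_timur_coates(0.30, 3.0))
    ```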

  11. Method for obtaining a collimated near-unity aspect ratio output beam from a DFB-GSE laser with good beam quality.

    PubMed

    Liew, S K; Carlson, N W

    1992-05-20

    A simple method for obtaining a collimated near-unity aspect ratio output beam from laser sources with extremely large (> 100:1) aspect ratios is demonstrated by using a distributed-feedback grating-surface-emitting laser. Far-field power-in-the-bucket measurements of the laser indicate good beam quality with a high Strehl ratio.

  12. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  13. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), σ0, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of σ0 are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  14. Is specific gravity a good estimate of urine osmolality?

    PubMed

    Imran, Sethi; Eva, Goldwater; Christopher, Shutty; Flynn, Ethan; Henner, David

    2010-01-01

    Urine specific gravity (USG) is often used by clinicians to estimate urine osmolality. USG is measured either by refractometry or by reagent strip. We studied the correlation of USG obtained by either method with a concurrently obtained osmolality. Using our laboratory's records, we retrospectively gathered data on 504 urine specimens on patients on whom a simultaneously drawn USG and an osmolality were available. Out of these, 253 USGs were measured by automated refractometry and 251 USGs were measured by reagent strip. Urinalysis data on these subjects were used to determine the correlation between USG and osmolality, adjusting for other variables that may impact the relationship. The other variables considered were pH, protein, glucose, ketones, nitrates, bilirubin, urobilinogen, hemoglobin, and leukocyte esterase. The relationships were analyzed by linear regression. This study demonstrated that USG obtained by both reagent strip and refractometry had a correlation of approximately 0.75 with urine osmolality. The variables affecting the correlation included pH, ketones, bilirubin, urobilinogen, glucose, and protein for the reagent strip and ketones, bilirubin, and hemoglobin for the refractometry method. At a pH of 7 and with a USG of 1.010, predicted osmolality is approximately 300 mOsm/kg/H2O for either method. For an increase in SG of 0.010, predicted osmolality increases by 182 mOsm/kg/H2O for the reagent strip and 203 mOsm/kg/H2O for refractometry. Pathological urines had significantly poorer correlation between USG and osmolality than "clean" urines. In pathological urines, direct measurement of urine osmolality should be used. © 2010 Wiley-Liss, Inc.
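
    The abstract's reported fits reduce to a simple linear predictor; a sketch using only the numbers quoted above (300 mOsm/kg/H2O at USG 1.010, rising 182 or 203 per 0.010 of specific gravity) is below. It applies only near pH 7 and to "clean" urines, per the study's caveats.

    ```python
    def predicted_osmolality(usg, method="strip"):
        """Predicted urine osmolality (mOsm/kg/H2O) from specific gravity,
        using the linear fits quoted in the abstract."""
        slope_per_0_010 = {"strip": 182.0, "refractometry": 203.0}[method]
        return 300.0 + slope_per_0_010 * (usg - 1.010) / 0.010

    print(predicted_osmolality(1.020, "strip"))           # ~482
    print(predicted_osmolality(1.020, "refractometry"))   # ~503
    ```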

  15. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636

  16. Male orthopaedic surgeons and anaesthetists: equally good at estimating fluid volumes (and changing light bulbs) but equally poor at estimating procedure duration.

    PubMed

    Chua, Weiliang; Kong, Chee Hoe; Murphy, Diarmuid Paul

    2015-05-01

    How many orthopods does it take to change a light bulb? One: to refer to the medics for 'Darkness ?Cause'. Additionally, anaesthetists and surgeons often disagree on the estimated blood loss during surgery and the estimated procedure duration. We designed this study to compare the ability of orthopaedic surgeons and anaesthetists in: (a) estimating fluid volumes; (b) estimating procedure durations; and (c) changing light bulbs. Participants had to either be a specialist in anaesthesia or orthopaedic surgery, or a trainee in that specialty for at least two years. Three different fluid specimens were used for volume estimation (44 mL, 88 mL and 144 mL). Two videos of different lengths (140 seconds and 170 seconds), showing the suturing of a banana skin, were used for procedure duration estimation. To determine the ability at changing light bulbs, the participants had to match eight different light sockets to their respective bulbs. 30 male anaesthetists and trainees and 31 male orthopaedic surgeons and trainees participated in this study. Orthopaedic surgeons underestimated the three fluid volumes by 3.9% and anaesthetists overestimated by 5.1% (p = 0.925). Anaesthetists and orthopaedic surgeons overestimated the duration of the two procedures by 21.2% and 43.1%, respectively (p = 0.006). Anaesthetists had a faster mean time in changing light bulbs (70.1 seconds vs. 74.1 seconds, p = 0.319). In an experimental environment, male orthopaedic surgeons are as good as male anaesthetists in estimating fluid volumes (in commonly seen surgical specimens) and in changing light bulbs. Both groups are poor at estimating procedure durations.

  17. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras.

    PubMed

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.

  18. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
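
    The hypothesis being tested reduces to a masked average over a dose map, so segmentation errors only matter insofar as they move that average. A minimal sketch with synthetic arrays follows; all names and values here are hypothetical stand-ins, not the study's data.

    ```python
    import numpy as np

    def mean_dose_error(dose_map, auto_mask, expert_mask):
        """Percent error in mean organ dose from using the automated
        contour instead of the expert contour."""
        auto = dose_map[auto_mask].mean()
        expert = dose_map[expert_mask].mean()
        return 100.0 * (auto - expert) / expert

    rng = np.random.default_rng(7)
    dose = rng.gamma(2.0, 1.0, (40, 64, 64))       # stand-in Monte Carlo dose map
    expert = np.zeros(dose.shape, dtype=bool)
    expert[10:20, 20:40, 20:40] = True             # "expert" organ region
    auto = np.zeros(dose.shape, dtype=bool)
    auto[10:20, 21:41, 20:40] = True               # auto contour with a shifted boundary
    print(mean_dose_error(dose, auto, expert))
    ```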

  19. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  20. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
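
    A compact sketch of the procedure for a normal model, assuming equiprobable bins built from the data quantiles: the MLE comes from a bootstrap resample, while the bin counts use the original data, and the statistic is referred to a chi-squared distribution with bins − 1 degrees of freedom. The bin count and model choice here are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def bootstrap_pearson_chi2(data, bins=8, seed=0):
        rng = np.random.default_rng(seed)
        boot = rng.choice(data, size=len(data), replace=True)
        mu, sigma = boot.mean(), boot.std()                 # MLE from the bootstrap sample
        cuts = np.quantile(data, np.linspace(0, 1, bins + 1)[1:-1])
        observed = np.bincount(np.searchsorted(cuts, data), minlength=bins)
        edges = np.concatenate(([-np.inf], cuts, [np.inf]))
        expected = len(data) * np.diff(stats.norm.cdf(edges, mu, sigma))
        chi2 = ((observed - expected) ** 2 / expected).sum()
        return chi2, stats.chi2.sf(chi2, df=bins - 1)       # statistic and p-value

    x = np.random.default_rng(1).normal(size=500)
    print(bootstrap_pearson_chi2(x))
    ```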

  1. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate angular rate using star blurred images by employing a mission telescope to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints possessed by small satellites. The research studied the relationship between estimation accuracy and parameters used to achieve an attitude rate estimation, which has a precision greater than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors, which use optics systems such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.

  2. Effect of windowing on lithosphere elastic thickness estimates obtained via the coherence method: Results from northern South America

    NASA Astrophysics Data System (ADS)

    Ojeda, Germán Y.; Whitman, Dean

    2002-11-01

    The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ˜5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.

  3. Comparison of GPS receiver DCB estimation methods using a GPS network

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Kyu; Park, Jong-Uk; Roh, Kyoung Min; Lee, Sang-Jeong

    2013-07-01

    Two approaches for receiver differential code biases (DCB) estimation using the GPS data obtained from the Korean GPS network (KGN) in South Korea are suggested: the relative and single (absolute) methods. The relative method uses a GPS network, while the single method determines DCBs from a single station only. Their performance was assessed by comparing the receiver DCB values obtained from the relative method with those estimated by the single method. The daily averaged receiver DCBs obtained from the two different approaches showed good agreement for 7 days. The root mean square (RMS) value of those differences is 0.83 nanoseconds (ns). The standard deviation of the receiver DCBs estimated by the relative method was smaller than that of the single method. From these results, it is clear that the relative method can obtain more stable receiver DCBs compared with the single method over a short-term period. Additionally, the comparison between the receiver DCBs obtained by the Korea Astronomy and Space Science Institute (KASI) and those of the IGS Global Ionosphere Maps (GIM) showed a good agreement at 0.3 ns. As the accuracy of DCB values significantly affects the accuracy of ionospheric total electron content (TEC), more studies are needed to ensure the reliability and stability of the estimated receiver DCBs.

  4. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
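
    The iterative procedure in question is what is now usually called EM. A bare-bones univariate version is sketched below to make the fixed-point iteration concrete; the component count, initialization, and fixed iteration count are simplifying assumptions rather than details from the report.

    ```python
    import numpy as np

    def em_normal_mixture(x, k=2, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        w = np.full(k, 1.0 / k)                 # mixing proportions
        mu = rng.choice(x, k, replace=False)    # initial component means
        var = np.full(k, x.var())
        for _ in range(iters):
            # E-step: posterior responsibility of each component for each point
            dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted maximum-likelihood updates
            nk = r.sum(axis=0)
            w = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return w, mu, var

    x = np.concatenate([np.random.default_rng(8).normal(0, 1, 300),
                        np.random.default_rng(9).normal(4, 1, 200)])
    print(em_normal_mixture(x))
    ```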

  6. Challenges in Obtaining Estimates of the Risk of Tuberculosis Infection During Overseas Deployment.

    PubMed

    Mancuso, James D; Geurts, Mia

    2015-12-01

    Estimates of the risk of tuberculosis (TB) infection resulting from overseas deployment among U.S. military service members have varied widely, and have been plagued by methodological problems. The purpose of this study was to estimate the incidence of TB infection in the U.S. military resulting from deployment. Three populations were examined: 1) a unit of 2,228 soldiers redeploying from Iraq in 2008, 2) a cohort of 1,978 soldiers followed up over 5 years after basic training at Fort Jackson in 2009, and 3) 6,062 participants in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The risk of TB infection in the deployed population was low (0.6%; 95% confidence interval [CI]: 0.1-2.3%) and was similar to that in the non-deployed population. The prevalence of latent TB infection (LTBI) in the U.S. population was not significantly different among deployed and non-deployed veterans and those with no military service. The limitations of these retrospective studies highlight the challenge of obtaining valid estimates of risk using retrospective data and the need for a more definitive study. As with civilian long-term travelers, risks for TB infection during deployment are focal in nature, and testing should be targeted to only those at increased risk. © The American Society of Tropical Medicine and Hygiene.

  7. The first step toward genetic selection for host tolerance to infectious pathogens: obtaining the tolerance phenotype through group estimates

    PubMed Central

    Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias

    2012-01-01

    Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may be first of all handicapped by the ability to obtain unbiased tolerance estimates at a phenotypic level. In contrast to resistance, which can be inferred by appropriate measures of within-host pathogen burden, tolerance is more difficult to quantify as it refers to change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have raised the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within-group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen-free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and to ensure that all individuals are at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within-host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection.

  8. Temporal variability patterns in solar radiation estimations

    NASA Astrophysics Data System (ADS)

    Vindel, José M.; Navarro, Ana A.; Valenzuela, Rita X.; Zarzalejo, Luis F.

    2016-06-01

    In this work, solar radiation estimates obtained from a satellite and from a numerical weather prediction model over mainland Spain have been compared. Similar comparisons have been carried out before, but the methodology used here is different: the temporal variability of both sources of estimation is compared with the annual evolution of the radiation associated with the different climate zones under study. The methodology is based on obtaining behavior patterns, using a Principal Component Analysis, that follow the annual evolution of the solar radiation estimates. Indeed, the degree of adjustment to these patterns at each point (assessed from maps of correlation) may be associated with the annual radiation variation (assessed from the interquartile range), which is associated, in turn, with different climate zones. In addition, the goodness of each estimation source has been assessed by comparing it with ground radiation measurements made by pyranometers. For the study, radiation data from Satellite Application Facilities and data from the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts have been used.

  9. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean over a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows the MOST. An optimized estimate is obtained when the results yield a minimum of a cost function defined as a weighted sum of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer made from a small boat using a radiosonde on a tethered balloon, with which temperature and relative humidity profiles in the lowest 50 m were obtained repeatedly in about 30 minutes. We will present the resulting fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.

  10. Estimates of the solar internal angular velocity obtained with the Mt. Wilson 60-foot solar tower

    NASA Technical Reports Server (NTRS)

    Rhodes, Edward J., Jr.; Cacciani, Alessandro; Woodard, Martin; Tomczyk, Steven; Korzennik, Sylvain

    1987-01-01

    Estimates of the solar internal angular velocity are obtained from measurements of the frequency splittings of p-mode oscillations. A 16-day time series of full-disk Dopplergrams obtained during July and August 1984 at the 60-foot tower telescope of the Mt. Wilson Observatory is analyzed. Power spectra were computed for all of the zonal, tesseral, and sectoral p-modes from l = 0 to 89 and for all of the sectoral p-modes from l = 90 to 200. A mean power spectrum was calculated for each degree up to 89. The frequency differences of all of the different nonzonal modes were calculated for these mean power spectra.

  11. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
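
    The decay model described here admits a compact illustration. The sketch below is a simplified reconstruction, not the authors' implementation: each windowed segment is modeled as zero-mean Gaussian noise with an exponentially decaying envelope, the likelihood is profiled over a grid of decay time constants (the noise power has a closed form given the time constant), and a median over the accumulated estimates stands in for the order-statistics filter. The sampling rate, window length, and grid are arbitrary choices.

        import numpy as np

        def ml_decay(frame, fs, taus):
            """Profile-likelihood estimate of the decay time constant of a frame
            modeled as y[n] ~ N(0, sigma^2 * exp(-2*n/(fs*tau)))."""
            n = np.arange(len(frame))
            best_tau, best_ll = taus[0], -np.inf
            for tau in taus:
                g = np.exp(-2.0 * n / (fs * tau))     # decaying variance profile
                sigma2 = np.mean(frame**2 / g)        # closed-form ML power given tau
                ll = -0.5 * np.sum(np.log(2*np.pi*sigma2*g) + frame**2/(sigma2*g))
                if ll > best_ll:
                    best_tau, best_ll = tau, ll
            return best_tau

        fs = 8000
        taus = np.linspace(0.02, 0.5, 49)             # candidate time constants (s)
        rng = np.random.default_rng(0)
        # synthetic free-decay segments with a true time constant of 0.1 s
        frames = [rng.normal(size=1600) * np.exp(-np.arange(1600) / (fs * 0.1))
                  for _ in range(25)]
        estimates = [ml_decay(f, fs, taus) for f in frames]
        tau_hat = np.median(estimates)                # order-statistics filter
        print(f"estimated RT60 ~ {6.91 * tau_hat:.2f} s")   # RT60 = tau * ln(1e3)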

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
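
    The iteration the paper analyzes is easy to sketch for a two-component univariate normal mixture: compute the usual successive-approximations (EM) update, then move from the current parameters toward it by a step-size omega, so that omega = 1 recovers the familiar procedure and values between 0 and 2 are the range shown to converge locally. The mixture, the data, and the omega value below are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm

        def em_update(x, theta):
            """One successive-approximations (EM) update for a two-component
            univariate normal mixture with parameters (pi, mu1, mu2, s1, s2)."""
            pi, mu1, mu2, s1, s2 = theta
            p1 = pi * norm.pdf(x, mu1, s1)
            p2 = (1 - pi) * norm.pdf(x, mu2, s2)
            r = p1 / (p1 + p2)                 # posterior component probabilities
            w1, w2 = r.sum(), (1 - r).sum()
            mu1n = np.sum(r * x) / w1
            mu2n = np.sum((1 - r) * x) / w2
            s1n = np.sqrt(np.sum(r * (x - mu1n)**2) / w1)
            s2n = np.sqrt(np.sum((1 - r) * (x - mu2n)**2) / w2)
            return np.array([r.mean(), mu1n, mu2n, s1n, s2n])

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(-2, 1, 600), rng.normal(2, 1, 400)])
        theta = np.array([0.5, -1.0, 1.0, 1.5, 1.5])   # pi, mu1, mu2, s1, s2
        omega = 1.2                                    # step-size in (0, 2)
        for _ in range(100):
            theta = theta + omega * (em_update(x, theta) - theta)  # deflected step
        print(np.round(theta, 3))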

  13. Online estimation of room reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.

    2003-04-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.

  14. SURE Estimates for a Heteroscedastic Hierarchical Model

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.

    2014-01-01

    Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
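
    For the heteroscedastic model X_i ~ N(theta_i, A_i) with known variances A_i, the parametric SURE estimator shrinks each X_i toward a common location mu with weight lambda/(lambda + A_i) and picks (lambda, mu) to minimize Stein's unbiased risk estimate. The grid search below is a minimal illustration of that recipe on simulated data, not the authors' implementation.

        import numpy as np

        def sure(lmbda, mu, x, a):
            b = lmbda / (lmbda + a)          # weight kept on each observation
            # unbiased risk estimate for theta_hat = b*x + (1 - b)*mu
            return np.sum((1 - b)**2 * (x - mu)**2 + a * (2*b - 1))

        rng = np.random.default_rng(2)
        p = 500
        a = rng.uniform(0.1, 2.0, p)         # known heteroscedastic variances A_i
        theta = rng.normal(1.0, 0.7, p)      # true means
        x = rng.normal(theta, np.sqrt(a))    # observations X_i ~ N(theta_i, A_i)

        best = (np.inf, None, None)
        for lmbda in np.geomspace(1e-3, 10.0, 200):
            w = (a / (lmbda + a))**2         # dSURE/dmu = 0 gives a weighted mean
            mu = np.sum(w * x) / np.sum(w)
            val = sure(lmbda, mu, x, a)
            if val < best[0]:
                best = (val, lmbda, mu)
        _, lmbda, mu = best
        theta_hat = lmbda / (lmbda + a) * x + a / (lmbda + a) * mu
        print(f"lambda={lmbda:.3f}, mu={mu:.3f}, "
              f"MSE={np.mean((theta_hat - theta)**2):.3f}")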

  15. The self-perception of dyspnoea threshold during the 6-min walk test: a good alternative to estimate the ventilatory threshold in chronic obstructive pulmonary disease.

    PubMed

    Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly

    2016-12-01

    To determine and/or adjust exercise training intensity for patients when a cardiopulmonary exercise test is not accessible, determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. Reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation for HR at the ventilatory threshold measured during a cardiopulmonary exercise test, estimated from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify a self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducible when expressed in HR (coefficient of variation=2.8%). The stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r²=0.63, relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. The HR at this dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.

  16. ALGORITHM BASED ON ARTIFICIAL BEE COLONY FOR UNFOLDING OF NEUTRON SPECTRA OBTAINED WITH BONNER SPHERES.

    PubMed

    Silva, Everton R; Freitas, Bruno M; Santos, Denison S; Maurício, Cláudia L P

    2018-04-13

    Occupational neutron fields usually have energies from the thermal range to some MeV, and characterization of the spectra is essential for estimation of radioprotection quantities. Thus, the spectrum must be unfolded from a limited number of measurements. This study implemented an algorithm based on bee colony behavior, named Artificial Bee Colony (ABC), in which the intelligent behavior of bees in search of food is reproduced to perform the unfolding of neutron spectra. The experimental measurements used Bonner spheres and a 6LiI(Eu) detector, with irradiations using a thermal neutron flux and three reference fields: 241Am-Be, 252Cf and 252Cf(D2O). The ABC obtained a good estimate of the expected spectrum even without prior information, and its results were closer to the expected spectra than those obtained by the SPUNIT algorithm.
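
    A minimal ABC loop is sketched below for a toy unfolding problem: candidate spectra are "food sources", employed and onlooker bees perturb them one energy bin at a time with greedy selection, and a scout restarts a source that has stopped improving. The response matrix, the squared-residual cost, and all control parameters are illustrative assumptions, not the paper's setup.

        import numpy as np

        def abc_unfold(R, C, n_food=30, iters=1000, limit=50, seed=0):
            """Minimal Artificial Bee Colony search for a non-negative spectrum
            phi minimizing ||C - R @ phi||^2 (an illustrative cost function)."""
            rng = np.random.default_rng(seed)
            n = R.shape[1]
            cost = lambda p: np.sum((C - R @ p)**2)
            foods = rng.uniform(0, 1, (n_food, n))        # candidate spectra
            costs = np.array([cost(p) for p in foods])
            trials = np.zeros(n_food, dtype=int)

            def try_neighbor(i):
                k = rng.integers(n_food - 1)
                k += k >= i                                # random partner != i
                j = rng.integers(n)                        # perturb one energy bin
                cand = foods[i].copy()
                cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
                cand[j] = max(cand[j], 0.0)                # keep spectrum non-negative
                c = cost(cand)
                if c < costs[i]:                           # greedy selection
                    foods[i], costs[i], trials[i] = cand, c, 0
                else:
                    trials[i] += 1

            for _ in range(iters):
                for i in range(n_food):                    # employed bee phase
                    try_neighbor(i)
                fit = 1.0 / (1.0 + costs)
                for i in rng.choice(n_food, n_food, p=fit / fit.sum()):
                    try_neighbor(i)                        # onlooker bee phase
                stale = np.argmax(trials)
                if trials[stale] > limit:                  # scout bee phase
                    foods[stale] = rng.uniform(0, 1, n)
                    costs[stale] = cost(foods[stale])
                    trials[stale] = 0
            return foods[np.argmin(costs)]

        # toy problem: 8 hypothetical sphere responses over 6 energy bins
        rng = np.random.default_rng(1)
        R = rng.uniform(0.1, 1.0, (8, 6))
        phi_true = rng.uniform(0, 1, 6)
        phi_hat = abc_unfold(R, R @ phi_true)
        print(np.round(phi_hat, 2), "vs", np.round(phi_true, 2))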

  17. Method for obtaining structure and interactions from oriented lipid bilayers

    PubMed Central

    Lyatskaya, Yulia; Liu, Yufeng; Tristram-Nagle, Stephanie; Katsaras, John; Nagle, John F.

    2009-01-01

    Precise calculations are made of the scattering intensity I(q) from an oriented stack of lipid bilayers using a realistic model of fluctuations. The quantities of interest include the bilayer bending modulus Kc , the interbilayer interaction modulus B, and bilayer structure through the form factor F(qz). It is shown how Kc and B may be obtained from data at large qz where fluctuations dominate. Good estimates of F(qz) can be made over wide ranges of qz by using I(q) in q regions away from the peaks and for qr≠0 where details of the scattering domains play little role. Rough estimates of domain sizes can also be made from smaller qz data. Results are presented for data taken on fully hydrated, oriented DOPC bilayers in the Lα phase. These results illustrate the advantages of oriented samples compared to powder samples. PMID:11304287

  18. Atmospheric Turbulence Estimates from a Pulsed Lidar

    NASA Technical Reports Server (NTRS)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  19. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

    An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful for estimating metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by validated LC-MS/MS assays. This approach does not require synthesis of authentic metabolites or knowledge of the exact structures of the metabolites, and therefore should provide a useful method for obtaining early estimates of circulating metabolites in early clinical or toxicological studies.

  20. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, the response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting estimates minimise the bias caused by the non-stationary characteristics of the MT data.

  1. Obtaining Parts

    Science.gov Websites

    The Cosmic Connection: parts for the Berkeley Detector. Suppliers (e.g., Eljen Technology for scintillator) are listed to help obtain the components needed to build the Berkeley Detector; these companies have helped previous builders. The estimated cost to build a detector varies from $1500 to $2700.

  2. A hierarchical estimator development for estimation of tire-road friction coefficient.

    PubMed

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  3. A hierarchical estimator development for estimation of tire-road friction coefficient

    PubMed Central

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified “magic formula” tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. PMID:28178332
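
    GRNN is essentially Nadaraya-Watson kernel regression, which makes the lower estimator easy to illustrate. The sketch below is a generic GRNN mapping hypothetical slip-ratio and normalized tire-force features to a friction coefficient; the training pairs, feature choice, and bandwidth are illustrative assumptions, not the paper's design.

        import numpy as np

        def grnn_predict(X_train, y_train, x, sigma=0.05):
            """General regression neural network (Nadaraya-Watson kernel
            regression): a Gaussian-kernel-weighted average of training targets."""
            d2 = np.sum((X_train - x)**2, axis=1)
            w = np.exp(-d2 / (2 * sigma**2))
            return np.sum(w * y_train) / np.sum(w)

        # hypothetical training set: [slip ratio, normalized long. force] -> mu
        X_train = np.array([[0.02, 0.15], [0.04, 0.30], [0.06, 0.45],
                            [0.08, 0.55], [0.10, 0.62], [0.12, 0.66]])
        mu_train = np.array([0.20, 0.35, 0.50, 0.62, 0.72, 0.78])

        x_query = np.array([0.07, 0.50])
        print(f"estimated friction coefficient: "
              f"{grnn_predict(X_train, mu_train, x_query):.2f}")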

  4. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    NASA Astrophysics Data System (ADS)

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-08-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  5. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    USGS Publications Warehouse

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-01-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  6. Intakes of culinary herbs and spices from a food frequency questionnaire evaluated against 28-days estimated records

    PubMed Central

    2011-01-01

    Background Worldwide, herbs and spices are much used food flavourings. However, little data exist regarding actual dietary intake of culinary herbs and spices. We developed a food frequency questionnaire (FFQ) for the assessment of habitual diet the preceding year, with focus on phytochemical rich food, including herbs and spices. The aim of the present study was to evaluate the intakes of herbs and spices from the FFQ with estimates of intake from another dietary assessment method. Thus we compared the intake estimates from the FFQ with 28 days of estimated records of herb and spice consumption as a reference method. Methods The evaluation study was conducted among 146 free living adults, who filled in the FFQ and 2-4 weeks later carried out 28 days recording of herb and spice consumption. The FFQ included a section with questions about 27 individual culinary herbs and spices, while the records were open ended records for recording of herbs and spice consumption exclusively. Results Our study showed that the FFQ obtained slightly higher estimates of total intake of herbs and spices than the total intake assessed by the Herbs and Spice Records (HSR). The correlation between the two assessment methods with regard to total intake was good (r = 0.5), and the cross-classification suggests that the FFQ may be used to classify subjects according to total herb and spice intake. For the 8 most frequently consumed individual herbs and spices, the FFQ obtained good estimates of median frequency of intake for 2 herbs/spices, while good estimates of portion sizes were obtained for 4 out of 8 herbs/spices. Conclusions Our results suggested that the FFQ was able to give good estimates of frequency of intake and portion sizes on group level for several of the most frequently used herbs and spices. The FFQ was only able to fairly rank subjects according to frequency of intake of the 8 most frequently consumed herbs and spices. Other studies are warranted to further explore the intakes of culinary spices and herbs.

  7. Intakes of culinary herbs and spices from a food frequency questionnaire evaluated against 28-days estimated records.

    PubMed

    Carlsen, Monica H; Blomhoff, Rune; Andersen, Lene F

    2011-05-16

    Worldwide, herbs and spices are much used food flavourings. However, little data exist regarding actual dietary intake of culinary herbs and spices. We developed a food frequency questionnaire (FFQ) for the assessment of habitual diet the preceding year, with focus on phytochemical rich food, including herbs and spices. The aim of the present study was to evaluate the intakes of herbs and spices from the FFQ with estimates of intake from another dietary assessment method. Thus we compared the intake estimates from the FFQ with 28 days of estimated records of herb and spice consumption as a reference method. The evaluation study was conducted among 146 free living adults, who filled in the FFQ and 2-4 weeks later carried out 28 days recording of herb and spice consumption. The FFQ included a section with questions about 27 individual culinary herbs and spices, while the records were open ended records for recording of herbs and spice consumption exclusively. Our study showed that the FFQ obtained slightly higher estimates of total intake of herbs and spices than the total intake assessed by the Herbs and Spice Records (HSR). The correlation between the two assessment methods with regard to total intake was good (r = 0.5), and the cross-classification suggests that the FFQ may be used to classify subjects according to total herb and spice intake. For the 8 most frequently consumed individual herbs and spices, the FFQ obtained good estimates of median frequency of intake for 2 herbs/spices, while good estimates of portion sizes were obtained for 4 out of 8 herbs/spices. Our results suggested that the FFQ was able to give good estimates of frequency of intake and portion sizes on group level for several of the most frequently used herbs and spices. The FFQ was only able to fairly rank subjects according to frequency of intake of the 8 most frequently consumed herbs and spices. Other studies are warranted to further explore the intakes of culinary spices and herbs.

  8. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. It is shown that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  9. Preliminary comparison between real-time in-vivo spectral and transverse oscillation velocity estimates

    NASA Astrophysics Data System (ADS)

    Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per; Hansen, Jens Munk; Lindskov Hansen, Kristoffer; Bachmann Nielsen, Michael; Jensen, Jørgen Arendt

    2011-03-01

    Spectral velocity estimation is considered the gold standard in medical ultrasound. Peak systole (PS), end diastole (ED), and resistive index (RI) are used clinically. Angle correction is performed using a flow angle set manually. With Transverse Oscillation (TO) velocity estimates, the flow angle, peak systole (PSTO), end diastole (EDTO), and resistive index (RITO) are estimated. This study investigates whether these clinical parameters are estimated equally well using spectral and TO data. The right common carotid arteries of three healthy volunteers were scanned longitudinally. Average TO flow angles and standard deviations (std) were calculated { 52+/-18 ; 55+/-23 ; 60+/-16 }°. Spectral angles { 52 ; 56 ; 52 }° were obtained from the B-mode images. Obtained values are: PSTO { 76+/-15 ; 89+/-28 ; 77+/-7 } cm/s, spectral PS { 77 ; 110 ; 76 } cm/s, EDTO { 10+/-3 ; 14+/-8 ; 15+/-3 } cm/s, spectral ED { 18 ; 13 ; 20 } cm/s, RITO { 0.87+/-0.05 ; 0.79+/-0.21 ; 0.79+/-0.06 }, and spectral RI { 0.77 ; 0.88 ; 0.73 }. Vector angles are within +/-two std of the spectral angle. TO velocity estimates are within +/-three std of the spectral estimates. RITO values are within +/-two std of the spectral estimates. Preliminary data indicate that the TO and spectral velocity estimates are equally good. With TO there is no manual angle setting and no flow angle limitation. TO velocity estimation can also automatically handle situations where the angle varies over the cardiac cycle. More detailed temporal and spatial vector estimates with diagnostic potential are available with TO velocity estimation.

  10. Age Estimation of Infants Through Metric Analysis of Developing Anterior Deciduous Teeth.

    PubMed

    Viciano, Joan; De Luca, Stefano; Irurita, Javier; Alemán, Inmaculada

    2018-01-01

    This study provides regression equations for estimation of age of infants from the dimensions of their developing deciduous teeth. The sample comprises 97 individuals of known sex and age (62 boys, 35 girls), aged between 2 days and 1,081 days. The age-estimation equations were obtained for the sexes combined, as well as for each sex separately, thus including "sex" as an independent variable. The values of the correlations and determination coefficients obtained for each regression equation indicate good fits for most of the equations obtained. The "sex" factor was statistically significant when included as an independent variable in seven of the regression equations. However, the "sex" factor provided an advantage for age estimation in only three of the equations, compared to those that did not include "sex" as a factor. These data suggest that the ages of infants can be accurately estimated from measurements of their developing deciduous teeth. © 2017 American Academy of Forensic Sciences.

  11. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    PubMed

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. To bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
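
    The underlying idea is compact: for a raw score and a normative sample of size n in which k scores fall below it, the point percentile rank is essentially 100*k/n, and an interval follows from treating k as binomial. The sketch below is a generic reconstruction using a uniform-prior Bayesian interval with made-up normative data, not Crawford et al.'s program.

        import numpy as np
        from scipy.stats import beta

        def percentile_rank(norm_scores, raw):
            """Point and 95% Bayesian interval estimate of the percentile rank of
            a raw score within a normative sample (uniform Beta(1,1) prior)."""
            norm_scores = np.asarray(norm_scores)
            n = norm_scores.size
            k = (np.sum(norm_scores < raw)
                 + 0.5 * np.sum(norm_scores == raw))   # mid-rank handling of ties
            post = beta(k + 1, n - k + 1)              # posterior of the proportion
            return 100 * k / n, 100 * post.ppf(0.025), 100 * post.ppf(0.975)

        rng = np.random.default_rng(3)
        norm_sample = rng.normal(10, 4, 1000).round()  # hypothetical normative data
        pt, lo, hi = percentile_rank(norm_sample, raw=16)
        print(f"percentile rank {pt:.1f} (95% interval {lo:.1f} to {hi:.1f})")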

  12. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  13. Damping factor estimation using spin wave attenuation in permalloy film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manago, Takashi, E-mail: manago@fukuoka-u.ac.jp; Yamanoi, Kazuto; Kasai, Shinya

    2015-05-07

    The damping factor of a Permalloy (Py) thin film is estimated using magnetostatic spin wave propagation. The attenuation lengths are obtained from the dependence of the transmission intensity on the antenna distance, and decrease with increasing magnetic field. The relationship between the attenuation length, damping factor, and external magnetic field is derived theoretically, and the damping factor was determined to be 0.0063 by fitting the magnetic field dependence of the attenuation length using the derived equation. The obtained value is in good agreement with the generally accepted value for Py. Thus, this method of estimating the damping factor from spin wave attenuation can be a useful tool for ferromagnetic thin films.

  14. Selecting good regions to deblur via relative total variation

    NASA Astrophysics Data System (ADS)

    Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong

    2018-03-01

    Image deblurring aims to estimate the blur kernel and restore the latent image, and is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information improves the accuracy of the estimated kernel. Good regions to deblur are usually expert-chosen or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it lies in a structure region, after which we sample the image in an overlapping way. Finally, the sampled region that contains the most structure pixels is selected as the best region to deblur. Both qualitative and quantitative experiments show that our proposed method helps to estimate the kernel accurately.
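
    The RTV measure referred to here (from Xu et al.) compares a windowed total variation against a windowed inherent variation: in structure regions the gradients are aligned, so the inherent variation stays large and the ratio is small, while in texture the gradients cancel. The sketch below is a simplified version of that measure plus the overlapping-window selection; the window size, threshold, and stride are arbitrary choices, not the paper's settings.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def rtv_map(img, win=9, eps=1e-3):
            """Relative total variation per pixel: windowed total variation D over
            windowed inherent variation L; small D/(L+eps) indicates structure."""
            gx = np.gradient(img, axis=1)
            gy = np.gradient(img, axis=0)
            Dx = uniform_filter(np.abs(gx), win)    # local mean of |gradient|
            Dy = uniform_filter(np.abs(gy), win)
            Lx = np.abs(uniform_filter(gx, win))    # |local mean gradient|
            Ly = np.abs(uniform_filter(gy, win))
            return Dx / (Lx + eps) + Dy / (Ly + eps)

        def best_region(img, size=64, stride=16):
            """Pick the overlapping window containing the most structure pixels."""
            rtv = rtv_map(img)
            structure = rtv < np.percentile(rtv, 25)  # threshold is a free choice
            best, best_count = None, -1
            H, W = img.shape
            for r in range(0, H - size + 1, stride):
                for c in range(0, W - size + 1, stride):
                    count = structure[r:r+size, c:c+size].sum()
                    if count > best_count:
                        best, best_count = (r, c), count
            return best

        img = np.random.rand(128, 128)   # stand-in for a blurry grayscale image
        print(best_region(img))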

  15. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method and the Nonparametric Bootstrap, both frequentist methods; the Generalized Linear Model (GLM); and Markov Chain Monte Carlo Posterior Estimation, a Bayesian approach. Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
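
    As an illustration of the bootstrap option, the sketch below resamples subjects with replacement, refits a logistic psychometric function to the pooled detection rates at fixed signal levels, and collects the 50% thresholds into a percentile interval. The dataset, the logistic form, and the resample count are assumptions for illustration, not NASA's protocol.

        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(level, mu, sigma):
            # logistic psychometric function; mu is the 50%-point (threshold)
            return 1.0 / (1.0 + np.exp(-(level - mu) / sigma))

        rng = np.random.default_rng(4)
        levels = np.array([40.0, 45.0, 50.0, 55.0, 60.0])  # fixed levels (dB)
        n_subj, n_trials = 12, 10
        # hypothetical detection counts: rows = subjects, columns = levels
        detections = rng.binomial(n_trials, psychometric(levels, 51.0, 3.0),
                                  size=(n_subj, levels.size))

        def group_threshold(det):
            p = det.sum(axis=0) / (det.shape[0] * n_trials)  # pooled rates
            (mu, _), _ = curve_fit(psychometric, levels, p, p0=[50.0, 3.0])
            return mu

        boot = [group_threshold(detections[rng.integers(n_subj, size=n_subj)])
                for _ in range(1000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"threshold {group_threshold(detections):.1f} dB, "
              f"95% CI [{lo:.1f}, {hi:.1f}] dB")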

  16. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  17. Greenhouse gases inventory and carbon balance of two dairy systems obtained from two methane-estimation methods.

    PubMed

    Cunha, C S; Lopes, N L; Veloso, C M; Jacovine, L A G; Tomich, T R; Pereira, L G R; Marcondes, M I

    2016-11-15

    The adoption of carbon inventories for dairy farms in tropical countries based on models developed from animals and diets of temperate climates is questionable. Thus, the objectives of this study were to estimate enteric methane (CH4) emissions through the SF6 tracer gas technique and through equations proposed by the Intergovernmental Panel on Climate Change (IPCC) Tier 2 and to calculate the inventory of greenhouse gas (GHG) emissions from two dairy systems. In addition, the carbon balance of these properties was estimated using enteric CH4 emissions obtained using both methodologies. In trial 1, the CH4 emissions were estimated from seven Holstein dairy cattle categories based on the SF6 tracer gas technique and on IPCC equations. The categories used in the study were prepubertal heifers (n=6); pubertal heifers (n=4); pregnant heifers (n=5); high-producing (n=6); medium-producing (n=5); low-producing (n=4) and dry cows (n=5). Enteric methane emission was higher for the category comprising prepubertal heifers when estimated by the equations proposed by the IPCC Tier 2. However, higher CH4 emissions were estimated by the SF6 technique in the categories including medium- and high-producing cows and dry cows. Pubertal heifers, pregnant heifers, and low-producing cows had equal CH4 emissions as estimated by both methods. In trial 2, two dairy farms were monitored for one year to identify all activities that contributed in any way to GHG emissions. The total emission from Farm 1 was 3.21t CO2e/animal/yr, of which 1.63t corresponded to enteric CH4. Farm 2 emitted 3.18t CO2e/animal/yr, with 1.70t of enteric CH4. IPCC estimations can underestimate CH4 emissions from some categories while overestimating others. However, considering the whole property, these discrepancies are offset and we would submit that the equations suggested by the IPCC properly estimate the total CH4 emission and carbon balance of the properties. Thus, the IPCC equations should be utilized with

  18. Estimating physiological skin parameters from hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

    2013-05-01

    We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.

  19. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically of the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely that the estimated pdf will show artificial characteristics that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC distribution. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained, compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements of the TNPGN-Active (Turkish National Permanent GNSS Network-Active).
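
    The KDE step itself is a one-liner with standard tools. The sketch below builds a Gaussian-kernel pdf estimate from synthetic TEC values (a quiet-time bulk plus a disturbed-time tail) and reports the moments discussed above; the numbers are invented, not TNPGN data.

        import numpy as np
        from scipy.stats import gaussian_kde, kurtosis

        rng = np.random.default_rng(5)
        tec = np.concatenate([rng.normal(20, 3, 800),    # quiet-time values (TECU)
                              rng.normal(35, 6, 200)])   # disturbed-time tail
        kde = gaussian_kde(tec)                          # non-parametric pdf estimate
        grid = np.linspace(tec.min(), tec.max(), 400)
        pdf = kde(grid)

        print(f"pdf mode ~ {grid[np.argmax(pdf)]:.1f} TECU")
        print(f"mean {tec.mean():.1f}, variance {tec.var():.1f}, "
              f"kurtosis {kurtosis(tec):.2f}")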

  20. Automatic estimation of retinal nerve fiber bundle orientation in SD-OCT images using a structure-oriented smoothing filter

    NASA Astrophysics Data System (ADS)

    Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.

    2017-02-01

    Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories in combination with visual field data may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting from automatic segmentation of the RNFL. Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations creating an orientation space. Gaussian filtering along the orientation axis in this space is used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as pixel weight in normalized convolution to regularize the semblance filter response after which a new orientation estimate can be obtained. Finally, after several iterations an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows a good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows a good agreement with the automatically obtained streamlines obtained by fiber tracking.

  1. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

    Optical flow is the instantaneous motion vector at each pixel in the image frame at a given time instant. The gradient-based approach to optical flow computation cannot work well when the video motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: the pyramid is used to obtain multi-resolution images; the inter-frame affine parameters are obtained by iterating from the highest (coarsest) level down to the lowest level; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
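
    In the same coarse-to-fine spirit, the sketch below uses OpenCV's pyramidal Lucas-Kanade tracker (maxLevel sets the pyramid depth) to track features, fits a per-frame affine model, and warps each frame back to the reference frame. This is a generic stabilization sketch with arbitrary parameters, standing in for the authors' own gradient-based estimator.

        import cv2
        import numpy as np

        def stabilize(frames):
            """Stabilize a list of BGR frames: track corners with pyramidal
            Lucas-Kanade optical flow (coarse-to-fine), fit a per-frame affine
            model, and warp each frame back to the first (reference) frame."""
            ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
            out = [frames[0]]
            for frame in frames[1:]:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                pts = cv2.goodFeaturesToTrack(ref, maxCorners=300,
                                              qualityLevel=0.01, minDistance=8)
                nxt, status, _ = cv2.calcOpticalFlowPyrLK(
                    ref, gray, pts, None, winSize=(21, 21), maxLevel=4)
                ok = status.ravel() == 1
                # affine motion mapping current-frame points onto the reference
                A, _ = cv2.estimateAffinePartial2D(nxt[ok], pts[ok])
                h, w = gray.shape
                out.append(cv2.warpAffine(frame, A, (w, h)))
            return out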

  2. A wet chemical method for the estimation of carbon in uranium carbides.

    PubMed

    Chandramouli, V; Yadav, R B; Rao, P R

    1987-09-01

    A wet chemical method for the estimation of carbon in uranium carbides has been developed, based on oxidation with a saturated solution of sodium dichromate in 9M sulphuric acid, absorption of the evolved carbon dioxide in a known excess of barium hydroxide solution, and titration of the excess of barium hydroxide with standard potassium hydrogen phthalate solution. The carbon content obtained is in good agreement with that obtained by combustion and titration.

  3. VizieR Online Data Catalog: GOODS-MUSIC catalog updated version (Santini+, 2009)

    NASA Astrophysics Data System (ADS)

    Santini, P.; Fontana, A.; Grazian, A.; Salimbeni, S.; Fiore, F.; Fontanot, F.; Boutsia, K.; Castellano, M.; Cristiani, S.; de Santis, C.; Gallozzi, S.; Giallongo, E.; Nonino, M.; Menci, N.; Paris, D.; Pentericci, L.; Vanzella, E.

    2009-06-01

    The GOODS-MUSIC multiwavelength catalog provides photometric and spectroscopic information for galaxies in the GOODS Southern field. It includes two U images obtained with the ESO 2.2m telescope and one U band image from VLT-VIMOS, the ACS-HST images in four optical (B,V,i,z) bands, the VLT-ISAAC J, H, and Ks bands as well as the Spitzer images at 3.6, 4.5, 5.8, and 8 micron (IRAC) and 24 micron (MIPS). Most of these images have been made publicly available in the coadded version by the GOODS team, while the U band data were retrieved in raw format and reduced by our team. We also collected all the available spectroscopic information from public spectroscopic surveys and cross-correlated the spectroscopic redshifts with our photometric catalog. For the unobserved fraction of the objects, we applied our photometric redshift code to obtain well-calibrated photometric redshifts. The final catalog is made up of 15208 objects, with 209 known stars and 61 AGNs. The major new feature of this updated release is the inclusion of 24 micron photometry. Further improvements concern a revised photometry in the four IRAC bands (mainly based on the use of new PSF-matching kernels and on a revised procedure for estimating the background), the enlargement of the sample of galaxies with spectroscopic redshifts, the addition of objects selected on the IRAC 4.5 micron image and a more careful selection of AGN sources. (1 data file).

  4. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
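    A minimal sketch of the SVR step with scikit-learn; the feature matrix, reference fat percentages and hyperparameters below are placeholders, since the paper's exact feature set is not reproduced here:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # X: texture/intensity features from the detected region of interest;
    # y: reference intramuscular fat percentage (e.g. chemical analysis).
    rng = np.random.default_rng(0)
    X = rng.random((120, 10))          # placeholder feature matrix
    y = rng.random(120) * 8.0          # placeholder fat percentages

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print("cross-validated MAE: %.2f %% fat" % -scores.mean())
    ```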

  5. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  6. Comparison of Species Richness Estimates Obtained Using Nearly Complete Fragments and Simulated Pyrosequencing-Generated Fragments in 16S rRNA Gene-Based Environmental Surveys▿ †

    PubMed Central

    Youssef, Noha; Sheik, Cody S.; Krumholz, Lee R.; Najar, Fares Z.; Roe, Bruce A.; Elshahed, Mostafa S.

    2009-01-01

    Pyrosequencing-based 16S rRNA gene surveys are increasingly utilized to study highly diverse bacterial communities, with special emphasis on utilizing the large number of sequences obtained (tens to hundreds of thousands) for species richness estimation. However, it is not yet clear how the number of operational taxonomic units (OTUs) and, hence, species richness estimates determined using shorter fragments at different taxonomic cutoffs correlates with the number of OTUs assigned using longer, nearly complete 16S rRNA gene fragments. We constructed a 16S rRNA clone library from an undisturbed tallgrass prairie soil (1,132 clones) and used it to compare species richness estimates obtained using eight pyrosequencing candidate fragments (99 to 361 bp in length) and the nearly full-length fragment. Fragments encompassing the V1 and V2 (V1+V2) region and the V6 region (generated using primer pairs 8F-338R and 967F-1046R) overestimated species richness; fragments encompassing the V3, V7, and V7+V8 hypervariable regions (generated using primer pairs 338F-530R, 1046F-1220R, and 1046F-1392R) underestimated species richness; and fragments encompassing the V4, V5+V6, and V6+V7 regions (generated using primer pairs 530F-805R, 805F-1046R, and 967F-1220R) provided estimates comparable to those obtained with the nearly full-length fragment. These patterns were observed regardless of the alignment method utilized or the parameter used to gauge comparative levels of species richness (number of OTUs observed, slope of scatter plots of pairwise distance values for short and nearly complete fragments, and nonparametric and parametric species richness estimates). Similar results were obtained when analyzing three other datasets derived from soil, adult Zebrafish gut, and basaltic formations in the East Pacific Rise. Regression analysis indicated that these observed discrepancies in species richness estimates within various regions could readily be explained by the proportions of

  7. Regulatory theory: commercially sustainable markets rely upon satisfying the public interest in obtaining credible goods.

    PubMed

    Warren-Jones, Amanda

    2017-10-01

    Regulatory theory is premised on the failure of markets, prompting a focus on regulators and industry from economic perspectives. This article argues that overlooking the public interest in the sustainability of commercial markets risks markets failing completely. This point is exemplified through health care markets - meeting an essential need - and focuses upon innovative medicines as the most desired products in that market. If this seemingly invulnerable market risks failure, there is a pressing need to consider the public interest in sustainable markets within regulatory literature and practice. Innovative medicines are credence goods, meaning that the sustainability of the market fundamentally relies upon the public trusting regulators to vouch for product quality. Yet, quality is being eroded by patent bodies focused on economic benefits from market growth, rather than ensuring innovatory value. Remunerative bodies are not funding medicines relative to market value, and market authorisation bodies are not vouching for robust safety standards or confining market entry to products for 'unmet medical need'. Arguably, this failure to assure quality heightens the risk of the market failing where it cannot be substituted by the reputation or credibility of providers of goods and/or information such as health care professionals/institutions, patient groups or industry.

  8. Estimation of brittleness indices for pay zone determination in a shale-gas reservoir by using elastic properties obtained from micromechanics

    NASA Astrophysics Data System (ADS)

    Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime

    2018-04-01

    The brittleness indices (BI) of gas-shales are computed by using their effective mechanical properties obtained from micromechanical self-consistent modeling with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called ‘pay zone’. The obtained BI are plotted in lambda-rho versus mu-rho (λρ–μρ) and Young’s modulus versus Poisson’s ratio (E–ν) ternary diagrams along with the elastic properties estimated from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows for the delimitation of regions where BI values could indicate the best reservoir target in regions with the highest shale-gas exploitation potential. Therefore, a range of values for elastic properties and brittleness indices that can be used as a data source to support the well placement procedure is obtained.
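    The abstract does not spell out the BI definition; one widely used normalization, averaging the rescaled elastic moduli (a Rickman-type index, given here as an assumption rather than the paper's exact formula), is

    ```latex
    \mathrm{BI} \;=\; \frac{1}{2}\left[
      \frac{E - E_{\min}}{E_{\max} - E_{\min}}
      \;+\;
      \frac{\nu - \nu_{\max}}{\nu_{\min} - \nu_{\max}}
    \right] \times 100\%
    ```

    where E and ν are the effective Young's modulus and Poisson's ratio, the min/max values span the interval under study, and higher BI flags more brittle rock.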

  9. Shock Formation Height in the Solar Corona Estimated from SDO and Radio Observations

    NASA Technical Reports Server (NTRS)

    Gopalswamy, N.; Nitta, N.

    2011-01-01

    Wave transients at EUV wavelengths and type II radio bursts are good indicators of shock formation in the solar corona. We use recent EUV wave observations from SDO and combine them with metric type II radio data to estimate the height in the corona where the shocks form. We compare the results with those obtained from other methods. We also estimate the shock formation heights independently using white-light observations of coronal mass ejections that ultimately drive the shocks.

  10. Estimates of price and income elasticity in Greece. Greek debt crisis transforming cigarettes into a luxury good: an econometric approach

    PubMed Central

    Tarantilis, Filippos; Athanasakis, Kostas; Zavras, Dimitris; Vozikis, Athanassios; Kyriopoulos, Ioannis

    2015-01-01

    Objective During the past decades, smoking prevalence in Greece was estimated to be near or over 40%. Following a sharp fall in cigarette consumption, as shown in current data, our objective is to assess smokers’ sensitivity to cigarette price and consumer income changes as well as to project health benefits of an additional tax increase. Methods Cigarette consumption was considered as the dependent variable, with Weighted Average Price as a proxy for cigarette price, gross domestic product as a proxy for consumers’ income and dummy variables reflecting smoking restrictions and antismoking campaigns. Values were transformed to natural logarithms and a regression was performed. Then, four scenarios of tax increase were distinguished in order to calculate potential health benefits. Results Short-run price elasticity is estimated at −0.441 and short-run income elasticity is estimated at 1.040. Antismoking campaigns were found to have a statistically significant impact on consumption. Results indicate that, depending on the level of tax increase, annual per capita consumption could fall by at least 209.83 cigarettes; tax revenue could rise by more than €0.74 billion, while smokers could be reduced by up to 530 568 and at least 465 smoking-related deaths could be averted. Conclusions Price elasticity estimates are similar to previous studies in Greece, while income elasticity estimates are far greater. With cigarettes regarded as a luxury good, a great opportunity is presented for decisionmakers to counter smoking. Increased taxation, along with focused antismoking campaigns, law reinforcement (to ensure compliance with smoking bans) and intensive control for smuggling could invoke a massive blow to the tobacco epidemic in Greece. PMID:25564137
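    The estimation step is an ordinary least squares fit in logarithms, in which the slope coefficients are the elasticities themselves. A toy sketch with invented numbers (the study's actual series are Greek consumption, price and GDP data):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical annual series: per capita consumption, weighted average
    # price, GDP, and a dummy for antismoking campaigns.
    df = pd.DataFrame({
        "cons":     [2850, 2790, 2600, 2310, 2050, 1900],
        "price":    [2.9, 3.1, 3.4, 3.8, 4.1, 4.4],
        "gdp":      [21.5, 21.0, 19.8, 18.4, 17.5, 17.1],
        "campaign": [0, 0, 1, 1, 1, 1],
    })
    fit = smf.ols("np.log(cons) ~ np.log(price) + np.log(gdp) + campaign",
                  data=df).fit()
    # In a log-log model the coefficients are elasticities directly:
    print("price elasticity:", fit.params["np.log(price)"])
    print("income elasticity:", fit.params["np.log(gdp)"])
    ```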

  11. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high-resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption entailed in using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to obtain better tuning of the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.
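    A hedged sketch of such a bias-correction network using scikit-learn; the architecture, inputs and collocation data below are placeholders, not the study's configuration:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Features: MODIS AOD, 2130 nm surface reflectivity, sun-view geometry
    # angles, land-class fraction; target: collocated DragonNET AOD.
    rng = np.random.default_rng(0)
    X = rng.random((500, 6))            # placeholder collocation matrix
    y = rng.random(500) * 0.8           # placeholder ground-truth AOD

    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(32, 16),
                                     max_iter=2000, random_state=0))
    net.fit(X, y)
    aod_corrected = net.predict(X)      # feeds the downstream PM2.5 model
    ```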

  12. Good Mathematics Teaching from Mexican High School Students' Perspective

    ERIC Educational Resources Information Center

    Martinez-Sierra, Gustavo

    2014-01-01

    This paper reports a qualitative research that identifies the characteristics of good mathematics teaching from the perspective of Mexican high school students. For this purpose, the social representations of a good mathematics teacher and a good mathematics class were identified in a group of 67 students. In order to obtain information, a…

  13. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours as the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  14. Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.

    PubMed

    Borah, Deva K; Voelz, David G

    2007-08-10

    The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.
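    A rough sketch of the average-pdf estimator: fix Monte-Carlo draws of the per-shot pointing error, average the lognormal likelihood of each return sample over those draws, and minimize the negative log-likelihood over boresight and jitter. The beam width and log-irradiance scatter are treated as known here, and all numbers are illustrative rather than the paper's values:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lognorm

    rng = np.random.default_rng(0)
    eps = rng.standard_normal(2000)      # fixed Monte-Carlo pointing draws

    def neg_log_lik(params, samples, w=1.0, sigma_ln=0.3):
        """b = boresight offset, s = jitter std (beam-width units)."""
        b, s = params
        r = b + abs(s) * eps                      # per-shot pointing error
        mu = np.exp(-2.0 * r**2 / w**2)           # Gaussian far-field mean
        # pdf of each sample, averaged over the unobserved pointing error
        pdf = lognorm.pdf(samples[:, None], s=sigma_ln, scale=mu).mean(axis=1)
        return -np.log(pdf + 1e-300).sum()

    # Synthetic returns for a true boresight of 0.2 and jitter of 0.1:
    r_true = 0.2 + 0.1 * rng.standard_normal(300)
    samples = np.exp(-2.0 * r_true**2) * rng.lognormal(0.0, 0.3, size=300)
    print(minimize(neg_log_lik, x0=[0.05, 0.05], args=(samples,),
                   method="Nelder-Mead").x)       # -> roughly (0.2, 0.1)
    ```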

  15. Estimation of under-reporting in epidemics using approximations.

    PubMed

    Gamado, Kokouvi; Streftaris, George; Zachary, Stan

    2017-06-01

    Under-reporting in epidemics, when it is ignored, leads to under-estimation of the infection rate and therefore of the reproduction number. In the case of stochastic models with temporal data, a usual approach for dealing with such issues is to apply data augmentation techniques through Bayesian methodology. Departing from earlier literature approaches implemented using reversible jump Markov chain Monte Carlo (RJMCMC) techniques, we make use of approximations to obtain faster estimation with simple MCMC. Comparisons among the methods developed here, and with the RJMCMC approach, are carried out and highlight that approximation-based methodology offers useful alternative inference tools for large epidemics, with a good trade-off between time cost and accuracy.

  16. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  17. Estimates of price and income elasticity in Greece. Greek debt crisis transforming cigarettes into a luxury good: an econometric approach.

    PubMed

    Tarantilis, Filippos; Athanasakis, Kostas; Zavras, Dimitris; Vozikis, Athanassios; Kyriopoulos, Ioannis

    2015-01-05

    During the past decades, smoking prevalence in Greece was estimated to be near or over 40%. Following a sharp fall in cigarette consumption, as shown in current data, our objective is to assess smokers' sensitivity to cigarette price and consumer income changes as well as to project health benefits of an additional tax increase. Cigarette consumption was considered as the dependent variable, with Weighted Average Price as a proxy for cigarette price, gross domestic product as a proxy for consumers' income and dummy variables reflecting smoking restrictions and antismoking campaigns. Values were transformed to natural logarithms and a regression was performed. Then, four scenarios of tax increase were distinguished in order to calculate potential health benefits. Short-run price elasticity is estimated at -0.441 and short-run income elasticity is estimated at 1.040. Antismoking campaigns were found to have a statistically significant impact on consumption. Results indicate that, depending on the level of tax increase, annual per capita consumption could fall by at least 209.83 cigarettes; tax revenue could rise by more than €0.74 billion, while smokers could be reduced by up to 530 568 and at least 465 smoking-related deaths could be averted. Price elasticity estimates are similar to previous studies in Greece, while income elasticity estimates are far greater. With cigarettes regarded as a luxury good, a great opportunity is presented for decisionmakers to counter smoking. Increased taxation, along with focused antismoking campaigns, law reinforcement (to ensure compliance with smoking bans) and intensive control for smuggling could invoke a massive blow to the tobacco epidemic in Greece. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  18. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of the measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.

  19. Venous return curves obtained from graded series of Valsalva maneuvers

    NASA Technical Reports Server (NTRS)

    Mastenbrook, S. M., Jr.

    1974-01-01

    The effects were studied of a graded series of Valsalva-like maneuvers on the venous return, which was measured transcutaneously in the jugular vein of an anesthetized dog, with the animal serving as its own control. At each of five different levels of central venous pressure, the airway pressure which just stopped venous return during each series of maneuvers was determined. It was found that this end-point airway pressure is not a good estimator of the animal's resting central venous pressure prior to the simulated Valsalva maneuver. It was further found that the measured change in right atrial pressure during a Valsalva maneuver is less than the change in airway pressure during the same maneuver, instead of being equal, as had been expected. Relative venous return curves were constructed from the data obtained during the graded series of Valsalva maneuvers.

  20. Illicit and pharmaceutical drug consumption estimated via wastewater analysis. Part A: chemical analysis and drug use estimates.

    PubMed

    Baker, David R; Barron, Leon; Kasprzyk-Hordern, Barbara

    2014-07-15

    This paper presents, for the first time, community-wide estimation of drug and pharmaceuticals consumption in England using wastewater analysis and a large number of compounds. Among groups of compounds studied were: stimulants, hallucinogens and their metabolites, opioids, morphine derivatives, benzodiazepines, antidepressants and others. The obtained results showed the usefulness of wastewater analysis for providing estimates of local community drug consumption. It is noticeable that where target compounds could be compared to NHS prescription statistics, good comparisons were apparent between the two sets of data. These compounds include oxycodone, dihydrocodeine, methadone, tramadol, temazepam and diazepam. In contrast, discrepancies were observed for propoxyphene, codeine, dosulepin and venlafaxine (over-estimations in each case except codeine). Potential reasons for discrepancies include: sales of drugs sold without prescription and not included within NHS data, abuse of a drug with the compound trafficked through illegal sources, different consumption patterns in different areas, direct disposal leading to over-estimations when using the parent compound as the drug target residue, and excretion factors not being representative of the local community. It is noticeable that using a metabolite (and not a parent drug) as a biomarker leads to higher certainty of the obtained estimates. With regard to illicit drugs, consistent and logical results were reported. Monitoring of these compounds over a one-week period highlighted the expected recreational use of many of these drugs (e.g. cocaine and MDMA) and the more consistent use of others (e.g. methadone). Copyright © 2014 Elsevier B.V. All rights reserved.

  1. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    USDA-ARS?s Scientific Manuscript database

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and the accuracy of SOC estimates is only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  2. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  3. Estimating Evaporative Fraction From Readily Obtainable Variables in Mangrove Forests of the Everglades, U.S.A.

    NASA Technical Reports Server (NTRS)

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John; Barr, Jordan

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; energy equivalent of evapotranspiration –ET–) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micro-meteorological and flux tower observations, or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI) and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.
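    One common trapezoid formulation (a simplification; the paper's end-member selection is more careful than this sketch, and the edge definitions here are assumptions) interpolates each pixel between a fitted dry edge and a wet edge set by the daily maximum air temperature:

    ```python
    import numpy as np

    def evaporative_fraction(ts, ndvi, t_air_max):
        """EF from a pixel's position between the dry and wet edges in
        Ts-NDVI space; ts and t_air_max in consistent units (K)."""
        # Dry edge: linear fit through the maximum Ts seen in each NDVI bin.
        bins = np.linspace(ndvi.min(), ndvi.max(), 20)
        idx = np.digitize(ndvi.ravel(), bins)
        ts_max = np.array([ts.ravel()[idx == i].max() for i in np.unique(idx)])
        nd_mid = np.array([ndvi.ravel()[idx == i].mean() for i in np.unique(idx)])
        slope, intercept = np.polyfit(nd_mid, ts_max, 1)
        t_dry = slope * ndvi + intercept           # per-pixel dry-edge Ts
        ef = (t_dry - ts) / (t_dry - t_air_max)    # 0 = dry edge, 1 = wet edge
        return np.clip(ef, 0.0, 1.0)
    ```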

  4. Estimating evaporative fraction from readily obtainable variables in mangrove forests of the Everglades, U.S.A.

    USGS Publications Warehouse

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John W.; Barr, Jordan G.

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; energy equivalent of evapotranspiration –ET–) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micrometeorological and flux tower observations or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts) normalized difference vegetation index (NDVI) and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.

  5. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    NASA Astrophysics Data System (ADS)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed to estimate parameters only at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using cross-sectional methods only, despite the fact that the data are collected under a rotating panel design. The purpose of this study is to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was very similar. Both models produced similar estimates and performed better than direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage diminished over time.

  6. Fuel Burn Estimation Model

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano

    2011-01-01

    Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  7. pathChirp: Efficient Available Bandwidth Estimation for Network Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cottrell, Les

    2003-04-30

    This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of ''self-induced congestion,'' pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp requires neither synchronized nor highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
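    The "exponential flight pattern" is easy to picture: within a chirp the inter-packet gaps shrink geometrically, so each successive gap probes a higher instantaneous rate. The parameters below are illustrative, not pathChirp's actual defaults:

    ```python
    def chirp_send_times(n_packets=15, packet_bytes=1000, r_min=1e6, gamma=1.2):
        """Send times (s) for one chirp probing rates r_min * gamma**k (bit/s)."""
        times, t = [0.0], 0.0
        for k in range(1, n_packets):
            rate = r_min * gamma ** (k - 1)   # rate probed by the k-th gap
            t += 8 * packet_bytes / rate      # gap that realizes this rate
            times.append(t)
        return times

    # The receiver looks for the gap at which queuing delay starts building
    # up; the rate probed there estimates the available bandwidth.
    print(chirp_send_times()[:5])
    ```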

  8. Good Manufacturing Practices (GMP) manufacturing of advanced therapy medicinal products: a novel tailored model for optimizing performance and estimating costs.

    PubMed

    Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra

    2013-03-01

    Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of these products during manufacturing. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using an adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  9. Is My Facility a Good Candidate for CHP?

    EPA Pesticide Factsheets

    Learn if a facility is a good candidate for CHP by answering a list of questions, and access the CHP Spark Spread Estimator, a tool that helps evaluate a prospective CHP system for its potential economic feasibility.

  10. GOODS Far Infrared Imaging with Herschel

    NASA Astrophysics Data System (ADS)

    Frayer, David T.; Elbaz, D.; Dickinson, M.; GOODS-Herschel Team

    2010-01-01

    Most of the stars in galaxies formed at high redshift in dusty environments, where their energy was absorbed and re-radiated at infrared wavelengths. Similarly, much of the growth of nuclear black holes in active galactic nuclei (AGN) was also obscured from direct view at UV/optical and X-ray wavelengths. The Great Observatories Origins Deep Survey Herschel (GOODS-H) open time key program will obtain the deepest far-infrared view of the distant universe, mapping the history of galaxy growth and AGN activity over a broad swath of cosmic time. GOODS-H will image the GOODS-North field with the PACS and SPIRE instruments at 100 to 500 microns, matching the deep survey of GOODS-South in the guaranteed time key program. GOODS-H will also observe an ultradeep sub-field within GOODS-South with PACS, reaching the deepest flux limits planned for Herschel (0.6 mJy at 100 microns with S/N=5). GOODS-H data will detect thousands of luminous and ultraluminous infrared galaxies out to z=4 or beyond, measuring their far-infrared luminosities and spectral energy distributions, and providing the best constraints on star formation rates and AGN activity during this key epoch of galaxy and black hole growth in the young universe.

  11. Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs.

    PubMed

    Valenti, Roberto G; Dryanovski, Ivan; Xiao, Jizhong

    2015-08-06

    Orientation estimation using low cost sensors is an important task for Micro Aerial Vehicles (MAVs) in order to obtain a good feedback for the attitude controller. The challenges come from the low accuracy and noisy data of the MicroElectroMechanical System (MEMS) technology, which is the basis of modern, miniaturized inertial sensors. In this article, we describe a novel approach to obtain an estimation of the orientation in quaternion form from the observations of gravity and magnetic field. Our approach provides a quaternion estimation as the algebraic solution of a system from inertial/magnetic observations. We separate the problems of finding the "tilt" quaternion and the heading quaternion in two sub-parts of our system. This procedure is the key for avoiding the impact of the magnetic disturbances on the roll and pitch components of the orientation when the sensor is surrounded by unwanted magnetic flux. We demonstrate the validity of our method first analytically and then empirically using simulated data. We propose a novel complementary filter for MAVs that fuses together gyroscope data with accelerometer and magnetic field readings. The correction part of the filter is based on the method described above and works for both IMU (Inertial Measurement Unit) and MARG (Magnetic, Angular Rate, and Gravity) sensors. We evaluate the effectiveness of the filter and show that it significantly outperforms other common methods, using publicly available datasets with ground-truth data recorded during a real flight experiment of a micro quadrotor helicopter.
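    For the "tilt" part of the decomposition, a quaternion with zero yaw component can be computed algebraically from a normalized gravity observation. The sketch below follows one common convention and is an assumption rather than the paper's exact expression:

    ```python
    import numpy as np

    def tilt_quaternion(ax, ay, az):
        """Quaternion (w, x, y, z) aligning gravity with the body z-axis,
        with zero z (yaw) component; assumes az > -1 (not upside down)."""
        a = np.array([ax, ay, az], dtype=float)
        ax, ay, az = a / np.linalg.norm(a)
        w = np.sqrt((az + 1.0) / 2.0)
        return np.array([w, -ay / (2.0 * w), ax / (2.0 * w), 0.0])

    print(tilt_quaternion(0.0, 0.0, 9.81))   # level sensor -> [1, 0, 0, 0]
    ```

    The heading quaternion is then obtained separately from the magnetometer, which is what keeps magnetic disturbances out of the roll and pitch components.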

  12. Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Ju, Wenyun; Sun, Kai

    In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when the interdependence of different types of outages is considered. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when there are no data for them.
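    The criticality check is a small computation once the offspring mean matrix has been estimated; the matrix below is invented for illustration:

    ```python
    import numpy as np

    # Hypothetical offspring mean matrix for three outage types (line
    # outages, load shed, isolated buses): M[i, j] is the mean number of
    # type-j children produced by one type-i parent in the next generation.
    M = np.array([[0.55, 0.20, 0.05],
                  [0.10, 0.40, 0.05],
                  [0.05, 0.10, 0.30]])

    # The process is subcritical, critical or supercritical according to
    # whether the largest eigenvalue is < 1, = 1 or > 1.
    print("largest eigenvalue:", max(abs(np.linalg.eigvals(M))))
    ```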

  13. Robust data enables managers to promote good practice.

    PubMed

    Bassett, Sally; Westmore, Kathryn

    2012-11-01

    This is the third in a series of articles examining the components of good corporate governance. The effective and efficient use of information and sources of information is crucial for good governance. This article explores the ways in which boards and management can obtain and use information to monitor performance and promote good practice, and how boards can be assured about the quality of information on which they rely. The final article in this series will look at the role of accountability in corporate governance.

  14. Uniform stable observer for the disturbance estimation in two renewable energy systems.

    PubMed

    Rubio, José de Jesús; Ochoa, Genaro; Balcazar, Ricardo; Pacheco, Jaime

    2015-09-01

    In this study, an observer for state and disturbance estimation in two renewable energy systems is introduced. Restrictions on the gains of the proposed observer are found to guarantee its stability and the convergence of its error; furthermore, these results are utilized to obtain a good estimation. The introduced technique is applied to state and disturbance estimation in a wind turbine and an electric vehicle. The wind turbine has a rotatory tower to catch the incoming air, which is transformed into electricity, and the electric vehicle has generators connected to its wheels to catch the vehicle movement, which is transformed into electricity. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Decoding tactile afferent activity to obtain an estimate of instantaneous force and torque applied to the fingerpad

    PubMed Central

    Birznieks, Ingvars; Redmond, Stephen J.

    2015-01-01

    Dexterous manipulation is not possible without sensory information about object properties and manipulative forces. Fundamental neuroscience has been unable to demonstrate how information about multiple stimulus parameters may be continuously extracted, concurrently, from a population of tactile afferents. This is the first study to demonstrate this, using spike trains recorded from tactile afferents innervating the monkey fingerpad. A multiple-regression model, requiring no a priori knowledge of stimulus-onset times or stimulus combination, was developed to obtain continuous estimates of instantaneous force and torque. The stimuli consisted of a normal-force ramp (to a plateau of 1.8, 2.2, or 2.5 N), on top of which −3.5, −2.0, 0, +2.0, or +3.5 mNm torque was applied about the normal to the skin surface. The model inputs were sliding windows of binned spike counts recorded from each afferent. Models were trained and tested by 15-fold cross-validation to estimate instantaneous normal force and torque over the entire stimulation period. With the use of the spike trains from 58 slow-adapting type I and 25 fast-adapting type I afferents, the instantaneous normal force and torque could be estimated with small error. This study demonstrated that instantaneous force and torque parameters could be reliably extracted from a small number of tactile afferent responses in a real-time fashion with stimulus combinations that the model had not been exposed to during training. Analysis of the model weights may reveal how interactions between stimulus parameters could be disentangled for complex population responses and could be used to test neurophysiologically relevant hypotheses about encoding mechanisms. PMID:25948866
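    A minimal sketch of the sliding-window multiple-regression decoder with hypothetical array shapes (the study used 15-fold cross-validation rather than the in-sample fit shown here):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def decode(spikes, force, win=10):
        """Predict instantaneous force (or torque) from the last `win` bins
        of spike counts of every afferent.

        spikes: (n_bins, n_afferents) binned spike counts
        force:  (n_bins,) simultaneously recorded stimulus parameter
        """
        n_bins = spikes.shape[0]
        # Each row of the design matrix concatenates one window of counts.
        X = np.stack([spikes[t - win:t].ravel() for t in range(win, n_bins)])
        y = force[win:]
        model = LinearRegression().fit(X, y)
        return model.predict(X)           # continuous estimate over time
    ```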

  16. The application of parameter estimation to flight measurements to obtain lateral-directional stability derivatives of an augmented jet-flap STOL airplane

    NASA Technical Reports Server (NTRS)

    Stephenson, J. D.

    1983-01-01

    Flight experiments with an augmented jet flap STOL aircraft provided data from which the lateral directional stability and control derivatives were calculated by applying a linear regression parameter estimation procedure. The tests, which were conducted with the jet flaps set at a 65 deg deflection, covered a large range of angles of attack and engine power settings. The effect of changing the angle of the jet thrust vector was also investigated. Test results are compared with stability derivatives that had been predicted. The roll damping derived from the tests was significantly larger than had been predicted, whereas the other derivatives were generally in agreement with the predictions. Results obtained using a maximum likelihood estimation procedure are compared with those from the linear regression solutions.

  17. Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wildenschild, D; Berge, P A; Berryman, K G

    1999-01-15

    The authors obtained good estimates of the measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), with sand and porous peat representing the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Additional refinement of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.

  18. Aerodynamic parameters of High-Angle-of attack Research Vehicle (HARV) estimated from flight data

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Ratvasky, Thomas R.; Cobleigh, Brent R.

    1990-01-01

    Aerodynamic parameters of the High-Angle-of-Attack Research Aircraft (HARV) were estimated from flight data at different values of the angle of attack between 10 degrees and 50 degrees. The main part of the data was obtained from small amplitude longitudinal and lateral maneuvers. A small number of large amplitude maneuvers was also used in the estimation. The measured data were first checked for their compatibility. It was found that the accuracy of air data was degraded by unexplained bias errors. Then, the data were analyzed by a stepwise regression method for obtaining a structure of aerodynamic model equations and least squares parameter estimates. Because of high data collinearity in several maneuvers, some of the longitudinal and all lateral maneuvers were reanalyzed by using two biased estimation techniques, the principal components regression and mixed estimation. The estimated parameters in the form of stability and control derivatives, and aerodynamic coefficients were plotted against the angle of attack and compared with the wind tunnel measurements. The influential parameters are, in general, estimated with acceptable accuracy and most of them are in agreement with wind tunnel results. The simulated responses of the aircraft showed good prediction capabilities of the resulting model.

  19. Quantum state tomography and fidelity estimation via Phaselift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yiping; Liu, Huan; Zhao, Qing, E-mail: qzhaoyuping@bit.edu.cn

    Experiments of multi-photon entanglement have been performed by several groups. Obviously, an increase in the photon number for fidelity estimation and quantum state tomography causes a dramatic increase in the number of elements of the positive operator valued measures (POVMs), which results in a great consumption of time in measurements. In practice, we wish to obtain a good estimation of fidelity and quantum states through as few measurements as possible for multi-photon entanglement. Phaselift provides such a chance to estimate fidelity for entangled states based on less data. In this paper, we would like to show how Phaselift works for six qubits in comparison to the data given by Pan’s group, i.e., we use a fraction of the data as input to estimate the rest of the data through the obtained density matrix, and thus go beyond a simple fidelity analysis. The fidelity bound is also provided for a general Schrödinger cat state. Based on the fidelity bound, we propose an optimal measurement approach that could both reduce the number of copies and keep the fidelity bound gap small. The results demonstrate that Phaselift can help decrease the number of measured elements of the POVMs for six qubits. Our conclusion is based on the prior knowledge that a pure state is the target state prepared by the experiments.

  20. Good practices in free-energy calculations.

    PubMed

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christophe

    2010-08-19

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in a wide range of research areas. Yet, the reliability of these calculations can often be improved significantly if a number of precepts, or good practices, are followed. Although the theory upon which these good practices rely has largely been known for many years, it is often overlooked or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. In this contribution, the current best practices for carrying out free-energy calculations using free energy perturbation and nonequilibrium work methods are discussed, demonstrating that at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. Monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway, and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision.
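    For reference, the free energy perturbation identity that these precepts govern is

    ```latex
    \Delta A_{0 \to 1} \;=\; -\,k_{\mathrm{B}}T \,
    \ln \Bigl\langle \exp\!\bigl[-(U_1 - U_0)/k_{\mathrm{B}}T\bigr] \Bigr\rangle_{0}
    ```

    with the ensemble average taken in reference state 0; the good practices above amount to monitoring the distribution of U1 - U0, evaluating the transformation in both directions, and stratifying the path into intermediate states so that neighboring distributions overlap.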

  1. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  2. Comparative analysis on the probability of being a good payer

    NASA Astrophysics Data System (ADS)

    Mihova, V.; Pavlov, V.

    2017-10-01

    Credit risk assessment is crucial for the banking industry. Current practice uses various approaches for the calculation of credit risk. The core of these approaches is the use of multiple regression models, applied in order to assess the risk associated with the approval of people applying for certain products (loans, credit cards, etc.). Based on data from the past, these models try to predict what will happen in the future. Different data require different types of models. This work studies the causal link between the conduct of applicants during loan repayment and the data they provided at the time of application. A database of 100 borrowers from a commercial bank is used for the purposes of the study. The available data include information from the time of application and the credit history while paying off the loan. Customers are divided into two groups based on their credit history: Good and Bad payers. Linear and logistic regression are applied in parallel to the data in order to estimate the probability that a new borrower will be a good payer. A variable, which takes the value 1 for Good borrowers and 0 for Bad ones, is modeled as the dependent variable. To decide which of the variables listed in the database should be used in the modelling process (as independent variables), a correlation analysis is made. Based on its results, several combinations of independent variables are tested as initial models - both with linear and logistic regression. The best linear and logistic models are obtained after an initial transformation of the data and following a set of standard and robust statistical criteria. A comparative analysis between the two final models is made, and scorecards are obtained from both models to assess new customers at the time of application. A cut-off level of points, below which applications are rejected and above which they are accepted, has been suggested for both models, applying the strategy to keep the same Accept Rate as
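    A compact sketch of the logistic branch with placeholder data; the 100-point scaling and the cut-off value are assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical application-time features (income, age, loan size, ...)
    rng = np.random.default_rng(0)
    X = rng.random((100, 4))                      # placeholder applicant data
    y = (rng.random(100) > 0.3).astype(int)       # 1 = Good payer, 0 = Bad

    clf = LogisticRegression().fit(X, y)
    p_good = clf.predict_proba(X)[:, 1]           # probability of being good

    # Scorecard-style points and a cut-off rule for new applications:
    points = np.round(100 * p_good).astype(int)
    cut_off = 60                                  # assumed threshold
    decision = np.where(points >= cut_off, "accept", "reject")
    ```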

  3. Climate sensitivity uncertainty: when is good news bad?

    PubMed

    Freeman, Mark C; Wagner, Gernot; Zeckhauser, Richard J

    2015-11-28

    Climate change is real and dangerous. Exactly how bad it will get, however, is uncertain. Uncertainty is particularly relevant for estimates of one of the key parameters: equilibrium climate sensitivity--how eventual temperatures will react as atmospheric carbon dioxide concentrations double. Despite significant advances in climate science and increased confidence in the accuracy of the range itself, the 'likely' range has been 1.5-4.5°C for over three decades. In 2007, the Intergovernmental Panel on Climate Change (IPCC) narrowed it to 2-4.5°C, only to reverse its decision in 2013, reinstating the prior range. In addition, the 2013 IPCC report removed prior mention of 3°C as the 'best estimate'. We interpret the implications of the 2013 IPCC decision to lower the bottom of the range and excise a best estimate. Intuitively, it might seem that a lower bottom would be good news. Here we ask: when might apparently good news about climate sensitivity in fact be bad news in the sense that it lowers societal well-being? The lowered bottom value also implies higher uncertainty about the temperature increase, definitely bad news. Under reasonable assumptions, both the lowering of the lower bound and the removal of the 'best estimate' may well be bad news. © 2015 The Author(s).

  4. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
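
    To make the iterative solution step concrete, here is a minimal sketch of the symmetric Gauss-Seidel sweeps applied to a generic symmetric positive definite system A e = r; the 1-D Laplacian below is a stand-in, not the paper's hierarchical-basis error problem.

    # Symmetric Gauss-Seidel: one forward and one backward sweep per iteration,
    # used to approximate the global error problem instead of an exact solve.
    import numpy as np

    def symmetric_gauss_seidel(A, r, sweeps=3):
        n = len(r)
        e = np.zeros(n)
        for _ in range(sweeps):
            for i in range(n):            # forward sweep
                e[i] = (r[i] - A[i, :i] @ e[:i] - A[i, i+1:] @ e[i+1:]) / A[i, i]
            for i in reversed(range(n)):  # backward sweep
                e[i] = (r[i] - A[i, :i] @ e[:i] - A[i, i+1:] @ e[i+1:]) / A[i, i]
        return e

    # SPD test system: a 1-D Laplacian standing in for the error problem.
    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    r = np.ones(n)
    e_approx = symmetric_gauss_seidel(A, r, sweeps=3)
    e_exact = np.linalg.solve(A, r)
    print("relative error after 3 sweeps:",
          np.linalg.norm(e_approx - e_exact) / np.linalg.norm(e_exact))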

  5. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching overpredicted peak and cumulative extension moment (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with posture matching or EMG-assisted approaches (p = 0.7987). Posture matching overpredicted cumulative (p < 0.0001) compressive loading due to a bias in standing; however, individualized bias correction eliminated these differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided the necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs

    PubMed Central

    Valenti, Roberto G.; Dryanovski, Ivan; Xiao, Jizhong

    2015-01-01

    Orientation estimation using low cost sensors is an important task for Micro Aerial Vehicles (MAVs) in order to obtain a good feedback for the attitude controller. The challenges come from the low accuracy and noisy data of the MicroElectroMechanical System (MEMS) technology, which is the basis of modern, miniaturized inertial sensors. In this article, we describe a novel approach to obtain an estimation of the orientation in quaternion form from the observations of gravity and magnetic field. Our approach provides a quaternion estimation as the algebraic solution of a system from inertial/magnetic observations. We separate the problems of finding the “tilt” quaternion and the heading quaternion in two sub-parts of our system. This procedure is the key for avoiding the impact of the magnetic disturbances on the roll and pitch components of the orientation when the sensor is surrounded by unwanted magnetic flux. We demonstrate the validity of our method first analytically and then empirically using simulated data. We propose a novel complementary filter for MAVs that fuses together gyroscope data with accelerometer and magnetic field readings. The correction part of the filter is based on the method described above and works for both IMU (Inertial Measurement Unit) and MARG (Magnetic, Angular Rate, and Gravity) sensors. We evaluate the effectiveness of the filter and show that it significantly outperforms other common methods, using publicly available datasets with ground-truth data recorded during a real flight experiment of a micro quadrotor helicopter. PMID:26258778
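
    As an illustration of the algebraic tilt estimation, the following hedged sketch computes a quaternion that rotates a normalized accelerometer (gravity) reading onto the global +z axis, which recovers roll and pitch independently of the magnetometer. Only the az > -1 case is handled, and sign conventions vary between implementations, so this is schematic rather than the paper's exact formula.

    # Tilt quaternion from a normalized gravity measurement (w, x, y, z order).
    import numpy as np

    def tilt_quaternion(ax, ay, az):
        """Quaternion rotating the measured gravity direction onto +z."""
        s = np.sqrt(2.0 * (az + 1.0))          # singular as az -> -1
        return np.array([s / 2.0, ay / s, -ax / s, 0.0])

    def rotate(q, v):
        """Rotate vector v by unit quaternion q (standard rotation matrix)."""
        w, x, y, z = q
        r = np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                      [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                      [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])
        return r @ v

    a = np.array([0.2, -0.1, 0.97])
    a /= np.linalg.norm(a)                     # normalized gravity measurement
    q = tilt_quaternion(*a)
    print(rotate(q, a))                        # ~ [0, 0, 1]: roll/pitch recovered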

  7. A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.

    PubMed

    Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena

    2012-08-01

    A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R² ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R² = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R² ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculation or modelling technique, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.

  8. On the estimation of sound speed in two-dimensional Yukawa fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semenov, I. L., E-mail: Igor.Semenov@dlr.de; Thomas, H. M.; Khrapak, S. A.

    2015-11-15

    The longitudinal sound speed in two-dimensional Yukawa fluids is estimated using the conventional hydrodynamic expression supplemented by appropriate thermodynamic functions proposed recently by Khrapak et al. [Phys. Plasmas 22, 083706 (2015)]. In contrast to the existing approaches, such as the quasi-localized charge approximation (QLCA) and molecular dynamics simulations, our model provides a relatively simple estimate for the sound speed over a wide range of parameters of interest. At strong coupling, our results are shown to be in good agreement with the results obtained using the QLCA approach and those derived from the phonon spectrum for the triangular lattice. On the other hand, our model is also expected to remain accurate at moderate values of the coupling strength. In addition, the obtained results are used to discuss the influence of the strong coupling effects on the adiabatic index of two-dimensional Yukawa fluids.

  9. A GRASS GIS module to obtain an estimation of glacier behavior under climate change: A pilot study on Italian glacier

    NASA Astrophysics Data System (ADS)

    Strigaro, Daniele; Moretti, Massimiliano; Mattavelli, Matteo; Frigerio, Ivan; Amicis, Mattia De; Maggi, Valter

    2016-09-01

    The aim of this work is to integrate the Minimal Glacier Model in a Geographic Information System Python module in order to obtain spatial simulations of glacier retreat and to assess future scenarios with a spatial representation. Minimal Glacier Models are a simple yet effective way of estimating glacier response to climate fluctuations. This module can be useful for the scientific and glaciological community in order to evaluate glacier behavior driven by climate forcing. The module, called r.glacio.model, is developed in a GRASS GIS (GRASS Development Team, 2016) environment using the Python programming language combined with different libraries such as GDAL, OGR, CSV, math, etc. The module is applied and validated on the Rutor glacier, a glacier in the south-western region of the Italian Alps. This glacier is large and features rather regular and lively dynamics. The simulation is calibrated by reconstructing the 3-dimensional flow line dynamics and analyzing the difference between the simulated flow line length variations and the observed glacier fronts derived from orthophotos and DEMs. These simulations are driven by the past mass balance record. Afterwards, the future assessment is estimated by using climatic drivers provided by a set of General Circulation Models participating in the Climate Model Inter-comparison Project 5 effort. The approach devised in r.glacio.model can be applied to most alpine glaciers to obtain a first-order spatial representation of glacier behavior under climate change.

  10. Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.

    PubMed

    Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A

    2018-02-01

    We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.

  11. Estimation of wear in total hip replacement using a ten station hip simulator.

    PubMed

    Brummitt, K; Hardaker, C S

    1996-01-01

    The results of hip simulator tests on a total of 16 total hip joints, all of them 22.25 mm Charnley designs, are presented. Wear at up to 6.75 million cycles was assessed by using a coordinate measuring machine. The results gave good agreement with clinical estimates of wear rate on the same design of joint replacement from a number of sources. Good agreement was also obtained when comparison was made with the published results from more sophisticated simulators. The major source of variation in the results was found to occur in the first million cycles where creep predominates. The results of this study support the use of this type of simplified simulator for estimating wear in a total hip prosthesis. The capability to test a significant number of joints simultaneously may make this mechanism preferable to more complex machines in many cases.

  12. Spectral estimation for characterization of acoustic aberration.

    PubMed

    Varslot, Trond; Angelsen, Bjørn; Waag, Robert C

    2004-07-01

    Spectral estimation based on acoustic backscatter from a motionless stochastic medium is described for characterization of aberration in ultrasonic imaging. The underlying assumptions for the estimation are: The correlation length of the medium is short compared to the length of the transmitted acoustic pulse, an isoplanatic region of sufficient size exists around the focal point, and the backscatter can be modeled as an ergodic stochastic process. The motivation for this work is ultrasonic imaging with aberration correction. Measurements were performed using a two-dimensional array system with 80 x 80 transducer elements and an element pitch of 0.6 mm. The f number for the measurements was 1.2 and the center frequency was 3.0 MHz with a 53% bandwidth. Relative phase of aberration was extracted from estimated cross spectra using a robust least-mean-square-error method based on an orthogonal expansion of the phase differences of neighboring wave forms as a function of frequency. Estimates of cross-spectrum phase from measurements of random scattering through a tissue-mimicking aberrator have confidence bands approximately +/- 5 degrees wide. Both phase and magnitude are in good agreement with a reference characterization obtained from a point scatterer.

  13. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
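
    The back end of such a pipeline can be sketched as follows. This is a simplified, hedged illustration: a Butterworth band-pass stands in for the EEMD step that isolates the pulsatile component, and the adaptive threshold and physiological rules use invented parameter values, not the paper's tuned settings.

    # Isolate a pulsatile band, then detect beats with physiological constraints.
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    fs = 100.0                                    # sampling rate (Hz), assumed
    t = np.arange(0, 30, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # ~72 bpm + noise

    b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")   # 42-180 bpm band
    pulsatile = filtfilt(b, a, ppg)

    # Adaptive threshold: a fraction of a high-percentile amplitude estimate;
    # the refractory distance enforces a 180 bpm ceiling.
    height = 0.5 * np.percentile(np.abs(pulsatile), 90)
    peaks, _ = find_peaks(pulsatile, height=height, distance=int(fs * 60 / 180))

    ibi = np.diff(peaks) / fs                     # inter-beat intervals (s)
    ibi = ibi[(ibi > 60 / 180) & (ibi < 60 / 40)] # keep plausible beats only
    print(f"estimated pulse rate: {60.0 / ibi.mean():.1f} bpm")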

  14. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
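
    The kind of calculation such a model implies can be sketched as follows: a horizontal gaze angle from the offset of the midpupil relative to the midpoint of the eye corners, with the eyeball radius scaled from the corner distance by an anthropometric ratio. The 0.56 ratio and the midpoint assumption are illustrative stand-ins, not the paper's exact parameters.

    # Hedged geometric sketch: gaze angle = asin(pupil offset / eyeball radius).
    import math

    def gaze_angle_deg(corner_left_x, corner_right_x, midpupil_x,
                       radius_ratio=0.56):
        width = corner_right_x - corner_left_x      # eye-corner distance (px)
        radius = radius_ratio * width               # eyeball radius estimate (px)
        offset = midpupil_x - 0.5 * (corner_left_x + corner_right_x)
        return math.degrees(math.asin(max(-1.0, min(1.0, offset / radius))))

    # Pupil displaced 5 px towards the right corner of a 30 px wide eye:
    print(f"{gaze_angle_deg(100.0, 130.0, 120.0):+.1f} deg")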

  15. On the role of dimensionality and sample size for unstructured and structured covariance matrix estimation

    NASA Technical Reports Server (NTRS)

    Morgera, S. D.; Cooper, D. B.

    1976-01-01

    The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates as obtained by a recursive stochastic algorithm of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.

  16. Delay, probability, and social discounting in a public goods game.

    PubMed

    Jones, Bryan A; Rachlin, Howard

    2009-01-01

    A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual social, delay, and probability discount functions for a hypothetical $75 reward; participants also indicated how much of an initial $100 endowment they would contribute to a common investment in a public good. Steepness of discounting correlated, across participants, among all three discount dimensions. However, only social and probability discounting were correlated with the public-good contribution; high public-good contributors were more altruistic and also less risk averse than low contributors. Experiment 2 obtained social discount functions with hypothetical $75 rewards and delay discount functions with hypothetical $1,000 rewards, as well as public-good contributions. The results replicated those of Experiment 1; steepness of the two forms of discounting correlated with each other across participants but only social discounting correlated with the public-good contribution. Most participants in Experiment 2 predicted that the average contribution would be lower than their own contribution.
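
    All three discount functions share the hyperbolic form v = V/(1 + kX), where X is delay, odds-against, or social distance. As a hedged sketch, the following fits the social version with scipy; the crossover values below are fabricated for illustration, not data from the experiments.

    # Fit the hyperbolic discount function v = V / (1 + k*X) to toy data.
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(x, k):
        V = 75.0                       # undiscounted reward ($), as in Experiment 1
        return V / (1.0 + k * x)

    social_distance = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
    crossover_value = np.array([70, 65, 55, 45, 30, 15, 8], dtype=float)  # fabricated

    (k,), _ = curve_fit(hyperbolic, social_distance, crossover_value, p0=[0.05])
    print(f"fitted social discount rate k = {k:.3f}")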

  17. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    The fraction of missing information λ_j is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  18. A method to estimate stellar ages from kinematical data

    NASA Astrophysics Data System (ADS)

    Almeida-Fernandes, F.; Rocha-Pinto, H. J.

    2018-05-01

    We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS), which contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the stellar age PDF for 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for the estimation of rough stellar ages for those stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, an M8V star, obtaining an age of t(UVW) = 12.50 (+0.29/-6.23) Gyr.

  19. Estimation of rainfall using remote sensing for Riyadh climate, KSA

    NASA Astrophysics Data System (ADS)

    AlHassoun, Saleh A.

    2013-05-01

    Rainfall data constitute an important parameter for studying water resources-related problems. Remote sensing techniques can provide a rapid and comprehensive overview of the rainfall distribution in a given area. Thus, infrared data from the Landsat satellite, in conjunction with the Scofield-Oliver method, were used to monitor and model rainfall in the Riyadh area as representative of any area in the Kingdom of Saudi Arabia (KSA). Four convective clouds that covered two rain gage stations were analyzed. Good estimates of rainfall were obtained from the satellite images. The results showed that the satellite rainfall estimates were well correlated with rain gage measurements. The satellite climate data appear to be useful for monitoring and modeling rainfall in any area where no rain gage is available.

  20. Application of cokriging techniques for the estimation of hail size

    NASA Astrophysics Data System (ADS)

    Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier

    2018-01-01

    There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above mentioned techniques in an attempt to minimise the limitations and take advantage of the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason, it constitutes an optimal test bench.

  1. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons and analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  2. "Inclusive Working Life" in Norway--experience from "Models of Good Practice" enterprises.

    PubMed

    Lie, Arve

    2008-08-01

    To determine whether enterprises belonging to the Bank of Models of Good Practice were more successful than average Norwegian enterprises in the reduction of sickness absence, promotion of early return to work, and prevention of early retirement. In 2004 we selected 86 enterprises with a total of approximately 90000 employees from the Inclusive Working Life (IWL) Bank of Models of Good Practice. One representative of workers and one of management from each enterprise received a questionnaire by mail on the aims, organization, and results of the IWL program. Data on sickness absence, use of early retirement, and disability retirement in the 2000-2004 period were collected from the National Insurance Registry. Data on comparable enterprises were obtained from the National Bureau of Statistics. The response rate was 65%. Although the IWL campaign was directed at reducing sickness absence, preventing early retirement, and promoting employment of the functionally impaired, most attention was paid to reducing sickness absence. The sickness absence rate in Models of Good Practice enterprises (8.2%) was higher than in comparable enterprises that were not part of the Models of Good Practice (6.9%). Implementation of many IWL activities, empowerment and involvement of employees, and good cooperation with the occupational health service were associated with a lower rate of sickness absence. On average, 0.7% of new employees per year received a disability pension, which is a significantly lower percentage than expected on the basis of the rate of 1.3% per year in comparable enterprises. Frequent use of disability pensioning was associated with a high rate of sickness absence and having many employees older than 50 years. On average, 0.4% of employees per year received early retirement compensation, in line with expectations based on national estimates. Frequent use of early retirement was associated with having many employees older than 50 years. Models of Good Practice enterprises had

  3. Incident CTS in a large pooled cohort study: associations obtained by a Job Exposure Matrix versus associations obtained from observed exposures.

    PubMed

    Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A

    2018-03-29

    There is growing use of a job exposure matrix (JEM) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM with those obtained using other exposure measures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (Occupational Network, O*NET) and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures of force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided similar exposure-disease associations for CTS when compared with associations obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Application of remote sensing in estimating evapotranspiration in the Platte river basin

    NASA Technical Reports Server (NTRS)

    Blad, B. L.; Rosenberg, N. J.

    1976-01-01

    A 'resistance model' and a mass transport model for estimating evapotranspiration (ET) were tested on large fields of naturally subirrigated alfalfa. Both models make use of crop canopy temperature data. Temperature data were obtained with an IR thermometer and with leaf thermocouples. A Bowen ratio-energy balance (BREB) model, adjusted to account for underestimation of ET during periods of strong sensible heat advection, was used as the standard against which the resistance and mass transport models were compared. Daily estimates by the resistance model were within 10% of estimates made by the BREB model. Daily estimates by the mass transport model did not agree quite as well. Performance was good on clear and cloudy days and also during periods of non-advection and strong advection of sensible heat. The performance of the mass transport and resistance models was less satisfactory for estimation of fluxes of latent heat for short term periods. Both models tended to overestimate at low LE fluxes.

  5. Comparison of precipitable water vapor measurements obtained by microwave radiometry and radiosondes at the Southern Great ...

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lesht, B.M.; Liljegren, J.C.

    1996-12-31

    Comparisons between the precipitable water vapor (PWV) estimated by passive microwave radiometers (MWRs) and that obtained by integrating the vertical profile of water vapor density measured by radiosondes (BBSS) have generally shown good agreement. These comparisons, however, have usually been done over rather short time periods and consequently within limited ranges of total PWV and with limited numbers of radiosondes. We have been making regular comparisons between MWR and BBSS estimates of PWV at the Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site since late 1992 as part of an ongoing quality measurement experiment (QME). This suite of comparisons spans three annual cycles and a relatively wide range of total PWV amounts. Our findings show that although for the most part the agreement is excellent, differences between the two measurements occur. These differences may be related to the MWR retrieval of PWV and to calibration variations between radiosonde batches.

  6. Developing a new solar radiation estimation model based on Buckingham theorem

    NASA Astrophysics Data System (ADS)

    Ekici, Can; Teke, Ismail

    2018-06-01

    While the value of solar radiation can be expressed physically on cloudless days, this becomes difficult in cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used; these models predict solar radiation from other measured meteorological parameters that are available at the stations. In this study, a solar radiation estimation model was obtained using the Buckingham theorem, which has been shown to be useful in predicting solar radiation. Here the theorem is used to express solar radiation through the derivation of dimensionless pi parameters. The derived model is compared with temperature-based models in the literature, using the MPE, RMSE, MBE and NSE error analysis methods. The Allen, Hargreaves, Chen and Bristow-Campbell models from the literature are used for comparison, with meteorological data obtained from North Dakota's agricultural climate network. In these applications, the model obtained within the scope of the study gives better results, particularly in terms of short-term performance, and shows better accuracy than the other models, as the RMSE results indicate. In terms of long-term performance and percentage errors, the model also gives good results. The Buckingham theorem was thus found useful in estimating solar radiation.
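
    The four error statistics used in the comparison can be computed as follows. The definitions are the standard ones; the study's exact scaling conventions (e.g., for MPE) may differ.

    # MBE, RMSE, MPE and Nash-Sutcliffe efficiency for a model comparison.
    import numpy as np

    def error_metrics(observed, predicted):
        o, p = np.asarray(observed, float), np.asarray(predicted, float)
        mbe = np.mean(p - o)                          # mean bias error
        rmse = np.sqrt(np.mean((p - o) ** 2))         # root mean square error
        mpe = 100.0 * np.mean((p - o) / o)            # mean percentage error
        nse = 1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2)
        return {"MBE": mbe, "RMSE": rmse, "MPE": mpe, "NSE": nse}

    obs = [18.2, 21.5, 25.1, 19.8, 23.4]   # e.g. measured daily radiation (MJ/m^2)
    pred = [17.9, 22.3, 24.2, 20.5, 23.0]
    print(error_metrics(obs, pred))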

  7. Good Practices in Free-energy Calculations

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations are reviewed, demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve the efficiency and accuracy of free-energy calculations without incurring any additional computational expense.
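
    As a concrete example of one practice discussed here, bidirectional free-energy perturbation with exponential averaging, the following hedged sketch uses synthetic Gaussian energy differences for which the exact answer is known; it illustrates the estimator itself, not any particular simulation package.

    # Zwanzig (exponential averaging) estimator run in both directions.
    # For Gaussian dU in the forward ensemble, the exact dF is mu - beta*sigma^2/2.
    import numpy as np

    rng = np.random.default_rng(1)
    beta = 1.0                                   # 1/kT in reduced units
    mu, sigma = 2.0, 1.0
    dU_fwd = rng.normal(mu, sigma, 5000)         # U1 - U0 sampled in state 0
    dU_rev = rng.normal(-mu + beta * sigma**2, sigma, 5000)  # U0 - U1 in state 1

    def fep(dU, beta):
        """Zwanzig estimator: dF = -kT ln < exp(-beta dU) >."""
        return -np.log(np.mean(np.exp(-beta * dU))) / beta

    dF_fwd = fep(dU_fwd, beta)
    dF_rev = -fep(dU_rev, beta)                  # reverse leg, sign flipped
    print(f"forward {dF_fwd:.3f}, reverse {dF_rev:.3f}, "
          f"exact {mu - beta * sigma**2 / 2:.3f}")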

  8. How good is "very good"? Translation effect in the racial/ethnic variation in self-rated health status.

    PubMed

    Seo, Sukyong; Chung, Sukyung; Shumway, Martha

    2014-03-01

    To examine the influence of translation when measuring and comparing self-rated health (SRH) measured with five response categories (excellent, very good, good, fair, and poor), across racial/ethnic groups. Using data from the California Health Interview Survey, which were administered in five languages, we analyzed variations in the five-category SRH across five racial/ethnic groups: non-Hispanic white, Latino, Chinese, Vietnamese, and Korean. Logistic regression was used to estimate independent effects of race/ethnicity, culture, and translation on SRH, after controlling for risk factors and other measures of health status. Latinos, Chinese, Vietnamese, and Koreans were less likely than non-Hispanic whites to rate their health as excellent or very good and more likely to rate it as good, fair, or poor. This racial/ethnic difference diminished when adjusting for acculturation. Independently of race/ethnicity, respondents using non-English surveys were less likely to answer excellent (OR = 0.24-0.55) and very good (OR = 0.30-0.34) and were more likely to answer fair (OR = 2.48-4.10) or poor (OR = 2.87-3.51), even after controlling for other measures of SRH. Responses to the five-category SRH question depend on interview language. When responding in Spanish, Chinese, Korean, or Vietnamese, respondents are more likely to choose a lower level SRH category, "fair" in particular. If each SRH category measured in different languages is treated as equivalent, racial/ethnic disparities in SRH among Latinos and Asian subgroups, as compared to non-Hispanic whites, may be exaggerated.

  9. Comparison of Sun-Induced Chlorophyll Fluorescence Estimates Obtained from Four Portable Field Spectroradiometers

    NASA Technical Reports Server (NTRS)

    Julitta, Tommaso; Corp, Lawrence A.; Rossini, Micol; Burkart, Andreas; Cogliati, Sergio; Davies, Neville; Hom, Milton; Mac Arthur, Alasdair; Middleton, Elizabeth M.; Rascher, Uwe

    2016-01-01

    Remote sensing of Sun-Induced Chlorophyll Fluorescence (SIF) is a research field of growing interest because it offers the potential to quantify actual photosynthesis and to monitor plant status. New satellite missions from the European Space Agency, such as the Earth Explorer 8 FLuorescence EXplorer (FLEX) mission, scheduled to launch in 2022 and aiming at SIF mapping, and from the National Aeronautics and Space Administration (NASA), such as the Orbiting Carbon Observatory-2 (OCO-2) sampling mission launched in July 2014, provide the capability to estimate SIF from space. The detection of the SIF signal from airborne and satellite platforms is difficult, and reliable ground-level data are needed for calibration/validation. Several commercially available spectroradiometers are currently used to retrieve SIF in the field. This study presents a comparison exercise for evaluating the capability of four spectroradiometers to retrieve SIF. The results show that an accurate far-red SIF estimation can be achieved using spectroradiometers with an ultrafine resolution (less than 1 nm), while red SIF estimation requires even higher spectral resolution (less than 0.5 nm). Moreover, it is shown that the Signal to Noise Ratio (SNR) plays a significant role in the precision of the far-red SIF measurements.

  10. Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.

    PubMed

    Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R

    2018-06-01

    Nonparametric regression models do not require the specification of the functional form between the outcome and the covariates. Despite their popularity, the number of diagnostic statistics available for them, in comparison to their parametric counterparts, is small. We propose a goodness-of-fit test for nonparametric regression models with linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack of fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence is assessed between the estimated residuals and the covariates using the Hilbert-Schmidt Independence Criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
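
    The dependence check at the core of the test can be sketched with a biased HSIC estimate between residuals and a covariate, using Gaussian kernels. The smoothing-spline fit and the bootstrap calibration of p-values are omitted, and the bandwidth choice here is a simple placeholder.

    # Biased HSIC estimate: trace(K H L H) / (n-1)^2 with centered Gram matrices.
    import numpy as np

    def gaussian_gram(x, bandwidth):
        d2 = (x[:, None] - x[None, :]) ** 2
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    def hsic(x, y):
        n = len(x)
        K = gaussian_gram(x, np.std(x) + 1e-12)
        L = gaussian_gram(y, np.std(y) + 1e-12)
        H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    rng = np.random.default_rng(0)
    z = rng.uniform(-2, 2, 200)
    residuals_bad = np.sin(z) + 0.1 * rng.normal(size=200)   # model missed sin(z)
    residuals_good = 0.1 * rng.normal(size=200)              # model fits well
    print(hsic(residuals_bad, z), hsic(residuals_good, z))   # first is larger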

  11. Robust ridge regression estimators for nonlinear models with applications to high throughput screening assay data.

    PubMed

    Lim, Changwon

    2015-03-30

    Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Which response format reveals the truth about donations to a public good?

    Treesearch

    Thomas C. Brown; Patricia A. Champ; Richard C. Bishop; Daniel W. McCollum

    1996-01-01

    Several contingent valuation studies have found that the open-ended format yields lower estimates of willingness to pay (WTP) than does the closed-ended, or dichotomous choice, format. In this study, WTP for a public environmental good was estimated under four conditions: actual payment in response to open-ended and closed-ended requests, and hypothetical payment in...

  13. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We propose a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extract a number of raw patches from a given noisy image and take the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate is obtained directly with a nonlinear mapping (rectification) function trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm reliably infers the noise level and has robust performance over a wide range of image contents and noise levels, offering a good compromise between speed and accuracy in general.
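
    The first (PCA) step is easy to sketch: the smallest eigenvalue of the covariance matrix of random patches approximates the noise variance. The trained nonlinear rectification stage is omitted below, so this reproduces only the preliminary estimate; the patch size and patch count are illustrative.

    # Preliminary noise level from the smallest patch-covariance eigenvalue.
    import numpy as np

    def preliminary_noise_level(image, patch=7, n_patches=2000, seed=0):
        rng = np.random.default_rng(seed)
        h, w = image.shape
        rows = rng.integers(0, h - patch, n_patches)
        cols = rng.integers(0, w - patch, n_patches)
        patches = np.stack([image[r:r + patch, c:c + patch].ravel()
                            for r, c in zip(rows, cols)])
        cov = np.cov(patches, rowvar=False)
        return np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))  # smallest eigenvalue

    clean = np.tile(np.linspace(0, 255, 128), (128, 1))       # smooth synthetic image
    noisy = clean + np.random.default_rng(1).normal(0, 10, clean.shape)
    print(f"estimated sigma ~ {preliminary_noise_level(noisy):.1f} (true 10)")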

  14. Ice Cloud Optical Thickness and Extinction Estimates from Radar Measurements.

    NASA Astrophysics Data System (ADS)

    Matrosov, Sergey Y.; Shupe, Matthew D.; Heymsfield, Andrew J.; Zuidema, Paquita

    2003-11-01

    A remote sensing method is proposed to derive vertical profiles of the visible extinction coefficients in ice clouds from measurements of the radar reflectivity and Doppler velocity taken by a vertically pointing 35-GHz cloud radar. The extinction coefficient and its vertical integral, optical thickness τ, are among the fundamental cloud optical parameters that, to a large extent, determine the radiative impact of clouds. The results obtained with this method could be used as input for different climate and radiation models and for comparisons with parameterizations that relate cloud microphysical parameters and optical properties. An important advantage of the proposed method is its potential applicability to multicloud situations and mixed-phase conditions. In the latter case, it might be able to provide the information on the ice component of mixed-phase clouds if the radar moments are dominated by this component. The uncertainties of radar-based retrievals of cloud visible optical thickness are estimated by comparing retrieval results with optical thicknesses obtained independently from radiometric measurements during the yearlong Surface Heat Budget of the Arctic Ocean (SHEBA) field experiment. The radiometric measurements provide a robust way to estimate τ but are applicable only to optically thin ice clouds without intervening liquid layers. The comparisons of cloud optical thicknesses retrieved from radar and from radiometer measurements indicate an uncertainty of about 77% and a bias of about -14% in the radar estimates of τ relative to radiometric retrievals. One possible explanation of the negative bias is an inherently low sensitivity of radar measurements to smaller cloud particles that still contribute noticeably to the cloud extinction. This estimate of the uncertainty is in line with simple theoretical considerations, and the associated retrieval accuracy should be considered good for a nonoptical instrument, such as radar. This paper also

  15. Age estimation using exfoliative cytology and radiovisiography: A comparative study.

    PubMed

    Nallamala, Shilpa; Guttikonda, Venkateswara Rao; Manchikatla, Praveen Kumar; Taneeru, Sravya

    2017-01-01

    Age estimation is one of the essential factors in establishing the identity of an individual. Among various methods, exfoliative cytology (EC) is a unique, noninvasive technique involving simple and pain-free collection of intact cells from the oral cavity for microscopic examination. The study was undertaken with the aim of estimating the age of an individual from the average cell size of their buccal smears, calculated using image analysis morphometric software, and from the pulp-tooth area ratio in the mandibular canine of the same individual using radiovisiography (RVG). Buccal smears were collected from 100 apparently healthy individuals. After fixation in 95% alcohol, the smears were stained using Papanicolaou stain. The average cell size was measured using image analysis software (Image-Pro Insight 8.0). The RVG images of mandibular canines were obtained, pulp and tooth areas were traced using AutoCAD 2010 software, and the area ratio was calculated. The estimated age was then calculated using regression analysis. The paired t-test between chronological age and estimated age by cell size and pulp-tooth area ratio was statistically nonsignificant (P > 0.05). In the present study, age estimated by pulp-tooth area ratio and EC yielded good results.

  16. Estimating linear-nonlinear models using Renyi divergences.

    PubMed

    Kouh, Minjoon; Sharpee, Tatyana O

    2009-01-01

    This article compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramer-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data.

  17. An improved nonparametric lower bound of species richness via a modified good-turing frequency formula.

    PubMed

    Chiu, Chun-Huo; Wang, Yi-Ting; Walther, Bruno A; Chao, Anne

    2014-09-01

    It is difficult to accurately estimate species richness if there are many almost undetectable species in a hyper-diverse community. Practically, an accurate lower bound for species richness is preferable to an inaccurate point estimator. The traditional nonparametric lower bound developed by Chao (1984, Scandinavian Journal of Statistics 11, 265-270) for individual-based abundance data uses only the information on the rarest species (the numbers of singletons and doubletons) to estimate the number of undetected species in samples. Applying a modified Good-Turing frequency formula, we derive an approximate formula for the first-order bias of this traditional lower bound. The approximate bias is estimated by using additional information (namely, the numbers of tripletons and quadrupletons). This approximate bias can be corrected, and an improved lower bound is thus obtained. The proposed lower bound is nonparametric in the sense that it is universally valid for any species abundance distribution. A similar type of improved lower bound can be derived for incidence data. We test our proposed lower bounds on simulated data sets generated from various species abundance models. Simulation results show that the proposed lower bounds always reduce bias over the traditional lower bounds and improve accuracy (as measured by mean squared error) when the heterogeneity of species abundances is relatively high. We also apply the proposed new lower bounds to real data for illustration and for comparisons with previously developed estimators. © 2014, The International Biometric Society.
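
    For illustration, here is a hedged sketch of the traditional Chao1 bound and the improved bound (iChao1) that additionally uses tripletons and quadrupletons, following the published formula; bias-correction variants used when f2 or f4 is zero are omitted for clarity.

    # Chao1 and iChao1 lower bounds from abundance frequency counts f1..f4.
    import numpy as np

    def chao1(f1, f2, s_obs):
        return s_obs + f1 ** 2 / (2.0 * f2)

    def ichao1(f1, f2, f3, f4, s_obs):
        correction = (f3 / (4.0 * f4)) * max(f1 - (f2 * f3) / (2.0 * f4), 0.0)
        return chao1(f1, f2, s_obs) + correction

    abundances = np.array([1]*20 + [2]*10 + [3]*6 + [4]*4 + [10]*15)  # toy community
    f = lambda k: int(np.sum(abundances == k))   # number of species seen k times
    s_obs = len(abundances)
    print(f"Chao1 = {chao1(f(1), f(2), s_obs):.1f}, "
          f"iChao1 = {ichao1(f(1), f(2), f(3), f(4), s_obs):.1f}")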

  18. Quantum chi-squared and goodness of fit testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temme, Kristan; Verstraete, Frank

    2015-01-15

    A quantum mechanical hypothesis test is presented for the hypothesis that a certain setup produces a given quantum state. Although the classical and the quantum problems are very much related to each other, the quantum problem is much richer due to the additional optimization over the measurement basis. A goodness of fit test for i.i.d. quantum states is developed and a max-min characterization for the optimal measurement is introduced. We find the quantum measurement which leads both to the maximal Pitman and Bahadur efficiencies, and determine the associated divergence rates. We discuss the relationship of the quantum goodness of fit test to the problem of estimating multiple parameters from a density matrix. These problems are found to be closely related and we show that the largest error of an optimal strategy, determined by the smallest eigenvalue of the Fisher information matrix, is given by the divergence rate of the goodness of fit test.

  19. Cooperation and the common good.

    PubMed

    Johnstone, Rufus A; Rodrigues, António M M

    2016-02-05

    In this paper, we draw the attention of biologists to a result from the economic literature, which suggests that when individuals are engaged in a communal activity of benefit to all, selection may favour cooperative sharing of resources even among non-relatives. Provided that group members all invest some resources in the public good, they should refrain from conflict over the division of these resources. The reason is that, given diminishing returns on investment in public and private goods, claiming (or ceding) a greater share of total resources only leads to the actor (or its competitors) investing more in the public good, such that the marginal costs and benefits of investment remain in balance. This cancels out any individual benefits of resource competition. We illustrate how this idea may be applied in the context of biparental care, using a sequential game in which parents first compete with one another over resources, and then choose how to allocate the resources they each obtain to care of their joint young (public good) versus their own survival and future reproductive success (private good). We show that when the two parents both invest in care to some extent, they should refrain from any conflict over the division of resources. The same effect can also support asymmetric outcomes in which one parent competes for resources and invests in care, whereas the other does not invest but refrains from competition. The fact that the caring parent gains higher fitness pay-offs at these equilibria suggests that abandoning a partner is not always to the latter's detriment, when the potential for resource competition is taken into account, but may instead be of benefit to the 'abandoned' mate. © 2016 The Author(s).

  1. Paired comparison estimates of willingness to accept versus contingent valuation estimates of willingness to pay

    Treesearch

    John B. Loomis; George Peterson; Patricia A. Champ; Thomas C. Brown; Beatrice Lucero

    1998-01-01

    Estimating empirical measures of an individual's willingness to accept that are consistent with conventional economic theory has proven difficult. The method of paired comparison offers a promising approach to estimating willingness to accept. This method involves having individuals make binary choices between receiving a particular good or a sum of money....

  2. Estimation of Inertial Parameters of Rigid Body Links of Manipulators.

    DTIC Science & Technology

    1986-02-01

    (OCR fragments; most of this record is unrecoverable scan debris) …a good match was obtained between joint torques predicted from the estimated parameters and the joint torques computed… …a value which, if not zero, indicates that a linear combination of parameters is identifiable. Since K is a function only of the geometry of the

  3. Estimating Arrhenius parameters using temperature programmed molecular dynamics.

    PubMed

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
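
    A minimal sketch of the estimation chain: the maximum likelihood rate from exponentially distributed waiting times at each sampled temperature is k = n/Σt, and the Arrhenius parameters follow from a linear fit of ln k against 1/T. The prefactor and barrier below are assumed values used only to generate synthetic data.

    # MLE rates from waiting times, then an Arrhenius fit of ln k vs 1/T.
    import numpy as np

    rng = np.random.default_rng(2)
    kB = 8.617e-5                        # Boltzmann constant (eV/K)
    A_true, Ea_true = 1e13, 0.5          # prefactor (1/s), barrier (eV), assumed

    temps = np.array([300.0, 400.0, 500.0, 600.0])
    log_k = []
    for T in temps:
        k = A_true * np.exp(-Ea_true / (kB * T))
        waits = rng.exponential(1.0 / k, size=800)      # 800 observed waiting times
        log_k.append(np.log(len(waits) / waits.sum()))  # MLE of the rate

    slope, intercept = np.polyfit(1.0 / temps, log_k, 1)
    print(f"Ea ~ {-slope * kB:.3f} eV (true {Ea_true}), "
          f"A ~ {np.exp(intercept):.2e} 1/s (true {A_true:.0e})")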

  5. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for the threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
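
    The threshold-selection step can be sketched generically: build a recurrence matrix for each candidate threshold and keep the one that maximizes a utility function. The utility below is a deliberate placeholder (it merely targets a 10% recurrence rate); the paper derives its own criterion from the Markov model, which is not reproduced here:

      import numpy as np

      def recurrence_matrix(x, eps):
          # R[i, j] = 1 where samples i and j are closer than eps
          d = np.abs(x[:, None] - x[None, :])   # scalar series; use a norm for vectors
          return (d < eps).astype(int)

      def utility(R):
          # Placeholder utility: prefer an intermediate recurrence rate (~10%).
          return -(R.mean() - 0.10) ** 2

      x = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))      # toy signal
      eps_grid = np.linspace(0.01, 1.0, 50)
      best_eps = max(eps_grid, key=lambda e: utility(recurrence_matrix(x, e)))
      print("selected distance threshold:", best_eps)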

  6. Thermal Effusivity of Vegetable Oils Obtained by a Photothermal Technique

    NASA Astrophysics Data System (ADS)

    Cervantes-Espinosa, L. M.; de L. Castillo-Alvarado, F.; Lara-Hernández, G.; Cruz-Orea, A.; Hernández-Aguilar, C.; Domínguez-Pacheco, A.

    2014-10-01

    Thermal properties of several vegetable oils such as soy, corn, and avocado commercial oils were obtained by using a photopyroelectric technique. The inverse photopyroelectric configuration was used in order to obtain the thermal effusivity of the oil samples. The theoretical equation for the photopyroelectric signal in this configuration, as a function of the incident light modulation frequency, was fitted to the experimental data in order to obtain the thermal effusivity of these samples. The obtained results are in good agreement with the thermal effusivity reported for other vegetable oils. All measurements were done at room temperature.

  7. Time series models of environmental exposures: Good predictions or good understanding.

    PubMed

    Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin

    2017-04-01

    Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. SU-E-T-365: Estimation of Neutron Ambient Dose Equivalents for Radioprotection Exposed Workers in Radiotherapy Facilities Based On Characterization Patient Risk Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irazola, L; Terron, J; Sanchez-Doblado, F

    2015-06-15

    Purpose: Previous measurements with Bonner spheres [1] showed that normalized neutron spectra are equal for the majority of the existing linacs [2]. This information, in addition to the thermal neutron fluences obtained in the characterization procedure [3], would allow estimation of neutron doses accidentally received by exposed workers, without the need for an extra experimental measurement. Methods: Monte Carlo (MC) simulations demonstrated that the thermal neutron fluence distribution inside the bunker is quite uniform, as a consequence of multiple scatter in the walls [4]. Although the inverse square law is approximately valid for the fast component, a more precise calculation could be obtained with a generic fast-fluence distribution map around the linac, from MC simulations [4]. Thus, measurements of thermal neutron fluences performed during the characterization procedure [3], together with a generic unitary spectrum [2], would allow estimation of the total neutron fluences and H*(10) at any point [5]. As an example, we compared estimations with Bonner sphere measurements [1], for two points in five facilities: 3 Siemens (15–23 MV), Elekta (15 MV) and Varian (15 MV). Results: Thermal neutron fluences obtained from characterization are within 0.2–1.6 × 10⁶ cm⁻²·Gy⁻¹ for the five studied facilities. This implies ambient dose equivalents ranging from 0.27–2.01 mSv/Gy at 50 cm from the isocenter and 0.03–0.26 mSv/Gy at the detector location, with an average deviation of ±12.1% with respect to Bonner measurements. Conclusion: The good results obtained demonstrate that the neutron fluence and H*(10) can be estimated based on: (a) the characterization procedure established for patient risk estimation in each facility, (b) a generic unitary neutron spectrum and (c) a generic MC map distribution of the fast component. [1] Radiat. Meas (2010) 45:1391–1397; [2] Phys. Med. Biol (2012) 57:6167–6191; [3] Med. Phys (2015) 42

  9. Deciphering the enigma of undetected species, phylogenetic, and functional diversity based on Good-Turing theory.

    PubMed

    Chao, Anne; Chiu, Chun-Huo; Colwell, Robert K; Magnago, Luiz Fernando S; Chazdon, Robin L; Gotelli, Nicholas J

    2017-11-01

    Estimating the species, phylogenetic, and functional diversity of a community is challenging because rare species are often undetected, even with intensive sampling. The Good-Turing frequency formula, originally developed for cryptography, estimates in an ecological context the true frequencies of rare species in a single assemblage based on an incomplete sample of individuals. Until now, this formula has never been used to estimate undetected species, phylogenetic, and functional diversity. Here, we first generalize the Good-Turing formula to incomplete sampling of two assemblages. The original formula and its two-assemblage generalization provide a novel and unified approach to notation, terminology, and estimation of undetected biological diversity. For species richness, the Good-Turing framework offers an intuitive way to derive the non-parametric estimators of the undetected species richness in a single assemblage, and of the undetected species shared between two assemblages. For phylogenetic diversity, the unified approach leads to an estimator of the undetected Faith's phylogenetic diversity (PD, the total length of undetected branches of a phylogenetic tree connecting all species), as well as a new estimator of undetected PD shared between two phylogenetic trees. For functional diversity based on species traits, the unified approach yields a new estimator of undetected Walker et al.'s functional attribute diversity (FAD, the total species-pairwise functional distance) in a single assemblage, as well as a new estimator of undetected FAD shared between two assemblages. Although some of the resulting estimators have been previously published (but derived with traditional mathematical inequalities), all taxonomic, phylogenetic, and functional diversity estimators are now derived under the same framework. All the derived estimators are theoretically lower bounds of the corresponding undetected diversities; our approach reveals the sufficient conditions under

  10. A radiographic method to estimate lung volume and its use in small mammals.

    PubMed

    Canals, Mauricio; Olivares, Ricardo; Rosenmann, Mario

    2005-01-01

    In this paper we develop a method to estimate lung volume using chest x-rays of small mammals. We applied this method to assess the lung volume of several rodents. We showed that a good estimator of the lung volume is V*L = 0.496 × VRX ≈ (1/2) VRX, where VRX is a measurement obtained from the x-ray that represents the volume of a rectangular box containing the lungs and mediastinum organs. The proposed formula may be interpreted as the volume of an ellipsoid formed by both lungs joined at their bases. When this relationship was used to estimate lung volume, values similar to those expected from the allometric relationship were found in four rodents. In two others, M. musculus and R. norvegicus, lung volume was similar to reported data, although values were lower than expected.
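
    A minimal sketch of the estimator with hypothetical dimensions follows; note that an ellipsoid inscribed in the box has volume (π/6) × VRX ≈ 0.52 × VRX, consistent with the fitted factor 0.496 and the ellipsoid interpretation above:

      def lung_volume_ml(width_cm, height_cm, depth_cm):
          """Estimate lung volume from the bounding box measured on the x-rays."""
          v_rx = width_cm * height_cm * depth_cm      # box volume, cm^3 (= ml)
          return 0.496 * v_rx                         # V*L = 0.496 * VRX ~ VRX / 2

      # Hypothetical rodent chest box of 2.1 x 2.8 x 1.9 cm -> about 5.5 ml
      print(lung_volume_ml(2.1, 2.8, 1.9))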

  11. A Height Estimation Approach for Terrain Following Flights from Monocular Vision.

    PubMed

    Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz

    2016-12-06

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information, to estimate the flying height. To determine whether the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.
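
    Under a pinhole, nadir-pointing camera and pure translation, height relates to flow roughly as h = f × v / flow. The sketch below shows only this simplified geometric core (the paper's full pipeline also gates each estimate with the decision-tree classifier); all numbers and names are hypothetical:

      def height_from_flow(focal_px, speed_mps, flow_px_per_s):
          """Height over terrain from average optical flow (pinhole, nadir camera)."""
          if flow_px_per_s <= 0:
              raise ValueError("flow must be positive")
          return focal_px * speed_mps / flow_px_per_s

      # Hypothetical: 800 px focal length, 12 m/s ground speed,
      # 160 px/s mean downward-looking flow -> 60 m above terrain.
      print(height_from_flow(800.0, 12.0, 160.0))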

  12. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    PubMed Central

    Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz

    2016-01-01

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information, to estimate the flying height. To determine whether the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424

  13. Estimation of additive forces and moments for supersonic inlets

    NASA Technical Reports Server (NTRS)

    Perkins, Stanley C., Jr.; Dillenius, Marnix F. E.

    1991-01-01

    A technique for estimating the additive forces and moments associated with supersonic, external compression inlets as a function of mass flow ratio has been developed. The technique makes use of a low order supersonic paneling method for calculating minimum additive forces at maximum mass flow conditions. A linear relationship between the minimum additive forces and the maximum values for fully blocked flow is employed to obtain the additive forces at a specified mass flow ratio. The method is applicable to two-dimensional inlets at zero or nonzero angle of attack, and to axisymmetric inlets at zero angle of attack. Comparisons with limited available additive drag data indicate fair to good agreement.

  14. Neuro-genetic non-invasive temperature estimation: intensity and spatial prediction.

    PubMed

    Teixeira, César A; Ruano, M Graça; Ruano, António E; Pereira, Wagner C A

    2008-06-01

    The existence of proper non-invasive temperature estimators is an essential aspect when thermal therapy applications are envisaged. These estimators must be good predictors to enable temperature estimation in different operational situations, providing better control of the therapeutic instrumentation. In this work, radial basis function artificial neural networks were constructed to assess temperature evolution in an ultrasound-insonated medium. The employed models were radial basis function neural networks with external dynamics induced by their inputs. Both the most suitable set of model inputs and the number of neurons in the network were found using a multi-objective genetic algorithm. The neural models were validated in two situations: the operating ones, as used in the construction of the network; and 11 unseen situations. The new data addressed two new spatial locations and a new intensity level, assessing the intensity and space prediction capacity of the proposed model. Good performance was obtained during the validation process, both in terms of the spatial points considered and whenever the new intensity level was within the range of applied intensities. A maximum absolute error of 0.5 °C ± 10% (0.5 °C is the gold-standard threshold in hyperthermia/diathermia) was attained with computationally low-complexity models. The results confirm that the proposed neuro-genetic approach enables foreseeing temperature propagation, in connection with intensity and space parameters, thus enabling the assessment of different operating situations with proper temperature resolution.

  15. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    … high flow and, second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, with which the maximum likelihood estimation can force the model errors toward zero as far as possible. The BC-GED, which estimates the parameters (i.e. λ and β) of the BC-GED model as well as the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also best fulfill the statistical assumptions. However, in some cases of model calibration with few observations, e.g. the calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED in order to avoid estimating the parameters of the BC-GED model, because the model validation of the MAE is best.
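
    The error model underlying these indicators can be made concrete. Below is a minimal sketch of the generalized error distribution (GED) log-likelihood for model residuals: β = 2 recovers a Gaussian (MSE-like) and β = 1 a Laplace (MAE-like) error model. The residuals and parameter values are synthetic, and the function name is illustrative:

      import numpy as np
      from scipy.special import gammaln

      def ged_loglik(errors, lam, beta):
          """Log-likelihood under f(e) = beta / (2*lam*Gamma(1/beta)) * exp(-(|e|/lam)**beta)."""
          e = np.abs(np.asarray(errors, dtype=float))
          log_norm = np.log(beta) - np.log(2.0 * lam) - gammaln(1.0 / beta)
          return e.size * log_norm - np.sum((e / lam) ** beta)

      resid = np.random.normal(0.0, 1.0, 1000)         # toy residuals
      print(ged_loglik(resid, lam=1.0, beta=2.0))      # Gaussian-type (MSE-like) fit
      print(ged_loglik(resid, lam=1.0, beta=1.0))      # Laplace-type (MAE-like) fit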

  16. Estimates of cancer burden in Sardinia.

    PubMed

    Budroni, Mario; Sechi, Ornelia; Cossu, Antonio; Palmieri, Giuseppe; Tanda, Francesco; Foschi, Roberto; Rossi, Silvia

    2013-01-01

    Cancer registration in Sardinia covers 43% of the population and started in 1992 in the Sassari province. The aim of this paper is to provide estimates of the incidence, mortality and prevalence of seven major cancers for the entire region in the period 1970-2015. The estimates were obtained by applying the MIAMOD method, a statistical back-calculation approach to derive incidence and prevalence figures starting from mortality and relative survival data. Estimates were compared with the available observed data. In 2012 the lowest incidence was estimated for stomach cancer and melanoma among men, with 140 and 74 new cases, respectively, per 100,000. The mortality rates were highest for lung cancer and were very close to the incidence rates (77 and 95 per 100,000, respectively). In women, breast was by far the most frequent cancer site both in terms of incidence (1,512 new cases) and mortality (295 deaths), followed by colon-rectum (493 cases and 201 deaths), lung (205 cases and 167 deaths), melanoma (106 cases and 15 deaths), stomach (82 cases and 61 deaths), and uterine cervix (36 cases and 19 deaths). The highest prevalence was estimated for breast cancer (15,180 cases), followed by colorectal cancer with about 7,300 prevalent cases in both sexes. This paper provides a description of the burden of the major cancers in Sardinia until 2015. The comparisons between the estimated age-standardized incidence rates and those observed in the Sassari registry indicate good agreement. The estimates show a general decrease in cancer mortality, with the exception of female lung cancer. By contrast, the prevalence is steeply increasing for all considered cancers (with the only exception of cancer of the uterine cervix). This points to the need for more strongly supporting evidence-based prevention campaigns focused on combating female smoking, unhealthy nutrition and sun exposure.

  17. Good Education, the Good Teacher, and a Practical Art of Living a Good Life: A Catholic Perspective

    ERIC Educational Resources Information Center

    Hermans, Chris

    2017-01-01

    What is good education? We value education for reasons connected to the good provided by education in society. This good is connected to the pedagogical aim of education. This article distinguishes five criteria for good education based on the concept of "Bildung". Next, these five criteria are used to develop the idea of the good…

  18. Features of the normal choriocapillaris with OCT-angiography: Density estimation and textural properties.

    PubMed

    Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo

    2017-01-01

    The main objective of our work is to perform an in-depth analysis of the structural features of the normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density, and we characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris density with circular ROIs of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure for different ROI radii. We then characterized the textural features of the choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density, respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
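
    The ROI-stability procedure described above is straightforward to sketch: measure the density inside randomly jittered circular ROIs and select the smallest radius whose density standard deviation falls below 0.01. The binary image below is synthetic noise, so only the mechanics (not the clinical values) are illustrated:

      import numpy as np

      rng = np.random.default_rng(1)
      img = (rng.random((512, 512)) < 0.5).astype(float)   # toy binarized image
      yy, xx = np.mgrid[:512, :512]

      def roi_density(cy, cx, r):
          """Mean pixel value inside a circular ROI centered at (cy, cx)."""
          mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
          return img[mask].mean()

      def optimal_radius(radii, jitter=10, trials=50):
          """Smallest radius whose jittered-ROI density std falls below 0.01."""
          for r in radii:
              d = [roi_density(256 + rng.integers(-jitter, jitter + 1),
                               256 + rng.integers(-jitter, jitter + 1), r)
                   for _ in range(trials)]
              if np.std(d) < 0.01:
                  return r
          return None

      print(optimal_radius(range(20, 200, 10)))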

  19. Dynamic State Estimation for Multi-Machine Power System by Unscented Kalman Filter With Enhanced Numerical Stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Sun, Kai; Wang, Jianhui

    In this paper, in order to enhance the numerical stability of the unscented Kalman filter (UKF) used for power system dynamic state estimation, a new UKF with guaranteed positive semidefinite estimation error covariance (UKFGPS) is proposed and compared with five existing approaches, including UKFschol, UKF-kappa, UKFmodified, UKF-Delta Q, and the square-root UKF (SRUKF). These methods and the extended Kalman filter (EKF) are tested by performing dynamic state estimation on the WSCC 3-machine 9-bus system and the NPCC 48-machine 140-bus system. For the WSCC system, all methods obtain good estimates. However, for the NPCC system, both the EKF and the classic UKF fail. It is found that UKFschol, UKF-kappa, and UKF-Delta Q do not work well in some estimations, while UKFGPS works well in most cases. UKFmodified and SRUKF can always work well, indicating their better scalability, mainly due to their enhanced numerical stability.
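
    One generic way to keep a UKF error covariance positive semidefinite, sketched below, is to symmetrize the matrix and floor its eigenvalue spectrum before the Cholesky factorization that generates the sigma points. This is only a plausible illustration of the idea, not the paper's UKFGPS recipe:

      import numpy as np

      def nearest_psd(P, eps=1e-8):
          """Project a nearly-symmetric matrix onto the PSD cone (eigenvalue floor)."""
          P = 0.5 * (P + P.T)                    # symmetrize first
          w, V = np.linalg.eigh(P)
          w = np.clip(w, eps, None)              # floor the spectrum at eps
          return V @ np.diag(w) @ V.T

      # An indefinite covariance, as can arise from round-off in the UKF update:
      P = np.array([[1.0, 0.9], [0.9, -0.1]])
      np.linalg.cholesky(nearest_psd(P))         # now succeeds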

  20. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU time for a one-dimensional case and by a factor of 56 for a two-dimensional case.

  1. Estimating linear-nonlinear models using Rényi divergences

    PubMed Central

    Kouh, Minjoon; Sharpee, Tatyana O.

    2009-01-01

    This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981

  2. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, known to produce uniform sampling.

  3. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of

  4. Age estimation using exfoliative cytology and radiovisiography: A comparative study

    PubMed Central

    Nallamala, Shilpa; Guttikonda, Venkateswara Rao; Manchikatla, Praveen Kumar; Taneeru, Sravya

    2017-01-01

    Introduction: Age estimation is one of the essential factors in establishing the identity of an individual. Among various methods, exfoliative cytology (EC) is a unique, noninvasive technique, involving simple, and pain-free collection of intact cells from the oral cavity for microscopic examination. Objective: The study was undertaken with an aim to estimate the age of an individual from the average cell size of their buccal smears calculated using image analysis morphometric software and the pulp–tooth area ratio in mandibular canine of the same individual using radiovisiography (RVG). Materials and Methods: Buccal smears were collected from 100 apparently healthy individuals. After fixation in 95% alcohol, the smears were stained using Papanicolaou stain. The average cell size was measured using image analysis software (Image-Pro Insight 8.0). The RVG images of mandibular canines were obtained, pulp and tooth areas were traced using AutoCAD 2010 software, and area ratio was calculated. The estimated age was then calculated using regression analysis. Results: The paired t-test between chronological age and estimated age by cell size and pulp–tooth area ratio was statistically nonsignificant (P > 0.05). Conclusion: In the present study, age estimated by pulp–tooth area ratio and EC yielded good results. PMID:29657491

  5. Solar Tyrol project: using climate data for energy production estimation. The good practice of Tyrol in conceptualizing climate services.

    NASA Astrophysics Data System (ADS)

    Petitta, Marcello; Wagner, Jochen; Costa, Armin; Monsorno, Roberto; Innerebner, Markus; Moser, David; Zebisch, Marc

    2014-05-01

    In recent years the scientific community has been widely discussing the concept of "Climate services". Several definitions have been used, but it still remains a rather open concept. We used climate data from analyses and reanalyses to create a daily and hourly model of atmospheric turbidity in order to account for the effect of the atmosphere on incoming solar radiation, with the final aim of estimating the electricity production of photovoltaic (PV) modules in the Alps. Renewable energy production in the Alpine region is dominated by hydroelectricity, but the potential for photovoltaic energy production is gaining momentum. Especially the southern part of the Alps and the inner Alpine regions offer good conditions for PV energy production. The combination of high irradiance values and cold air temperature in mountainous regions is well suited for solar cells. To enable more widespread adoption of PV plants, PV has to become an important part of regional planning. To provide regional authorities and private stakeholders with a high-quality PV energy yield climatology for the provinces of Bolzano/Bozen South Tyrol (Italy) and Tyrol (Austria), the research project Solar Tyrol was inaugurated in 2012. Several methods are used to calculate very high resolution maps of solar radiation. Most of these approaches use climatological values. In this project we reconstructed the last 10 years of atmospheric turbidity using reanalysis and operational data in order to better estimate incoming solar radiation in the Alpine region. Our method is divided into three steps: i) clear-sky radiation: to estimate the atmospheric effect on solar radiation we calculated the Linke turbidity factor using aerosol optical depth (AOD), surface albedo, atmospheric pressure, and total water content from ECMWF and MACC analyses; ii) shadows: we calculated shadows of mountains and buildings using a 2 meter-resolution digital elevation model of the area and the GIS module r.sun, modified to fit our specific needs; iii

  6. Soybean Crop Area Estimation and Mapping in Mato Grosso State, Brazil

    NASA Astrophysics Data System (ADS)

    Gusso, A.; Ducati, J. R.

    2012-07-01

    Evaluation of the MODIS Crop Detection Algorithm (MCDA) procedure for estimating historical planted soybean crop areas was done on fields in Mato Grosso State, Brazil. MCDA is based on temporal profiles of EVI (Enhanced Vegetation Index) derived from satellite data of the MODIS (Moderate Resolution Imaging Spectroradiometer) imager, and was previously developed for soybean area estimation in Rio Grande do Sul State, Brazil. According to the MCDA approach, in Mato Grosso soybean area estimates can be provided in December (1st forecast), using images from the sowing period, and in February (2nd forecast), using images from sowing and maximum crop development period. The results obtained by the MCDA were compared with Brazilian Institute of Geography and Statistics (IBGE) official estimates of soybean area at municipal level. Coefficients of determination were between 0.93 and 0.98, indicating a good agreement, and also the suitability of MCDA to estimations performed in Mato Grosso State. On average, the MCDA results explained 96% of the variation of the data estimated by the IBGE. In this way, MCDA calibration was able to provide annual thematic soybean maps, forecasting the planted area in the State, with results which are comparable to the official agricultural statistics.

  7. Good News for Borehole Climatology

    NASA Astrophysics Data System (ADS)

    Rath, Volker; Fidel Gonzalez-Rouco, J.; Goosse, Hugues

    2010-05-01

    Though the investigation of observed borehole temperatures has proved to be a valuable tool for the reconstruction of ground surface temperature histories, there are many open questions concerning the significance and accuracy of the reconstructions from these data. In particular, the temperature signal of the warming after the Last Glacial Maximum (LGM) is still present in borehole temperature profiles. It influences the relatively shallow boreholes used in current paleoclimate inversions to estimate temperature changes in the last centuries. This is shown using Monte Carlo experiments on past surface temperature change, with plausible distributions for the most important parameters, i.e., the amplitude and timing of the glacial-interglacial transition, the prior average temperature, and the petrophysical properties. It has been argued that the signature of the last glacial-interglacial transition could be responsible for the high amplitudes of millennial temperature reconstructions. However, in shallow boreholes the additional effect of past climate can be reasonably approximated by a linear variation of temperature with depth, and thus be accommodated by a "biased" background heat flow. This is good news for borehole climatology, but implies that the geological heat flow values have to be interpreted accordingly. Borehole climate reconstructions from these shallow boreholes most probably underestimate past variability, due to the diffusive character of the heat conduction process and the smoothness constraints necessary for obtaining stable solutions of this ill-posed inverse problem. A simple correction based on subtracting an appropriate prior surface temperature history shows promising results, reducing these errors considerably, also with deeper boreholes, where the heat flow signal cannot be approximated linearly, and improves the comparisons with AOGCM modeling results.

  8. Do good actions inspire good actions in others?

    PubMed

    Capraro, Valerio; Marcelletti, Alessandra

    2014-12-12

    Actions such as sharing food and cooperating to reach a common goal have played a fundamental role in the evolution of human societies. Despite the importance of such good actions, little is known about whether and how they can spread from person to person. For instance, does being the recipient of an altruistic act increase your probability of being cooperative with a third party? We have conducted an experiment on Amazon Mechanical Turk to test this mechanism using economic games. We have measured willingness to cooperate through a standard Prisoner's dilemma and willingness to act altruistically using a binary Dictator game. In the baseline treatments, the endowments needed to play were given by the experimenters, as usual; in the control treatments, they came from a good action made by someone else. Across four different comparisons and a total of 572 subjects, we never found a significant increase of cooperation or altruism when the endowment came from a good action. We conclude that good actions do not necessarily inspire good actions in others. While this is consistent with the theoretical prediction, it challenges the majority of other experimental studies.

  9. Radiation-force-based estimation of acoustic attenuation using harmonic motion imaging (HMI) in phantoms and in vitro livers before and after HIFU ablation.

    PubMed

    Chen, Jiangang; Hou, Gary Y; Marquet, Fabrice; Han, Yang; Camarena, Francisco; Konofagou, Elisa

    2015-10-07

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of harmonic motion imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied to the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n = 5) and in vitro canine livers (n = 3) were tested, as well as HIFU lesions in in vitro canine livers (n = 5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R² = 0.976) with the independently obtained values reported by the manufacturer, with an estimation error (compared to the values independently measured) within the range of 15-35%. The estimated attenuation in the in vitro canine livers was 0.32 ± 0.03 dB cm⁻¹ MHz⁻¹, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58 ± 0.06 dB cm⁻¹ MHz⁻¹) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation.
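
    The regression step lends itself to a compact sketch: fit a line to the logarithm of the displacements versus depth and read the attenuation off the slope. The exponential decay model, noise level and factor of two below are illustrative assumptions, and converting the result to dB cm⁻¹ MHz⁻¹ would additionally require the beam frequency:

      import numpy as np

      depth_cm = np.linspace(1.0, 5.0, 20)                 # imaging depths
      alpha_np_per_cm = 0.08                               # "true" value (toy)
      disp = np.exp(-2 * alpha_np_per_cm * depth_cm)       # toy displacement decay
      disp *= 1 + 0.02 * np.random.randn(depth_cm.size)    # measurement noise

      slope, _ = np.polyfit(depth_cm, np.log(disp), 1)
      alpha_hat = -slope / 2                               # undo the assumed factor of 2
      print(f"estimated attenuation ~ {alpha_hat:.3f} Np/cm")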

  10. Radiation-force-based Estimation of Acoustic Attenuation Using Harmonic Motion Imaging (HMI) in Phantoms and in vitro Livers Before and After HIFU Ablation

    PubMed Central

    Chen, Jiangang; Hou, Gary Y.; Marquet, Fabrice; Han, Yang; Camarena, Francisco

    2015-01-01

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of Harmonic Motion Imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n=5) and in vitro canine livers (n=3) were tested, as well as HIFU lesions in in vitro canine livers (n=5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R2=0.976) with the independently obtained values reported by the manufacturer with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32±0.03 dB/cm/MHz, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58±0.06 dB/cm/MHz) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation. PMID:26371501

  11. Intra-class correlation estimates for assessment of vitamin A intake in children.

    PubMed

    Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D

    2005-03-01

    In many community-based surveys, multi-level sampling is inherent in the design. When designing these studies, and especially when calculating the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for the variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and components of variance were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at the village and block levels. Between-cluster variation was evident at each level of clustering. The ICC estimates were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
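
    The adjustment alluded to here is the standard design effect, DEFF = 1 + (m - 1) × ICC, where m is the cluster size: the simple-random-sample size is multiplied by DEFF. A worked example follows, taking m = 16 households per village and an illustrative ICC of 0.07:

      def design_effect(icc, cluster_size):
          """Variance inflation factor for cluster sampling."""
          return 1.0 + (cluster_size - 1.0) * icc

      deff = design_effect(0.07, 16)   # -> 2.05
      print(deff, "-> multiply the simple-random-sample size by this factor")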

  12. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate the 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  13. Estimating Photosynthetically Available Radiation (PAR) at the Earth's surface from satellite observations

    NASA Technical Reports Server (NTRS)

    Frouin, Robert

    1993-01-01

    Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The reported inaccuracies, about 10 percent and 6 percent on daily and monthly time scales, respectively, are adequate for modeling oceanic and terrestrial primary productivity. At those time scales the cloud-induced variability in the ratio of PAR to insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.

  14. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. Background In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  15. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    PubMed

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  16. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components, and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the

  17. Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Dai, Xiao-Xia; Feng, Yuan

    2015-12-01

    When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among the optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), and the Oversea Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
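
    A minimal random-walk Metropolis sketch for the (μ, log σ) parameters of a lognormal fluctuation model follows. The synthetic data, flat priors and fixed step size are assumptions made for illustration; the paper's exact formulation, priors and proposal scheme are not reproduced here:

      import numpy as np

      def log_post(theta, x):
          """Lognormal log-likelihood with flat priors (assumption)."""
          mu, log_sigma = theta
          sigma = np.exp(log_sigma)                  # keeps sigma positive
          return np.sum(-np.log(x * sigma * np.sqrt(2 * np.pi))
                        - (np.log(x) - mu) ** 2 / (2 * sigma ** 2))

      rng = np.random.default_rng(0)
      x = rng.lognormal(mean=-1.0, sigma=0.5, size=500)   # synthetic RCS samples

      theta, chain = np.array([0.0, 0.0]), []
      for _ in range(5000):
          prop = theta + 0.05 * rng.standard_normal(2)    # random-walk proposal
          if np.log(rng.uniform()) < log_post(prop, x) - log_post(theta, x):
              theta = prop
          chain.append(theta.copy())

      mu_hat, log_sigma_hat = np.mean(chain[2500:], axis=0)   # discard burn-in
      print(mu_hat, np.exp(log_sigma_hat))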

  18. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  19. Estimation of Rainfall Rates from Passive Microwave Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Sharma, Awdhesh Kumar

    Rainfall rates have been estimated using passive microwave and visible/infrared remote sensing techniques. Data of September 14, 1978 from the Scanning Multichannel Microwave Radiometer (SMMR) on board SEASAT-A and the Visible and Infrared Spin Scan Radiometer (VISSR) on board GOES-W (Geostationary Operational Environmental Satellite - West) were obtained and analyzed for rainfall rate retrieval. Microwave brightness temperatures (MBT) are simulated using the microwave radiative transfer model (MRTM) and atmospheric scattering models. These MBT were computed as a function of the rainfall rate from precipitating clouds in a combined phase of ice and water. Microwave extinction due to ice and liquid water is calculated using Mie theory and gamma drop size distributions. Microwave absorption due to oxygen and water vapor is based on the schemes given by Rosenkranz, and Barret and Chung. The scattering phase matrix involved in the MRTM is found using Eddington's two-stream approximation. The surface effects due to winds and foam are included through the ocean surface emissivity model. Rainfall rates are then inverted from the MBT using the optimization technique "Leaps and Bounds" and multiple linear regression, leading to a relationship between the rainfall rates and the MBT. This relationship has been used to infer oceanic rainfall rates from SMMR data. The VISSR data have been inverted for rainfall rates using Griffith's scheme. This scheme provides an independent means of estimating rainfall rates for cross-checking the SMMR estimates. The inferred rainfall rates from both techniques have been plotted on a world map for comparison. A reasonably good correlation has been obtained between the two estimates.

  20. Application of Model Based Parameter Estimation for RCS Frequency Response Calculations Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

    An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function, and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, the RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
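
    The rational-function model at the heart of MBPE can be illustrated with a least-squares fit. One caveat: the paper builds the model from frequency derivatives of the integral equation at a single frequency, whereas the sketch below fits the same rational form y(f) ≈ P(f)/Q(f), with q0 fixed to 1, to samples at several frequencies, which is simpler to show compactly. All data below are synthetic:

      import numpy as np

      def fit_rational(f, y, np_order=2, nq_order=2):
          # Linearize y*Q = P: solve P(f_i) - y_i*(Q(f_i) - 1) = y_i with q0 = 1.
          A = np.hstack([np.vander(f, np_order + 1, increasing=True),
                         -(y[:, None]) * np.vander(f, nq_order + 1, increasing=True)[:, 1:]])
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          p = coef[:np_order + 1]
          q = np.concatenate([[1.0], coef[np_order + 1:]])
          return p, q

      f = np.linspace(1.0, 2.0, 9)                      # sample frequencies (GHz, say)
      y = (1 + 0.5 * f) / (1 + 0.2 * f + 0.1 * f**2)    # stand-in "RCS" curve
      p, q = fit_rational(f, y)
      # Evaluate the fitted model at an intermediate frequency and compare:
      y_hat = np.polyval(p[::-1], 1.5) / np.polyval(q[::-1], 1.5)
      print(y_hat, (1 + 0.5 * 1.5) / (1 + 0.2 * 1.5 + 0.1 * 1.5**2))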

  1. Assessing the sense of `good at' and `not good at' toward learning topics of mathematics with conjoint analysis

    NASA Astrophysics Data System (ADS)

    Izuta, Giido; Nishikawa, Tomoko

    2017-05-01

    Over the past years, the educational psychology and pedagogy communities have focused on the metacognition formalism as a helpful approach to investigating the feeling of difficulty in mastering classroom materials that students acquire through their subjective experiences of learning in school. Motivated by these studies, this work deals with the assessment of the awareness of being `good at' and `not good at' that Japanese junior high school students have towards the main learning modules in their three years of mathematics. More specifically, the aims here are (i) to shed some light on how this awareness varies across grades and gender; and (ii) to gain some insight into the extent to which conjoint analysis can be applied to understand students' feelings toward learning activities. To accomplish them, a conjoint analysis survey with three conjoint attributes, each with two levels, was designed to assess the learners' perceptions of `good at' and `not good at' with respect to arithmetic (algebraic operations), geometry and functions, which make up the three major modules of their curricula. The measurements took place in a public junior high school with 616 schoolchildren. It turned out that the conjoint analyses for boys and girls of each grade generated partial utility and importance graphs which, along with a pre-established precision of measurement, allowed us to form groups of pupils according to their `sense of being good at'. Moreover, the results showed that the number of groups obtained differed between boys and girls as well as across grades. These findings, which suggest that female students report being `good at' more often than their peers despite the low number of females pursuing careers in mathematics and related fields, imply that the causes of this juxtaposition will have to be investigated in future work.

  2. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or of a cross section. Its main advantages are its non-invasiveness, low cost, and radiation-free operation. Estimation of the conductivity field leads to low-resolution images compared with other technologies, and to high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field, and the estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to the model discretization, the type of cost function, and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and the SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
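
    A minimal sketch of the optimization-based route, under stated assumptions: a derivative-free simplex search (Nelder-Mead) minimizes a weighted squared error between measured and simulated boundary voltages over candidate anomaly positions. The forward model, electrode layout, and weights below are placeholders, not the paper's EIT solver.

```python
# Hedged sketch: derivative-free position estimation from boundary data.
import numpy as np
from scipy.optimize import minimize

def forward_model(pos, sensor_angles):
    """Placeholder: boundary 'voltages' decaying with anomaly-electrode distance."""
    ex, ey = np.cos(sensor_angles), np.sin(sensor_angles)  # unit-circle electrodes
    d = np.hypot(ex - pos[0], ey - pos[1])
    return 1.0 / (0.1 + d)

angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)  # 16 electrodes
true_pos = np.array([0.3, -0.2])
measured = forward_model(true_pos, angles) + np.random.default_rng(1).normal(0, 0.01, 16)
weights = np.ones(16)  # could down-weight noisy electrodes

def cost(pos):
    r = measured - forward_model(pos, angles)
    return np.sum(weights * r**2)

res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
print("estimated position:", res.x)
```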

  3. Mathematics skills in good readers with hydrocephalus.

    PubMed

    Barnes, Marcia A; Pengelly, Sarah; Dennis, Maureen; Wilkinson, Margaret; Rogers, Tracey; Faulkner, Heather

    2002-01-01

    Children with hydrocephalus have poor math skills. We investigated the nature of their arithmetic computation errors by comparing written subtraction errors in good readers with hydrocephalus, typically developing good readers of the same age, and younger children matched for math level to the children with hydrocephalus. Children with hydrocephalus made more procedural errors (although not more fact retrieval or visual-spatial errors) than age-matched controls; they made the same number of procedural errors as younger, math-level matched children. We also investigated a broad range of math abilities, and found that children with hydrocephalus performed more poorly than age-matched controls on tests of geometry and applied math skills such as estimation and problem solving. Computation deficits in children with hydrocephalus reflect delayed development of procedural knowledge. Problems in specific math domains, such as geometry and applied math, were associated with deficits in constituent cognitive skills such as visual-spatial competence, memory, and general knowledge.

  4. Do Liberal Arts Colleges Really Foster Good Practices in Undergraduate Education?

    ERIC Educational Resources Information Center

    Pascarella, Ernest T.; Cruce, Ty M.; Wolniak, Gregory C.; Blaich, Charles F.

    2004-01-01

    Researchers estimated the net effects of liberal arts colleges on 19 measures of good practices in undergraduate education grouped into seven categories. Analyses of 3-year longitudinal data from five liberal arts colleges, four research universities, and seven regional universities were conducted. Net of a battery of student precollege…

  5. A modified blade element theory for estimation of forces generated by a beetle-mimicking flapping wing system.

    PubMed

    Truong, Q T; Nguyen, Q V; Truong, V T; Park, H C; Byun, D Y; Goo, N S

    2011-09-01

    We present an unsteady blade element theory (BET) model to estimate the aerodynamic forces produced by a freely flying beetle and by a beetle-mimicking flapping wing system. Added mass and rotational forces are included to accommodate the unsteady force. In addition to the aerodynamic forces needed to accurately estimate the time history of the forces, the inertial forces of the wings are also calculated. All of the force components are considered based on the full three-dimensional (3D) motion of the wing. The results obtained with the present BET model are validated against data presented in a reference paper; the difference between the averages of the estimated forces (lift and drag) and the measured forces in the reference is about 5.7%. The BET model is then used to estimate the forces produced by a freely flying beetle and by the beetle-mimicking flapping wing system, with the wing kinematics used in the BET calculations captured using high-speed cameras. The results show that the average estimated vertical force of the beetle is reasonably close to the weight of the beetle, and the average estimated thrust of the beetle-mimicking flapping wing system is in good agreement with the measured value. Our results show that the unsteady lift and drag coefficients measured by Dickinson et al. are still useful for relatively higher Reynolds number cases, and that the proposed BET can be a good way to estimate the force produced by a flapping wing system.
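
    The quasi-steady core of a blade-element estimate can be sketched as below: the wing is split into spanwise strips, and each strip's dynamic pressure is multiplied by chord, width, and a force coefficient. The lift and drag fits are the empirical forms reported by Dickinson et al. (1999), whom the abstract cites; the geometry and kinematics are invented, and the added-mass, rotational, and inertial terms of the full model are deliberately omitted.

```python
# Hedged sketch of a quasi-steady blade-element force estimate.
import numpy as np

RHO = 1.225  # air density, kg/m^3

def cl(alpha_deg):
    # Empirical flapping-wing lift fit attributed to Dickinson et al. (1999).
    return 0.225 + 1.58 * np.sin(np.radians(2.13 * alpha_deg - 7.20))

def cd(alpha_deg):
    # Companion drag fit from the same source.
    return 1.92 - 1.55 * np.cos(np.radians(2.04 * alpha_deg - 9.82))

def bet_forces(span, chord_fn, omega, alpha_deg, n_elems=50):
    """Translational lift/drag for a wing revolving at angular rate omega."""
    r = np.linspace(0.0, span, n_elems + 1)
    rc = 0.5 * (r[:-1] + r[1:])            # element mid-span positions
    dr = np.diff(r)
    u = omega * rc                          # local element velocity
    q = 0.5 * RHO * u**2 * chord_fn(rc) * dr
    return np.sum(q * cl(alpha_deg)), np.sum(q * cd(alpha_deg))

lift, drag = bet_forces(span=0.05, chord_fn=lambda r: 0.02 * np.ones_like(r),
                        omega=50.0, alpha_deg=35.0)
print(f"lift ~ {lift*1e3:.2f} mN, drag ~ {drag*1e3:.2f} mN")
```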

  6. Obtaining appropriate interval estimates for age when multiple indicators are used: evaluation of an ad-hoc procedure.

    PubMed

    Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick

    2016-03-01

    When an estimate of age is needed, multiple indicators are typically present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a different prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval: ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to bring this pragmatic approach to attention and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, the third molar scores of Köhler et al. (1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.

  7. Equations for estimating Clark Unit-hydrograph parameters for small rural watersheds in Illinois

    USGS Publications Warehouse

    Straub, Timothy D.; Melching, Charles S.; Kocher, Kyle E.

    2000-01-01

    Simulation of the measured discharge hydrographs for the verification storms utilizing TC and R obtained from the estimation equations yielded good results. The error in peak discharge for 21 of the 29 verification storms was less than 25 percent, and the error in time-to-peak discharge for 18 of the 29 verification storms also was less than 25 percent. Therefore, applying the estimation equations to determine TC and R for design-storm simulation may result in reliable design hydrographs, as long as the physical characteristics of the watersheds under consideration are within the range of those characteristics for the watersheds in this study [area: 0.02-2.3 mi2, main-channel length: 0.17-3.4 miles, main-channel slope: 10.5-229 feet per mile, and insignificant percentage of impervious cover].

  8. 19 CFR 10.521 - Goods eligible for tariff preference level claims.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... States-Singapore Free Trade Agreement Tariff Preference Level § 10.521 Goods eligible for tariff... assembled in Singapore from fabric or yarn produced or obtained outside the territory of Singapore or the...

  9. Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression

    NASA Astrophysics Data System (ADS)

    Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.

    2018-05-01

    Finding the solutions of the equations that describe the dynamics of a given physical system is crucial in order to obtain important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels slower than in the original frame (laboratory frame). Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully to predict the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the estimated time according to our method is better than previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and parameter discretization used to solve such equations numerically.
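
    For reference, a sketch of the bound underlying the method, in its commonly quoted form: the left-hand side is the path length traversed in projective Hilbert space, and the implication assumes a constant energy uncertainty together with saturation of the bound, which is the condition the authors impose.

```latex
% Anandan-Aharonov bound (commonly quoted form); saturating it with
% constant \Delta E yields the running-time estimate sketched here.
\frac{2}{\hbar}\int_0^{\tau} \Delta E(t)\,\mathrm{d}t
  \;\ge\; 2\arccos\!\bigl(\lvert\langle\psi(0)\mid\psi(\tau)\rangle\rvert\bigr)
\quad\Longrightarrow\quad
\tau_{\mathrm{est}} \;=\; \frac{\hbar}{\Delta E}\,
  \arccos\!\bigl(\lvert\langle\psi(0)\mid\psi(\tau)\rangle\rvert\bigr).
```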

  10. Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawding, Dan; Hillson, Todd D.

    2003-11-15

    Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining was conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise but

  11. Good practices for quantitative bias analysis.

    PubMed

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analysis have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research, including the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage

  12. Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water

    PubMed Central

    Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong

    2017-01-01

    This paper presents a joint estimation method of source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the information on the arrival angle of direct (D) path in space domain and the interference characteristic of D and surface-reflected (SR) paths in frequency domain. The former is related to a ray tracing technique to backpropagate the rays and produces an ambiguity surface of source range. The latter utilizes Lloyd’s mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical efficiency and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442

  13. Depreciation of public goods in spatial public goods games

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Zhuang, Yong; Li, Yu-Jian; Wang, Bing-Hong

    2011-10-01

    In real situations, the value of public goods will be reduced or even lost because of external factors or for intrinsic reasons. In this work, we investigate the evolution of cooperation by considering the effect of depreciation of public goods in spatial public goods games on a square lattice. It is assumed that each individual gains full advantage if the number of the cooperators nc within a group centered on that individual equals or exceeds the critical mass (CM). Otherwise, there is depreciation of the public goods, which is realized by rescaling the multiplication factor r to (nc/CM)r. It is shown that the emergence of cooperation is remarkably promoted for CM > 1 even at small values of r, and a global cooperative level is achieved at an intermediate value of CM = 4 at a small r. We further study the effect of depreciation of public goods on different topologies of a regular lattice, and find that the system always reaches global cooperation at a moderate value of CM = G - 1 regardless of whether or not there exist overlapping triangle structures on the regular lattice, where G is the group size of the associated regular lattice.
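
    A minimal sketch of the payoff rule described above, with illustrative parameter values: the multiplication factor is rescaled to (nc/CM)r whenever the number of cooperators nc falls below the critical mass CM.

```python
# Hedged sketch of public-goods payoffs with depreciation below a critical mass.
def group_payoffs(nc, G=5, r=3.0, CM=4, cost=1.0):
    """Return (cooperator payoff, defector payoff) for one group interaction."""
    r_eff = r if nc >= CM else r * (nc / CM)   # depreciated public good
    share = r_eff * cost * nc / G              # equal share of the common pot
    return share - cost, share

# Cooperator payoff is meaningful only when at least one cooperator exists.
for nc in range(1, 6):
    pc, pd = group_payoffs(nc)
    print(f"nc={nc}: cooperator={pc:+.2f}, defector={pd:+.2f}")
```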

  14. Modeling study of air pollution due to the manufacture of export goods in China's Pearl River Delta.

    PubMed

    Streets, David G; Yu, Carolyne; Bergin, Michael H; Wang, Xuemei; Carmichael, Gregory R

    2006-04-01

    The Pearl River Delta is a major manufacturing region on the south coast of China that produces more than $100 billion of goods annually for export to North America, Europe, and other parts of Asia. Considerable air pollution is caused by the manufacturing industries themselves and by the power plants, trucks, and ships that support them. We estimate that 10-40% of emissions of primary SO2, NO(x), RSP, and VOC in the region are caused by export-related activities. Using the STEM-2K1 atmospheric transport model, we estimate that these emissions contribute 5-30% of the ambient concentrations of SO2, NO(x), NO(z), and VOC in the region. One reason that the exported goods are cheap and therefore attractive to consumers in developed countries is that emission controls are lacking or of low performance. We estimate that state-of-the-art controls could be installed at an annualized cost of $0.3-3 billion, representing 0.3-3% of the value of the goods produced. We conclude that mitigation measures could be adopted without seriously affecting the prices of exported goods and would achieve considerable human health and other benefits in the form of reduced air pollutant concentrations in densely populated urban areas.

  15. Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of a soil's resistance to electrical flow. For a particular site, usually only limited N-value data are available; in contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity, yet no existing method is able to interpret resistivity data for estimation of the N value. Thus, the aim is to develop a method for estimating the N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate the N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination ($R^2$) and the mean absolute error (MAE). Analysis of the results found that this method can estimate the N value (best $R^2 = 0.85$ and best $\mathrm{MAE} = 0.54$) given that the constraint $\Delta\bar{l}_{ref}$ is satisfied. The results suggest that the ANN-PSO method can be used to estimate the N value with good accuracy.
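
    A compact sketch of the hybrid idea, under stated assumptions: a global-best particle swarm searches the weight vector of a tiny one-hidden-layer network mapping resistivity to N value, with MAE as the fitness. The architecture, PSO constants, and synthetic data are illustrative, not the study's configuration.

```python
# Hedged sketch: PSO optimizing the weights of a tiny ANN (resistivity -> N).
import numpy as np

rng = np.random.default_rng(2)
res = rng.uniform(10.0, 500.0, 80)                  # resistivity (ohm.m), synthetic
n_val = 5.0 + 20.0 * np.log10(res) / 3.0 + rng.normal(0, 1.0, 80)

H = 4  # hidden units; weights = [w1(H), b1(H), w2(H), b2(1)] -> 3H+1 parameters

def predict(w, x):
    x = (x - res.mean()) / res.std()                # simple input scaling
    hidden = np.tanh(np.outer(x, w[:H]) + w[H:2*H])
    return hidden @ w[2*H:3*H] + w[-1]

def mae(w):
    return np.mean(np.abs(predict(w, res) - n_val))

# Standard global-best PSO over the 3H+1 weights.
n_part, dim, iters = 30, 3 * H + 1, 300
pos = rng.normal(0, 1, (n_part, dim))
vel = np.zeros((n_part, dim))
pbest, pbest_f = pos.copy(), np.array([mae(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mae(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"best MAE: {pbest_f.min():.2f}")
```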

  16. Side-by-side ANFIS as a useful tool for estimating correlated thermophysical properties

    NASA Astrophysics Data System (ADS)

    Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc

    2015-12-01

    In the present paper, an artificial intelligence-based approach dealing with the estimation of correlated thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when homogeneous materials are subjected to a light flux. Commonly, gradient-based algorithms are used as parameter estimation techniques. Unfortunately, such algorithms show instabilities leading to non-convergence when correlated properties are to be estimated from a rebuilt impulse response. The main objective of the present work was therefore to simultaneously estimate both the thermal diffusivity and conductivity of homogeneous materials, from front-face or rear-face photothermal responses to pseudo-random binary signals. To this end, we used side-by-side neuro-fuzzy systems (adaptive network-based fuzzy inference systems) trained with a hybrid algorithm. We focused on the impact on generalization of both the examples used during training and the fuzzification process. In addition, computation time was a key point to consider. That is why the developed algorithm is computationally tractable and allows both the thermal diffusivity and conductivity of homogeneous materials to be simultaneously estimated with very good accuracy (the generalization error ranges between 4.6% and 6.2%).

  17. Radiation-force-based estimation of acoustic attenuation using harmonic motion imaging (HMI) in phantoms and in vitro livers before and after HIFU ablation

    NASA Astrophysics Data System (ADS)

    Chen, Jiangang; Hou, Gary Y.; Marquet, Fabrice; Han, Yang; Camarena, Francisco; Konofagou, Elisa

    2015-10-01

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of harmonic motion imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n = 5) and in vitro canine livers (n = 3) were tested, as well as HIFU lesions in in vitro canine livers (n = 5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R² = 0.976) with the independently obtained values reported by the manufacturer, with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32 ± 0.03 dB cm⁻¹ MHz⁻¹, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58 ± 0.06 dB cm⁻¹ MHz⁻¹) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation.
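
    The regression step lends itself to a short sketch: fit a line to the log displacement amplitude versus depth and convert the decay slope to dB cm⁻¹ MHz⁻¹. The frequency, depths, and amplitudes below are invented, and round-trip and diffraction corrections are deliberately ignored.

```python
# Hedged sketch: attenuation from a linear fit of log amplitude vs depth.
import numpy as np

f_mhz = 4.5                                   # excitation frequency (illustrative)
depth_cm = np.linspace(1.0, 4.0, 12)
alpha_true = 0.32 * f_mhz                     # dB/cm at f_mhz, per the abstract's value
amp = 10.0 ** (-(alpha_true * depth_cm) / 20.0)        # amplitude decay with depth
amp *= 1.0 + np.random.default_rng(3).normal(0, 0.02, depth_cm.size)

# Linear regression on log amplitude (in dB) versus depth.
slope, intercept = np.polyfit(depth_cm, 20.0 * np.log10(amp), 1)  # dB per cm
alpha_est = -slope / f_mhz                    # convert back to dB/cm/MHz
print(f"estimated attenuation: {alpha_est:.3f} dB/cm/MHz (true 0.320)")
```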

  18. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    NASA Astrophysics Data System (ADS)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes in Mexico goes back to the 1960s, owing to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the big earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun a seismic instrumentation program, or whose program is still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not recorded properly in some cities, such as Puebla and Oaxaca, that were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research we therefore present a methodology based on the use of neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth, and soil characterization; for response spectra we additionally used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft computing tool using a multi-layer feed-forward architecture, provide good estimates of the target parameters and also have a good predictive capacity for strong ground motion duration and response spectra.

  19. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  20. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
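
    The reported correlation is simple enough to wrap in a helper. The retardation-factor expression appended below is the standard textbook form, added for illustration only; the bulk density, porosity, and foc values are assumptions, not the paper's.

```python
# Hedged sketch: the paper's Koc correlation plus a textbook retardation factor.
def log_koc_from_log_kow(log_kow):
    """log Koc = 0.903 log Kow + 0.094, reported valid for 1.7 <= log Kow <= 7.0."""
    if not 1.7 <= log_kow <= 7.0:
        raise ValueError("correlation reported for 1.7 <= log Kow <= 7.0")
    return 0.903 * log_kow + 0.094

def retardation_factor(log_kow, foc=0.01, bulk_density=1.6, porosity=0.35):
    """R = 1 + (rho_b/theta) * Kd, with Kd = Koc * foc (g/cm^3 and cm^3/g units)."""
    kd = (10.0 ** log_koc_from_log_kow(log_kow)) * foc
    return 1.0 + (bulk_density / porosity) * kd

for low in (2.0, 5.0):   # a mildly and a strongly hydrophobic chemical
    print(f"log Kow={low}: log Koc={log_koc_from_log_kow(low):.2f}, "
          f"R={retardation_factor(low):.1f}")
```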

  1. A new estimator method for GARCH models

    NASA Astrophysics Data System (ADS)

    Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.

    2007-06-01

    The GARCH (p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH (1, 1) model has only 3 control parameters, and a much discussed question is how to estimate them when a time series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
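
    A sketch of the raw material for moment-based estimation: simulate a Gaussian GARCH(1,1) series and compute the sample variance, kurtosis, and standardized 6th moment. Matching these to their theoretical expressions, which is the actual estimation step, is not carried out here; the parameter values are illustrative.

```python
# Hedged sketch: GARCH(1,1) simulation and the sample moments used above.
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=4):
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)       # start at the stationary variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t] ** 2 + beta * var   # GARCH(1,1) recursion
    return r

r = simulate_garch11(omega=0.05, alpha=0.1, beta=0.85, n=100_000)
z = r - r.mean()
m2 = np.mean(z**2)
kurtosis = np.mean(z**4) / m2**2
std_m6 = np.mean(z**6) / m2**3               # standardized 6th moment
print(f"variance={m2:.3f}, kurtosis={kurtosis:.2f}, std 6th moment={std_m6:.1f}")
```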

  2. VizieR Online Data Catalog: GOODS-S CANDELS multiwavelength catalog (Guo+, 2013)

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Ferguson, H. C.; Giavalisco, M.; Barro, G.; Willner, S. P.; Ashby, M. L. N.; Dahlen, T.; Donley, J. L.; Faber, S. M.; Fontana, A.; Galametz, A.; Grazian, A.; Huang, K.-H.; Kocevski, D. D.; Koekemoer, A. M.; Koo, D. C.; McGrath, E. J.; Peth, M.; Salvato, M.; Wuyts, S.; Castellano, M.; Cooray, A. R.; Dickinson, M. E.; Dunlop, J. S.; Fazio, G. G.; Gardner, J. P.; Gawiser, E.; Grogin, N. A.; Hathi, N. P.; Hsu, L.-T.; Lee, K.-S.; Lucas, R. A.; Mobasher, B.; Nandra, K.; Newman, J. A.; van der Wel, A.

    2014-04-01

    The Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS; Grogin et al. 2011ApJS..197...35G; Koekemoer et al. 2011ApJS..197...36K) is designed to document galaxy formation and evolution over the redshift range of z=1.5-8. The core of CANDELS is to use the revolutionary near-infrared HST/WFC3 camera, installed on HST in 2009 May, to obtain deep imaging of faint and faraway objects. The GOODS-S field, centered at RAJ2000=03:32:30 and DEJ2000=-27:48:20 and located within the Chandra Deep Field South (CDFS; Giacconi et al. 2002, Cat. J/ApJS/139/369), is a sky region of about 170arcmin2 which has been targeted for some of the deepest observations ever taken by NASA's Great Observatories, HST, Spitzer, and Chandra as well as by other world-class telescopes. The field has been (among others) imaged in the optical wavelength with HST/ACS in F435W, F606W, F775W, and F850LP bands as part of the HST Treasury Program: the Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004, Cat. II/261); in the mid-IR (3.6-24um) wavelength with Spitzer as part of the GOODS Spitzer Legacy Program (PI: M. Dickinson). The CDF-S/GOODS field was observed by the MOSAIC II imager on the CTIO 4m Blanco telescope to obtain deep U-band observations in 2001 September. Another U-band survey in GOODS-S was carried out using the VIMOS instrument mounted at the Melipal Unit Telescope of the VLT at ESO's Cerro Paranal Observatory, Chile. This large program of ESO (168.A-0485; PI: C. Cesarsky) was obtained in service mode observations in UT3 between 2004 August and fall 2006. In the ground-based NIR, imaging observations of the CDFS were carried out in J, H, Ks bands using the ISAAC instrument mounted at the Antu Unit Telescope of the VLT. Data were obtained as part of the ESO Large Programme 168.A-0485 (PI: C. Cesarsky) as well as ESO Programmes 64.O-0643, 66.A-0572, and 68.A-0544 (PI: E. Giallongo) with a total allocation time of ~500 hr from 1999 October to 2007 January

  3. Parameter estimation by Differential Search Algorithm from horizontal loop electromagnetic (HLEM) data

    NASA Astrophysics Data System (ADS)

    Alkan, Hilal; Balkaya, Çağlayan

    2018-02-01

    We present an efficient inversion tool for parameter estimation from horizontal loop electromagnetic (HLEM) data using the Differential Search Algorithm (DSA), a recently proposed swarm-intelligence-based metaheuristic. The depth, dip, and origin of a thin subsurface conductor causing the anomaly are the parameters estimated by the HLEM method, commonly known as Slingram. The applicability of the developed scheme was first tested on two synthetically generated anomalies, with and without noise content. Two control parameters affecting the algorithm's convergence characteristics were tuned for these anomalies, which include one and two conductive bodies, respectively. The tuned control parameters yielded more successful statistical results than the parameter couples widely used in DSA applications. Two field anomalies measured over a dipping graphitic shale from Northern Australia were then considered, and the algorithm provided depth estimates in good agreement with those of previous studies and with drilling information. Furthermore, the efficiency and reliability of the results obtained were investigated via probability density functions. Considering the results obtained, we conclude that DSA, characterized by its simple algorithmic structure, is an efficient and promising metaheuristic for other relatively low-dimensional geophysical inverse problems. Finally, researchers familiar with the developed scheme, which is easy to use and flexible, can readily modify and extend it for their own scientific optimization problems.

  4. The public goods game with a new form of shared reward

    NASA Astrophysics Data System (ADS)

    Zhang, Chunyan; Chen, Zengqiang

    2016-10-01

    Altruistic contribution to a common good enjoyed evenly by all group members is hard to explain, because a defector obtains greater benefits than a cooperator. A variety of mechanisms have been proposed to resolve the collective dilemma over the years, including rewards for altruism. An underrated and easily ignored phenomenon is that the altruistic behaviors of cooperators not only directly enhance the benefits of their game opponents but also indirectly benefit other allied members in their surroundings (e.g. relatives or friends). Here we propose a shared reward, in the form of extensive benefits, to extend the traditional definition of the public goods game. Mathematical analysis using the Moran process allows us to obtain the fixation probability for one ‘mutant’ cooperator to invade and dominate a whole defecting population. The results suggest that there exists a tunable parameter above a certain critical value of which natural selection favors cooperation over defection. In addition, analytical results with replicator dynamics show that this critical value influencing the evolution of altruism is closely correlated with the population size, the gaming group size, and the synergy factor of the public goods game. These results, based on an extended notion of shared reward and extensive benefits, are expected to provide novel explanations for the emergence of altruistic behaviors.

  5. High Spatio-Temporal Resolution Bathymetry Estimation and Morphology

    NASA Astrophysics Data System (ADS)

    Bergsma, E. W. J.; Conley, D. C.; Davidson, M. A.; O'Hare, T. J.

    2015-12-01

    In recent years, bathymetry estimates using video images have become increasingly accurate. With the cBathy code (Holman et al., 2013) fully operational, bathymetry results with 0.5 metre accuracy have been regularly obtained at Duck, USA. cBathy is based on observations of the dominant frequencies and wavelengths of surface wave motions and estimates the depth (and hence allows inference of bathymetry profiles) based on linear wave theory. Despite the good performance at Duck, large discrepancies were found related to tidal elevation and camera height (Bergsma et al., 2014), as well as at the camera boundaries. A tide-dependent floating-pixel solution and a camera-boundary correction have been proposed to overcome these issues (Bergsma et al., under review). The video-data collection is set to estimate depths hourly on a grid with a resolution of the order of 10 x 25 metres. Here, the application of cBathy at Porthtowan in the South-West of England is presented. Hourly depth estimates are combined and analysed over a period of 1.5 years (2013-2014). In this work the focus is on the sub-tidal region, where the best cBathy results are achieved. The morphology of the sub-tidal bar is tracked with high spatio-temporal resolution on short and longer time scales. Furthermore, the impact of storms and the reset (sudden and large changes in bathymetry) of the sub-tidal area is clearly captured by the depth estimates. This application shows that the high spatio-temporal resolution of cBathy makes it a powerful tool for coastal research and coastal zone management.
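
    The depth-inversion kernel that such linear-wave-theory methods rest on can be sketched in a few lines: given a dominant radian frequency and wavenumber observed in the imagery, solve the dispersion relation for depth. The observed period and wavelength below are illustrative.

```python
# Hedged sketch: invert the linear dispersion relation w^2 = g*k*tanh(k*h) for h.
import numpy as np
from scipy.optimize import brentq

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """g*k*tanh(k*h) is monotone in h, so a bracketing root-finder suffices."""
    f = lambda h: G * k * np.tanh(k * h) - omega**2
    return brentq(f, 1e-3, 100.0)   # search depths between 1 mm and 100 m

period = 8.0                        # s, dominant swell period (illustrative)
wavelength = 60.0                   # m, observed from the pixel phase pattern
omega = 2.0 * np.pi / period
k = 2.0 * np.pi / wavelength
print(f"estimated depth: {depth_from_dispersion(omega, k):.2f} m")
```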

  6. Estimation of time-dependent Hurst exponents with variational smoothing and application to forecasting foreign exchange rates

    NASA Astrophysics Data System (ADS)

    Garcin, Matthieu

    2017-10-01

    Hurst exponents depict the long memory of a time series. For human-dependent phenomena, as in finance, this feature may vary over time, which justifies modelling the dynamics by multifractional Brownian motions, consistent with time-dependent Hurst exponents. We improve the existing literature on estimating time-dependent Hurst exponents by proposing a smooth estimate obtained by variational calculus. This method is very general and not restricted to the Hurst framework alone. It is globally more accurate and simpler than other existing non-parametric estimation techniques. Besides, in the field of Hurst exponents, it makes it possible to make forecasts based on the estimated multifractional Brownian motion. The application to high-frequency foreign exchange markets (GBP, CHF, SEK, USD, CAD, AUD, JPY, CNY and SGD, all against EUR) shows significantly good forecasts. When the Hurst exponent is higher than 0.5, which indicates a long-memory feature, the accuracy is higher.

  7. Force estimation from OCT volumes using 3D CNNs.

    PubMed

    Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander

    2018-07-01

    Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicon tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.

  8. Local Estimators for Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh

    2011-01-01

    A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained by a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimation for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurement available to each estimator.

  9. Quadratic Zeeman effect in hydrogen Rydberg states: Rigorous error estimates for energy eigenvalues, energy eigenfunctions, and oscillator strengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falsaperla, P.; Fonte, G.

    1994-10-01

    A variational method, based on some results due to T. Kato [J. Phys. Soc. Jpn. 4, 334 (1949)] and previously discussed, is here applied to the hydrogen atom in uniform magnetic fields of the order of a tesla in order to calculate, with a rigorous error estimate, energy eigenvalues, energy eigenfunctions, and oscillator strengths relative to Rydberg states up to just below the field-free ionization threshold. Making use of a basis (parabolic Sturmian basis) with a size varying from 990 up to 5050, we obtain, over the energy range of −190 to −24 cm⁻¹, all of the eigenvalues and a good part of the oscillator strengths with remarkable accuracy. This accuracy, however, decreases with increasing excitation energy and, thus, above ~−24 cm⁻¹, we obtain results of good accuracy only for eigenvalues ranging up to ~−12 cm⁻¹.

  10. 19 CFR 10.605 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-Central America-United States Free Trade Agreement Rules of Origin § 10.605 Goods classifiable as goods... 19 Customs Duties 1 2010-04-01 2010-04-01 false Goods classifiable as goods put up in sets. 10.605 Section 10.605 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY...

  11. Mapping to Estimate Health-State Utility from Non-Preference-Based Outcome Measures: An ISPOR Good Practices for Outcomes Research Task Force Report.

    PubMed

    Wailoo, Allan J; Hernandez-Alava, Monica; Manca, Andrea; Mejia, Aurelio; Ray, Joshua; Crawford, Bruce; Botteman, Marc; Busschbach, Jan

    2017-01-01

    Economic evaluation conducted in terms of cost per quality-adjusted life-year (QALY) provides information that decision makers find useful in many parts of the world. Ideally, clinical studies designed to assess the effectiveness of health technologies would include outcome measures that are directly linked to health utility to calculate QALYs. Often this does not happen, and even when it does, clinical studies may be insufficient for a cost-utility assessment. Mapping can solve this problem. It uses an additional data set to estimate the relationship between outcomes measured in clinical studies and health utility. This bridges the evidence gap between available evidence on the effect of a health technology in one metric and the requirement for decision makers to express it in a different one (QALYs). In 2014, ISPOR established a Good Practices for Outcome Research Task Force for mapping studies. This task force report provides recommendations to analysts undertaking mapping studies, those that use the results in cost-utility analysis, and those that need to critically review such studies. The recommendations cover all areas of mapping practice: the selection of data sets for the mapping estimation, model selection and performance assessment, reporting standards, and the use of results including the appropriate reflection of variability and uncertainty. This report is unique because it takes an international perspective, is comprehensive in its coverage of the aspects of mapping practice, and reflects the current state of the art.

  12. Goodness-of-Fit Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  13. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder, with no compression performance degradation, at least in theory. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistic between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., with sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods. PMID:23750314

  14. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder, with no compression performance degradation, at least in theory. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistic between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., with sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  15. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder, with no compression performance degradation, at least in theory. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistic between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., with sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  16. 21 CFR 1315.34 - Obtaining an import quota.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Obtaining an import quota. 1315.34 Section 1315.34 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE IMPORTATION AND PRODUCTION QUOTAS... imports, the estimated medical, scientific, and industrial needs of the United States, the establishment...

  17. Spectrum-based estimators of the bivariate Hurst exponent

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2014-12-01

    We discuss two alternative spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. As the spectrum-based estimators depend on the part of the spectrum taken into consideration during estimation, a simulation study showing the performance of the estimators under a varying bandwidth parameter, as well as under varying correlation between the processes and varying process specifications, is provided. These estimators are less biased than the existing averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time-domain estimators.
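
    For orientation, a univariate cousin of these estimators is easy to sketch: a GPH-style log-periodogram regression whose low-frequency slope estimates −2d, with H = d + 1/2. The bivariate versions discussed above generalize this idea to the cross-periodogram; the bandwidth rule below is a common heuristic, not the paper's choice.

```python
# Hedged sketch: log-periodogram (GPH-style) estimation of the Hurst exponent.
import numpy as np

def hurst_log_periodogram(x, m=None):
    n = len(x)
    m = m or int(n ** 0.5)                  # bandwidth: number of low frequencies
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1 : m + 1]) ** 2 / (2.0 * np.pi * n)
    slope, _ = np.polyfit(np.log(freqs), np.log(I), 1)  # slope ~ -2d
    d = -slope / 2.0
    return d + 0.5

# Sanity check on white noise, whose Hurst exponent is 0.5.
x = np.random.default_rng(5).standard_normal(4096)
print(f"estimated H for white noise: {hurst_log_periodogram(x):.2f}")
```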

  18. Inverse Kinematics for Upper Limb Compound Movement Estimation in Exoskeleton-Assisted Rehabilitation.

    PubMed

    Cortés, Camilo; de Los Reyes-Guzmán, Ana; Scorza, Davide; Bertelsen, Álvaro; Carrasco, Eduardo; Gil-Agudo, Ángel; Ruiz-Salguero, Oscar; Flórez, Julián

    2016-01-01

    Robot-Assisted Rehabilitation (RAR) is relevant for treating patients affected by nervous system injuries (e.g., stroke and spinal cord injury). The accurate estimation of the joint angles of the patient's limbs in RAR is critical to assess patient improvement. The prevalent, economical method to estimate the patient posture in Exoskeleton-based RAR is to approximate the limb joint angles with those of the Exoskeleton. This approximation is rough, since their kinematic structures differ. Motion capture systems (MOCAPs) can improve the estimates, at the expense of a considerable overload of the therapy setup. Alternatively, the Extended Inverse Kinematics Posture Estimation (EIKPE) computational method models the limb and the Exoskeleton as differing parallel kinematic chains. EIKPE has been tested with single-DOF movements of the wrist and elbow joints. This paper presents the assessment of EIKPE with elbow-shoulder compound movements (i.e., object prehension). Ground truth for the assessment is obtained from an optical MOCAP (not intended for the treatment stage). The assessment shows EIKPE rendering a good numerical approximation of the actual posture during the compound movement execution, especially for the shoulder joint angles. This work opens the horizon for clinical studies with patient groups, Exoskeleton models, and movement types.

  19. Inverse Kinematics for Upper Limb Compound Movement Estimation in Exoskeleton-Assisted Rehabilitation

    PubMed Central

    Cortés, Camilo; de los Reyes-Guzmán, Ana; Scorza, Davide; Bertelsen, Álvaro; Carrasco, Eduardo; Gil-Agudo, Ángel; Ruiz-Salguero, Oscar; Flórez, Julián

    2016-01-01

    Robot-Assisted Rehabilitation (RAR) is relevant for treating patients affected by nervous system injuries (e.g., stroke and spinal cord injury). The accurate estimation of the joint angles of the patient's limbs in RAR is critical to assess patient improvement. The prevalent, economical method to estimate the patient posture in Exoskeleton-based RAR is to approximate the limb joint angles with those of the Exoskeleton. This approximation is rough, since their kinematic structures differ. Motion capture systems (MOCAPs) can improve the estimates, at the expense of a considerable overload of the therapy setup. Alternatively, the Extended Inverse Kinematics Posture Estimation (EIKPE) computational method models the limb and the Exoskeleton as differing parallel kinematic chains. EIKPE has been tested with single-DOF movements of the wrist and elbow joints. This paper presents the assessment of EIKPE with elbow-shoulder compound movements (i.e., object prehension). Ground truth for the assessment is obtained from an optical MOCAP (not intended for the treatment stage). The assessment shows EIKPE rendering a good numerical approximation of the actual posture during the compound movement execution, especially for the shoulder joint angles. This work opens the horizon for clinical studies with patient groups, Exoskeleton models, and movement types. PMID:27403420

  20. A Sensor Fusion Method Based on an Integrated Neural Network and Kalman Filter for Vehicle Roll Angle Estimation.

    PubMed

    Vargas-Meléndez, Leandro; Boada, Beatriz L; Boada, María Jesús L; Gauchía, Antonio; Díaz, Vicente

    2016-08-31

    This article presents a novel estimator based on sensor fusion, which combines a Neural Network (NN) with a Kalman filter in order to estimate the vehicle roll angle. The NN estimates a "pseudo-roll angle" through variables that are easily measured from Inertial Measurement Unit (IMU) sensors. An IMU is a device that is commonly used for vehicle motion detection, and its cost has decreased during recent years. The pseudo-roll angle is introduced into the Kalman filter in order to filter noise and minimize the variance of the norm and maximum estimation errors. The NN has been trained for J-turn maneuvers, double lane change maneuvers and lane change maneuvers at different speeds and road friction coefficients. The proposed method takes into account the vehicle non-linearities, thus yielding good roll angle estimation. Finally, the proposed estimator has been compared with one that uses the suspension deflections to obtain the pseudo-roll angle. Experimental results show the effectiveness of the proposed NN and Kalman filter-based estimator.
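
    The abstract does not spell out the filter equations; the following minimal scalar sketch (all noise covariances hypothetical) shows the fusion idea: predict the roll angle with the IMU roll rate, then correct it with the NN pseudo-roll treated as a noisy measurement:

      import numpy as np

      def fuse_roll(pseudo_roll, roll_rate, dt=0.01, q=1e-5, r=1e-2):
          """Scalar Kalman filter: gyro-rate prediction corrected by the
          NN 'pseudo-roll' measurement. q and r are hypothetical process
          and measurement noise covariances."""
          phi, p = pseudo_roll[0], 1.0
          estimates = []
          for z, w in zip(pseudo_roll, roll_rate):
              phi, p = phi + w * dt, p + q                 # predict
              k = p / (p + r)                              # Kalman gain
              phi, p = phi + k * (z - phi), (1 - k) * p    # correct
              estimates.append(phi)
          return np.array(estimates)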

  1. A Sensor Fusion Method Based on an Integrated Neural Network and Kalman Filter for Vehicle Roll Angle Estimation

    PubMed Central

    Vargas-Meléndez, Leandro; Boada, Beatriz L.; Boada, María Jesús L.; Gauchía, Antonio; Díaz, Vicente

    2016-01-01

    This article presents a novel estimator based on sensor fusion, which combines a Neural Network (NN) with a Kalman filter in order to estimate the vehicle roll angle. The NN estimates a “pseudo-roll angle” through variables that are easily measured from Inertial Measurement Unit (IMU) sensors. An IMU is a device that is commonly used for vehicle motion detection, and its cost has decreased during recent years. The pseudo-roll angle is introduced into the Kalman filter in order to filter noise and minimize the variance of the norm and maximum estimation errors. The NN has been trained for J-turn maneuvers, double lane change maneuvers and lane change maneuvers at different speeds and road friction coefficients. The proposed method takes into account the vehicle non-linearities, thus yielding good roll angle estimation. Finally, the proposed estimator has been compared with one that uses the suspension deflections to obtain the pseudo-roll angle. Experimental results show the effectiveness of the proposed NN and Kalman filter-based estimator. PMID:27589763

  2. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
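
    As a worked illustration of the sensitivity discussed above (all distributions and parameters hypothetical), a Monte Carlo sketch of the stress-strength reliability R = P(strength > stress), comparing a normal strength population against one with a nearly indistinguishable body but slightly heavier tails:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000_000
      stress = rng.normal(70, 5, n)
      strength_normal = rng.normal(100, 5, n)
      strength_heavy = 100 + 5 * rng.standard_t(10, n)  # near-normal body,
                                                        # heavier tails
      # Unreliability 1 - R = P(strength <= stress); a tail change that is
      # almost undetectable in the data can shift this estimate markedly
      print(1 - np.mean(strength_normal > stress))
      print(1 - np.mean(strength_heavy > stress))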

  3. Shelf life of packaged bakery goods--a review.

    PubMed

    Galić, K; Curić, D; Gabrić, D

    2009-05-01

    Packaging requirements for fresh bakery goods are often minimal, as many of the products are for immediate consumption. However, packaging can be an important factor in extending the shelf life of other cereal-based goods (toast, frozen products, biscuits, cakes, pastas). Some of the texture changes and flavor loss that manifest over the shelf life of a soft-baked good can usually be minimized or delayed by effective use of packaging materials. The gains in the extension of shelf life will be application specific. It is recognized that defining the shelf life of a food is a difficult task and is an area of intense research for food product development scientists (food technologists, microbiologists, packaging experts). Proper application of chemical kinetic principles to food quality loss allows for efficiently designing appropriate shelf-life tests and maximizing the useful information that can be obtained from the resulting data. In the development of any new food product, including reformulating, change of packaging, or storage/distribution condition (to penetrate into a new market), one important aspect is the knowledge of shelf life.

  4. Modeling Speed-Accuracy Tradeoff in Adaptive System for Practicing Estimation

    ERIC Educational Resources Information Center

    Nižnan, Juraj

    2015-01-01

    Estimation is useful in situations where an exact answer is not as important as a quick answer that is good enough. A web-based adaptive system for practicing estimates is currently being developed. We propose a simple model for estimating student's latent skill of estimation. This model combines a continuous measure of correctness and response…

  5. Good Agreements Make Good Friends

    PubMed Central

    Han, The Anh; Pereira, Luís Moniz; Santos, Francisco C.; Lenaerts, Tom

    2013-01-01

    When starting a new collaborative endeavor, it pays to establish upfront how strongly your partner commits to the common goal and what compensation can be expected in case the collaboration is violated. Diverse examples in biological and social contexts have demonstrated the pervasiveness of making prior agreements on posterior compensations, suggesting that this behavior could have been shaped by natural selection. Here, we analyze the evolutionary relevance of such a commitment strategy and relate it to the costly punishment strategy, where no prior agreements are made. We show that when the cost of arranging a commitment deal lies within certain limits, substantial levels of cooperation can be achieved. Moreover, these levels are higher than that achieved by simple costly punishment, especially when one insists on sharing the arrangement cost. Not only do we show that good agreements make good friends, agreements based on shared costs result in even better outcomes. PMID:24045873

  6. "Good mothering" or "good citizenship"?

    PubMed

    Porter, Maree; Kerridge, Ian H; Jordens, Christopher F C

    2012-03-01

    Umbilical cord blood banking is one of many biomedical innovations that confront pregnant women with new choices about what they should do to secure their own and their child's best interests. Many mothers can now choose to donate their baby's umbilical cord blood (UCB) to a public cord blood bank or pay to store it in a private cord blood bank. Donation to a public bank is widely regarded as an altruistic act of civic responsibility. Paying to store UCB may be regarded as a "unique opportunity" to provide "insurance" for the child's future. This paper reports findings from a survey of Australian women that investigated the decision to either donate or store UCB. We conclude that mothers are faced with competing discourses that force them to choose between being a "good mother" and fulfilling their role as a "good citizen." We discuss this finding with reference to the concept of value pluralism.

  7. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
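
    For concreteness, a minimal sketch of the ratio estimator described above, with the area of the sampling unit as the auxiliary variable (the counts, areas, and total area are hypothetical):

      import numpy as np

      def ratio_estimate(counts, areas, total_area):
          """Ratio estimator of the population total with unit area as
          the auxiliary variable: T_hat = (sum y / sum x) * X_total."""
          return counts.sum() / areas.sum() * total_area

      # Hypothetical sample of 10 units from a 400 km^2 study area;
      # the clumped counts mimic an aggregated pronghorn distribution
      counts = np.array([0, 12, 3, 0, 41, 7, 0, 0, 22, 5])
      areas = np.array([8.0, 10.0, 9.0, 7.0, 12.0, 10.0, 8.0, 9.0, 11.0, 10.0])
      print(ratio_estimate(counts, areas, 400.0))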

  8. Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.

    2012-01-01

    An algorithm for linear estimation of aerosol bulk properties such as particle volume, effective radius and complex refractive index from multiwavelength lidar measurements is presented. The approach uses the fact that the total aerosol concentration can be well approximated as a linear combination of aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without the need to derive the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties when compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that for data sets containing three aerosol backscattering and two extinction coefficients (the so-called 3β + 2α configuration) the uncertainties in the retrieval of particle volume and surface area are below 45% when random uncertainties in the input data are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on the regularization procedure. Both techniques were applied to the data measured by multiwavelength lidar at NASA/GSFC. The results obtained with both methods using the same observations are in good agreement. At the same time, the high speed of the retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations. To demonstrate the efficiency of the method, an extended time series of

  9. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background:  Nonrandomized studies typically cannot account for confounding from unmeasured factors.  Method:  A method is presented that exploits the recently-identified phenomenon of  “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors.  Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure.  Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results:  Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met.  Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations:  Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions:  To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is

  10. Methodology for Estimating ton-Miles of Goods Movements for U.S. Freight Mulitimodal Network System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling

    2013-01-01

    Ton-miles is a commonly used measure of freight transportation output. Estimation of ton-miles in the U.S. transportation system requires freight flow data at a disaggregated level (either link flows, path flows, or origin-destination flows between small geographic areas). However, the sheer magnitude of the freight data system, as well as industrial confidentiality concerns in Census surveys, limits the freight data that can be made available to the public. Through the years, the Center for Transportation Analysis (CTA) of the Oak Ridge National Laboratory (ORNL) has been working on the development of comprehensive national and regional freight databases and network flow models. One of the main products of this effort is the Freight Analysis Framework (FAF), a public database released by the ORNL. FAF provides to the general public a multidimensional matrix of freight flows (weight and dollar value) on the U.S. transportation system between states, major metropolitan areas, and remainder of states. Recently, the CTA research team has developed a methodology to estimate ton-miles by mode of transportation between the 2007 FAF regions. This paper describes the data disaggregation methodology. The method relies on the estimation of disaggregation factors that are related to measures of production, attractiveness and average shipment distances by mode of service. Production and attractiveness of counties are captured by the total employment payroll. Likely mileages for shipments between counties are calculated by using a geographic database, i.e., the CTA multimodal network system. Results of validation experiments demonstrate the validity of the method. Moreover, 2007 FAF ton-miles estimates are consistent with the major freight data programs for rail and water movements.

  11. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  12. Real-Time Parameter Estimation Method Applied to a MIMO Process and its Comparison with an Offline Identification Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk

    2009-01-12

    An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. The form of the system transfer function matrix elements is assumed to be known. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
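
    The pressure-level process and its transfer functions are not reproduced in the record; the following sketch shows the same gray-box idea on a hypothetical first-order element, estimating its parameters by least squares from input/output data:

      import numpy as np

      def fit_first_order(u, y, dt):
          """Least-squares fit of y_dot = -a*y + b*u, discretized as
          y[k+1] = (1 - a*dt)*y[k] + b*dt*u[k]."""
          phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
          theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
          return (1 - theta[0]) / dt, theta[1] / dt      # a, b

      # Synthetic data from a known plant (a=2, b=1), then re-identified
      rng = np.random.default_rng(1)
      dt, n = 0.01, 2000
      u = np.sign(np.sin(0.5 * np.arange(n) * dt))       # square-wave input
      y = np.zeros(n)
      for k in range(n - 1):
          y[k + 1] = y[k] + dt * (-2.0 * y[k] + 1.0 * u[k])
      y += 0.001 * rng.standard_normal(n)                # measurement noise
      print(fit_first_order(u, y, dt))                   # ~ (2.0, 1.0)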

  13. Single snapshot DOA estimation

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
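
    For the simplest single-target case, the deterministic maximum likelihood estimate from one snapshot reduces to a beamformer scan; a sketch for a half-wavelength uniform linear array (array size, noise level, and search grid are hypothetical):

      import numpy as np

      def dml_single_snapshot(x, n_sensors):
          """Single-source deterministic ML DOA from one snapshot on a
          half-wavelength ULA: maximize the projection of x onto the
          steering vector a(theta) over a dense angle grid."""
          n = np.arange(n_sensors)
          best, best_theta = -np.inf, None
          for theta in np.linspace(-90, 90, 1801):
              a = np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta)))
              power = np.abs(a.conj() @ x) ** 2 / n_sensors
              if power > best:
                  best, best_theta = power, theta
          return best_theta

      # One noisy snapshot from a source at 20 degrees, 8-element array
      rng = np.random.default_rng(2)
      m = 8
      a_true = np.exp(1j * np.pi * np.arange(m) * np.sin(np.deg2rad(20.0)))
      x = a_true + 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
      print(dml_single_snapshot(x, m))   # ~ 20.0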

  14. Good vaccination practice: it all starts with a good vaccine storage temperature.

    PubMed

    Vangroenweghe, Frédéric

    2017-01-01

    Recent introduction of strategies to reduce antibiotic use in food animal production implies an increased use of vaccines in order to prevent the economic impact of several important diseases in swine. Good Vaccination Practice (GVP) is an overall approach on the swine farm aiming to obtain maximal efficacy of vaccination through good storage, preparation and, finally, correct application to the target animals. In order to have a better insight into GVP on swine farms and the vaccine storage conditions, a survey on vaccination practices was performed at a farmers' fair, and temperatures in the vaccine storage refrigerators were measured during farm visits over a period of 1 year. The survey revealed that knowledge of GVP, such as vaccine storage and handling, needle management and injection location, could be improved. Less than 10% had a thermometer in their vaccine storage refrigerator at the moment of the visit. Temperature measurement revealed that only 71% of the measured refrigerators were in line with the recommended temperature range of +2 °C to +8 °C. Temperatures both below +2 °C and above +8 °C were registered during all seasons of the year. Compliance was lower during summer, with an average temperature of 9.2 °C, while only 43% of the measured temperatures were within the recommended range. The present study clearly showed the need for continuous education on GVP for swine veterinarians, swine farmers and their farm personnel in general, and on vaccine storage management in particular. In veterinary medicine, the correct storage of vaccines is crucial, since both too-low and too-high temperatures can damage specific vaccine types. Adjuvanted killed or subunit vaccines can be damaged by too-low temperatures below 0 °C (e.g., the structure of the aluminium hydroxide adjuvant), whereas lyophilized live vaccines are susceptible to heat damage by temperatures above +8 °C (e.g., loss of vaccine potency). In conclusion, knowledge and awareness of GVP

  15. Estimation of leaf area index and foliage clumping in deciduous forests using digital photography

    NASA Astrophysics Data System (ADS)

    Chianucci, Francesco; Cutini, Andrea

    2013-04-01

    Rapid, reliable and meaningful estimates of leaf area index (LAI) are essential to the characterization of forest ecosystems. In this contribution, the accuracy of both fisheye and non-fisheye digital photography for the estimation of forest leaf area in deciduous stands was evaluated. We compared digital hemispherical photography (DHP), the most widely used technique, which measures the gap fraction at multiple zenith angles, with methods that measure the gap fraction at a single zenith angle, namely 57.5-degree photography and cover photography (DCP). Comparisons with other gap fraction methods used to calculate LAI, such as canopy transmittance measurements from the AccuPAR ceptometer and the LAI-2000 Plant Canopy Analyzer (PCA), were also performed. LAI estimates from all these indirect methods were compared with direct measurements obtained by litter traps (LAILT). We applied these methods in 10 deciduous stands of Quercus cerris, Castanea sativa and Fagus sylvatica, the most common deciduous species in Italy, where LAILT ranged from 3.9 to 7.3. DHP and DCP provided good indirect estimates of LAILT and outperformed the other indirect methods. The DCP method provided estimates of crown porosity, crown cover, foliage cover and the clumping index at the zenith, but required assumptions about the light extinction coefficient at the zenith (k) to accurately estimate LAI. Cover photography provided good indirect estimates of LAI assuming a spherical leaf angle distribution, even though k appeared to decrease as LAI increased, thus affecting the accuracy of LAI estimates in DCP. In contrast, the accuracy of LAI estimates in DHP appeared insensitive to LAILT values, but the method was sensitive to photographic exposure and gamma correction and was more time-consuming than DCP. Foliage clumping was estimated from all the photographic methods by analyzing either gap size distribution (DCP) or gap fraction distribution (DHP). Foliage clumping was also calculated from PCA and
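
    The attraction of the 57.5-degree view is that the extinction coefficient there is close to 0.5 for any leaf angle distribution, so LAI follows from the measured gap fraction alone; a sketch of that inversion (clumping is ignored, so the result is an effective LAI):

      import numpy as np

      def lai_57(gap_fraction):
          """Effective LAI from the gap fraction P at a 57.5-degree view
          zenith angle, where the extinction coefficient G ~ 0.5 for any
          leaf angle distribution: LAI = -cos(57.5deg)/0.5 * ln(P),
          i.e. roughly -1.075 * ln(P)."""
          return -np.cos(np.deg2rad(57.5)) / 0.5 * np.log(gap_fraction)

      print(lai_57(0.01))   # dense canopy: LAI ~ 4.9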

  16. VizieR Online Data Catalog: GOODS-MUSIC sample: multicolour catalog (Grazian+, 2006)

    NASA Astrophysics Data System (ADS)

    Grazian, A.; Fontana, A.; de Santis, C.; Nonino, M.; Salimbeni, S.; Giallongo, E.; Cristiani, S.; Gallozzi, S.; Vanzella, E.

    2006-02-01

    The GOODS-MUSIC multi-wavelength catalog provides photometric and spectroscopic information for galaxies in the GOODS Southern field. It includes two U images obtained with the ESO 2.2m telescope and one U-band image from VLT-VIMOS, the ACS-HST images in four optical (B, V, i, z) bands, the VLT-ISAAC J, H, and Ks bands, as well as the Spitzer images at 3.5, 4.5, 5.8, and 8 micron. Most of these images have been made publicly available in the coadded version by the GOODS team, while the U-band data were retrieved in raw format and reduced by our team. We also collected all the available spectroscopic information from public spectroscopic surveys and cross-correlated the spectroscopic redshifts with our photometric catalog. For the unobserved fraction of the objects, we applied our photometric redshift code to obtain well-calibrated photometric redshifts. The final catalog is made up of 14847 objects, with at least 72 known stars, 68 AGNs, and 928 galaxies with spectroscopic redshift (668 galaxies with reliable redshift determination). (3 data files).

  17. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
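
    A minimal sketch of the three catch-rate estimators compared above, expanded to total catch by an effort estimate (the interview data and effort figure are invented for illustration):

      import numpy as np

      catch = np.array([0, 2, 1, 0, 5, 3])      # fish per interviewed trip
      hours = np.array([0.5, 4, 2, 1, 6, 3])    # trip lengths in hours

      rom = catch.sum() / hours.sum()                    # ratio of means
      mor = np.mean(catch / hours)                       # mean of ratios
      mor_excl = np.mean((catch / hours)[hours > 0.5])   # MOR, short trips
                                                         # (<= 0.5 h) excluded
      total_effort = 5000.0                              # angler-hours
      print(rom * total_effort, mor * total_effort, mor_excl * total_effort)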

  18. 77 FR 15187 - Released Rates of Motor Common Carriers of Household Goods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... available cargo-liability options on the written estimate form--the first form that a moving company...-goods freight forwarders. Finally, the Board established April 2, 2011, as the effective date for moving companies to comply with the changes outlined in the two decisions. These Board decisions are available on...

  19. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  20. Respiratory rate estimation from the built-in cameras of smartphones and tablets.

    PubMed

    Nam, Yunyoung; Lee, Jinseok; Chon, Ki H

    2014-04-01

    This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was found to be the best choice only for iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min but their accuracy degraded concomitantly with increased respiratory rates
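
    The AR, VFCDM, and CWT methods themselves are beyond a short sketch; as a simpler stand-in, the respiratory rate can be read off the low-frequency band of the camera signal's spectrum (the data are synthetic, and respiration is assumed to appear as baseline wander in the pulse signal):

      import numpy as np
      from scipy.signal import welch

      def breaths_per_min(ppg, fs):
          """Crude spectral respiratory-rate estimate: dominant frequency
          of the camera PPG signal in the 0.1-0.7 Hz respiratory band."""
          f, pxx = welch(ppg, fs=fs, nperseg=len(ppg))
          band = (f >= 0.1) & (f <= 0.7)
          return 60.0 * f[band][np.argmax(pxx[band])]

      # 60 s of synthetic PPG at 30 fps: 1.2 Hz pulse plus 0.25 Hz
      # (15 breaths/min) respiratory baseline wander
      fs = 30
      t = np.arange(0, 60, 1 / fs)
      ppg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 0.25 * t)
      print(breaths_per_min(ppg, fs))   # ~ 15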

  1. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model relating a power of the distance from an inspection point to the number of samples contained inside a ball of that radius, we estimate the goodness of fit. Then, by using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
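
    The paper's estimator fits a generalized linear model and then maximizes the likelihood; as a simplified stand-in built on the same geometric fact, the slope of log-count versus log-radius around a point estimates the local dimension:

      import numpy as np

      def local_dimension(data, point, k=50):
          """Slope of log N(r) vs log r around `point`: in d dimensions
          the number of samples inside a ball of radius r grows like r^d,
          so the regression slope estimates the local dimension."""
          r = np.sort(np.linalg.norm(data - point, axis=1))[:k]
          slope, _ = np.polyfit(np.log(r), np.log(np.arange(1, k + 1)), 1)
          return slope

      rng = np.random.default_rng(3)
      cloud = rng.standard_normal((5000, 3))       # intrinsically 3-D data
      print(local_dimension(cloud, np.zeros(3)))   # ~ 3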

  2. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.

  3. AGN Variability in the GOODS Fields

    NASA Astrophysics Data System (ADS)

    Sarajedini, Vicki

    2007-07-01

    Variability is a proven method to identify intrinsically faint active nuclei in galaxies found in deep HST surveys. We propose to extend our short-term variability study of the GOODS fields to include the more recent epochs obtained via supernova searches, increasing the overall time baseline from 6 months to 2.5 years. Based on typical AGN lightcurves, we expect to detect 70% more AGN by including these more recent epochs. Variability-selected AGN samples complement current X-ray and mid-IR surveys for AGN by providing unambiguous evidence of nuclear activity. Additionally, a significant number of variable nuclei are not associated with X-ray or mid-IR sources and would thus go undetected. With the increased time baseline, we will be able to construct the structure function (variability amplitude vs. time) for low-luminosity AGN to z ~ 1. The inclusion of the longer time interval will allow for better discrimination among the various models describing the nature of AGN variability. The variability survey will be compared against spectroscopically selected AGN from the Team Keck Redshift Survey of the GOODS-N and the upcoming Flamingos-II NIR survey of the GOODS-S. The high-resolution ACS images will be used to separate the AGN from the host galaxy light and study the morphology, size and environment of the host galaxy. These studies will address questions concerning the nature of low-luminosity AGN evolution and variability at z ~ 1.

  4. Behavioral Patterns in Special Education. Good Teaching Practices.

    PubMed

    Rodríguez-Dorta, Manuela; Borges, África

    2017-01-01

    Providing quality education means responding to the diversity in the classroom. The teacher is a key figure in responding to the various educational needs presented by students. Specifically, special education professionals are of great importance, as they are the ones who lend their support to regular classroom teachers and offer specialized educational assistance to students who require it. Therefore, special education is different from what takes place in the regular classroom, demanding greater commitment by the teacher. There are certain behaviors, considered good teaching practices, which teachers have always been connected with to achieve good teaching and good learning. To ensure that these teachers are carrying out their educational work properly, it is necessary to evaluate, which means having appropriate instruments. The Observational Protocol for Teaching Functions in Primary School and Special Education (PROFUNDO-EPE, v.3, in Spanish) allows one to capture behaviors from these professionals and behavioral patterns that correspond to good teaching practices. This study evaluates the behavior of two special education teachers who work with students from different educational stages and with different educational needs. It reveals that the analyzed teachers adapt their behavior to the needs and characteristics of their students, responding more adequately to the needs presented by the students and showing good teaching practices. The patterns obtained indicate that they offer support, help and clear guidelines to perform the tasks. They motivate students toward learning by providing positive feedback, and they check that students have properly assimilated the contents through questions or non-verbal supervision. Also, they provide a safe and reliable climate for learning.

  5. Analysis of percent density estimates from digital breast tomosynthesis projection images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.

    2007-03-01

    Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections, since the results would be independent of the reconstruction method. We estimated PD from MLO mammograms (PD_M) and from individual DBT projections (PD_T). We observed good agreement between PD_M and PD_T from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PD_T estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.
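
    Percent density itself is a simple ratio once dense tissue has been segmented; a sketch (the intensity threshold, which in practice is the hard part and comes from an operator or algorithm, is assumed given):

      import numpy as np

      def percent_density(image, threshold):
          """PD = percentage of breast-area pixels at or above the 'dense'
          intensity threshold; background pixels are assumed to be zero."""
          breast = image[image > 0]
          return 100.0 * np.mean(breast >= threshold)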

  6. New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction

    NASA Astrophysics Data System (ADS)

    Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.

    2017-12-01

    Estimates of mass change from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters like decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any postprocessing, the noise and errors in spherical harmonic solutions introduced very clear high-frequency components in the spatial domain. By removing these high-frequency components and preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage-corrected (forward-modeled) SH results.
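
    As a rough one-dimensional sketch of the idea (the study works on spatial fields, and the PyEMD package used here is an assumption, not something named by the authors): decompose a signal into intrinsic mode functions and drop the highest-frequency modes that carry the stripe-like noise:

      import numpy as np
      from PyEMD import EMD   # assumed third-party package ("EMD-signal")

      def emd_denoise(signal, n_drop=2):
          """Empirical-mode 'low-pass': discard the first (highest-
          frequency) IMFs and rebuild the signal from the remainder."""
          imfs = EMD().emd(signal)
          return imfs[n_drop:].sum(axis=0)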

  7. GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters

    NASA Astrophysics Data System (ADS)

    Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.

    2003-12-01

    The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which makes it possible to precisely define the high variability of tropospheric water vapor at different temporal and spatial scales. Only precise estimations of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, ...). First, IWV are estimated using different GPS processing strategies and the results are compared to radiosondes. The role of the reference frame and of the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it seems to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single-site gradients. IWV, gradients and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements and high-resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software was applied to the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.

  8. Malaria transmission rates estimated from serological data.

    PubMed Central

    Burattini, M. N.; Massad, E.; Coutinho, F. A.

    1993-01-01

    A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The transmission rates estimated were applied to a simple compartmental model in order to mimic the malaria transmission. The model has shown a good retrieving capacity for serological and parasite prevalence data. PMID:8270011
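
    The paper's model uses an age-dependent force of infection; a minimal constant-rate sketch of the same serological idea, fitting lambda in P(a) = 1 - exp(-lambda*a) to hypothetical age-stratified seroprevalence data:

      import numpy as np
      from scipy.optimize import curve_fit

      def seroprevalence(age, lam):
          """Simple catalytic model: a constant force of infection `lam`
          gives P(seropositive at age a) = 1 - exp(-lam * a)."""
          return 1.0 - np.exp(-lam * age)

      # Hypothetical age-stratified seroprevalence observations
      age = np.array([2, 5, 10, 15, 20, 30, 40])
      prev = np.array([0.09, 0.21, 0.38, 0.52, 0.63, 0.77, 0.86])
      (lam,), _ = curve_fit(seroprevalence, age, prev, p0=[0.05])
      print(lam)   # estimated force of infection per year (~0.05)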

  9. A new slit lamp-based technique for anterior chamber angle estimation.

    PubMed

    Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A

    2014-06-01

    To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlation between ACA values as determined by gonioscopy and SLACE were 0.81 (p < 0.001) and 0.79 (p < 0.001) for each slit lamp. Sensitivity values of 100 and 87.5% and specificity values of 75 and 81.2%, depending on the slit lamp used, were obtained for the SLACE technique as compared with gonioscopy (Spaeth classification). The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.

  10. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  11. Properties of young massive clusters obtained with different massive-star evolutionary models

    NASA Astrophysics Data System (ADS)

    Wofford, Aida; Charlot, Stéphane

    We undertake a comprehensive comparative test of seven widely used spectral synthesis models using multi-band HST photometry of a sample of eight YMCs in two galaxies. We provide a first quantitative estimate of the accuracies and uncertainties of new models, show the good progress of models in fitting high-quality observations, and highlight the need for further comprehensive comparative tests.

  12. Monetary and affective judgments of consumer goods: modes of evaluation matter.

    PubMed

    Seta, John J; Seta, Catherine E; McCormick, Michael; Gallagher, Ashleigh H

    2014-01-01

    Participants who evaluated 2 positively valued items separately reported more positive attraction (using affective and monetary measures) than those who evaluated the same two items as a unit. In Experiments 1-3, this separate/unitary evaluation effect was obtained when participants evaluated products that they were purchasing for a friend. Similar findings were obtained in Experiments 4 and 5 when we considered the amount participants were willing to spend to purchase insurance for items that they currently owned. The averaging/summation model was contrasted with several theoretical perspectives and implicated averaging and summation integration processes in how items are evaluated. The procedural and theoretical similarities and differences between this work and related research on unpacking, comparison processes, public goods, and price bundling are discussed. Overall, the results support the operation of integration processes and contribute to an understanding of how these processes influence the evaluation and valuation of private goods.

  13. Reparametrization-based estimation of genetic parameters in multi-trait animal model using Integrated Nested Laplace Approximation.

    PubMed

    Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J

    2016-02-01

    A novel reparametrization-based INLA approach, as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in the multivariate animal model, is presented. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of multivariate animal models using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of the multivariate animal model. Immediate benefits are: (1) avoiding the difficulty of finding good starting values for the analysis, which can be a problem, for example, in Restricted Maximum Likelihood (REML); (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. The slight drawback is that priors for covariance matrices are assigned to elements of the Cholesky factor, and not directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods like the MCMC and REML approaches. We also present results obtained from simulated data sets with replicates and from field data in rice.
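
    The essence of the reparametrization is that an unconstrained parameter vector is mapped through a modified Cholesky factor, so every parameter value corresponds to a valid (positive-definite) covariance matrix; a minimal sketch:

      import numpy as np

      def vector_to_cov(theta, dim):
          """Map an unconstrained vector to a covariance matrix via a
          modified (log-)Cholesky factor: the diagonal entries of the
          factor are exponentiated, so any theta yields a symmetric
          positive-definite matrix."""
          L = np.zeros((dim, dim))
          L[np.tril_indices(dim)] = theta
          L[np.diag_indices(dim)] = np.exp(np.diag(L))
          return L @ L.T

      print(vector_to_cov(np.array([0.1, 0.5, -0.2]), 2))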

  14. Tropical forest plantation biomass estimation using RADARSAT-SAR and TM data of south china

    NASA Astrophysics Data System (ADS)

    Wang, Chenli; Niu, Zheng; Gu, Xiaoping; Guo, Zhixing; Cong, Pifu

    2005-10-01

    Forest biomass is one of the most important parameters for global carbon stock models, yet it can only be estimated with great uncertainty. Remote sensing, especially SAR data, offers the possibility of providing relatively accurate forest biomass estimates at a lower cost than inventory when studying tropical forests. The goal of this research was to compare the sensitivity of forest biomass to Landsat TM and RADARSAT-SAR data and to assess the efficiency of NDVI, EVI and other vegetation indices in studying forest biomass, based on field survey data and GIS in south China. Based on vegetation indices and factor analysis, multiple regression models and neural networks were developed for biomass estimation for each species of the plantation. For each species, the better relationship between predicted biomass and that measured in the field survey was obtained with a neural network developed for the species. The relationship between predicted and measured biomass derived from vegetation indices differed between species. This study concludes that single bands and many vegetation indices are weakly correlated with the selected forest biomass. The RADARSAT-SAR backscatter coefficient has a relatively good logarithmic correlation with forest biomass, but neither TM spectral bands nor vegetation indices alone are sufficient to establish an efficient model for biomass estimation, due to the saturation of bands and vegetation indices; multiple regression models that consist of spectral and environmental variables improve biomass estimation performance. Compared with TM, a relatively good estimation result can be achieved with RADARSAT-SAR, but both had limitations in tropical forest biomass estimation. The estimation results obtained are not accurate enough for forest management purposes at the forest stand level. However, the approximate volume estimates derived by the method can be useful in areas where no other forest information is available. Therefore, this paper provides a better

  15. Estimating the vibration level of an L-shaped beam using power flow techniques

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.
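
    The record does not reproduce the mobility expression. For two structures coupled at a single point, one common form of the time-averaged transmitted power (a sketch under that single-point-coupling assumption; F is the excitation force, Y_cs the source transfer mobility between the excitation point and the joint, and Y_s, Y_r the input mobilities of the source and receiver at the joint) is

      P_t = \tfrac{1}{2}\,|F|^2\,\frac{|Y_{cs}|^2\,\Re\{Y_r\}}{|Y_s + Y_r|^2}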

  16. Sequential selection of economic good and action in medial frontal cortex of macaques during value-based decisions

    PubMed Central

    Chen, Xiaomo; Stuphorn, Veit

    2015-01-01

    Value-based decisions could rely either on the selection of desired economic goods or on the selection of the actions that will obtain the goods. We investigated this question by recording from the supplementary eye field (SEF) of monkeys during a gambling task that allowed us to distinguish chosen good from chosen action signals. Analysis of the individual neuron activity, as well as of the population state-space dynamic, showed that SEF encodes first the chosen gamble option (the desired economic good) and only ~100 ms later the saccade that will obtain it (the chosen action). The action selection is likely driven by inhibitory interactions between different SEF neurons. Our results suggest that during value-based decisions, the selection of economic goods precedes and guides the selection of actions. The two selection steps serve different functions and can therefore not compensate for each other, even when information guiding both processes is given simultaneously. DOI: http://dx.doi.org/10.7554/eLife.09418.001 PMID:26613409

  17. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
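
    A sketch of the main ingredients (the curvature-based bias term is omitted, and SciPy's dpss routine is used for the tapers): compute the eigenspectra for a candidate time-bandwidth product and jackknife over tapers for the variance of the log spectrum:

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_logspec(x, nw):
          """Thomson multitaper log-spectrum with K = 2NW - 1 tapers,
          plus a leave-one-taper-out jackknife variance estimate."""
          n, k = len(x), int(2 * nw - 1)
          tapers = dpss(n, nw, k)                       # (k, n) DPSS tapers
          eig = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
          log_s = np.log(eig.mean(axis=0))
          loo = np.log((eig.sum(axis=0) - eig) / (k - 1))
          var = (k - 1) / k * ((loo - loo.mean(axis=0)) ** 2).sum(axis=0)
          return log_s, var

      # Scan candidate time-bandwidth products; the full method would add
      # the estimated squared bias (from spectrum curvature) to `var` and
      # pick the NW minimizing the summed MSE
      rng = np.random.default_rng(4)
      x = np.sin(2 * np.pi * 0.1 * np.arange(1024)) + rng.standard_normal(1024)
      for nw in (2, 4, 8):
          log_s, var = multitaper_logspec(x, nw)
          print(nw, var.mean())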

  18. Size and shape of soil humic acids estimated by viscosity and molecular weight.

    PubMed

    Kawahigashi, Masayuki; Sumida, Hiroaki; Yamamoto, Kazuhiko

    2005-04-15

    Ultrafiltration fractions of three soil humic acids were characterized by viscometry and high-performance size-exclusion chromatography (HPSEC) in order to estimate shapes and hydrodynamic sizes. Intrinsic viscosities under given solute/solvent/temperature conditions were obtained by extrapolating the concentration dependence of reduced viscosities to zero concentration. Molecular mass (weight-average molecular weight (M_w) and number-average molecular weight (M_n)) and hydrodynamic radius (R_H) were determined by HPSEC using pullulan as calibrant. Values of M_w and M_n ranged from 15 to 118 × 10³ and from 9 to 50 × 10³ g mol⁻¹, respectively. Polydispersity, as indicated by M_w/M_n, increased with increasing filter size from 1.5 to 2.4. The hydrodynamic radii (R_H) ranged between 2.2 and 6.4 nm. For each humic acid, M_w and [η] were related. Mark-Houwink coefficients calculated on the basis of the M_w-[η] relationships suggested restricted flexible chains for two of the humic acids and a branched structure for the third humic acid. Those structures probably behave as hydrated sphere colloids in a good solvent. Hydrodynamic radii of fractions calculated from [η] using Einstein's equation, which is applicable to hydrated sphere colloids, ranged from 2.2 to 7.1 nm. These dimensions match the sizes of nanospaces on and between clay minerals and of micropores in soil particle aggregates. On the other hand, the good agreement of the R_H values obtained by applying Einstein's equation with those directly determined by HPSEC suggests that pullulan is a suitable calibrant for estimation of molecular mass and size of humic acids by HPSEC.
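
    For readers who want to reproduce the sphere-colloid size estimate, the sketch below (Python; the numbers are illustrative, not the paper's data) applies the Einstein relation for hydrated spheres, [η] = 2.5 N_A V_h / M, rearranged to R_H = (3 [η] M / (10 π N_A))^(1/3), and fits a Mark-Houwink exponent from a log-log regression of [η] against M_w.

        import numpy as np

        N_A = 6.022e23  # Avogadro's number, 1/mol

        def hydrodynamic_radius_nm(eta_ml_per_g, m_w):
            # Einstein relation for hydrated spheres: [eta] = 2.5 * N_A * V_h / M
            v_h = eta_ml_per_g * m_w / (2.5 * N_A)           # cm^3 per molecule
            return (3.0 * v_h / (4.0 * np.pi)) ** (1.0 / 3.0) * 1e7  # cm -> nm

        # illustrative fraction data: [eta] in mL/g, M_w in g/mol
        m_w = np.array([15e3, 40e3, 80e3, 118e3])
        eta = np.array([3.0, 4.5, 6.0, 7.5])

        a = np.polyfit(np.log(m_w), np.log(eta), 1)[0]       # Mark-Houwink exponent
        print(a, hydrodynamic_radius_nm(eta, m_w))           # sizes land in the nm range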

  19. Bayesian phylogenetic estimation of fossil ages.

    PubMed

    Drummond, Alexei J; Stadler, Tanja

    2016-07-19

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth-death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the 'morphological clock', and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue 'Dating species divergences using

  20. Bayesian phylogenetic estimation of fossil ages

    PubMed Central

    Drummond, Alexei J.; Stadler, Tanja

    2016-01-01

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth–death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the ‘morphological clock', and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue ‘Dating species divergences

  1. Uncertainty estimation of water levels for the Mitch flood event in Tegucigalpa

    NASA Astrophysics Data System (ADS)

    Fuentes Andino, D. C.; Halldin, S.; Lundin, L.; Xu, C.

    2012-12-01

    Hurricane Mitch in 1998 left a devastating flood in Tegucigalpa, the capital city of Honduras. Simulation of elevated water surfaces provides a good way to understand the hydraulic mechanism of large flood events. In this study the one-dimensional HEC-RAS model for steady flow conditions, together with the two-dimensional Lisflood-fp model, was used to estimate the water level for the Mitch event in the river reaches at Tegucigalpa. Parameter uncertainty of the models was investigated using the generalized likelihood uncertainty estimation (GLUE) framework. Because of the extremely large magnitude of the Mitch flood, no hydrometric measurements were taken during the event. However, post-event indirect measurements of discharge and observed water levels were obtained in previous work by JICA and the USGS. To overcome the lack of direct hydrometric measurement data, uncertainty in the discharge was estimated. Both models constrained the value of the channel roughness well, though more dispersion resulted for the floodplain value. Analysis of the data interaction showed a tradeoff between discharge at the outlet and floodplain roughness for the 1D model. The estimated discharge range at the outlet of the study area encompassed the value indirectly estimated by JICA; however, the indirect method used by the USGS overestimated the value. If behavioral parameter sets can reproduce water-surface levels well for past events such as Mitch, more reliable predictions for future events can be expected. The results acquired in this research will provide guidelines for dealing with the problem of modeling past floods for which no direct data were measured during the event, and for predicting future large events while taking uncertainty into account. The obtained range of the uncertain flood extent will be a useful outcome for decision makers.
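
    A minimal GLUE loop, with a toy water-level model standing in for HEC-RAS/Lisflood-fp (all function names and numbers below are hypothetical), looks like this:

        import numpy as np

        rng = np.random.default_rng(42)

        def toy_model(q_peak, n_floodplain):
            # hypothetical stand-in for a hydraulic model: water levels at 3 sections
            return q_peak ** 0.4 * (1.0 + 2.0 * n_floodplain) * np.array([0.9, 1.0, 1.1])

        observed = np.array([18.5, 20.1, 22.3])     # post-event surveyed levels (illustrative)

        # 1. Monte Carlo sampling of uncertain inputs: peak discharge and roughness
        q = rng.uniform(500.0, 3000.0, 20000)
        n = rng.uniform(0.03, 0.20, 20000)
        sims = np.array([toy_model(qi, ni) for qi, ni in zip(q, n)])

        # 2. informal likelihood and a behavioral threshold (best 5% kept here)
        rmse = np.sqrt(((sims - observed) ** 2).mean(axis=1))
        behavioral = rmse < np.quantile(rmse, 0.05)
        weights = 1.0 / rmse[behavioral] ** 2
        weights /= weights.sum()

        # 3. likelihood-weighted uncertainty bounds on the peak discharge
        order = np.argsort(q[behavioral])
        cdf = np.cumsum(weights[order])
        lo, hi = np.interp([0.05, 0.95], cdf, q[behavioral][order])
        print(f"90% GLUE bounds on peak discharge: {lo:.0f} - {hi:.0f} m^3/s")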

  2. Predicting critical temperatures of ionic and non-ionic fluids from thermophysical data obtained near the melting point.

    PubMed

    Weiss, Volker C

    2015-10-14

    In the correlation and prediction of thermophysical data of fluids based on a corresponding-states approach, the critical temperature Tc plays a central role. For some fluids, in particular ionic ones, however, the critical region is difficult or even impossible to access experimentally. For molten salts, Tc is on the order of 3000 K, which makes accurate measurements a challenging task. Room temperature ionic liquids (RTILs) decompose thermally between 400 K and 600 K due to their organic constituents; this range of temperatures is hundreds of degrees below recent estimates of their Tc. In both cases, reliable methods to deduce Tc based on extrapolations of experimental data recorded at much lower temperatures near the triple or melting points are needed and useful because the critical point influences the fluid's behavior in the entire liquid region. Here, we propose to employ the scaling approach leading to universal fluid behavior [Román et al., J. Chem. Phys. 123, 124512 (2005)] to derive a very simple expression that allows one to estimate Tc from the density of the liquid, the surface tension, or the enthalpy of vaporization measured in a very narrow range of low temperatures. We demonstrate the validity of the approach for simple and polar neutral fluids, for which Tc is known, and then use the methodology to obtain estimates of Tc for ionic fluids. When comparing these estimates to those reported in the literature, good agreement is found for RTILs, whereas the ones for the molten salts NaCl and KCl are lower than previous estimates by 10%. The coexistence curve for ionic fluids is found to be more adequately described by an effective exponent of βeff = 0.5 than by βeff = 0.33.
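
    The flavor of such an extrapolation can be shown with a Guggenheim-type surface-tension scaling, sigma(T) = sigma0 (1 - T/Tc)^(11/9); the 11/9 exponent is the classic corresponding-states choice, not the paper's exact expression, and the near-melting-point data below are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigma_model(T, sigma0, Tc):
            return sigma0 * (1.0 - T / Tc) ** (11.0 / 9.0)

        # illustrative near-melting-point data for a molten salt (T in K, sigma in mN/m)
        T = np.array([1100.0, 1150.0, 1200.0, 1250.0, 1300.0])
        sigma = np.array([114.0, 111.0, 108.0, 105.0, 102.0])

        popt, _ = curve_fit(sigma_model, T, sigma, p0=[200.0, 3000.0])
        print(f"extrapolated Tc ~ {popt[1]:.0f} K")   # on the order of 3000 K, as in the text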

  3. Estimating corresponding locations in ipsilateral breast tomosynthesis views

    NASA Astrophysics Data System (ADS)

    van Schie, Guido; Tanner, Christine; Karssemeijer, Nico

    2011-03-01

    To improve cancer detection in mammography, breast exams usually consist of two views per breast. To combine information from both views, radiologists and multiview computer-aided detection (CAD) systems need to match corresponding regions in the two views. In digital breast tomosynthesis (DBT), finding corresponding regions in ipsilateral volumes may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. In this study we developed a method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a mathematical transformation. First a compressed breast model is matched to the tomosynthesis view containing a point of interest. Then we decompress, rotate and compress again to estimate the location of the corresponding point in the ipsilateral view. In this study we use a simple elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. The model is matched to the volume by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation we annotated 181 landmarks in both views and applied our method to each location. Results show a median 3D distance between the actual location and estimated location of 1.5 cm; a good starting point for a feature based local search method to link lesions for a multiview CAD system. Half of the estimated locations were at most 1 slice away from the actual location, making our method useful as a tool in mammographic workstations to interactively find corresponding locations in ipsilateral tomosynthesis views.

  4. Precision and accuracy of age estimates obtained from anal fin spines, dorsal fin spines, and sagittal otoliths for known-age largemouth bass

    USGS Publications Warehouse

    Klein, Zachary B.; Bonvechio, Timothy F.; Bowen, Bryant R.; Quist, Michael C.

    2017-01-01

    Sagittal otoliths are the preferred aging structure for Micropterus spp. (black basses) in North America because of the accurate and precise results produced. Typically, fisheries managers are hesitant to use lethal aging techniques (e.g., otoliths) to age rare species or trophy-size fish, or when sampling small impoundments where populations are small. Therefore, we sought to evaluate the precision and accuracy of 2 non-lethal aging structures (i.e., anal fin spines, dorsal fin spines) in comparison to that of sagittal otoliths from known-age Micropterus salmoides (Largemouth Bass; n = 87) collected from the Ocmulgee Public Fishing Area, GA. Sagittal otoliths exhibited the highest concordance with true ages of all structures evaluated (coefficient of variation = 1.2; percent agreement = 91.9). Similarly, the low coefficient of variation (0.0) and high between-reader agreement (100%) indicate that age estimates obtained from sagittal otoliths were the most precise. Relatively high agreement between readers for anal fin spines (84%) and dorsal fin spines (81%) suggested the structures were relatively precise. However, age estimates from anal fin spines and dorsal fin spines exhibited low concordance with true ages. Although use of sagittal otoliths is a lethal technique, this method will likely remain the standard for aging Largemouth Bass and other similar black bass species.
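
    The precision metrics quoted here are straightforward to compute; the sketch below uses the standard fisheries conventions (mean per-fish CV in the sense of Chang 1982, and percent agreement) on hypothetical reader data.

        import numpy as np

        def aging_precision(reader1, reader2, true_age=None):
            a1, a2 = np.asarray(reader1, float), np.asarray(reader2, float)
            pair = np.stack([a1, a2])
            # mean per-fish coefficient of variation, expressed in percent
            cv = np.mean(pair.std(axis=0, ddof=1) / pair.mean(axis=0)) * 100.0
            out = {"CV%": cv, "reader agreement %": np.mean(a1 == a2) * 100.0}
            if true_age is not None:
                t = np.asarray(true_age, float)
                out["agreement with true age %"] = np.mean((a1 == t) & (a2 == t)) * 100.0
            return out

        # hypothetical ages for five known-age fish
        print(aging_precision([3, 4, 5, 6, 7], [3, 4, 5, 6, 8], true_age=[3, 4, 5, 6, 7]))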

  5. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in a few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.

  6. Estimation of Electrically-Evoked Knee Torque from Mechanomyography Using Support Vector Regression.

    PubMed

    Ibitoye, Morufu Olusola; Hamzaid, Nur Azah; Abdul Wahab, Ahmad Khairi; Hasnan, Nazirah; Olatunji, Sunday Olusanya; Davis, Glen M

    2016-07-19

    The difficulty of real-time muscle force or joint torque estimation during neuromuscular electrical stimulation (NMES) in physical therapy and exercise science has motivated recent research interest in torque estimation from other muscle characteristics. This study investigated the accuracy of a computational intelligence technique for estimating NMES-evoked knee extension torque based on the mechanomyographic signals (MMG) of contracting muscles that were recorded from eight healthy males. The knee torque was modelled via Support Vector Regression (SVR), chosen for its good generalization ability in related fields. Inputs to the proposed model were MMG amplitude characteristics, the level of electrical stimulation or contraction intensity, and knee angle. A Gaussian kernel function and its optimal parameters, identified from the best performance measure, were applied as the SVR kernel to build an effective knee torque estimation model. To train and test the model, the data were partitioned into training (70%) and testing (30%) subsets. The SVR estimation accuracy, based on the coefficient of determination (R²) between the actual and the estimated torque values, was up to 94% and 89% during the training and testing cases, with root mean square errors (RMSE) of 9.48 and 12.95, respectively. The knee torque estimations obtained using SVR modelling agreed well with the experimental data from an isokinetic dynamometer. These findings support the realization of a closed-loop NMES system for functional tasks using MMG as the feedback signal source and an SVR algorithm for joint torque estimation.
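
    A self-contained sketch of this modelling pipeline with scikit-learn follows; the synthetic torque function, feature ranges, and grid values are placeholders, not the study's data.

        import numpy as np
        from sklearn.model_selection import train_test_split, GridSearchCV
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import r2_score, mean_squared_error

        # illustrative feature matrix: [MMG amplitude, stimulation intensity, knee angle]
        rng = np.random.default_rng(0)
        X = rng.uniform([0.0, 10.0, 0.0], [2.0, 100.0, 90.0], size=(400, 3))
        torque = 0.5 * X[:, 0] * X[:, 1] * np.sin(np.radians(X[:, 2] + 30)) + rng.normal(0, 2, 400)

        X_tr, X_te, y_tr, y_te = train_test_split(X, torque, test_size=0.3, random_state=1)

        # RBF (Gaussian) kernel SVR; the grid search stands in for kernel-parameter tuning
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
        grid = GridSearchCV(model, {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 1.0]}, cv=5)
        grid.fit(X_tr, y_tr)

        pred = grid.predict(X_te)
        print("R^2 :", r2_score(y_te, pred))
        print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))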

  7. Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-05-01

    The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
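
    In outline, the Eulerian approach differentiates the velocity fields in time and space to form the material acceleration, then integrates grad(p) = -rho Du/Dt (viscous terms omitted here). A minimal sketch, assuming velocity arrays of shape (nt, ny, nx) and a simple spatial-marching integration where practical codes would solve a pressure Poisson equation:

        import numpy as np

        def material_acceleration(u, v, dx, dy, dt):
            # Du/Dt = du/dt + u du/dx + v du/dy (per component), central differences
            dudt = np.gradient(u, dt, axis=0)
            dvdt = np.gradient(v, dt, axis=0)
            dudx, dudy = np.gradient(u, dx, axis=2), np.gradient(u, dy, axis=1)
            dvdx, dvdy = np.gradient(v, dx, axis=2), np.gradient(v, dy, axis=1)
            return dudt + u * dudx + v * dudy, dvdt + u * dvdx + v * dvdy

        def pressure_from_acceleration(ax, ay, dx, dy, rho=1000.0):
            # one snapshot: integrate dp/dx = -rho*ax along the first row,
            # then dp/dy = -rho*ay down the columns (p = 0 at the first node)
            p = np.zeros_like(ax)
            p[0, 1:] = -rho * np.cumsum(0.5 * (ax[0, 1:] + ax[0, :-1])) * dx
            p[1:, :] = p[0, :] - rho * np.cumsum(0.5 * (ay[1:, :] + ay[:-1, :]), axis=0) * dy
            return p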

  8. Global Marine Productivity and Living-Phytoplankton Carbon Biomass Estimated from a Physiological Growth Model

    NASA Astrophysics Data System (ADS)

    Arteaga, L.; Pahlow, M.; Oschlies, A.

    2016-02-01

    Primary production by marine phytoplankton essentially drives the oceanic biological carbon pump. Global productivity estimates are commonly founded on chlorophyll-based primary production models. However, a major drawback of most of these models is that variations in chlorophyll concentration do not necessarily account for changes in phytoplankton biomass resulting from the physiological regulation of the chlorophyll-to-carbon ratio (Chl:C). Here we present phytoplankton production rates and surface phytoplankton C concentrations for the global ocean for 2005-2010, obtained by combining satellite Chl observations with a mechanistic model for the acclimation of phytoplankton stoichiometry to variations in nutrients, light and temperature. We compare our inferred phytoplankton C concentrations with an independent estimate of surface particulate organic carbon (POC) to identify for the first time the global contribution of living phytoplankton to total POC in the surface ocean. Our annual primary production (46 Pg C yr⁻¹) is in good agreement with other C-based model estimates obtained from satellite observations. We find that most of the oligotrophic surface ocean is dominated by living phytoplankton biomass (30-70% of total particulate carbon). Lower contributions are found in the tropical Pacific (10-30% phytoplankton) and the Southern Ocean (≈10%). Our method provides a novel analytical tool for identifying changes in marine plankton communities and carbon cycling.

  9. Behavioral Patterns in Special Education. Good Teaching Practices

    PubMed Central

    Rodríguez-Dorta, Manuela; Borges, África

    2017-01-01

    Providing quality education means responding to the diversity in the classroom. The teacher is a key figure in responding to the various educational needs presented by students. Specifically, special education professionals are of great importance as they are the ones who lend their support to regular classroom teachers and offer specialized educational assistance to students who require it. Therefore, special education is different from what takes place in the regular classroom, demanding greater commitment by the teacher. There are certain behaviors, considered good teaching practices, that have long been associated with achieving good teaching and good learning. Ensuring that these teachers carry out their educational work properly requires evaluation, and that means having appropriate instruments. The Observational Protocol for Teaching Functions in Primary School and Special Education (PROFUNDO-EPE, v.3, in Spanish) makes it possible to capture behaviors from these professionals and behavioral patterns that correspond to good teaching practices. This study evaluates the behavior of two special education teachers who work with students from different educational stages and with different educational needs. It reveals that the analyzed teachers adapt their behavior to the needs and characteristics of their students, responding adequately to the needs the students present and showing good teaching practices. The patterns obtained indicate that they offer support, help and clear guidelines to perform the tasks. They motivate students toward learning by providing positive feedback, and they check that students have properly assimilated the contents through questions or non-verbal supervision. Also, they provide a safe and reliable climate for learning. PMID:28512437

  10. Return on research investments: personal good versus public good

    NASA Astrophysics Data System (ADS)

    Fox, P. A.

    2017-12-01

    For some time, the outputs of publicly and privately funded research (i.e., what is produced), while necessary, have been far from sufficient when considering an overall return on (research) investment. At present, products such as peer-reviewed papers, websites, data, and software are recognized by funders on timescales related to research awards and reporting. From a consumer perspective, however, impact and value are determined at the time a product is discovered, accessed, assessed and used. As is often the case, the perspectives of producer and consumer communities can be distinct and may not intersect at all. We contrast personal good (i.e., credit, reputation) with public good (e.g., interest, leverage, exploitation, and more). This presentation will elaborate on both the metaphorical and ideological aspects of applying a "return on investment" frame to the topic of assessing "good".

  11. Development and validation of a two-dimensional fast-response flood estimation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R; Mcpherson, Timothy N; Burian, Steven J

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
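
    To make the numerical idea concrete, here is a minimal one-dimensional shallow-water solver with a Lax-Friedrichs flux on a dam-break test. It is an illustrative stand-in for the paper's two-dimensional upwind scheme, not the validated model itself.

        import numpy as np

        def shallow_water_1d(h, hu, dx, t_end, g=9.81, cfl=0.45):
            t = 0.0
            while t < t_end:
                u = hu / np.maximum(h, 1e-8)
                c = np.abs(u) + np.sqrt(g * h)
                dt = min(cfl * dx / c.max(), t_end - t)
                f1, f2 = hu, hu * u + 0.5 * g * h * h
                a = c.max()
                # Lax-Friedrichs interface fluxes for mass and momentum
                F1 = 0.5 * (f1[:-1] + f1[1:]) - 0.5 * a * (h[1:] - h[:-1])
                F2 = 0.5 * (f2[:-1] + f2[1:]) - 0.5 * a * (hu[1:] - hu[:-1])
                h[1:-1] -= dt / dx * (F1[1:] - F1[:-1])
                hu[1:-1] -= dt / dx * (F2[1:] - F2[:-1])
                t += dt
            return h, hu

        # dam-break test: 2 m of water upstream, 0.5 m downstream
        x = np.linspace(0.0, 100.0, 501)
        h = np.where(x < 50.0, 2.0, 0.5)
        hu = np.zeros_like(x)
        h, hu = shallow_water_1d(h, hu, dx=x[1] - x[0], t_end=5.0)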

  12. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    The aim was to develop a fast, patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.

  13. An Inverse Modeling Approach to Estimating Phytoplankton Pigment Concentrations from Phytoplankton Absorption Spectra

    NASA Technical Reports Server (NTRS)

    Moisan, John R.; Moisan, Tiffany A. H.; Linkswiler, Matthew A.

    2011-01-01

    Phytoplankton absorption spectra and High-Performance Liquid Chromatography (HPLC) pigment observations from the Eastern U.S. and global observations from NASA's SeaBASS archive are used in a linear inverse calculation to extract pigment-specific absorption spectra. Using these pigment-specific absorption spectra to reconstruct the phytoplankton absorption spectra results in high correlations at all visible wavelengths (r² from 0.83 to 0.98), and linear regressions (slopes ranging from 0.8 to 1.1). Higher correlations (r² from 0.75 to 1.00) are obtained in the visible portion of the spectra when the total phytoplankton absorption spectra are unpackaged by multiplying the entire spectra by a factor that sets the total absorption at 675 nm to that expected from absorption spectra reconstruction using measured pigment concentrations and laboratory-derived pigment-specific absorption spectra. The derived pigment-specific absorption spectra were further used with the total phytoplankton absorption spectra in a second linear inverse calculation to estimate the various phytoplankton HPLC pigments. A comparison between the estimated and measured pigment concentrations for the 18 pigment fields showed good correlations (r² greater than 0.5) for 7 pigments and very good correlations (r² greater than 0.7) for chlorophyll a and fucoxanthin. Higher correlations result when the analysis is carried out at more local geographic scales. The ability to estimate phytoplankton pigments using pigment-specific absorption spectra is critical for using hyperspectral inverse models to retrieve phytoplankton pigment concentrations and other Inherent Optical Properties (IOPs) from passive remote sensing observations.
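
    The two linear inversions can be sketched with non-negative least squares (scipy's nnls standing in for the authors' linear inverse calculation); the array shapes and names below are assumptions.

        import numpy as np
        from scipy.optimize import nnls

        def pigment_specific_spectra(C, A):
            # first inversion: given pigment concentrations C (n_samples x n_pigments)
            # and measured absorption A (n_samples x n_wavelengths), solve C @ S ~= A
            # for the pigment-specific spectra S, one wavelength at a time
            n_pig, n_wl = C.shape[1], A.shape[1]
            S = np.empty((n_pig, n_wl))
            for j in range(n_wl):
                S[:, j], _ = nnls(C, A[:, j])
            return S

        def pigments_from_spectrum(S, a_ph):
            # second inversion: pigment concentrations for one absorption spectrum
            c, _ = nnls(S.T, a_ph)
            return c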

  14. Exploring Intergenerational Discontinuity in Problem Behavior: Bad Parents with Good Children.

    PubMed

    Dong, Beidi; Krohn, Marvin D

    2015-04-01

    Using data from the Rochester Youth Development Study, a series of regression models is estimated on offspring problem behavior, with a focus on the interaction between parental history of delinquency and the parent-child relationship. Good parenting practices significantly interact with the particular shape of the parental propensity for offending over time, functioning as protective factors against problematic behaviors among those who are most at risk. The moderation effects vary slightly by the age of our subjects. Accordingly, it is important to distinguish the effects on children's outcomes of not only the level of parental delinquency at one point in time, but also the shape of the delinquency trajectory. Good parenting holds the hope of breaking the vicious cycle of intergenerational transmission of delinquency.

  15. Method for rapid estimation of scour at highway bridges based on limited site data

    USGS Publications Warehouse

    Holnbeck, S.R.; Parrett, Charles

    1997-01-01

    Limited site data were used to develop a method for rapid estimation of scour at highway bridges. The estimates can be obtained in a matter of hours rather than the several days required by more-detailed methods. Such a method is important because scour assessments are needed to identify scour-critical bridges throughout the United States. Using detailed scour-analysis methods and scour-prediction equations recommended by the Federal Highway Administration, the U.S. Geological Survey, in cooperation with the Montana Department of Transportation, obtained contraction, pier, and abutment scour-depth data for sites from 10 States. The data were used to develop relations between scour depth and hydraulic variables that can be rapidly measured in the field. Relations between scour depth and hydraulic variables, in the form of envelope curves, were based on simpler forms of detailed scour-prediction equations. To apply the rapid-estimation method, a 100-year recurrence interval peak discharge is determined, and bridge-length data are used in the field with graphs relating unit discharge to velocity and velocity to bridge backwater as a basis for estimating flow depths and other hydraulic variables that can then be applied using the envelope curves. The method was tested in the field. Results showed good agreement among the individuals involved and with results from more-detailed methods. Although useful for identifying potentially scour-critical bridges, the method does not replace more-detailed methods used for design purposes. Use of the rapid-estimation method should be limited to individuals having experience in bridge scour, hydraulics, and flood hydrology, and some training in use of the method.

  16. Impacts of Good Practices on Cognitive Development, Learning Orientations, and Graduate Degree Plans during the First Year of College

    ERIC Educational Resources Information Center

    Cruce, Ty M.; Wolniak, Gregory C.; Seifert, Tricia A.; Pascarella, Ernest T.

    2006-01-01

    This study estimated separately the unique effects of three dimensions of good practice and the global effects of a composite measure of good practices on the cognitive development, orientations to learning, and educational aspirations of students during their first year of college. Analyses of longitudinal data from a representative sample of…

  17. Contour-based object orientation estimation

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel

    2016-04-01

    Real-time object orientation estimation is a current problem in computer vision. In this paper we propose an approach to estimate the orientation of objects lacking axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere. The points are distributed on the sphere according to the geosphere principle, which minimizes the training image set. The gathered training image set is used for calculating descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error less than 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
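
    The viewpoint-sampling step can be emulated with a Fibonacci sphere, which also gives a near-even distribution of directions; the paper uses a geosphere construction, so this is a hedged substitute for illustration.

        import numpy as np

        def fibonacci_sphere(n):
            # near-even viewpoint distribution on the unit sphere
            i = np.arange(n)
            phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increments
            z = 1.0 - 2.0 * (i + 0.5) / n
            r = np.sqrt(1.0 - z * z)
            return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

        # learning: render the 3D model from each viewpoint and store one descriptor
        # per view; estimation: match the observed image's descriptor to the stored set
        views = fibonacci_sphere(642)   # comparable in size to a subdivided icosahedron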

  18. Conducting a Discrete-Choice Experiment Study Following Recommendations for Good Research Practices: An Application for Eliciting Patient Preferences for Diabetes Treatments.

    PubMed

    Janssen, Ellen M; Hauber, A Brett; Bridges, John F P

    2018-01-01

    The aim was to consolidate and illustrate good research practices in health care as applied to the design and reporting of a study measuring patient preferences for type 2 diabetes mellitus medications, given recent methodological advances in stated-preference methods. The International Society for Pharmacoeconomics and Outcomes Research good research practices and other recommendations were used to conduct a discrete-choice experiment. Members of a US online panel with type 2 diabetes mellitus completed a Web-enabled, self-administered survey that elicited choices between treatment pairs with six attributes at three possible levels each. A D-efficient experimental design blocked 48 choice tasks into three 16-task surveys. Preference estimates were obtained using mixed logit estimation and were used to calculate choice probabilities. A total of 552 participants (51% males) completed the survey. Avoiding 90 minutes of nausea was valued the highest (mean -10.00; 95% confidence interval [CI] -10.53 to -9.47). Participants wanted to avoid low blood glucose during the day and/or night (mean -3.87; 95% CI -4.32 to -3.42) or one pill and one injection per day (mean -7.04; 95% CI -7.63 to -6.45). Participants preferred stable blood glucose 6 d/wk (mean 4.63; 95% CI 4.15 to 5.12) and a 1% decrease in glycated hemoglobin (mean 5.74; 95% CI 5.22 to 6.25). If cost increased by $1, the probability that a treatment profile would be chosen decreased by 1%. These results are consistent with the idea that people have strong preferences for immediate consequences of medication. Despite efforts to produce recommendations, ambiguity surrounding good practices remains and various judgments need to be made when conducting stated-preference studies. To ensure transparency, these judgments should be described and justified. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
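
    Given estimated preference weights, choice probabilities between two profiles follow from the logit formula P(A) = exp(V_A) / (exp(V_A) + exp(V_B)). The sketch below plugs in the mean estimates quoted above purely for illustration; the attribute coding is simplified and the published model is a mixed logit, so the numbers are not reproductions of the study's predictions.

        import numpy as np

        # mean preference weights quoted in the abstract (utility units)
        w = {"nausea90": -10.00, "hypo": -3.87, "pill_plus_injection": -7.04,
             "stable6d": 4.63, "hba1c_drop1pct": 5.74}

        def utility(profile):
            return sum(w[k] for k in profile)

        # illustrative profiles: A avoids nausea but requires a pill plus an injection
        v_a = utility(["pill_plus_injection", "stable6d", "hba1c_drop1pct"])
        v_b = utility(["nausea90", "stable6d", "hba1c_drop1pct"])
        p_a = np.exp(v_a) / (np.exp(v_a) + np.exp(v_b))
        print(f"P(choose A) = {p_a:.3f}")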

  19. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on the results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
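
    The Gaussian approximation is easy to check numerically: simulate two coherent records, accumulate cross-spectral estimates over many trials, and inspect the sampling distribution (scipy.signal.csd; the signal parameters below are arbitrary).

        import numpy as np
        from scipy.signal import csd

        rng = np.random.default_rng(7)
        n, fs, trials = 4096, 2.0, 500
        est = []
        for _ in range(trials):
            common = rng.standard_normal(n)
            x = common + 0.5 * rng.standard_normal(n)
            y = common + 0.5 * rng.standard_normal(n)
            f, sxy = csd(x, y, fs=fs, nperseg=256)
            est.append(sxy.real.sum() * (f[1] - f[0]))   # total co-spectrum estimate
        est = np.array(est)
        # for even modest degrees of freedom the sampling distribution is close to
        # Gaussian, consistent with the approximation noted in the abstract
        print("mean, std of Sxy(tot):", est.mean(), est.std())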

  20. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
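
    A compact version of the transformed-linear estimator follows, here using Duan's smearing factor as the retransformation bias correction (one common choice, standing in for the specific correction the report evaluates), together with the flow-duration load integral.

        import numpy as np

        def fit_rating_curve(Q, C):
            # log-log rating curve C = a * Q^b fitted by least squares, with
            # Duan's (1983) smearing estimator as the bias correction
            b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
            resid = np.log(C) - (log_a + b * np.log(Q))
            smear = np.exp(resid).mean()
            return np.exp(log_a) * smear, b

        def mean_load_flow_duration(a, b, q_quantiles, weights):
            # flow-duration, rating-curve method: load = concentration x discharge,
            # integrated over the flow-duration curve (unit constants omitted)
            return np.sum(weights * a * q_quantiles ** b * q_quantiles)

        # synthetic check: recover a known rating curve from noisy data
        rng = np.random.default_rng(3)
        Q = rng.lognormal(3.0, 1.0, 200)
        C = 0.5 * Q ** 1.4 * rng.lognormal(0.0, 0.4, 200)
        print(fit_rating_curve(Q, C))   # roughly (0.5 x smearing factor, 1.4)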

  1. "Everyone just ate good food": 'Good food' in Islamabad, Pakistan.

    PubMed

    Hasnain, Saher

    2018-08-01

    In recent years, consumption of alternatively produced foods has increased in popularity in response to the deleterious effects of rapidly globalising and industrialised food systems. Concerns over food safety in relation to these changes may result from elevated levels of risk and changing perceptions associated with food production practices. This paper explores how the middle class residents of Islamabad, Pakistan, use the concept of 'good food' to reconnect themselves with nature, changing food systems, and traditional values. The paper also demonstrates how these ideas relate to those of organic, local, and traditional food consumption as currently used in more economically developed states in the Global North. Through research based on participant observation and semi-structured interviews, this paper illustrates that besides price and convenience, purity, freshness, association with specific places, and 'Pakistani-ness' were considered as the basis for making decisions about 'good food'. The results show that while individuals are aware of and have some access to imported organic and local food, they prefer using holistic and culturally informed concepts of 'good food' instead that reconnect them with food systems. I argue that through conceptualisations of 'good food', the urban middle class in Islamabad is reducing their disconnection and dis-embeddedness from nature, the food systems, and their social identities. The paper contributes to literature on food anxieties, reconnections in food geography, and 'good food' perceptions, with a focus on Pakistan. Copyright © 2018. Published by Elsevier Ltd.

  2. Circumpolar Estimates of Isopycnal Mixing in the ACC from Argo Floats

    NASA Astrophysics Data System (ADS)

    Roach, C. J.; Balwada, D.; Speer, K. G.

    2015-12-01

    There are few direct observations of cross-stream isopycnal mixing in the interior of the Southern Ocean, yet such measurements are needed to determine the role of eddies transporting properties across the ACC, and they are key to progress toward testing theories of meridional overturning. In light of this we examine whether it is possible to obtain estimates of mixing from Argo float trajectories. We divided the Southern Ocean into overlapping 15° longitude bins before estimating mixing. Resulting diffusivities ranged from 300 to 3000 m2s-1, with peaks corresponding to the Scotia Sea and the Kerguelen and Campbell Plateaus. Comparison of our diffusivities with previous regional studies demonstrated good agreement. Tests of the methodology in the DIMES region found that mixing from Argo floats agreed closely with mixing from RAFOS floats. To further test the method we used the Southern Ocean State Estimate velocity fields to advect particles with Argo and RAFOS float-like behaviours. Stirring estimates from the particles agreed well with each other in the Kerguelen Island region, South Pacific and Scotia Sea, despite the differences in the imposed behaviour. Finally, these estimates were compared to the mixing length suppression theory presented in Ferrari and Nikurashin 2010. This theory quantifies horizontal diffusivity with a mixing-length argument similar to Prandtl (1925), but with the mixing length suppressed in the presence of mean flows relative to eddy phase speeds. Our results suggest that the theory can explain both the structure and magnitude of mixing using mean flow data. An exception is near the Kerguelen and Campbell Plateaus, where theory underestimates mixing relative to our results.
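
    The float-based estimate rests on the diffusive growth of cross-stream dispersion, K = (1/2) d<y'^2>/dt; a minimal sketch, with the array shapes assumed, follows.

        import numpy as np

        def single_particle_diffusivity(y, dt):
            # y: cross-stream displacements, shape (n_floats, n_times), metres;
            # dt: sampling interval in seconds
            anomaly = y - y.mean(axis=0, keepdims=True)
            dispersion = (anomaly ** 2).mean(axis=0)
            # slope of dispersion vs time over the (assumed) diffusive late-time range
            slope = np.polyfit(np.arange(y.shape[1]) * dt, dispersion, 1)[0]
            return 0.5 * slope   # m^2/s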

  3. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can be used to verify parametric source modelling results. Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking
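
    The Tikhonov-regularized L2-MNE itself is a one-liner in matrix form, j = L^T (L L^T + lambda^2 I)^(-1) y; in the sketch below lambda^2 is set from an assumed SNR, as is common practice, so this is an illustration rather than the authors' pipeline.

        import numpy as np

        def minimum_norm_estimate(L, y, snr=3.0):
            # L: lead field (n_sensors x n_sources); y: measured data (n_sensors,)
            # identity noise covariance assumed; lambda^2 scaled from an assumed SNR
            n_sensors = L.shape[0]
            lam2 = np.trace(L @ L.T) / (n_sensors * snr ** 2)
            G = L @ L.T + lam2 * np.eye(n_sensors)
            return L.T @ np.linalg.solve(G, y)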

  4. Illness Mapping: a time and cost effective method to estimate healthcare data needed to establish community-based health insurance.

    PubMed

    Binnendijk, Erika; Gautham, Meenakshi; Koren, Ruth; Dror, David M

    2012-10-09

    Most healthcare spending in developing countries is private out-of-pocket. One explanation for low penetration of health insurance is that poorer individuals doubt their ability to enforce insurance contracts. Community-based health insurance schemes (CBHI) are a solution, but launching CBHI requires obtaining accurate local data on morbidity, healthcare utilization and other details to inform package design and pricing. We developed the "Illness Mapping" method (IM) for data collection (faster and cheaper than household surveys). IM is a modification of two non-interactive consensus group methods (Delphi and Nominal Group Technique) to operate as interactive methods. We elicited estimates from "Experts" in the target community on morbidity and healthcare utilization. Interaction between facilitator and experts became essential to bridge literacy constraints and to reach consensus. The study was conducted in Gaya District, Bihar (India) during April-June 2010. The intervention included the IM and a household survey (HHS). IM included 18 women's and 17 men's groups. The HHS was conducted in 50 villages with 1,000 randomly selected households (6,656 individuals). We found good agreement between the two methods on overall prevalence of illness (IM: 25.9% ±3.6; HHS: 31.4%) and on prevalence of acute (IM: 76.9%; HHS: 69.2%) and chronic illnesses (IM: 20.1%; HHS: 16.6%). We also found good agreement on incidence of deliveries (IM: 3.9% ±0.4; HHS: 3.9%), and on hospital deliveries (IM: 61.0% ± 5.4; HHS: 51.4%). For hospitalizations, we obtained a lower estimate from the IM (1.1%) than from the HHS (2.6%). The IM required less time and less person-power than a household survey, which translates into reduced costs. We have shown that our Illness Mapping method can be carried out at lower financial and human cost for sourcing essential local data, at acceptably accurate levels. In view of the good fit of results obtained, we assume that the method could work elsewhere

  5. Illness Mapping: a time and cost effective method to estimate healthcare data needed to establish community-based health insurance

    PubMed Central

    2012-01-01

    Background Most healthcare spending in developing countries is private out-of-pocket. One explanation for low penetration of health insurance is that poorer individuals doubt their ability to enforce insurance contracts. Community-based health insurance schemes (CBHI) are a solution, but launching CBHI requires obtaining accurate local data on morbidity, healthcare utilization and other details to inform package design and pricing. We developed the “Illness Mapping” method (IM) for data collection (faster and cheaper than household surveys). Methods IM is a modification of two non-interactive consensus group methods (Delphi and Nominal Group Technique) to operate as interactive methods. We elicited estimates from “Experts” in the target community on morbidity and healthcare utilization. Interaction between facilitator and experts became essential to bridge literacy constraints and to reach consensus. The study was conducted in Gaya District, Bihar (India) during April-June 2010. The intervention included the IM and a household survey (HHS). IM included 18 women’s and 17 men’s groups. The HHS was conducted in 50 villages with 1,000 randomly selected households (6,656 individuals). Results We found good agreement between the two methods on overall prevalence of illness (IM: 25.9% ±3.6; HHS: 31.4%) and on prevalence of acute (IM: 76.9%; HHS: 69.2%) and chronic illnesses (IM: 20.1%; HHS: 16.6%). We also found good agreement on incidence of deliveries (IM: 3.9% ±0.4; HHS: 3.9%), and on hospital deliveries (IM: 61.0% ± 5.4; HHS: 51.4%). For hospitalizations, we obtained a lower estimate from the IM (1.1%) than from the HHS (2.6%). The IM required less time and less person-power than a household survey, which translates into reduced costs. Conclusions We have shown that our Illness Mapping method can be carried out at lower financial and human cost for sourcing essential local data, at acceptably accurate levels. In view of the good fit of results

  6. Estimation of Enthalpy of Formation of Liquid Transition Metal Alloys: A Modified Prescription Based on Macroscopic Atom Model of Cohesion

    NASA Astrophysics Data System (ADS)

    Raju, Subramanian; Saibaba, Saroja

    2016-09-01

    The enthalpy of formation ΔH_f is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, since it is difficult to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating ΔH_f^L of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely electronegativity (φ^L) and bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting the surface tension, compressibility, and molar volume of a metallic liquid with the bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for φ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of ΔH_f^L for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of the mixing enthalpies of liquid alloys.
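
    Schematically, the Macroscopic Atom (Miedema) model builds the interfacial enthalpy from an electronegativity mismatch and a bonding-electron-density mismatch, ΔH ∝ -P(Δφ)² + Q(Δn^(1/3))². The sketch below shows only this qualitative core with illustrative constants; it omits the surface-concentration weighting and the liquid-specific parameter optimization described in the paper.

        def miedema_core(c_a, phi_a, phi_b, n_a, n_b, P=14.1, Q=132.5):
            # qualitative core: the electronegativity term lowers, and the
            # electron-density mismatch raises, the interfacial enthalpy
            # (constants and output scale are illustrative only)
            d_phi = phi_a - phi_b
            d_n13 = n_a ** (1.0 / 3.0) - n_b ** (1.0 / 3.0)
            return 2.0 * c_a * (1.0 - c_a) * (-P * d_phi ** 2 + Q * d_n13 ** 2)

        # e.g. a hypothetical equiatomic liquid alloy of two transition metals
        print(miedema_core(0.5, 4.65, 4.05, 5.55, 3.18))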

  7. Measuring surface topography by scanning electron microscopy. II. Analysis of three estimators of surface roughness in second dimension and third dimension.

    PubMed

    Bonetto, Rita Dominga; Ladaga, Juan Luis; Ponz, Ezequiel

    2006-04-01

    Scanning electron microscopy (SEM) is widely used in surface studies, and continuous efforts are made in the search for estimators of different surface characteristics. Using the variogram, we developed two such estimators to characterize surface roughness from the SEM image texture. One of the estimators is related to the crossover between the fractal region at low scale and the periodic region at high scale, whereas the other characterizes the periodic region. In this work, a full study of these estimators and of the fractal dimension in two dimensions (2D) and three dimensions (3D) was carried out for emery papers. We show that the fractal dimension obtained with only one image is good enough to characterize the surface roughness, because its behavior is similar to that obtained with 3D height data. We also show that the estimator that indicates the crossover is related to the minimum cell size in 2D and to the average particle size in 3D. The other estimator has different values for the three studied emery papers in 2D but lacks a clear meaning there, and its values are similar for the studied samples in 3D. Nevertheless, it indicates the tendency to form compound cells. The fractal dimension values from the variogram and from an area-versus-step log-log graph were studied with 3D data. The two methods yield different values, corresponding to different information from the samples.
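
    For a single height profile, the variogram-based fractal dimension can be sketched directly: gamma(h) ∝ h^(2H) at small lags, and D = 2 - H for a profile (D = 3 - H for a surface). The code below is a minimal 1D illustration, not the authors' 2D texture pipeline.

        import numpy as np

        def variogram_1d(profile, max_lag):
            # empirical variogram: gamma(h) = 0.5 * <(z(x+h) - z(x))^2>
            lags = np.arange(1, max_lag + 1)
            gamma = np.array([0.5 * np.mean((profile[h:] - profile[:-h]) ** 2) for h in lags])
            return lags, gamma

        def fractal_dimension(profile, fit_lags=10):
            # slope of log gamma vs log h at small lags gives 2H; D = 2 - H for a profile
            lags, gamma = variogram_1d(profile, fit_lags)
            slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
            return 2.0 - slope / 2.0

        z = np.cumsum(np.random.default_rng(1).standard_normal(4096))  # Brownian profile, H = 0.5
        print(fractal_dimension(z))  # ~1.5 for Brownian motion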

  8. Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models

    NASA Astrophysics Data System (ADS)

    Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.

    2018-02-01

    In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. Furthermore, our estimation algorithm is feasible for large data sets and has good convergence properties.
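
    Under the exact discretization of an Ornstein-Uhlenbeck process, X_{t+Δ} = μ + (X_t - μ)e^{-θΔ} + ε with Gaussian ε, so maximum likelihood reduces to an AR(1) regression in closed form. A minimal sketch (not the authors' full SV algorithm; the recovered process here is the latent state itself):

        import numpy as np

        def fit_ou(x, dt):
            # exact-discretization MLE for dX = theta*(mu - X) dt + sigma dW
            x0, x1 = x[:-1], x[1:]
            b, a = np.polyfit(x0, x1, 1)          # x1 ~= a + b*x0
            theta = -np.log(b) / dt
            mu = a / (1.0 - b)
            resid = x1 - (a + b * x0)
            sigma = resid.std(ddof=2) * np.sqrt(2.0 * theta / (1.0 - b ** 2))
            return theta, mu, sigma

        # simulate and recover known parameters as a sanity check
        rng = np.random.default_rng(0)
        dt, theta, mu, sigma = 1 / 252, 5.0, 0.2, 0.3
        x = [mu]
        for _ in range(5000):
            x.append(mu + (x[-1] - mu) * np.exp(-theta * dt)
                     + sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
                     * rng.standard_normal())
        print(fit_ou(np.array(x), dt))   # should be close to (5.0, 0.2, 0.3)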

  9. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Puliga, Michelangelo

    An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a big problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fit of the rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold while keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using the percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most of the analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of the goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
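
    One way to restore valid percentage points in the presence of rounded data is to build them by Monte Carlo, applying the same rounding to each simulated sample before refitting and retesting. The sketch below uses an Anderson-Darling statistic for concreteness (the cited work uses the Choulakian-Stephens statistics); the function names and the rounding grid are assumptions.

        import numpy as np
        from scipy.stats import genpareto

        def ad_statistic(data, c, scale):
            # Anderson-Darling statistic for a fitted GPD
            z = np.clip(np.sort(genpareto.cdf(data, c, scale=scale)), 1e-12, 1 - 1e-12)
            n = len(z)
            i = np.arange(1, n + 1)
            return -n - np.mean((2 * i - 1) * (np.log(z) + np.log1p(-z[::-1])))

        def mc_pvalue(excesses, n_sim=999, round_to=None, rng=None):
            # Monte Carlo percentage points that honour the rounding of the data:
            # simulated samples are rounded the same way (e.g. to 0.1 mm)
            rng = rng or np.random.default_rng()
            c, _, scale = genpareto.fit(excesses, floc=0.0)
            a2_obs = ad_statistic(excesses, c, scale)
            count = 0
            for _ in range(n_sim):
                sim = genpareto.rvs(c, scale=scale, size=len(excesses), random_state=rng)
                if round_to:
                    sim = np.maximum(np.round(sim / round_to) * round_to, round_to)
                cs, _, ss = genpareto.fit(sim, floc=0.0)
                count += ad_statistic(sim, cs, ss) >= a2_obs
            return (count + 1) / (n_sim + 1)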

  10. Fast batch injection analysis of H₂O₂ using an array of Pt-modified gold microelectrodes obtained from split electronic chips.

    PubMed

    Pacheco, Bruno D; Valério, Jaqueline; Angnes, Lúcio; Pedrotti, Jairo J

    2011-06-24

    A fast and robust analytical method for the amperometric determination of hydrogen peroxide (H₂O₂), based on batch injection analysis (BIA) with an array of gold microelectrodes modified with platinum, is proposed. The gold microelectrode array (n = 14) was obtained from electronic chips developed for surface-mount device (SMD) technology, whose size offers advantages for adapting them in batch cells. The effects of the dispensing rate, volume injected, distance between the platinum microelectrodes and the pipette tip, as well as the volume of solution in the cell, on the analytical response were evaluated. The method allows amperometric determination of H₂O₂ in the concentration range from 0.8 μmol L⁻¹ to 100 μmol L⁻¹. The analytical frequency can reach 300 determinations per hour, and the detection limit was estimated at 0.34 μmol L⁻¹ (3σ). The anodic current peaks obtained after a series of 23 successive injections of 50 μL of 25 μmol L⁻¹ H₂O₂ showed an RSD < 0.9%. To ensure good selectivity for H₂O₂, its determination was performed in a differential mode, with selective destruction of H₂O₂ by catalase in 10 mmol L⁻¹ phosphate buffer solution. A practical application of the analytical procedure involved H₂O₂ determination in rainwater of São Paulo City. A comparison of the results obtained by the proposed amperometric method with another one, which combines flow injection analysis (FIA) with spectrophotometric detection, showed good agreement. Copyright © 2011 Elsevier B.V. All rights reserved.
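
    The 3σ detection limit quoted follows from the calibration slope and the blank noise, LOD = 3 s_blank / slope; a toy calculation (the numbers are illustrative, chosen to land near the reported value):

        import numpy as np

        conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])      # umol/L standards
        current = np.array([0.42, 0.81, 2.05, 4.11, 8.20])   # uA, illustrative
        slope, intercept = np.polyfit(conc, current, 1)
        s_blank = 0.0093                                     # uA, std dev of blank injections
        print("LOD =", 3 * s_blank / slope, "umol/L")        # 3-sigma criterion, ~0.34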

  11. Ambulatory estimation of foot placement during walking using inertial sensors.

    PubMed

    Martin Schepers, H; van Asseldonk, Edwin H F; Baten, Chris T M; Veltink, Peter H

    2010-12-01

    This study proposes a method to assess foot placement during walking using an ambulatory measurement system consisting of orthopaedic sandals equipped with force/moment sensors and inertial sensors (accelerometers and gyroscopes). Two parameters, lateral foot placement (LFP) and stride length (SL), were estimated for each foot separately during walking with eyes open (EO), and with eyes closed (EC) to analyze if the ambulatory system was able to discriminate between different walking conditions. For validation, the ambulatory measurement system was compared to a reference optical position measurement system (Optotrak). LFP and SL were obtained by integration of inertial sensor signals. To reduce the drift caused by integration, LFP and SL were defined with respect to an average walking path using a predefined number of strides. By varying this number of strides, it was shown that LFP and SL could be best estimated using three consecutive strides. LFP and SL estimated from the instrumented shoe signals and with the reference system showed good correspondence as indicated by the RMS difference between both measurement systems being 6.5 ± 1.0 mm (mean ± standard deviation) for LFP, and 34.1 ± 2.7 mm for SL. Additionally, a statistical analysis revealed that the ambulatory system was able to discriminate between the EO and EC condition, like the reference system. It is concluded that the ambulatory measurement system was able to reliably estimate foot placement during walking. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  13. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    NASA Astrophysics Data System (ADS)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters presents a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (the Arrhenius model). The results show good fitting of the degradation model to the experimental data. Finally, we obtained the reliability estimate of the SPTRs using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year, the reliability of SPTRs designed to operate for more than 10 years.
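
    The Arrhenius extrapolation step can be sketched as follows; the lifetimes, activation energy, and temperatures in this snippet are hypothetical placeholders, not the paper's data:

```python
import numpy as np

K_B = 8.617e-5                                    # Boltzmann constant, eV/K
T_test = np.array([40.0, 50.0, 60.0]) + 273.15    # ADT temperatures, K
life_test = np.array([8.0e3, 4.5e3, 2.6e3])       # hypothetical lifetimes, h

# Arrhenius life model: L = A * exp(Ea / (kB * T)), i.e. ln(L) linear in 1/T.
x = 1.0 / (K_B * T_test)
slope, intercept = np.polyfit(x, np.log(life_test), 1)
Ea = slope                                        # activation energy, eV

T_use = 20.0 + 273.15                             # normal operating condition
life_use = np.exp(intercept + Ea / (K_B * T_use))
print(f"Ea ≈ {Ea:.2f} eV, extrapolated lifetime ≈ {life_use:.0f} h")
```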

  14. Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.

    PubMed

    Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi

    2018-05-28

    Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate method for estimating temporal gait parameters by sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy in gait phase detection and was further validated by comparison with a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The between-day repeatability of the proposed method was quantified by the intraclass correlation coefficient (ICC) and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.
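
    The two agreement statistics reported above can be computed as in this sketch with made-up numbers; the one-way random-effects form ICC(1,1) is an assumption, since the abstract does not state which ICC variant was used:

```python
import numpy as np
from scipy import stats

method = np.array([0.62, 0.71, 0.55, 0.68, 0.74, 0.59])   # e.g. stance time, s
reference = np.array([0.63, 0.70, 0.57, 0.67, 0.75, 0.60])
r, p = stats.pearsonr(method, reference)

def icc_1_1(ratings):
    """ratings: (subjects, sessions) matrix; one-way random-effects ICC."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_between = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - ratings.mean(axis=1, keepdims=True)) ** 2) \
                / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical day-1 vs day-2 measurements for the repeatability check:
day2 = method + np.random.default_rng(1).normal(0, 0.01, method.size)
print(f"r = {r:.2f}, ICC(1,1) = {icc_1_1(np.column_stack([method, day2])):.2f}")
```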

  15. Noninvasive estimation of assist pressure for direct mechanical ventricular actuation

    NASA Astrophysics Data System (ADS)

    An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing

    2018-02-01

    Direct mechanical ventricular actuation is effective in reestablishing ventricular function with no blood contact. Because of the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire an accurate value of the assist pressure acting on the heart surface. To avoid the myocardial trauma induced by invasive sensors, a noninvasive estimation method is developed, and an experimental device is designed to measure the sample data for fitting the estimation models. On examining the goodness of fit numerically and graphically, the polynomial model shows the best behavior among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, a simplified lumped-parameter model is used to calculate the pre-support and post-support left ventricular pressure. Furthermore, when the driving pressure is adjusted beyond the range of the sample data, the assist pressure is estimated with a similar waveform and the post-support left ventricular pressure approaches the value of a healthy adult heart, indicating the good generalization ability of the noninvasive estimation method.
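
    A minimal sketch of fitting and checking a polynomial estimation model of this kind, with hypothetical driving/assist pressure pairs standing in for the experimental sample data:

```python
import numpy as np

driving = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)   # kPa
assist  = np.array([6.1, 9.8, 13.9, 18.4, 23.1, 28.3, 33.8])    # kPa

coeffs = np.polyfit(driving, assist, deg=2)      # quadratic polynomial model
fitted = np.polyval(coeffs, driving)
ss_res = np.sum((assist - fitted) ** 2)
ss_tot = np.sum((assist - assist.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                        # goodness-of-fit measure
print(f"coefficients = {coeffs}, R² = {r2:.4f}")

# Generalization check in the spirit of the abstract: predict the assist
# pressure for a driving pressure outside the sample range.
print("extrapolated assist at 45 kPa:", np.polyval(coeffs, 45.0))
```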

  16. Student perceptions of a good teacher: the gender perspective.

    PubMed

    Jules, V; Kutnick, P

    1997-12-01

    A large-scale survey of pupils' perceptions of a good teacher in the Caribbean republic of Trinidad and Tobago is reported. An essay-based, interpretative mode of research was used to elicit and identify constructs used by boys and girls. The study explores similarities and differences between boys and girls in their perceptions of a good teacher, in a society where girls achieve superior academic performance to boys. A total of 1756 pupils and students aged between 8 and 16 provided the sample, which was proportional, stratified, and clustered. Within these constraints, classrooms were randomly selected to be representative of primary and secondary schools across the two islands. Altogether 1539 essays and 217 interviews were content-analysed, coded for age development, and compared between boys and girls. Content items identified by the pupils were logically grouped into: physical and personal characteristics of the teacher, quality of the relationship between teacher and pupil, control of behaviour by the teacher, descriptions of the teaching process, and educational and other outcomes obtained by pupils due to teacher efforts. Female pupils identified more good-teacher concepts at all age levels than males. There was some commonality between the sexes in concepts regarding interpersonal relationships and inclusiveness in good teachers' teaching practices, and boys showed significantly greater concerns regarding teacher control and use of punishment. Males as young as 8 years stated that good teachers should be sensitive to their needs. Only among the 16-year-old males were male teachers noted as good teachers. Consideration is given to the roles of male and female teachers, how their classroom actions may set the basis for future success (or failure) of their pupils, and the needs of pupils with regard to teacher support within developing and developed countries.

  17. A non-parametric automatic blending methodology to estimate rainfall fields from rain gauge and radar data

    NASA Astrophysics Data System (ADS)

    Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.

    2009-07-01

    Quantitative estimation of rainfall fields has been a crucial objective from the early studies of hydrological applications of weather radar. Previous studies have suggested that flow estimates are improved when radar and rain gauge data are combined to estimate input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be obtained automatically at each time step of a given rainfall event, the technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator exploiting the advantages of FFT for the automatic computation of correlograms and provides examples of its application in a case study using six rainfall events. The methodology is applied to three different alternatives for incorporating the radar information (as a secondary variable), and a comparison of performances is provided. In particular, their ability to reproduce in the estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of the radar fields is analyzed. Results seem to indicate that kriging with external drift (KED), in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields that agree well with the rain gauge observations and reproduce the spatial tendencies observed in the radar rainfall fields more accurately than the other alternatives analyzed.
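
    The FFT-based correlogram step can be sketched directly from the Wiener-Khinchin theorem; the synthetic gamma-distributed field below is only a stand-in for a radar rainfall image:

```python
import numpy as np

def fft_correlogram(field):
    """field: 2-D array of radar rainfall (e.g. rain rate). Returns a
    positive-definite 2-D correlogram with zero lag at the center."""
    x = field - field.mean()
    spec = np.abs(np.fft.fft2(x)) ** 2       # power spectrum (>= 0 everywhere)
    corr = np.real(np.fft.ifft2(spec))       # circular autocovariance
    corr /= corr[0, 0]                        # normalize to a correlogram
    return np.fft.fftshift(corr)

rng = np.random.default_rng(0)
radar = rng.gamma(2.0, 1.5, size=(128, 128))  # stand-in radar field
rho = fft_correlogram(radar)                  # ready for kriging (e.g. KED)
```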

  18. A new method for the estimation of high temperature radiant heat emittance by means of aero-acoustic levitation

    NASA Astrophysics Data System (ADS)

    Greffrath, Fabian; Prieler, Robert; Telle, Rainer

    2014-11-01

    A new method for the experimental estimation of radiant heat emittance at high temperatures has been developed which involves aero-acoustic levitation of samples, laser heating, and contactless temperature measurement. Radiant heat emittance values are determined from the time-dependent development of the sample temperature, which requires analysis of both the radiant and convective heat transfer towards the surroundings by means of fluid dynamics calculations. First results for the emittance of a corundum sample obtained with this method are presented in this article and found to be in good agreement with literature values.

  19. Exploring Intergenerational Discontinuity in Problem Behavior: Bad Parents with Good Children

    PubMed Central

    Dong, Beidi; Krohn, Marvin D.

    2014-01-01

    Using data from the Rochester Youth Development Study, a series of regression models is estimated for offspring problem behavior, with a focus on the interaction between parental history of delinquency and the parent-child relationship. Good parenting practices interact significantly with the particular shape of the parental propensity for offending over time, functioning as protective factors against problem behaviors among those who are most at risk. The moderation effects vary slightly with the age of our subjects. Accordingly, it is important to distinguish the effect of not only the level of parental delinquency at one point in time, but also the shape of the delinquency trajectory, on outcomes for the children. Good parenting holds the hope of breaking the vicious cycle of intergenerational transmission of delinquency. PMID:26097437

  20. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
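
    A hedged sketch of the one-step local linear approximation, using a plain Lasso as the initial estimator and the SCAD penalty derivative as weights; the reduction of the weighted L1 problem to a column-rescaled Lasso is a standard trick, not the authors' code:

```python
import numpy as np
from sklearn.linear_model import Lasso

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty evaluated at |t|."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def one_step_lla(X, y, lam):
    """One-step LLA: a weighted Lasso with SCAD-derivative weights,
    solved by rescaling the design columns."""
    beta0 = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    w = np.maximum(scad_deriv(beta0, lam), 1e-3 * lam) / lam  # relative weights
    gamma = Lasso(alpha=lam, fit_intercept=False).fit(X / w, y).coef_
    return gamma / w            # map back: effective penalty on beta_j is lam*w_j

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
beta_true = np.zeros(50)
beta_true[:3] = [3.0, -2.0, 1.5]                # sparse truth
y = X @ beta_true + rng.normal(size=200)
print(np.round(one_step_lla(X, y, lam=0.1)[:6], 2))
```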

  1. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560

  2. The Good Work.

    ERIC Educational Resources Information Center

    Csikszentmihalyi, Mihaly

    2003-01-01

    Examines the working lives of geneticists and journalists to place into perspective what lies behind personal ethics and success. Defines "good work" as productive activity that is valued socially and loved by people engaged in it. Asserts that certain cultural values, social controls, and personal standards are necessary to maintain good work and…

  3. Disaster debris estimation using high-resolution polarimetric stereo-SAR

    NASA Astrophysics Data System (ADS)

    Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki

    2016-10-01

    This paper addresses the problem of debris estimation, which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtaining this information are far from optimal, as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model-based decomposition of the coherency matrix, only the odd-bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken at a temporary debris management site in the tsunami-affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of the derived pile heights allows for a voxel-based estimation of debris volumes with an RMSE of 1099 m³. Advantages of the proposed method are fast computation time and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information such as DEMs, topographic maps, or GCPs.

  4. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form c·x^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
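
    A short worked illustration of why such a prior gives linear MMSE estimates, under the simplifying assumption that the rate acts as a per-step jump probability: the density c·x^k(1-x)^m is a Beta(k+1, m+1) density, and the posterior mean is affine in the jump count:

```python
def mmse_rate(s, n, k=2, m=5):
    """Posterior mean (MMSE estimate) of the jump rate after observing s
    jumps in n steps, under a prior density proportional to x**k * (1-x)**m,
    i.e. Beta(k+1, m+1). Note the estimate is affine (linear) in s."""
    return (k + 1 + s) / (k + m + 2 + n)

n = 50
for s in (0, 5, 10):
    print(s, round(mmse_rate(s, n), 4))
# Equivalent affine form: estimate = a + b*s with
# a = (k + 1)/(k + m + 2 + n) and b = 1/(k + m + 2 + n).
```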

  5. Estimating metallicities with isochrone fits to photometric data of open clusters

    NASA Astrophysics Data System (ADS)

    Monteiro, H.; Oliveira, A. F.; Dias, W. S.; Caetano, T. C.

    2014-10-01

    Metallicity is a critical parameter that affects the correct determination of a stellar cluster's fundamental characteristics and has important implications in Galactic and stellar evolution research. Fewer than 10% of the 2174 currently catalogued open clusters have their metallicity determined in the literature. In this work we present a method for estimating the metallicity of open clusters via non-subjective isochrone fitting, using the cross-entropy global optimization algorithm applied to UBV photometric data. The free parameters distance, reddening, age, and metallicity are determined simultaneously by the fitting method. The fitting procedure uses weights for the observational data based on an estimate of the membership likelihood of each star, which considers the observational magnitude limit, the density profile of stars as a function of radius from the center of the cluster, and the density of stars in multi-dimensional magnitude space. We present [Fe/H] results for well-studied open clusters based on distinct UBV data sets. The [Fe/H] values obtained in the ten cases for which spectroscopic determinations were available in the literature agree with those determinations, indicating that our method provides a good alternative for estimating [Fe/H] via objective isochrone fitting. Our results show that the typical precision is about 0.1 dex.
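
    A generic sketch of the cross-entropy optimization loop used for such a fit; the objective function below is a placeholder for the authors' membership-weighted isochrone likelihood, and all starting values are invented:

```python
import numpy as np

def cross_entropy(objective, mu, sigma, n_samples=200, elite_frac=0.1, iters=50):
    """Minimize objective(p) over parameter vectors p by repeatedly sampling
    from a Gaussian, keeping the elite fraction, and refitting the sampler."""
    rng = np.random.default_rng(0)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        pop = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        scores = np.array([objective(p) for p in pop])
        elite = pop[np.argsort(scores)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Placeholder objective: squared distance from a hidden optimum in
# (distance [pc], E(B-V) [mag], log age, [Fe/H]) space.
target = np.array([1500.0, 0.3, 8.9, 0.0])
obj = lambda p: np.sum(((p - target) / np.array([100, 0.1, 0.2, 0.1])) ** 2)

best = cross_entropy(obj, mu=np.array([2000.0, 0.5, 9.2, -0.2]),
                     sigma=np.array([500.0, 0.2, 0.5, 0.3]))
print(np.round(best, 3))
```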

  6. Effective channel estimation and efficient symbol detection for multi-input multi-output underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Ling, Jun

    Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. The effectiveness of the proposed MIMO schemes

  7. Papillary Thyroid Cancer: The Good and Bad of the "Good Cancer".

    PubMed

    Randle, Reese W; Bushman, Norah M; Orne, Jason; Balentine, Courtney J; Wendt, Elizabeth; Saucke, Megan; Pitt, Susan C; Macdonald, Cameron L; Connor, Nadine P; Sippel, Rebecca S

    2017-07-01

    Papillary thyroid cancer is often described as the "good cancer" because of its treatability and relatively favorable survival rates. This study sought to characterize the thoughts of papillary thyroid cancer patients as they relate to having the "good cancer." This qualitative study included 31 papillary thyroid cancer patients enrolled in an ongoing randomized trial. Semi-structured interviews were conducted with participants at the preoperative visit and two weeks, six weeks, six months, and one year after thyroidectomy. Grounded theory was used, inductively coding the first 113 interview transcripts with NVivo 11. The concept of thyroid cancer as "good cancer" emerged unprompted from 94% (n = 29) of participants, mostly concentrated around the time of diagnosis. Patients encountered this perception from healthcare providers, Internet research, friends, and preconceived ideas about other cancers. While patients generally appreciated optimism, this perspective also generated negative feelings. It eased the diagnosis of cancer but created confusion when individual experiences varied from expectations. Despite initially feeling reassured, participants described feeling the "good cancer" characterization invalidated their fears of having cancer. Thyroid cancer patients expressed that they did not want to hear that it's "only thyroid cancer" and that it's "no big deal," because "cancer is cancer," and it is significant. Patients with papillary thyroid cancer commonly confront the perception that their malignancy is "good," but the favorable prognosis and treatability of the disease do not comprehensively represent their cancer fight. The "good cancer" perception is at the root of many mixed and confusing emotions. Clinicians emphasize optimistic outcomes, hoping to comfort, but they might inadvertently invalidate the impact thyroid cancer has on patients' lives.

  8. The MAP Spacecraft Angular State Estimation After Sensor Failure

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in the event of gyro and star tracker failures in the Microwave Anisotropy Probe (MAP) satellite, which was placed at the L2 parking point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have far-reaching consequences.

  9. Study on inverse estimation of radiative properties from directional radiances by using statistical RPSO algorithm

    NASA Astrophysics Data System (ADS)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk; Shin, Jong-Jin

    2016-09-01

    Infrared signals are widely used to discriminate objects against the background. Prediction of the infrared signal from an object surface is essential in evaluating the detectability of the object. An appropriate and straightforward method of procuring radiative properties such as the surface emissivity and bidirectional reflectivity is important in estimating infrared signals. Direct measurement can be a good choice, but it is a costly and time-consuming way of obtaining the radiative properties of surfaces coated with many different newly developed paints. In particular, measurement of the bidirectional reflectivity, usually expressed by the bidirectional reflectance distribution function (BRDF), is the most costly task. In this paper we present an inverse method for estimating the radiative properties by using the directional radiances from the surface of concern. The inverse estimation method used in this study is the statistical repulsive particle swarm optimization (RPSO) algorithm, which uses randomly picked directional radiance data emitted and reflected from the surface. We test the proposed inverse method by considering the radiation from a steel plate surface coated with different paints under clear sunny day conditions. For convenience, the directional radiance data from the steel plate within the spectral band of concern are obtained from a simulation using the commercial software RadthermIR instead of field measurements. A widely used BRDF model, the Sandford-Robertson (S-R) model, is considered, and the RPSO process is then used to find the best-fitted model parameters for the S-R model. The results obtained from this study show excellent agreement with the reference property data used for the simulation of directional radiances. The proposed process can be a useful way of obtaining the radiative properties from field-measured directional radiance data for surfaces coated with or without various kinds of paints of unknown radiative

  10. An Evaluation of Residual Feed Intake Estimates Obtained with Computer Models Versus Empirical Regression

    USDA-ARS?s Scientific Manuscript database

    Data on individual daily feed intake, bi-weekly BW, and carcass composition were obtained on 1,212 crossbred steers, in Cycle VII of the Germplasm Evaluation Project at the U.S. Meat Animal Research Center. Within animal regressions of cumulative feed intake and BW on linear and quadratic days on fe...

  11. Empirical Bayes Gaussian likelihood estimation of exposure distributions from pooled samples in human biomonitoring.

    PubMed

    Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng

    2014-12-10

    Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
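
    A minimal sketch of the Gaussian-likelihood idea on simulated data: individual exposures are lognormal, each pool reports only its average, and the intractable likelihood of the averages is replaced by a Gaussian with matching mean and variance:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu_true, sigma_true, pool_size, n_pools = 1.0, 0.8, 8, 200
pools = rng.lognormal(mu_true, sigma_true, (n_pools, pool_size)).mean(axis=1)

def neg_loglik(theta):
    """Gaussian pseudo-likelihood of pool averages under lognormal(mu, s2)
    individual exposures; uses the exact lognormal mean and variance."""
    mu, log_sigma = theta
    s2 = np.exp(log_sigma) ** 2
    mean = np.exp(mu + s2 / 2)
    var = (np.exp(s2) - 1) * np.exp(2 * mu + s2) / pool_size
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (pools - mean) ** 2 / var)

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"mu ≈ {mu_hat:.2f}, sigma ≈ {sigma_hat:.2f}")   # near 1.0 and 0.8
```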

  12. Bulk canopy resistance: Modeling for the estimation of actual evapotranspiration of maize

    NASA Astrophysics Data System (ADS)

    Gharsallah, O.; Corbari, C.; Mancini, M.; Rana, G.

    2009-04-01

    Due to the scarcity of water resources, the correct evaluation of crop water losses as evapotranspiration (ET) is very important in irrigation management. This work presents a model for estimating the actual evapotranspiration, at hourly and daily scales, of a maize crop grown under well-watered conditions in the Lombardia Region (northern Italy). Maize is a difficult crop to model from the soil-canopy-atmosphere point of view because of its very complex architecture and considerable height. The present ET model is based on the Penman-Monteith equation, using the Katerji and Perrier approach to model the variable canopy resistance (rc). In fact, rc is a primary factor in the evapotranspiration process and needs to be estimated accurately. Furthermore, ET also has an aerodynamic component, and hence depends on multiple factors such as meteorological variables and crop water status. The proposed approach takes the form of a linear model in which rc depends on climatic variables and aerodynamic resistance [rc/ra = f(r*/ra)], where ra is the aerodynamic resistance, a function of wind speed and crop height, and r* is the so-called "critical" or "climatic" resistance. Here, under a humid climate, the model has been applied with good results at both hourly and daily scales. The good accuracy reached in this study shows that the model works well and that its estimates are clearly more accurate than those obtained using the better-known standard FAO-56 method for well-watered and stressed crops.
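
    A hedged sketch of this modelling chain, assuming the climatic resistance r* has already been computed from weather data; the calibration coefficients a and b of the linear rc model are hypothetical, and the Penman-Monteith constants are approximate textbook values:

```python
import numpy as np

def penman_monteith(Rn, G, T, D, ra, rc):
    """Latent heat flux (W m-2). Rn, G in W m-2; T in degC; D = vapour
    pressure deficit in kPa; ra, rc in s m-1."""
    delta = 4098 * 0.6108 * np.exp(17.27 * T / (T + 237.3)) / (T + 237.3) ** 2
    gamma = 0.066            # psychrometric constant, kPa/K (approx.)
    rho_cp = 1.2 * 1013.0    # air density * heat capacity, J m-3 K-1 (approx.)
    num = delta * (Rn - G) + rho_cp * D / ra
    return num / (delta + gamma * (1 + rc / ra))

def canopy_resistance(r_star, ra, a=0.5, b=1.0):
    """Katerji-Perrier-type linear relation rc/ra = a*(r*/ra) + b;
    a and b are hypothetical calibration coefficients."""
    return ra * (a * r_star / ra + b)

ra, r_star = 30.0, 60.0                      # s m-1, illustrative values
rc = canopy_resistance(r_star, ra)
print(penman_monteith(Rn=450.0, G=50.0, T=25.0, D=1.8, ra=ra, rc=rc), "W m-2")
```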

  13. "Good Citizen" Program.

    ERIC Educational Resources Information Center

    Placer Hills Union Elementary School District, Meadow Vista, CA.

    THE FOLLOWING IS THE FULL TEXT OF THIS DOCUMENT: The "Good Citizen" Program was developed for many reasons: to keep the campus clean, to reward students for improvement, to reward students for good deeds, to improve the total school climate, to reward students for excellence, and to offer staff members a method of reward for positive…

  14. Estimating pregnancy-related mortality from census data: experience in Latin America

    PubMed Central

    Queiroz, Bernardo L; Wong, Laura; Plata, Jorge; Del Popolo, Fabiana; Rosales, Jimmy; Stanton, Cynthia

    2009-01-01

    Objective: To assess the feasibility of measuring maternal mortality in countries lacking accurate birth and death registration through national population censuses, by a detailed evaluation of such data for three Latin American countries. Methods: We used established demographic techniques, including the general growth balance method, to evaluate the completeness and coverage of the household death data obtained through population censuses. We also compared parity to cumulative fertility data to evaluate the coverage of recent household births. After evaluating the data and adjusting it as necessary, we calculated pregnancy-related mortality ratios (PRMRs) per 100 000 live births and used them to estimate maternal mortality. Findings: The PRMRs for Honduras (2001), Nicaragua (2005) and Paraguay (2002) were 168, 95 and 178 per 100 000 live births, respectively. Surprisingly, evaluation of the data for Nicaragua and Paraguay showed overreporting of adult deaths, so a downward adjustment of 20% to 30% was required. In Honduras, the number of adult female deaths required substantial upward adjustment. The number of live births needed minimal adjustment. The adjusted PRMR estimates are broadly consistent with existing estimates of maternal mortality from various data sources, though the comparison varies by source. Conclusion: Census data can be used to measure pregnancy-related mortality as a proxy for maternal mortality in countries with poor death registration. However, because our data were obtained from countries with reasonably good statistical systems and literate populations, we cannot be certain the methods employed in the study will be equally useful in more challenging environments. Our data evaluation and adjustment methods worked, but with considerable uncertainty. Ways of quantifying this uncertainty are needed. PMID:19551237

  15. Data-Driven Robust RVFLNs Modeling of a Blast Furnace Iron-Making Process Using Cauchy Distribution Weighted M-Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Lv, Youbin; Wang, Hong

    Optimal operation of a practical blast furnace (BF) ironmaking process depends largely on good measurement of the molten iron quality (MIQ) indices. However, measuring the MIQ online is not feasible using the available techniques. In this paper, a novel data-driven robust modeling approach is proposed for online estimation of MIQ using improved random vector functional-link networks (RVFLNs). Since the output weights of traditional RVFLNs are obtained by the least squares approach, a robustness problem may occur when the training dataset is contaminated with outliers, which affects the modeling accuracy of RVFLNs. To solve this problem, a Cauchy distribution weighted M-estimation based robust RVFLNs is proposed. Since the weights of different outlier data are properly determined by the Cauchy distribution, their corresponding contributions to the modeling can be properly distinguished, and thus robust and better modeling results can be achieved. Moreover, given that the BF is a complex nonlinear system with numerous coupled variables, data-driven canonical correlation analysis is employed to identify the most influential components among the multitudinous factors that affect the MIQ indices, so as to reduce the model dimension. Finally, experiments using industrial data and comparative studies have demonstrated that the obtained model produces better modeling and estimation accuracy and stronger robustness than other modeling methods.
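
    An illustrative sketch (not the authors' implementation) of the two key ingredients: RVFLN random features with direct input links, and output weights obtained by iteratively reweighted least squares with Cauchy-distribution weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfln_features(X, n_hidden=50):
    """Random enhancement nodes plus direct input links."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return np.hstack([X, np.tanh(X @ W + b)])

def robust_output_weights(Phi, y, c=2.0, iters=20):
    """IRLS with Cauchy weights w = 1 / (1 + (r / (c*scale))^2), so that
    large-residual (outlier) samples are down-weighted."""
    beta = np.linalg.lstsq(Phi, y, rcond=None)[0]       # ordinary LS start
    for _ in range(iters):
        r = y - Phi @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = 1.0 / (1.0 + (r / (c * scale)) ** 2)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(Phi * sw[:, None], y * sw, rcond=None)[0]
    return beta

X = rng.normal(size=(300, 5))                 # stand-in process variables
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 300)
y[::25] += 15.0                               # inject outliers
Phi = rvfln_features(X)
beta = robust_output_weights(Phi, y)
print("median |residual|:", np.median(np.abs(y - Phi @ beta)))
```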

  16. Dispersion curve estimation via a spatial covariance method with ultrasonic wavefield imaging.

    PubMed

    Chong, See Yenn; Todd, Michael D

    2018-05-01

    Numerous Lamb wave dispersion curve estimation methods have been developed to support damage detection and localization strategies in non-destructive evaluation/structural health monitoring (NDE/SHM) applications. In this paper, the covariance matrix is used to extract features from an ultrasonic wavefield imaging (UWI) scan in order to estimate the phase and group velocities of S0 and A0 modes. A laser ultrasonic interrogation method based on a Q-switched laser scanning system was used to interrogate full-field ultrasonic signals in a 2-mm aluminum plate at five different frequencies. These full-field ultrasonic signals were processed in three-dimensional space-time domain. Then, the time-dependent covariance matrices of the UWI were obtained based on the vector variables in Cartesian and polar coordinate spaces for all time samples. A spatial covariance map was constructed to show spatial correlations within the full wavefield. It was observed that the variances may be used as a feature for S0 and A0 mode properties. The phase velocity and the group velocity were found using a variance map and an enveloped variance map, respectively, at five different frequencies. This facilitated the estimation of Lamb wave dispersion curves. The estimated dispersion curves of the S0 and A0 modes showed good agreement with the theoretical dispersion curves. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Mode extraction on wind turbine blades via phase-based video motion estimation

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied more often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact, full-field measurement method, image processing still has a long way to go to outperform conventional sensing instruments (i.e., accelerometers, strain gauges, laser vibrometers, etc.). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among the numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization of a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The phase-based video motion estimation approach is demonstrated by processing data on a full-scale commercial structure (i.e., a wind turbine blade) with complex geometry and properties, and the results obtained have a good correlation with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.

  18. Pure human urine is a good fertiliser for cucumbers.

    PubMed

    Heinonen-Tanski, Helvi; Sjöblom, Annalena; Fabritius, Helena; Karinen, Päivi

    2007-01-01

    Human urine obtained from separating toilets was tested as a fertiliser for cultivation of outdoor cucumber (Cucumis sativus L.) in a Nordic climate. The urine used contained high amounts of nitrogen with some phosphorus and potassium, but numbers of enteric microorganisms were low even though urine had not been preserved before sampling. The cucumber yield after urine fertilisation was similar or slightly better than the yield obtained from control rows fertilised with commercial mineral fertiliser. None of the cucumbers contained any enteric microorganisms (coliforms, enterococci, coliphages and clostridia). In the taste assessment, 11 out of 20 persons could recognise which cucumber of three cucumbers was different but they did not prefer one over the other cucumber samples, since all of them were assessed as equally good.

  19. Quantitative Comparison of Tandem Mass Spectra Obtained on Various Instruments

    NASA Astrophysics Data System (ADS)

    Bazsó, Fanni Laura; Ozohanics, Oliver; Schlosser, Gitta; Ludányi, Krisztina; Vékey, Károly; Drahos, László

    2016-08-01

    The similarity between two tandem mass spectra measured on different instruments was compared quantitatively using the similarity index (SI), defined as the dot product of the square roots of the peak intensities in the respective spectra. This function was found to be useful for comparing energy-dependent tandem mass spectra obtained on various instruments. Spectral comparisons show the similarity index in a 2D "heat map", indicating which collision energy combinations result in similar spectra and how good this agreement is. The results and methodology can be used in the pharma industry to design experiments and equipment well suited for good reproducibility. We suggest that, to obtain good long-term reproducibility, it is best to adjust the collision energy to yield a spectrum very similar to a reference spectrum. This is likely to yield better results than using the same tuning file, which, for example, does not take into account that contamination of the ion source due to extended use may influence instrument tuning. The methodology may be used to characterize energy dependence on various instrument types, to optimize instrumentation, and to study the influence of, or correlation between, various experimental parameters.
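
    The similarity index itself is a one-liner; this sketch assumes the two spectra are aligned on a common m/z grid and normalizes the square-root intensity vectors so that SI = 1 for identical relative spectra:

```python
import numpy as np

def similarity_index(intensities_a, intensities_b):
    """Dot product of square-root peak intensities, normalized so that two
    identical (relative) spectra give SI = 1.0."""
    a = np.sqrt(np.asarray(intensities_a, dtype=float))
    b = np.sqrt(np.asarray(intensities_b, dtype=float))
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return float(np.dot(a, b))

spec1 = [100, 40, 0, 10, 5]          # hypothetical peak intensities
spec2 = [ 90, 50, 2,  8, 6]          # same m/z grid, different instrument
print(f"SI = {similarity_index(spec1, spec2):.3f}")
```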

  20. Estimation of the heat/Na flux using lidar data recorded at ALO, Cerro Pachon, Chile

    NASA Astrophysics Data System (ADS)

    Vargas, F.; Gardner, C. S.; Liu, A. Z.; Swenson, G. R.

    2013-12-01

    In this poster, lidar night-time data are used to estimate the vertical fluxes of heat and Na at the mesopause region due to dissipating gravity waves with periods from 5 min to 8 h and vertical wavelengths > 2 km. About 60 hours of good-quality data were recorded near the equinox during two observation campaigns held in March 2012 and April 2013 at the Andes Lidar Observatory (30.3S, 70.7W). These first measurements of the heat/Na flux in the southern hemisphere will be discussed and compared with those from northern hemisphere stations obtained at the Starfire Optical Range, NM, and Maui, HI.
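
    A minimal sketch of the flux estimate implied here, computing <w'T'> from co-located vertical wind and temperature perturbations at one altitude; the running-mean background removal and the synthetic series are stand-ins for the band-pass filtering and lidar data actually used:

```python
import numpy as np

def perturbation(x, window=121):
    """Remove a slowly varying background with a running mean."""
    kernel = np.ones(window) / window
    return x - np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.arange(4096) * 30.0                    # 30 s samples at one altitude
w = 0.5 * np.sin(2 * np.pi * t / 1800) + rng.normal(0, 0.2, t.size)        # m/s
T = 2.0 * np.sin(2 * np.pi * t / 1800 + 0.3) + rng.normal(0, 0.5, t.size)  # K

wp, Tp = perturbation(w), perturbation(T)
heat_flux = np.mean(wp * Tp)                  # covariance <w'T'>, K m s-1
print(f"<w'T'> ≈ {heat_flux:.3f} K m/s")
```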

  1. A Good Suit Beats a Good Idea.

    ERIC Educational Resources Information Center

    Machiavelli, Nick

    1992-01-01

    Inspired by Niccolo Machiavelli, this column offers beleaguered school executives advice on looking good, dressing well, losing weight, beating the proper enemy, and saying nothing. Administrators who follow these simple rules should have an easier life, jealous colleagues, well-tended gardens, and respectful board members. (MLH)

  2. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections are removed by direct application of a Wiener-type estimation algorithm after the appropriate input has been synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
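
    A hedged reconstruction of the autocorrelation-ratio idea for the model x[n] = s[n] + a·s[n-d]: if the incident signal decorrelates within d samples, R_x(d)/R_x(0) ≈ a/(1+a²), from which a follows, and the reflection is then removed by the inverse filter; all names and test signals are illustrative:

```python
import numpy as np

def estimate_reflection(x, d):
    """Reflection weight from the autocorrelation ratio at lag d vs lag 0."""
    r0 = np.dot(x, x)
    rd = np.dot(x[:-d], x[d:])
    rho = rd / r0                    # rho = a / (1 + a**2)
    return (1 - np.sqrt(1 - 4 * rho ** 2)) / (2 * rho)   # root with |a| < 1

def remove_reflection(x, a, d):
    """Inverse filter for x[n] = s[n] + a*s[n-d]: s[n] = x[n] - a*s[n-d]."""
    s = np.copy(x)
    for n in range(d, len(x)):
        s[n] = x[n] - a * s[n - d]
    return s

rng = np.random.default_rng(0)
s = np.exp(-np.arange(200) / 15.0) * np.sin(0.7 * np.arange(200))  # incident
d, a_true = 60, 0.5
x = s + a_true * np.concatenate([np.zeros(d), s[:-d]])             # measured
a_hat = estimate_reflection(x, d)
print(f"estimated weight a ≈ {a_hat:.2f} (true {a_true})")
```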

  3. Estimates of electronic coupling for excess electron transfer in DNA

    NASA Astrophysics Data System (ADS)

    Voityuk, Alexander A.

    2005-07-01

    Electronic coupling Vda is one of the key parameters that determine the rate of charge transfer through DNA. While there have been several computational studies of Vda for hole transfer, estimates of electronic couplings for excess electron transfer (ET) in DNA remain unavailable. In the paper, an efficient strategy is established for calculating the ET matrix elements between base pairs in a π stack. Two approaches are considered. First, we employ the diabatic-state (DS) method in which donor and acceptor are represented with radical anions of the canonical base pairs adenine-thymine (AT) and guanine-cytosine (GC). In this approach, similar values of Vda are obtained with the standard 6-31G* and extended 6-31++G** basis sets. Second, the electronic couplings are derived from lowest unoccupied molecular orbitals (LUMOs) of neutral systems by using the generalized Mulliken-Hush or fragment charge methods. Because the radical-anion states of AT and GC are well reproduced by LUMOs of the neutral base pairs calculated without diffuse functions, the estimated values of Vda are in good agreement with the couplings obtained for radical-anion states using the DS method. However, when the calculation of a neutral stack is carried out with diffuse functions, LUMOs of the system exhibit the dipole-bound character and cannot be used for estimating electronic couplings. Our calculations suggest that the ET matrix elements Vda for models containing intrastrand thymine and cytosine bases are essentially larger than the couplings in complexes with interstrand pyrimidine bases. The matrix elements for excess electron transfer are found to be considerably smaller than the corresponding values for hole transfer and to be very responsive to structural changes in a DNA stack.

  4. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over disputed, sensitive areas of interest. Methods for obtaining reef bathymetry accurately therefore urgently need to be developed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, by exploiting the relationship between spectral information and water depth. Considering the water quality of the South Sea of China, our paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model based on the theoretical interpretation models was studied, with a genetic algorithm used to optimize the model. Meanwhile, an OpenMP parallel computing algorithm was introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China is selected as our study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in our study area and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm solves the problem of bathymetry estimation without water depth measurements. Overall, our paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  5. Estimating cell populations

    NASA Technical Reports Server (NTRS)

    White, B. S.; Castleman, K. R.

    1981-01-01

    An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
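
    The correction described here can be written in a few lines: with confusion matrix M (M[i, j] = probability that a cell of true class j is labeled i), the observed label proportions satisfy q = M p, so the unbiased estimate is p̂ = M⁻¹q; the numbers below are invented:

```python
import numpy as np

M = np.array([[0.90, 0.15],      # column j: classifier output given class j
              [0.10, 0.85]])
p_true = np.array([0.30, 0.70])  # hypothetical true class proportions
q = M @ p_true                   # what the classifier reports on average

p_hat = np.linalg.solve(M, q)    # unbiased estimate of the true proportions
print(np.round(p_hat, 3))        # recovers [0.3, 0.7]
```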

  6. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Das, Samiran

    2018-04-01

    The use of the three-parameter generalized normal (GNO) distribution in hydrological frequency analysis is well recognized, but its application is limited by the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness of fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered, and its performance is assessed against the other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression-equation form to show the dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.

  7. Spring Small Grains Area Estimation

    NASA Technical Reports Server (NTRS)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates the acreage of spring small grains from Landsat data. The report describes the development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate the acreage of spring small grains (wheat, barley, and oats). Application of the technique to the analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  8. Localization Algorithm with On-line Path Loss Estimation and Node Selection

    PubMed Central

    Bel, Albert; Vicario, José López; Seco-Granados, Gonzalo

    2011-01-01

    RSS-based localization is considered a low-complexity algorithm compared with other ranging techniques such as TOA or AOA. The accuracy of RSS methods depends on the suitability of the propagation models used for the actual propagation conditions. In indoor environments, in particular, it is very difficult to obtain a good propagation model. For that reason, we present a cooperative localization algorithm that dynamically estimates the path loss exponent by using RSS measurements. Since energy consumption is a key issue in sensor networks, we propose a node selection mechanism to limit the number of neighbours of a given node that are used for positioning purposes. The selection mechanism is also useful for discarding bad links that could negatively affect the positioning accuracy. As a result, we derive a practical solution tailored to the strict requirements of sensor networks in terms of complexity, size, and cost. We present results based on both computer simulations and real experiments with the Crossbow MICA2 motes, showing that the proposed scheme offers a good trade-off between position accuracy and energy efficiency. PMID:22163992
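
    A minimal sketch of on-line path-loss-exponent estimation from the log-distance model RSS(d) = P0 - 10·n·log10(d/d0); the anchor distances, RSS values, and P0 are hypothetical:

```python
import numpy as np

def estimate_path_loss_exponent(rss_dbm, dist_m, p0_dbm, d0=1.0):
    """Least-squares slope through the origin for the log-distance model."""
    x = -10.0 * np.log10(np.asarray(dist_m) / d0)
    y = np.asarray(rss_dbm) - p0_dbm
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical anchor-to-anchor measurements at known distances:
dist = [2.0, 4.0, 8.0, 16.0]                      # m
rss = [-46.1, -52.3, -58.0, -63.8]                # dBm
n_hat = estimate_path_loss_exponent(rss, dist, p0_dbm=-40.0)
print(f"path loss exponent ≈ {n_hat:.2f}")
# The exponent can be refreshed on-line as new RSS samples arrive, and
# ranges then follow from d = d0 * 10**((p0 - rss) / (10 * n_hat)).
```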

  9. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    PubMed

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount, based on industrial poundage data of flavourings, is calculated to estimate dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide the conservative intake estimates needed in the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated from the concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review was performed of two studies that had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable for validating the MSDI method used to assess intakes of flavours by European consumers, owing to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either to use an alternative method to estimate exposure to flavourings in the procedure or to limit intakes to the levels at which safety was assessed.

  10. Reliability of Different Mark-Recapture Methods for Population Size Estimation Tested against Reference Population Sizes Constructed from Field Data

    PubMed Central

    Grimm, Annegret; Gruber, Bernd; Henle, Klaus

    2014-01-01

    Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to evaluate the
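
    For reference, the bias-corrected Chapman form of the two-sample Lincoln-Petersen estimator that underlies several of the compared methods, with invented survey numbers:

```python
def chapman(n1, n2, m2):
    """Bias-corrected Lincoln-Petersen (Chapman) population size estimate:
    n1 animals marked in session 1, n2 captured in session 2, m2 of which
    were already marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical gecko surveys: 45 marked, 50 captured later, 18 recaptures.
print(round(chapman(45, 50, 18), 1))   # ≈ 122.5 animals
```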

  11. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independent of pose and robust against topological noise. It is based on an automatic segmentation of body parts exploiting curve-skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, not requiring precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with the body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  12. Semimajor Axis Estimation Strategies

    NASA Technical Reports Server (NTRS)

    How, Jonathan P.; Alfriend, Kyle T.; Breger, Louis; Mitchell, Megan

    2004-01-01

    This paper extends previous analysis on the impact of sensing noise for the navigation and control aspects of formation flying spacecraft. We analyze the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters, with a particular focus on the filter correlation coefficient. This work was motivated by previous publications which suggested that a "good" navigation filter would have a strong correlation (i.e., coefficient near -1) to reduce the semimajor axis (SMA) error, and therefore, the overall fuel use. However, practical experience with CDGPS-based filters has shown this strong correlation seldom occurs (typical correlations approx. -0.1), even when the estimation accuracies are very good. We derive an analytic estimate of the filter correlation coefficient and demonstrate that, for the process and sensor noise levels expected with CDGPS, the expected value will be very low. It is also demonstrated that this correlation can be improved by increasing the time step of the discrete Kalman filter, but since the balance condition is not satisfied, the SMA error also increases. These observations are verified with several linear simulations. The combination of these simulations and analysis provide new insights on the crucial role of the process noise in determining the semimajor axis knowledge.
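
    The mechanism can be illustrated with the vis-viva equation: a position error and a velocity error both push the semimajor axis estimate in the same direction, so negatively correlated errors partially cancel. A Monte Carlo sketch under illustrative error levels (chosen so the position and velocity contributions are comparable, not representative of CDGPS accuracy):

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3 s^-2

def sma(r, v):
    """Semimajor axis from the vis-viva equation, a = 1/(2/r - v^2/mu)."""
    return 1.0 / (2.0 / r - v**2 / MU)

# illustrative near-circular LEO and error levels
r0 = 6.778e6
v0 = np.sqrt(MU / r0)
sigma_r, sigma_v = 0.2, 5e-4

rng = np.random.default_rng(0)
for rho in (-0.9, -0.1, 0.0):
    cov = [[sigma_r**2, rho * sigma_r * sigma_v],
           [rho * sigma_r * sigma_v, sigma_v**2]]
    dr, dv = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
    a_err = sma(r0 + dr, v0 + dv) - sma(r0, v0)
    print(f"rho = {rho:+.1f}: SMA error std = {a_err.std():.3f} m")
```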

  13. Tracking the time-varying cortical connectivity patterns by adaptive multivariate estimators.

    PubMed

    Astolfi, L; Cincotti, F; Mattia, D; De Vico Fallani, F; Tocci, A; Colosimo, A; Salinari, S; Marciani, M G; Hesse, W; Witte, H; Ursino, M; Zavaglia, M; Babiloni, F

    2008-03-01

    The directed transfer function (DTF) and the partial directed coherence (PDC) are frequency-domain estimators that are able to describe interactions between cortical areas in terms of the concept of Granger causality. However, the classical estimation of these methods is based on multivariate autoregressive (MVAR) modelling of time series, which requires stationarity of the signals. In this way, transient pathways of information transfer remain hidden. The objective of this study is to test a time-varying multivariate method for the estimation of rapidly changing connectivity relationships between cortical areas of the human brain, based on DTF/PDC and on the use of adaptive MVAR modelling (AMVAR), and to apply it to a set of real high-resolution EEG data. This approach will allow the observation of rapidly changing influences between the cortical areas during the execution of a task. The simulation results indicated that time-varying DTF and PDC are able to estimate correctly the imposed connectivity patterns under reasonable operative conditions of signal-to-noise ratio (SNR) and number of trials. An SNR of five and a number of trials of at least 20 provide good accuracy in the estimation. After testing the method in the simulation study, we provide an application to the cortical estimates obtained from high-resolution EEG data recorded from a group of healthy subjects during a combined foot-lips movement, and present the time-varying connectivity patterns resulting from the application of both DTF and PDC. Two different cortical networks were detected with the proposed methods, one constant across the task and the other evolving during the preparation of the joint movement.
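
    For reference, the quantities being tracked are the standard Kaminski-Blinowska DTF and Baccala-Sameshima PDC; the time-varying method replaces the fixed MVAR coefficients below with adaptively updated ones. In the notation of an N-channel MVAR model of order p:

```latex
% N-channel MVAR model of order p, coefficient matrices A_k:
%   x(t) = \sum_{k=1}^{p} A_k \, x(t-k) + e(t)
% In the frequency domain, with sampling interval \Delta t:
\bar{A}(f) = I - \sum_{k=1}^{p} A_k \, e^{-i 2\pi f k \Delta t},
\qquad H(f) = \bar{A}(f)^{-1}
% Normalized DTF from channel j to channel i:
\gamma_{ij}^2(f) = \frac{|H_{ij}(f)|^2}{\sum_{m=1}^{N} |H_{im}(f)|^2}
% PDC from channel j to channel i:
\pi_{ij}(f) = \frac{|\bar{A}_{ij}(f)|}{\sqrt{\sum_{m=1}^{N} |\bar{A}_{mj}(f)|^2}}
```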

  14. Direct magnitude estimates of speech intelligibility in dysarthria: effects of a chosen standard.

    PubMed

    Weismer, Gary; Laures, Jacqueline S

    2002-06-01

    Direct magnitude estimation (DME) has been used frequently as a perceptual scaling technique in studies of the speech intelligibility of persons with speech disorders. The technique is typically used with a standard, or reference stimulus, chosen as a good exemplar of "midrange" intelligibility. In several published studies, the standard has been chosen subjectively, usually on the basis of the expertise of the investigators. The current experiment demonstrates that a fixed set of sentence-level utterances, obtained from 4 individuals with dysarthria (2 with Parkinson disease, 2 with traumatic brain injury) as well as 3 neurologically normal speakers, is scaled differently depending on the identity of the standard. Four different standards were used in the main experiment, three of which were judged qualitatively in two independent evaluations to be good exemplars of midrange intelligibility. Acoustic analyses did not reveal obvious differences between these four standards but suggested that the standard with the worst-scaled intelligibility had much poorer voice source characteristics compared to the other three standards. Results are discussed in terms of possible standardization of midrange intelligibility exemplars for DME experiments.

  15. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

    Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, estimates the entire density, and provides an explicit formula for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.

  16. Toward a Smartphone Application for Estimation of Pulse Transit Time

    PubMed Central

    Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei

    2015-01-01

    Pulse transit time (PTT) is an important physiological parameter that directly correlates with the elasticity and compliance of vascular walls and variations in blood pressure. This paper presents a PTT estimation method based on photoplethysmographic imaging (PPGi). The method utilizes two opposing cameras for simultaneous acquisition of PPGi waveform signals from the index fingertip and the forehead temple. An algorithm for the detection of maxima and minima in PPGi signals was developed, which includes interpolation to recover the true positions of these points. We compared our PTT measurements with those obtained from the current methodological standards. Statistical results indicate that the PTT measured by our proposed method exhibits a good correlation with the established method. The proposed method is especially suitable for implementation in dual-camera smartphones, which could facilitate PTT measurement among populations affected by cardiac complications. PMID:26516861

  17. Toward a Smartphone Application for Estimation of Pulse Transit Time.

    PubMed

    Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei

    2015-10-27

    Pulse transit time (PTT) is an important physiological parameter that directly correlates with the elasticity and compliance of vascular walls and variations in blood pressure. This paper presents a PTT estimation method based on photoplethysmographic imaging (PPGi). The method utilizes two opposing cameras for simultaneous acquisition of PPGi waveform signals from the index fingertip and the forehead temple. An algorithm for the detection of maxima and minima in PPGi signals was developed, which includes interpolation to recover the true positions of these points. We compared our PTT measurements with those obtained from the current methodological standards. Statistical results indicate that the PTT measured by our proposed method exhibits a good correlation with the established method. The proposed method is especially suitable for implementation in dual-camera smartphones, which could facilitate PTT measurement among populations affected by cardiac complications.
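
    A rough illustration of the underlying computation: locate the foot (minimum) of each pulse in the two synchronized PPG signals and average the per-beat delays. This is a simplified sketch that omits the paper's sub-sample interpolation; all names are hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

def pulse_transit_time(ppg_proximal, ppg_distal, fs, min_rr=0.4):
    """Rough PTT estimate (seconds): mean delay between corresponding
    waveform feet (minima) of two synchronized PPG signals at fs Hz,
    e.g. forehead (proximal) and fingertip (distal)."""
    dist = int(min_rr * fs)  # refractory distance between beats
    feet_p, _ = find_peaks(-np.asarray(ppg_proximal), distance=dist)
    feet_d, _ = find_peaks(-np.asarray(ppg_distal), distance=dist)
    delays = []
    for t in feet_p:
        later = feet_d[feet_d >= t]          # first distal foot after
        if later.size:                       # the proximal foot
            delays.append((later[0] - t) / fs)
    return float(np.mean(delays)) if delays else float("nan")
```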

  18. Estimators of wheel slip for electric vehicles using torque and encoder measurements

    NASA Astrophysics Data System (ADS)

    Boisvert, M.; Micheau, P.

    2016-08-01

    For the purpose of regenerative braking control in hybrid and electric vehicles, recent studies have suggested controlling the slip ratio of the electric-powered wheel. A slip-tracking controller requires an accurate slip estimate over the full range of the slip ratio (from 0 to 1), contrary to the conventional slip limiter (ABS), which calls for an accurate slip estimate only in the critical slip area, estimated at around 0.15 in several applications. Considering that it is not possible to measure the slip ratio of a wheel directly, the problem is to estimate it from available online data. To estimate the slip of a wheel, both wheel speed and vehicle speed must be known. Several studies provide algorithms that yield a good estimate of vehicle speed. On the other hand, no algorithm has been proposed for conditioning the wheel speed measurement. Indeed, the noise included in the wheel speed measurement reduces the accuracy of the slip estimate, a disturbance increasingly significant at low speed and low torque. Herein, two different extended Kalman observers of slip ratio were developed. The first calculates the slip ratio with data provided by an observer of vehicle speed and of propeller wheel speed. The second observer uses an original nonlinear model of the slip ratio as a function of the electric motor torque. A sinus-tracking algorithm is included in the two observers in order to reject harmonic disturbances of the wheel speed measurement. Moreover, mass and road uncertainties can be compensated with a coefficient adapted online by recursive least squares (RLS). The algorithms were implemented and tested with a three-wheel recreational hybrid vehicle. Experimental results show the efficiency of both methods.
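
    The quantity being estimated is the standard longitudinal slip ratio. A minimal definition follows, with a small epsilon guard reflecting the low-speed sensitivity noted above; the exact convention used by the authors may differ.

```python
def slip_ratio(wheel_speed_rad_s, wheel_radius_m, vehicle_speed_m_s,
               eps=1e-3):
    """Longitudinal slip ratio in [0, 1], valid for braking or traction.
    Standard textbook definition; eps guards against division by near
    zero at very low speeds, where measurement noise dominates."""
    v_wheel = wheel_speed_rad_s * wheel_radius_m
    v_max = max(abs(v_wheel), abs(vehicle_speed_m_s), eps)
    return abs(v_wheel - vehicle_speed_m_s) / v_max
```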

  19. Optical Estimation of Depth and Current in an Ebb Tidal Delta Environment

    NASA Astrophysics Data System (ADS)

    Holman, R. A.; Stanley, J.

    2012-12-01

    A key limitation to our ability to make nearshore environmental predictions is the difficulty of obtaining up-to-date bathymetry measurements at a reasonable cost and frequency. Due to the high cost and complex logistics of in-situ methods, research into remote sensing approaches has been steady and has finally yielded fairly robust methods like the cBathy algorithm for optical Argus data, which shows good performance on simple barred beach profiles and near immunity to noise and signal problems. In May 2012, data were collected in a more complex ebb tidal delta environment during the RIVET field experiment at New River Inlet, NC. The presence of strong reversing tidal currents led to significant errors in cBathy depths that were phase-locked to the tide. In this paper we will test methods for the robust estimation of both depths and vector currents in a tidal delta domain. In contrast to previous Fourier methods, wavenumber estimation in cBathy can be done on small enough scales to resolve interesting nearshore features.
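
    The physics behind both the depth estimates and the current-induced bias is the linear dispersion relation: a current along the wave direction Doppler-shifts the observed frequency, and inverting for depth without removing that shift produces the tide-phase-locked errors described above. A sketch illustrating the principle, not the cBathy implementation:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(freq_hz, k, u_along=0.0):
    """Invert the linear dispersion relation sigma^2 = g*k*tanh(k*h)
    for depth h, given observed frequency (Hz) and wavenumber k (rad/m).
    u_along is the current component along the wave direction;
    subtracting the Doppler shift k*u gives the intrinsic frequency."""
    sigma = 2.0 * np.pi * freq_hz - k * u_along
    ratio = sigma**2 / (g * k)
    if not 0.0 < ratio < 1.0:
        raise ValueError("no finite depth satisfies the dispersion relation")
    return np.arctanh(ratio) / k

k = 2.0 * np.pi / 50.0  # 50 m wavelength
print(depth_from_dispersion(0.1, k, u_along=1.0))  # current removed: ~1.7 m
print(depth_from_dispersion(0.1, k, u_along=0.0))  # current ignored: ~2.6 m
```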

  20. A new algorithm for microwave delay estimation from water vapor radiometer data

    NASA Technical Reports Server (NTRS)

    Robinson, S. E.

    1986-01-01

    A new algorithm has been developed for the estimation of tropospheric microwave path delays from water vapor radiometer (WVR) data, which does not require site- and weather-dependent empirical parameters to produce high accuracy. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate wet path delays in a second step. The intrinsic accuracy of this algorithm has been examined for two-channel WVR data using path delays and simulated observables computed from archived radiosonde data. It is found that annual RMS errors for a wide range of sites are in the range from 1.3 mm to 2.3 mm, in the absence of clouds. This is comparable to the best overall accuracy obtainable from conventional linear algorithms, which must be tailored to site and weather conditions using large radiosonde databases. The new algorithm's accuracy and flexibility are indications that it may be a good candidate for almost all WVR data interpretation.

  1. PMP Estimations at Sparsely Controlled Andinian Basins and Climate Change Projections

    NASA Astrophysics Data System (ADS)

    Lagos Zúñiga, M. A.; Vargas, X.

    2012-12-01

    Probable Maximum Precipitation (PMP) estimation implies an extensive review of hydrometeorological data and an understanding of precipitation formation processes. Different methodologies exist for its estimation, and all of them require a good spatial and temporal representation of storms. Estimating hydrometeorological PMP in sparsely controlled basins is a difficult task, especially if the studied area is subject to an important orographic effect and to mixed (rain and snow) precipitation during the most severe storms. The main task of this study is to propose and estimate PMP in a sparsely controlled basin with abrupt topography and mixed hydrology, also analyzing the statistical uncertainties of the estimates and the possible effects of climate change on them. In this study, PMP was estimated under statistical and hydrometeorological approaches (watershed-based and traditional depth-area-duration analysis) in a semi-arid zone at the Puclaro dam in northern Chile. Owing to the lack of good spatial meteorological representation in the study zone, we propose a methodology that accounts for the orographic effects of the Andes using patterns based on the RCM PRECIS-DGF and annual isohyetal maps. Estimations were validated against precipitation patterns for given winters, considering snow courses and rain gauges along the prevailing wind direction, with good results. The estimates are also compared with the largest areal storms in the USA, Australia, India and China and with frequency analyses of local rain gauge stations in order to decide on the most adequate approach for the study zone. Climate change projections were evaluated with the ECHAM5 GCM, chosen for its good representation of the seasonality and magnitude of the relevant meteorological variables. Temperature projections for the 2040-2065 period show that there would be a rise in the contributing area of the catchment that would lead to an increase of the

  2. New method for estimating arterial pulse wave velocity at single site.

    PubMed

    Abdessalem, Khaled Ben; Flaud, Patrice; Zobaidi, Samir

    2018-01-01

    The clinical importance of measuring local pulse wave velocity (PWV) has encouraged researchers to develop several local methods to estimate it. In this work, we propose a new method, the sum-of-squares method [Formula: see text], that allows estimation of PWV using simultaneous measurements of blood pressure (P) and arterial diameter (D) at a single location. Pulse waveforms generated by: (1) a two-dimensional (2D) fluid-structure interaction (FSI) simulation in a compliant tube, (2) a one-dimensional (1D) model of 55 larger human systemic arteries and (3) experimental data were used to validate the new formula and evaluate several classical methods. The performance of the proposed method was assessed by comparing its results to the theoretical PWV calculated from the parameters of the model and/or to PWV estimated by several classical methods. It was found that values of PWV obtained by the developed method [Formula: see text] are in good agreement with theoretical ones and with those calculated by the PA-loop and D2P-loop. The difference between the PWV calculated by [Formula: see text] and the PA-loop does not exceed 1% when data from simulations are used, 3% for in vitro data and 5% for in vivo data. In addition, this study suggests that PWV estimated from arterial pressure and diameter waveforms provides correct values, while methods that require flow rate (Q) and velocity (U) overestimate or underestimate PWV.

  3. Space shuttle propulsion estimation development verification

    NASA Technical Reports Server (NTRS)

    Rogers, Robert M.

    1989-01-01

    The application of extended Kalman filtering to estimating Space Shuttle propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground-based radar tracking data. The key feature in this application is the model used for the SRBs, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model. Aerodynamic, plume, wind and main engine uncertainties are also included for an integrated system model. Assuming uncertainty within the propulsion system model and attempting to estimate its deviations represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good-quality propulsion estimates.
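
    For readers unfamiliar with the technique, one predict/update cycle of a generic extended Kalman filter looks as follows. This is a textbook sketch with hypothetical arguments, not the flight program described above.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One predict/update cycle of a generic extended Kalman filter.

    f, h: nonlinear process and measurement models; F, H: their
    Jacobians evaluated at the current estimate; Q, R: process and
    measurement noise covariances.
    """
    # predict: propagate the state and covariance through the model
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # update: correct with the measurement innovation
    Hk = H(x_pred)
    y = z - h(x_pred)                        # innovation
    S = Hk @ P_pred @ Hk.T + R               # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ Hk) @ P_pred
    return x_new, P_new
```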

  4. Evaluation of multiple tracer methods to estimate low groundwater flow velocities

    DOE PAGES

    Reimus, Paul W.; Arnold, Bill W.

    2017-02-20

    Here, four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or “shut-in” periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and also when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered to be the most robust and direct method, so it was used to provide a “ground truth” velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. We discuss the advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them.

  5. Evaluation of multiple tracer methods to estimate low groundwater flow velocities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimus, Paul W.; Arnold, Bill W.

    Here, four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or “shut-in” periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and also when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered to be the most robust and direct method, so it was used to provide a “ground truth” velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. We discuss the advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them.

  6. Evaluation of multiple tracer methods to estimate low groundwater flow velocities.

    PubMed

    Reimus, Paul W; Arnold, Bill W

    2017-04-01

    Four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or "shut-in" periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and also when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered to be the most robust and direct method, so it was used to provide a "ground truth" velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. The advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them are discussed. Published by Elsevier B.V.

  7. Estimating surface hardening profile of blank for obtaining high drawing ratio in deep drawing process using FE analysis

    NASA Astrophysics Data System (ADS)

    Tan, C. J.; Aslian, A.; Honarvar, B.; Puborlaksono, J.; Yau, Y. H.; Chong, W. T.

    2015-12-01

    We constructed an FE axisymmetric model to simulate the effect of partially hardened blanks on increasing the limiting drawing ratio (LDR) of cylindrical cups. We partitioned an arc-shaped hard layer into the cross section of a DP590 blank. We assumed the mechanical property of the layer to be equivalent to either DP980 or DP780. We verified the accuracy of the model by comparing the calculated LDR for DP590 with the one reported in the literature. The LDR for the partially hardened blank increased from 2.11 to 2.50 with a 1 mm deep DP980 ring-shaped hard layer on the top surface of the blank. The position of the layer changed with the drawing ratio. We proposed equations for estimating the inner and outer diameters of the layer and tested their accuracy in the simulation. Although the outer diameters fitted well with the estimated line, the inner diameters were slightly less than the estimated ones.

  8. Improved Estimates of Temporally Coherent Internal Tides and Energy Fluxes from Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    Satellite altimetry has opened a surprising new avenue to observing internal tides in the open ocean. The tidal surface signatures are very small, a few cm at most, but in many areas they are robust, owing to averaging over many years. By employing a simplified two-dimensional wave fitting to the surface elevations, in combination with climatological hydrography to define the relation between the surface height and the current and pressure at depth, we may obtain rough estimates of internal tide energy fluxes. Initial results near Hawaii with Topex/Poseidon (T/P) data show good agreement with detailed three-dimensional (3D) numerical models, but the altimeter picture is somewhat blurred owing to the widely spaced T/P tracks. The resolution may be enhanced somewhat by using data from the ERS-1 and ERS-2 satellite altimeters of the European Space Agency (ESA). The ERS satellite tracks are much more closely spaced (0.72 deg longitude vs. 2.83 deg for T/P), but the tidal estimates are less accurate than those for T/P. All altimeter estimates are also severely affected by noise in regions of high mesoscale variability, and we have obtained some success in reducing this contamination by employing a prior correction for mesoscale variability based on ten-day detailed sea surface height maps developed by Le Traon and colleagues. These improvements allow us to more clearly define the internal tide surface field and the corresponding energy fluxes. Results from throughout the global ocean will be presented.

  9. 20 CFR 404.810 - How to obtain a statement of earnings and a benefit estimate statement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records at the time of the request. If you have a social security number and have wages or net earnings... prescribed form, giving us your name, social security number, date of birth, and sex. You, your authorized... benefit estimate statement. 404.810 Section 404.810 Employees' Benefits SOCIAL SECURITY ADMINISTRATION...

  10. Monte Carlo Estimation of Absorbed Dose Distributions Obtained from Heterogeneous 106Ru Eye Plaques.

    PubMed

    Zaragoza, Francisco J; Eichmann, Marion; Flühs, Dirk; Sauerwein, Wolfgang; Brualla, Lorenzo

    2017-09-01

    The distribution of the emitter substance in 106Ru eye plaques is usually assumed to be homogeneous for treatment planning purposes. However, this distribution is never homogeneous, and it widely differs from plaque to plaque due to manufacturing factors. By Monte Carlo simulation of radiation transport, we study the absorbed dose distribution obtained from the specific CCA1364 and CCB1256 106Ru plaques, whose actual emitter distributions were measured. The idealized, homogeneous CCA and CCB plaques are also simulated. The largest discrepancy in depth dose distribution observed between the heterogeneous and the homogeneous plaques was 7.9 and 23.7% for the CCA and CCB plaques, respectively. In terms of isodose lines, the line referring to 100% of the reference dose penetrates 0.2 and 1.8 mm deeper in the case of heterogeneous CCA and CCB plaques, respectively, with respect to the homogeneous counterpart. The observed differences in absorbed dose distributions obtained from heterogeneous and homogeneous plaques are clinically irrelevant if the plaques are used with a lateral safety margin of at least 2 mm. However, these differences may be relevant if the plaques are used in eccentric positioning.

  11. Good Concrete Activity Is Good Mental Activity

    ERIC Educational Resources Information Center

    McDonough, Andrea

    2016-01-01

    Early years mathematics classrooms can be colourful, exciting, and challenging places of learning. Andrea McDonough and fellow teachers have noticed that some students make good decisions about using materials to assist their problem solving, but this is not always the case. These experiences lead her to ask the following questions: (1) Are…

  12. Effect of the depreciation of public goods in spatial public goods games

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Zhuang, Yong; Wang, Bing-Hong

    2012-02-01

    In this work, the depreciation effect of public goods is considered in public goods games, realized by rescaling the multiplication factor r of each group as r′ = r(n_c/G)^β (β ≥ 0), where n_c is the number of cooperators among the G members of the group. It is assumed that each individual enjoys the full profit r of the public goods if all the players of this group are cooperators. Otherwise, the value of the public goods is reduced to r′. It is found that, compared with the original version (β = 0), the emergence of cooperation is remarkably promoted for β > 0, and there exist intermediate values of β inducing the best cooperation. In particular, there exists a range of β inducing the highest cooperative level, and this range broadens as r increases. It is further shown that the variation of cooperator density with noise is closely related to the values of β and r, and that cooperation at an intermediate value of β = 1.0 is most tolerant to noise.
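
    Under the rescaling shown above (which is a reconstruction from the stated limits, full value only when everyone cooperates and β = 0 recovering the original game, not a quote from the paper), one group's payoffs look like this minimal sketch:

```python
def group_payoffs(strategies, r, beta, cost=1.0):
    """Payoffs in one public goods group with depreciation.

    strategies: list of 0/1 (defect/cooperate); each cooperator
    contributes `cost`. The multiplication factor is depreciated as
    r' = r * (n_c / G)**beta, so beta = 0 is the standard game.
    """
    G = len(strategies)
    n_c = sum(strategies)
    r_eff = r * (n_c / G) ** beta
    share = r_eff * n_c * cost / G   # public pot, split equally
    return [share - cost * s for s in strategies]

# five players, three cooperators, r = 4:
print(group_payoffs([1, 1, 1, 0, 0], r=4.0, beta=1.0))
```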

  13. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series. We show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. The experimental results show that the proposed estimator approximates the original MMSE nonlinearity reasonably well.
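
    For context, the quantity being approximated is the posterior mean. For an observation y = x + n with n ~ N(0, σ_n²) and prior p_X, the MMSE estimator is the standard Bayes formula below; the difficulty with super-Gaussian priors is that these integrals rarely have closed forms.

```latex
\hat{x}_{\mathrm{MMSE}}(y) = \mathbb{E}[x \mid y]
  = \frac{\displaystyle\int x \, p_X(x)\,
        e^{-(y-x)^2 / 2\sigma_n^2} \, \mathrm{d}x}
         {\displaystyle\int p_X(x)\,
        e^{-(y-x)^2 / 2\sigma_n^2} \, \mathrm{d}x}
```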

  14. Reciprocal Sliding Friction Model for an Electro-Deposited Coating and Its Parameter Estimation Using Markov Chain Monte Carlo Method

    PubMed Central

    Kim, Kyungmok; Lee, Jaewook

    2016-01-01

    This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using a ball-on-flat-plate test apparatus are performed to determine the evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from a Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. The friction coefficients estimated by the MCMC approach are found to be in good agreement with the measured ones. PMID:28773359
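
    A generic sketch of the sampling step: random-walk Metropolis, the simplest MCMC variant. The log-posterior of the friction-model parameters given the measured friction traces is user-supplied; names and the step size are illustrative, and the authors' actual sampler may differ.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples, step=0.1, seed=0):
    """Random-walk Metropolis sampler for model parameters.

    log_post: unnormalized log-posterior of the parameters given the
    data (user-supplied). Returns an array of sampled parameter vectors.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(proposal)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```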

  15. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.

  16. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values. PMID:25685472

  17. Agricultural land-use classification using landsat imagery data, and estimates of irrigation water use in Gooding, Jerome, Lincoln, and Minidoka counties, 1992 water year, Upper Snake River basin, Idaho and western Wyoming

    USGS Publications Warehouse

    Maupin, Molly A.

    1997-01-01

    As part of the U.S. Geological Survey's National Water-Quality Assessment Program in the upper Snake River Basin study unit, land- and water-use data were used to describe activities that have potential effects on water quality, including biological conditions, in the basin. Land-use maps and estimates of water use by irrigated agriculture were needed for Gooding, Jerome, Lincoln, and Minidoka Counties (south-central Idaho), four of the most intensively irrigated counties in the study unit. Land use in the four counties was mapped from Landsat Thematic Mapper imagery data for the 1992 water year using the SPECTRUM computer program. Land-use data were field verified in 108 randomly selected sections (640 acres each); results compared favorably with land-use maps from other sources. Water used for irrigation during the 1992 water year was estimated using land-use and ancillary data. In 1992, a drought year, estimated irrigation withdrawals in the four counties were about 2.9 million acre-feet of water. Of the 2.9 million acre-feet, an estimated 2.12 million acre-feet of water was withdrawn from surface water, mainly the Snake River, and nearly 776,000 acre-feet was withdrawn from ground water. One-half of the 2.9 million acre-feet of water withdrawn for irrigation was considered to be lost during conveyance or was returned to the Snake River; the remainder was consumptively used by crops during the growing season.

  18. Multiscale estimation of excess mass from gravity data

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help us to reduce the interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established for selecting the optimal highest altitude of the vertical profile data and the truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as the Gauss method: (i) we need just a 1-D inversion to obtain our estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is also estimated, besides the excess mass; (iv) the method is very robust versus noise; (v) the profile may be chosen in such a way as to minimize the effects of interfering anomalies or edge effects due to a limited survey area. The multiscale estimation of excess mass can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skelleftea ore district, North Sweden, obtaining source mass and volume estimates in agreement with the known information. We show also that these estimates are substantially improved with respect to those obtained with the classical approach.
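
    The classical benchmark mentioned above is Gauss's theorem, which recovers the total anomalous mass from a surface integral of the residual gravity anomaly Δg and therefore requires data over the whole anomaly area, which is exactly what the 1-D vertical-profile inversion avoids:

```latex
M = \frac{1}{2\pi G} \iint_{S} \Delta g(x, y)\, \mathrm{d}x\, \mathrm{d}y
% G: gravitational constant; S: survey area covering the anomaly
```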

  19. Comparison of specific-yield estimates for calculating evapotranspiration from diurnal groundwater-level fluctuations

    NASA Astrophysics Data System (ADS)

    Gribovszki, Zoltán

    2018-05-01

    Methods that use diurnal groundwater-level fluctuations are commonly used in shallow water-table environments to estimate evapotranspiration (ET) and recharge. The key element needed to obtain reliable estimates is the specific yield (Sy), a soil-water storage parameter that depends on unsaturated soil-moisture and water-table fluxes, among other factors. Soil-moisture profile measurements down to the water table, along with water-table-depth measurements, provide a good opportunity to calculate Sy values even on a sub-daily scale. These values were compared with Sy estimates derived by traditional techniques, and it was found that slug-test-based Sy values gave the most similar results in a sandy soil environment. Therefore, slug-test methods, which are relatively cheap and require little time, were best suited to estimating Sy for use with diurnal fluctuations. The reason for this is that the timeframe of the slug-test measurement is very similar to the dynamics of the diurnal signal. The dynamic character of Sy was also analyzed on a sub-daily scale (depending mostly on the speed of drainage from the soil profile), and a remarkable difference was found in Sy with respect to the rate of change of the water table. When comparing constant and sub-daily (dynamic) Sy values for ET estimation, the sub-daily Sy application yielded a higher correlation, but only a slightly smaller deviation from the control ET method, compared with the use of a constant Sy.
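
    The class of diurnal-fluctuation methods referred to here descends from White's (1932) formula, in which Sy multiplies the water-table signal directly, which is why the ET estimate is only as reliable as the Sy value. A standard form is given below; sign conventions vary between authors.

```latex
\mathrm{ET}_g = S_y \,\left( 24\,r + \Delta s \right)
% r:        mean rate of water-table rise during the pre-dawn hours,
%           taken to represent the net groundwater inflow rate
% \Delta s: net fall of the water table over the 24-hour day
```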

  20. Estimating of higher order velocity moments and their derivatives in boundary layer by Smoke Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Mikheev, N. I.; Goltsman, A. E.; Salekhova, I. G.; Saushin, I. I.

    2017-11-01

    The results of an experimental evaluation of the third-order moment profiles of velocity fluctuations and their partial derivatives in a zero-pressure-gradient turbulent boundary layer are presented. The profiles are estimated from the dynamics of two-component instantaneous velocity vector fields measured by the optical method Smoke Image Velocimetry (SIV). Comparison of the SIV measurements with thermoanemometer measurements and with DNS data at similar Reθ and Reτ showed good agreement between the profiles of the third-order velocity moments and their wall-normal derivatives obtained by SIV and DNS.

  1. Investigation of Properties of Nanocomposite Polyimide Samples Obtained by Fused Deposition Modeling

    NASA Astrophysics Data System (ADS)

    Polyakov, I. V.; Vaganov, G. V.; Yudin, V. E.; Ivan'kova, E. M.; Popova, E. N.; Elokhovskii, V. Yu.

    2018-03-01

    Nanomodified polyimide samples were obtained by fused deposition modeling (FDM) using an experimental setup for 3D printing of highly heat-resistant plastics. The mechanical properties and structure of these samples were studied by viscosimetry, differential scanning calorimetry, and scanning electron microscopy. A comparative estimation of the mechanical properties of laboratory samples obtained from a nanocomposite based on heat-resistant polyetherimide by FDM and injection molding is presented.

  2. Two-Dimensional Echocardiography Estimates of Fetal Ventricular Mass throughout Gestation.

    PubMed

    Aye, Christina Y L; Lewandowski, Adam James; Ohuma, Eric O; Upton, Ross; Packham, Alice; Kenworthy, Yvonne; Roseman, Fenella; Norris, Tess; Molloholli, Malid; Wanyonyi, Sikolia; Papageorghiou, Aris T; Leeson, Paul

    2017-08-12

    Two-dimensional (2D) ultrasound quality has improved in recent years. Quantification of cardiac dimensions is important to screen and monitor certain fetal conditions. We assessed the feasibility and reproducibility of fetal ventricular measures using 2D echocardiography, reported normal ranges in our cohort, and compared estimates to other modalities. Mass and end-diastolic volume were estimated by manual contouring in the four-chamber view using TomTec Image Arena 4.6 in end diastole. Nomograms were created from smoothed centiles of measures, constructed using fractional polynomials after log transformation. The results were compared to those of previous studies using other modalities. A total of 294 scans from 146 fetuses from 15+0 to 41+6 weeks of gestation were included. Seven percent of scans were unanalysable and intraobserver variability was good (intraclass correlation coefficients for left and right ventricular mass 0.97 [0.87-0.99] and 0.99 [0.95-1.0], respectively). Mass and volume increased exponentially, showing good agreement with 3D mass estimates up to 28 weeks of gestation, after which our measurements were in better agreement with neonatal cardiac magnetic resonance imaging. There was good agreement with 4D volume estimates for the left ventricle. Current state-of-the-art 2D echocardiography platforms provide accurate, feasible, and reproducible fetal ventricular measures across gestation, and in certain circumstances may be the modality of choice. © 2017 S. Karger AG, Basel.

  3. Estimation of tunnel blockage from wall pressure signatures: A review and data correlation

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Wilsden, D. J.; Lilley, D. E.

    1979-01-01

    A method is described for estimating low-speed wind tunnel blockage, including model volume, bubble separation and viscous wake effects. A tunnel-centerline source/sink distribution is derived from measured wall pressure signatures, using fast algorithms to solve the inverse problem in three dimensions. Blockage may then be computed throughout the test volume. Correlations using scaled models or tests in two tunnels were made in all cases. In many cases, model reference area exceeded 10% of the tunnel cross-sectional area. Good correlations were obtained for model surface pressures, lift, drag and pitching moment. It is shown that blockage-induced velocity variations across the test section are relatively unimportant, but axial gradients should be considered when model size is determined.

  4. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state-vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, and atmospheric wind deviations from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  5. Reconsidering the "Good Divorce"

    PubMed

    Amato, Paul R; Kane, Jennifer B; James, Spencer

    2011-12-01

    This study attempted to assess the notion that a "good divorce" protects children from the potential negative consequences of marital dissolution. A cluster analysis of data on postdivorce parenting from 944 families resulted in three groups: cooperative coparenting, parallel parenting, and single parenting. Children in the cooperative coparenting (good divorce) cluster had the smallest number of behavior problems and the closest ties to their fathers. Nevertheless, children in this cluster did not score significantly better than other children on 10 additional outcomes. These findings provide only modest support for the good divorce hypothesis.

  6. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.

  7. Estimating regional centile curves from mixed data sources and countries.

    PubMed

    van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J

    2009-10-15

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource-poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions, implemented in generalized additive models for location, scale and shape (GAMLSS), through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method, we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273 270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region-specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
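
    The second step can be written compactly. If f_i(y | t) is country i's fitted density of weight y at age t and N_i its population size, the regional reference density is the population-weighted finite mixture below, and regional centiles follow by inverting the mixture's cumulative distribution:

```latex
f_{\mathrm{region}}(y \mid t) = \sum_{i=1}^{k} w_i \, f_i(y \mid t),
\qquad w_i = \frac{N_i}{\sum_{j=1}^{k} N_j}
```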

  8. Estimation of daily flow rate of photovoltaic water pumping systems using solar radiation data

    NASA Astrophysics Data System (ADS)

    Benghanem, M.; Daffallah, K. O.; Almohammedi, A.

    2018-03-01

    This paper presents a simple model which allows us to contribute to studies of photovoltaic (PV) water pumping system sizing. The nonlinear relation between water flow rate and solar power was first obtained experimentally and then used for performance prediction. The proposed model enables us to simulate the water flow rate using solar radiation data for different heads (50 m, 60 m, 70 m and 80 m) and for an 8S × 3P PV array configuration. The experimental data were obtained with our pumping test facility located at the Madinah site (Saudi Arabia). The performances are calculated using the measured solar radiation data of different locations in Saudi Arabia. Knowing the solar radiation data, we estimated the water flow rate Q with good precision at five locations (Al-Jouf, Solar Village, Al-Ahsa, Madinah and Gizan) in Saudi Arabia. The flow rate Q increases with pump power for the different heads, following the proposed nonlinear model.

  9. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from previous studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright ?? 2005 The Author(s).

  10. Innovative Formulation Combining Al, Zr and Si Precursors to Obtain Anticorrosion Hybrid Sol-Gel Coating.

    PubMed

    Genet, Clément; Menu, Marie-Joëlle; Gavard, Olivier; Ansart, Florence; Gressier, Marie; Montpellaz, Robin

    2018-05-10

    The aim of our study is to improve the corrosion resistance of an aluminium alloy with an Organic-Inorganic Hybrid (OIH) sol-gel coating. Coatings are obtained from an unusual formulation mixing three precursors: glycidoxypropyltrimethoxysilane (GPTMS), zirconium (IV) propoxide (TPOZ) and aluminium tri-sec-butoxide (ASB). This formulation was characterized and compared with the sol formulations GPTMS/TPOZ and GPTMS/ASB. In each formulation, a corrosion inhibitor, cerium (III) nitrate hexahydrate, is employed to improve the corrosion performance. Coatings obtained from the sol based on GPTMS/TPOZ/ASB show good anticorrosion performance, with neutral salt spray (NSS) resistance of 500 h at a thickness below 4 µm. Contact angle measurements showed hydrophobic coating behaviour. To understand these performances, nuclear magnetic resonance (NMR) analyses were performed; the results make the condensation of the sol-gel coating evident and are in very good agreement with previous results.

  11. Tree Canopy Light Interception Estimates in Almond and Walnut Orchards Using Ground, Low Flying Aircraft, and Satellite Based Methods to Improve Irrigation Scheduling Programs

    NASA Technical Reports Server (NTRS)

    Rosecrance, Richard C.; Johnson, Lee; Soderstrom, Dominic

    2016-01-01

    Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using the normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an Image J thresholding routine. Correlations between the various estimates are discussed.

  12. Tree canopy light interception estimates in almond and walnut orchards using ground, low flying aircraft, and satellite based methods to improve irrigation scheduling programs.

    NASA Astrophysics Data System (ADS)

    Rosecrance, R. C.; Johnson, L.; Soderstrom, D.

    2016-12-01

    Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using the normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an Image J thresholding routine. Correlations between the various estimates are discussed.

  13. Structural, electronic, elastic, and thermal properties of CaNiH3 perovskite obtained from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Benlamari, S.; Bendjeddou, H.; Boulechfar, R.; Amara Korba, S.; Meradji, H.; Ahmed, R.; Ghemid, S.; Khenata, R.; Omran, S. Bin

    2018-03-01

    A theoretical study of the structural, elastic, electronic, mechanical, and thermal properties of the perovskite-type hydride CaNiH3 is presented. This study is carried out via first-principles full potential (FP) linearized augmented plane wave plus local orbital (LAPW+lo) method designed within the density functional theory (DFT). To treat the exchange–correlation energy/potential for the total energy calculations, the local density approximation (LDA) of Perdew–Wang (PW) and the generalized gradient approximation (GGA) of Perdew–Burke–Ernzerhof (PBE) are used. The three independent elastic constants (C11, C12, and C44) are calculated from the direct computation of the stresses generated by small strains. Besides, we report the variation of the elastic constants as a function of pressure as well. From the calculated elastic constants, the mechanical character of CaNiH3 is predicted. Pertaining to the thermal properties, the Debye temperature is estimated from the average sound velocity. To further comprehend this compound, the quasi-harmonic Debye model is used to analyze the thermal properties. From the calculations, we find that the obtained results of the lattice constant (a0), bulk modulus (B0), and its pressure derivative (B0') are in good agreement with the available theoretical as well as experimental results. Similarly, the obtained electronic band structure demonstrates the metallic character of this perovskite-type hydride.

  14. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
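
    The cross-correlation idea can be checked in a few lines: because the estimation noise in the two pilot-derived channel estimates is independent, it averages out of their sample cross-correlation. The sketch below is a toy with invented dimensions and SNR, after the random phase-shift has already been removed.

      import numpy as np

      rng = np.random.default_rng(0)
      M, K, sigma = 8, 2000, 0.5          # antennas, coherence blocks, noise std

      # ground-truth covariance built from a random factor
      F = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
      R = F @ F.conj().T / M
      L = np.linalg.cholesky(R)

      R_hat = np.zeros((M, M), dtype=complex)
      for _ in range(K):
          h = L @ (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
          n1 = sigma * (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
          n2 = sigma * (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
          h1, h2 = h + n1, h + n2         # channel estimates from the two pilots
          R_hat += np.outer(h1, h2.conj()) / K

      R_hat = (R_hat + R_hat.conj().T) / 2  # enforce Hermitian symmetry
      print("relative error:", np.linalg.norm(R_hat - R) / np.linalg.norm(R))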

  15. Hospice in Assisted Living: Promoting Good Quality Care at End of Life

    ERIC Educational Resources Information Center

    Cartwright, Juliana C.; Miller, Lois; Volpin, Miriam

    2009-01-01

    Purpose: The purpose of this study was to describe good quality care at the end of life (EOL) for hospice-enrolled residents in assisted living facilities (ALFs). Design and Methods: A qualitative descriptive design was used to obtain detailed descriptions of EOL care provided by ALF medication aides, caregivers, nurses, and hospice nurses in…

  16. Satellite-derived methane hotspot emission estimates using a fast data-driven method

    NASA Astrophysics Data System (ADS)

    Buchwitz, Michael; Schneising, Oliver; Reuter, Maximilian; Heymann, Jens; Krautwurst, Sven; Bovensmann, Heinrich; Burrows, John P.; Boesch, Hartmut; Parker, Robert J.; Somkuti, Peter; Detmers, Rob G.; Hasekamp, Otto P.; Aben, Ilse; Butz, André; Frankenberg, Christian; Turner, Alexander J.

    2017-05-01

    Methane is an important atmospheric greenhouse gas and an adequate understanding of its emission sources is needed for climate change assessments, predictions, and the development and verification of emission mitigation strategies. Satellite retrievals of near-surface-sensitive column-averaged dry-air mole fractions of atmospheric methane, i.e. XCH4, can be used to quantify methane emissions. Maps of time-averaged satellite-derived XCH4 show regionally elevated methane over several methane source regions. In order to obtain methane emissions of these source regions we use a simple and fast data-driven method to estimate annual methane emissions and corresponding 1σ uncertainties directly from maps of annually averaged satellite XCH4. From theoretical considerations we expect that our method tends to underestimate emissions. When applying our method to high-resolution atmospheric methane simulations, we typically find agreement within the uncertainty range of our method (often 100 %) but also find that our method tends to underestimate emissions by typically about 40 %. To what extent these findings are model dependent needs to be assessed. We apply our method to an ensemble of satellite XCH4 data products consisting of two products from SCIAMACHY/ENVISAT and two products from TANSO-FTS/GOSAT covering the time period 2003-2014. We obtain annual emissions of four source areas: Four Corners in the south-western USA, the southern part of Central Valley, California, Azerbaijan, and Turkmenistan. We find that our estimated emissions are in good agreement with independently derived estimates for Four Corners and Azerbaijan. For the Central Valley and Turkmenistan our estimated annual emissions are higher compared to the EDGAR v4.2 anthropogenic emission inventory. For Turkmenistan we find on average about 50 % higher emissions with our annual emission uncertainty estimates overlapping with the EDGAR emissions. For the region around Bakersfield in the Central Valley we
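
    For orientation only, a back-of-the-envelope box-model version of such a fast data-driven estimate is sketched below: the annually averaged XCH4 enhancement is converted to an excess column mass and multiplied by a ventilation rate. The published method differs in detail, and every number here is an illustrative assumption.

      # physical constants: molar masses (kg/mol), gravity, surface pressure
      M_AIR, M_CH4, G, P_SURF = 28.97e-3, 16.04e-3, 9.81, 1.013e5

      dX = 10e-9            # XCH4 enhancement over background: 10 ppb (assumed)
      area = 200e3 * 200e3  # hotspot area (m^2, assumed)
      length = 200e3        # along-wind extent (m, assumed)
      wind = 4.0            # effective ventilation wind speed (m/s, assumed)

      col_air = P_SURF / (G * M_AIR)           # moles of air per m^2 of column
      omega = dX * col_air * M_CH4             # excess CH4 mass per m^2 (kg/m^2)
      emission = omega * area * wind / length  # steady-state box model (kg/s)
      print(f"~{emission * 3.15e7 / 1e9:.2f} Tg CH4 per year")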

  17. Management Documentation: Indicators & Good Practice at Cultural Heritage Places

    NASA Astrophysics Data System (ADS)

    Eppich, R.; Garcia Grinda, J. L.

    2015-08-01

    Documentation for cultural heritage places usually refers to describing the physical attributes, surrounding context, condition or environment; most of the time with images, graphics, maps or digital 3D models in their various forms with supporting textual information. Just as important as this type of information is the documentation of managerial attributes. How do managers of cultural heritage places collect information related to financial or economic well-being? How are data collected over time measured, and what are significant indicators for improvement? What quality of indicator is good enough? Good management of cultural heritage places is essential for conservation longevity, preservation of values and enjoyment by the public. But how is management documented? The paper will describe the research methodology, selection and description of attributes or indicators related to good management practice. It will describe the criteria for indicator selection and why they are important, how and when they are collected, by whom, and the difficulties in obtaining this information. As importantly, it will describe how this type of documentation directly contributes to improving conservation practice. Good practice summaries will be presented that highlight this type of documentation including Pamplona and Ávila, Spain and Valletta, Malta. Conclusions are drawn with preliminary recommendations for improvement of this important aspect of documentation. Documentation of this nature is not typical and presents a unique challenge to collect, measure and communicate easily. However, it is an essential category that is often ignored yet absolutely essential in order to conserve cultural heritage places.

  18. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
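
    The building block of any such scheme is the MUSIC pseudo-spectrum itself. The sketch below implements the standard one-dimensional version for a uniform linear array (a toy, not the paper's reduced-dimension bistatic DOD/DOA algorithm); array size, angles and SNR are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      M, d, N = 8, 0.5, 400                 # sensors, spacing (wavelengths), snapshots
      true_deg = [-20.0, 35.0]

      def steer(theta_deg):
          return np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

      A = np.stack([steer(t) for t in true_deg], axis=1)
      S = rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))
      X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

      R = X @ X.conj().T / N                # sample covariance
      _, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
      En = vecs[:, :M - 2]                  # noise subspace (2 sources assumed)

      grid = np.arange(-90.0, 90.0, 0.2)
      p = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in grid])
      idx = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1  # local maxima
      print("estimated DOAs (deg):", np.sort(grid[idx[np.argsort(p[idx])[-2:]]]))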

  19. Experimental estimation of transmissibility matrices for industrial multi-axis vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Beijen, Michiel A.; Voorhoeve, Robbert; Heertjes, Marcel F.; Oomen, Tom

    2018-07-01

    Vibration isolation is essential for industrial high-precision systems to suppress external disturbances. The aim of this paper is to develop a general identification approach to estimate the frequency response function (FRF) of the transmissibility matrix, which is a key performance indicator for vibration isolation systems. The major challenge lies in obtaining a good signal-to-noise ratio in view of a large system weight. A non-parametric system identification method is proposed that combines floor and shaker excitations. Furthermore, a method is presented to analyze the input power spectrum of the floor excitations, both in terms of magnitude and direction. In turn, the input design of the shaker excitation signals is investigated to obtain sufficient excitation power in all directions with minimum experiment cost. The proposed methods are shown to provide an accurate FRF of the transmissibility matrix in three relevant directions on an industrial active vibration isolation system over a large frequency range. This demonstrates that, despite their heavy weight, industrial vibration isolation systems can be accurately identified using this approach.
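
    In the single-axis case the non-parametric estimate reduces to the familiar H1 form T(f) = S_uy(f)/S_uu(f); the full method estimates the matrix T(f) = S_yu(f) S_uu(f)^-1 from combined floor and shaker excitations. The sketch below applies the scalar version to a simulated 1-DOF isolator (all plant and noise parameters invented).

      import numpy as np
      from scipy import signal

      fs, n = 1000.0, 2**17
      dt = 1.0 / fs
      rng = np.random.default_rng(2)
      u = rng.normal(size=n)                       # base (floor) excitation

      # 1-DOF isolator: 2 Hz natural frequency, 5 % damping
      wn, z = 2 * np.pi * 2.0, 0.05
      bd, ad, _ = signal.cont2discrete(([2 * z * wn, wn ** 2],
                                        [1.0, 2 * z * wn, wn ** 2]), dt)
      y = signal.lfilter(np.squeeze(bd), ad, u)
      y += 0.05 * rng.normal(size=n)               # sensor noise

      f, S_uu = signal.csd(u, u, fs=fs, nperseg=4096)
      _, S_uy = signal.csd(u, y, fs=fs, nperseg=4096)
      T = S_uy / S_uu                              # H1 transmissibility estimate
      print("peak |T| near", f[np.argmax(np.abs(T))], "Hz")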

  20. Carbon nanofibers obtained from electrospinning process

    NASA Astrophysics Data System (ADS)

    Bovi de Oliveira, Juliana; Müller Guerrini, Lília; Sizuka Oishi, Silvia; Rogerio de Oliveira Hein, Luis; dos Santos Conejo, Luíza; Cerqueira Rezende, Mirabel; Cocchieri Botelho, Edson

    2018-02-01

    In recent years, reinforcements consisting of carbon nanostructures, such as carbon nanotubes, fullerenes, graphenes, and carbon nanofibers, have received significant attention due mainly to their chemical inertness and good mechanical, electrical and thermal properties. Since carbon nanofibers comprise a continuous reinforcement with a high specific surface area, and can be obtained at low cost and in large amounts, they have shown to be advantageous compared to traditional carbon nanotubes. The main objective of this work is the processing of carbon nanofibers, using polyacrylonitrile (PAN) as a precursor, obtained by the electrospinning process via polymer solution, with subsequent use in aerospace applications as reinforcement in polymer composites. In this work, PAN nanofibers were first produced by electrospinning with diameters in the range of (375 ± 85) nm, using a dimethylformamide solution. Using a furnace, the PAN nanofibers were converted into carbon nanofibers. The morphologies and structures of the PAN and carbon nanofibers were investigated by scanning electron microscopy, Raman spectroscopy, thermogravimetric analysis and differential scanning calorimetry. The residual weight after carbonization was approximately 38 wt %, with a diameter reduction of 50%, and a carbon yield of 25%. From the analysis of the crystalline structure of the carbonized material, it was found that the material presented a disordered structure.

  1. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
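
    The weighting itself is the classical inverse-variance combination, sketched below with invented numbers (flood statistics are customarily weighted in log space).

      import math

      x_site, var_site = math.log10(850.0), 0.040  # at-site 1% AEP estimate, variance
      x_reg,  var_reg  = math.log10(700.0), 0.025  # regional regression estimate, variance

      # inverse-variance weighted combination of two independent estimates
      w = (x_site / var_site + x_reg / var_reg) / (1 / var_site + 1 / var_reg)
      var_w = 1 / (1 / var_site + 1 / var_reg)     # variance of the weighted estimate
      print(f"weighted 1% AEP flow ~ {10 ** w:.0f} cfs, variance {var_w:.4f}")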

  2. Diagnostic value and cost-effectiveness of good quality digital images accompanying electronic referrals for suspected skin malignancies.

    PubMed

    Ng, Michael F Y; Stevenson, J Howard

    2011-04-01

    The aim of this study was to investigate the outcome and cost-effectiveness of good and poor quality photographs accompanying electronic referrals for suspected skin malignancies. A retrospective study of 100 patients, divided into 2 groups: 50 with good quality photographs and 50 with poor quality photographs. Patients with no digital images, who failed to attend, or whose notes were incomplete were excluded from the study. The treatment pathway, waiting times, and estimated cost between the 2 groups were compared. Good photographs were more likely to be treated at the 1-Stop Clinic (P = 0.05). Good images had a better positive predictive value than poor quality images (62.55% vs. 42.86%). Good quality images are more accurate than poor quality images in the triaging of patients, and thus more effective in facilitating the timely treatment of malignant lesions. Good quality photographs also allow appropriate delayed treatment of benign lesions. This increases safety for patients in a queue in a rationed health care system and improves patient flow.

  3. Estimating Health-State Utility for Economic Models in Clinical Studies: An ISPOR Good Research Practices Task Force Report.

    PubMed

    Wolowacz, Sorrel E; Briggs, Andrew; Belozeroff, Vasily; Clarke, Philip; Doward, Lynda; Goeree, Ron; Lloyd, Andrew; Norman, Richard

    Cost-utility models are increasingly used in many countries to establish whether the cost of a new intervention can be justified in terms of health benefits. Health-state utility (HSU) estimates (the preference for a given state of health on a cardinal scale where 0 represents dead and 1 represents full health) are typically among the most important and uncertain data inputs in cost-utility models. Clinical trials represent an important opportunity for the collection of health-utility data. However, trials designed primarily to evaluate efficacy and safety often present challenges to the optimal collection of HSU estimates for economic models. Careful planning is needed to determine which of the HSU estimates may be measured in planned trials; to establish the optimal methodology; and to plan any additional studies needed. This report aimed to provide a framework for researchers to plan the collection of health-utility data in clinical studies to provide high-quality HSU estimates for economic modeling. Recommendations are made for early planning of health-utility data collection within a research and development program; design of health-utility data collection during protocol development for a planned clinical trial; design of prospective and cross-sectional observational studies and alternative study types; and statistical analyses and reporting. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  4. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained highlight the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
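
    The principle the criterion rests on can be checked numerically: for a linear smoother F applied to y = u + n with white noise of known variance sigma^2, the quantity ||Fy - y||^2 + 2 sigma^2 tr(F) - N sigma^2 is an unbiased estimate of the risk E||Fy - u||^2, so candidate filters can be ranked without knowing u. The toy below (invented 1-D signal and Gaussian smoother family; not the paper's blur-SURE estimator itself) illustrates this.

      import numpy as np

      rng = np.random.default_rng(3)
      N, s2 = 4096, 0.25
      u = np.cumsum(rng.normal(size=N)); u -= u.mean()  # smooth-ish test signal
      y = u + rng.normal(scale=np.sqrt(s2), size=N)

      freqs = np.fft.fftfreq(N)
      Y = np.fft.fft(y)
      for scale in [2.0, 8.0, 32.0]:
          Fw = np.exp(-(freqs * scale * 2 * np.pi) ** 2 / 2)  # Gaussian low-pass
          est = np.real(np.fft.ifft(Fw * Y))
          # SURE uses only y; tr(F) is the sum of the filter's eigenvalues
          sure = np.sum((est - y) ** 2) + 2 * s2 * np.sum(Fw) - N * s2
          mse = np.sum((est - u) ** 2)                        # oracle, for checking
          print(f"scale {scale:5.1f}: SURE {sure:10.1f}  true SSE {mse:10.1f}")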

  5. Understanding AlN Obtaining Through Computational Thermodynamics Combined with Experimental Investigation

    NASA Astrophysics Data System (ADS)

    Florea, R. M.

    2017-06-01

    The basic material concept, technology and selected results of studies on an aluminum matrix composite with dispersed aluminum nitride reinforcement are presented. The studied composites were manufactured by the "in situ" technique. Aluminum nitride (AlN) has recently attracted considerable interest because of its high thermal conductivity, good dielectric properties, high flexural strength, thermal expansion coefficient matching that of Si, and its non-toxic nature, making it a suitable material for hybrid integrated circuit substrates. AlMg alloys are the best matrix for obtaining AlN. The Al2O3-AlMg, AlN-Al2O3, and AlN-AlMg binary diagrams were thermodynamically modelled. The obtained Gibbs free energies of the components, solution parameters and stoichiometric phases were used to build a thermodynamic database of the AlN-Al2O3-AlMg system. The obtaining of AlN with liquid-phase AlMg as the matrix has been studied and compared with the thermodynamic results. The secondary phase microstructure has a significant effect on the final thermal conductivity of the obtained AlN. Thermodynamic modelling of the AlN-Al2O3-AlMg system provided an important basis for understanding the formation behavior and interpreting the experimental results.

  6. Cooperation among cancer cells as public goods games on Voronoi networks.

    PubMed

    Archetti, Marco

    2016-05-07

    Cancer cells produce growth factors that diffuse and sustain tumour proliferation, a form of cooperation that can be studied using mathematical models of public goods in the framework of evolutionary game theory. Cell populations, however, form heterogeneous networks that cannot be described by regular lattices or scale-free networks, the types of graphs generally used in the study of cooperation. To describe the dynamics of growth factor production in populations of cancer cells, I study public goods games on Voronoi networks, using a range of non-linear benefits that account for the known properties of growth factors, and different types of diffusion gradients. The results are surprisingly similar to those obtained on regular graphs and different from results on scale-free networks, revealing that network heterogeneity per se does not promote cooperation when public goods diffuse beyond one-step neighbours. The exact shape of the diffusion gradient is not crucial, however, whereas the type of non-linear benefit is an essential determinant of the dynamics. Public goods games on Voronoi networks can shed light on intra-tumour heterogeneity, the evolution of resistance to therapies that target growth factors, and new types of cell therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.
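
    A compact way to reproduce the setup: Voronoi neighbours of random points are read off the dual Delaunay triangulation, producers pay a fixed cost, every cell receives a sigmoidal (non-linear) benefit of the producer fraction in its neighbourhood, and strategies spread by imitation. All payoff parameters below are illustrative, not those of the paper.

      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(10)
      pts = rng.random((400, 2))
      tri = Delaunay(pts)                         # dual of the Voronoi diagram
      nbrs = [set() for _ in pts]
      for simplex in tri.simplices:
          for i in simplex:
              nbrs[i] |= set(simplex) - {i}       # Delaunay edges = Voronoi neighbours

      cost, h, k = 0.1, 0.5, 10.0                 # production cost, sigmoid midpoint/steepness
      producer = rng.random(len(pts)) < 0.5

      def payoff(i):
          group = list(nbrs[i]) + [i]
          frac = producer[group].mean()           # producer fraction in the neighbourhood
          benefit = 1 / (1 + np.exp(-k * (frac - h)))  # non-linear (sigmoid) benefit
          return benefit - (cost if producer[i] else 0.0)

      for _ in range(2000):                       # imitation dynamics
          i = rng.integers(len(pts))
          j = rng.choice(list(nbrs[i]))
          if payoff(j) > payoff(i):
              producer[i] = producer[j]

      print("final producer fraction:", producer.mean())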

  7. Fatigue Strength Estimation Based on Local Mechanical Properties for Aluminum Alloy FSW Joints

    PubMed Central

    Sillapasa, Kittima; Mutoh, Yoshiharu; Miyashita, Yukio; Seo, Nobushiro

    2017-01-01

    Overall fatigue strengths and hardness distributions of the aluminum alloy similar and dissimilar friction stir welding (FSW) joints were determined. The local fatigue strengths as well as local tensile strengths were also obtained by using small round bar specimens extracted from specific locations, such as the stir zone, heat affected zone, and base metal. It was found from the results that fatigue fracture of the FSW joint plate specimen occurred at the location of the lowest local fatigue strength as well as the lowest hardness, regardless of microstructural evolution. To estimate the fatigue strengths of aluminum alloy FSW joints from the hardness measurements, the relationship between fatigue strength and hardness for aluminum alloys was investigated based on the present experimental results and the available wide range of data from the references. It was found as: σa (R = −1) = 1.68 HV (σa is in MPa and HV has no unit). It was also confirmed that the estimated fatigue strengths were in good agreement with the experimental results for aluminum alloy FSW joints. PMID:28772543

  8. Fatigue Strength Estimation Based on Local Mechanical Properties for Aluminum Alloy FSW Joints.

    PubMed

    Sillapasa, Kittima; Mutoh, Yoshiharu; Miyashita, Yukio; Seo, Nobushiro

    2017-02-15

    Overall fatigue strengths and hardness distributions of the aluminum alloy similar and dissimilar friction stir welding (FSW) joints were determined. The local fatigue strengths as well as local tensile strengths were also obtained by using small round bar specimens extracted from specific locations, such as the stir zone, heat affected zone, and base metal. It was found from the results that fatigue fracture of the FSW joint plate specimen occurred at the location of the lowest local fatigue strength as well as the lowest hardness, regardless of microstructural evolution. To estimate the fatigue strengths of aluminum alloy FSW joints from the hardness measurements, the relationship between fatigue strength and hardness for aluminum alloys was investigated based on the present experimental results and the available wide range of data from the references. It was found as: σa (R = -1) = 1.68 HV (σa is in MPa and HV has no unit). It was also confirmed that the estimated fatigue strengths were in good agreement with the experimental results for aluminum alloy FSW joints.
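
    Applied to a hardness traverse, the reported relation gives a one-line screening estimate: the joint's fatigue strength is set by its softest zone. The hardness values below are invented for illustration.

      # sigma_a (R = -1) = 1.68 * HV, per the relation reported above
      hv_profile = {"base metal": 105, "HAZ": 78, "stir zone": 88}  # Vickers (assumed)

      sigma_a = {zone: 1.68 * hv for zone, hv in hv_profile.items()}  # MPa
      weakest = min(sigma_a, key=sigma_a.get)
      print(f"expected failure in {weakest}: ~{sigma_a[weakest]:.0f} MPa")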

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, local maxima of the log-likelihood function, Newton's method, the method of scoring, and modifications of these procedures are discussed.
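
    The now-standard instance of such an iterative procedure is the EM iteration; a minimal sketch for a two-component univariate normal mixture on synthetic data is given below (illustrative, not the report's exact algorithm).

      import numpy as np

      rng = np.random.default_rng(4)
      x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(4.0, 0.7, 400)])

      w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
      for _ in range(200):
          # E-step: posterior responsibility of each component for each point
          dens = (w / np.sqrt(2 * np.pi * var) *
                  np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
          r = dens / dens.sum(axis=1, keepdims=True)
          # M-step: re-estimate weights, means and variances
          nk = r.sum(axis=0)
          w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
          var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

      print("weights", w.round(2), "means", mu.round(2), "vars", var.round(2))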

  10. Defining the Good Reading Teacher.

    ERIC Educational Resources Information Center

    Kupersmith, Judy; And Others

    In the quest for a definition of the good reading teacher, a review of the literature shows that new or copious materials, one specific teaching method, and static teaching behaviors are not responsible for effective teaching. However, observations of five reading teachers, with good references and good reputations but with widely divergent…

  11. Validation of the concentration profiles obtained from the near infrared/multivariate curve resolution monitoring of reactions of epoxy resins using high performance liquid chromatography as a reference method.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2007-03-07

    This paper reports the validation of the results obtained by combining near infrared spectroscopy and multivariate curve resolution-alternating least squares (MCR-ALS) and using high performance liquid chromatography as a reference method, for the model reaction of phenylglycidylether (PGE) and aniline. The results are obtained as concentration profiles over the reaction time. The trueness of the proposed method has been evaluated in terms of lack of bias. The joint test for the intercept and the slope showed that there were no significant differences between the profiles calculated spectroscopically and the ones obtained experimentally by means of the chromatographic reference method at an overall level of confidence of 5%. The uncertainty of the results was estimated by using information derived from the process of assessment of trueness. Such operational aspects as the cost and availability of instrumentation and the length and cost of the analysis were evaluated. The method proposed is a good way of monitoring the reactions of epoxy resins, and it adequately shows how the species concentration varies over time.
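
    The trueness check described above amounts to regressing the spectroscopic concentrations on the chromatographic reference values and jointly testing intercept = 0 and slope = 1. A minimal sketch with synthetic paired data, assuming statsmodels is available:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      ref = np.linspace(0.1, 1.0, 20)               # HPLC reference concentrations
      pred = ref + rng.normal(scale=0.02, size=20)  # NIR/MCR-ALS estimates (synthetic)

      model = sm.OLS(pred, sm.add_constant(ref)).fit()
      joint = model.f_test("const = 0, x1 = 1")     # H0: no bias (joint test)
      print(model.params, "p-value:", float(joint.pvalue))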

  12. Cost Estimation and Control for Flight Systems

    NASA Technical Reports Server (NTRS)

    Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)

    2002-01-01

    Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation -- the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.

  13. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

    An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We have shown the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt at applying DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, which is a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained have shown the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed for various mineral explorations such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada) have been considered to estimate the model parameters of the ore bodies. The applications have shown that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples the algorithm has provided reliable parameter estimations being within the sampling limits of
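
    A hedged re-creation of the workflow on synthetic data is sketched below, using the widely used single-body forward model g(x) = A·z0/((x - x0)^2 + z0^2)^q (q = 1.5 for a sphere) and SciPy's differential_evolution in place of the authors' DE/best/1/bin implementation; the bounds, noise level and true model are invented.

      import numpy as np
      from scipy.optimize import differential_evolution

      x = np.linspace(-100.0, 100.0, 81)           # profile coordinates (m)

      def forward(A, x0, z0, q):
          # simple-body residual gravity anomaly along a profile
          return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

      rng = np.random.default_rng(6)
      g_obs = forward(5.0e5, 10.0, 20.0, 1.5) + rng.normal(scale=5.0, size=x.size)

      def misfit(p):
          return np.sum((forward(*p) - g_obs) ** 2)  # error energy

      bounds = [(1e3, 1e7), (-50, 50), (1, 60), (0.5, 2.0)]  # A, x0, z0, q
      sol = differential_evolution(misfit, bounds, seed=0)
      print("A, x0, z0, q =", np.round(sol.x, 2))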

  14. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    To balance the performance of time-domain least squares (LS) channel estimation against its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is presented. This approach transforms the MIMO-OFDM channel estimation problem into a set of simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problems, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.
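
    The decoupling idea can be shown in a few lines: with time-orthogonal pilots (each transmit antenna sounding in its own slot), per-subcarrier LS estimation collapses to one complex division per antenna, with no matrix pseudo-inverse. The toy dimensions and SNR below are invented, and this is a generic illustration rather than the paper's exact scheme.

      import numpy as np

      rng = np.random.default_rng(7)
      Nt, Nr, K = 2, 2, 64                  # TX antennas, RX antennas, subcarriers
      H = (rng.normal(size=(K, Nr, Nt)) + 1j * rng.normal(size=(K, Nr, Nt))) / np.sqrt(2)

      # pilot slot t carries a pilot only on TX antenna t (time-orthogonal)
      P = np.exp(2j * np.pi * rng.random((K, Nt)))  # unit-modulus pilot symbols
      H_hat = np.zeros_like(H)
      for t in range(Nt):
          noise = 0.1 * (rng.normal(size=(K, Nr)) + 1j * rng.normal(size=(K, Nr)))
          Y = H[:, :, t] * P[:, t, None] + noise    # received during slot t
          H_hat[:, :, t] = Y / P[:, t, None]        # SISO-style LS, one divide per antenna

      print("NMSE:", np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2)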

  15. Good life good death according to Christiaan Barnard.

    PubMed

    Toledo-Pereyra, Luis H

    2010-06-01

    Christiaan Barnard (1922-2002), pioneering heart transplant surgeon, introduced his ideas on euthanasia in a well-written and researched book, Good Life Good Death. A Doctor's Case for Euthanasia and Suicide, published in 1980. His courage in analyzing this topic in a forthright and clear manner is worth reviewing today. In essence, Barnard supported and practiced passive euthanasia (the ending of life by indirect methods, such as stopping of life support) and discussed, but never practiced, active euthanasia (the ending of life by direct means). Barnard believed that "the primary goal of medicine was to alleviate suffering-not merely to prolong life-he argued that advances in modern medical technology demanded that we evaluate our view of death and the handling of terminal illness." Some in the surgical community took issue with Barnard when he publicized his personal views on euthanasia. We discuss Barnard's beliefs and attempt to clarify some misunderstandings regarding this particular controversial area of medicine.

  16. Theoretical verification of experimentally obtained conformation-dependent electronic conductance in a biphenyl molecule

    NASA Astrophysics Data System (ADS)

    Maiti, Santanu K.

    2014-07-01

    The cosine-squared dependence of electronic conductance in a biphenyl molecule obtained experimentally (Venkataraman et al. [1]) is verified theoretically within a tight-binding framework. Using the Green's function formalism we numerically calculate the two-terminal conductance as a function of the relative twist angle between the molecular rings and find that the results are in good agreement with the experimental observation.
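
    A minimal numerical version of this argument: model the two rings as two sites with inter-site hopping t0·cos(theta) and wide-band leads, and compute the Landauer transmission from the Green's function. In the weak-coupling regime the normalized transmission tracks cos^2(theta); all parameters below are illustrative.

      import numpy as np

      t0, Gamma, E = 0.05, 1.0, 0.0            # hopping, lead coupling, energy (eV)

      def transmission(theta):
          t = t0 * np.cos(theta)               # inter-ring coupling ~ pi-orbital overlap
          H = np.array([[0.0, t], [t, 0.0]])
          Sigma = -0.5j * Gamma * np.eye(2)    # wide-band lead self-energies
          G = np.linalg.inv(E * np.eye(2) - H - Sigma)
          return Gamma ** 2 * abs(G[0, 1]) ** 2   # Landauer T(E)

      T0 = transmission(0.0)
      for deg in [0, 30, 60, 90]:
          th = np.deg2rad(deg)
          print(f"{deg:2d} deg: T/T0 = {transmission(th) / T0:.3f}, cos^2 = {np.cos(th) ** 2:.3f}")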

  17. An approach to using heart rate monitoring to estimate the ventilation and load of air pollution exposure.

    PubMed

    Cozza, Izabela Campos; Zanetta, Dirce Maria Trevisan; Fernandes, Frederico Leon Arrabal; da Rocha, Francisco Marcelo Monteiro; de Andre, Paulo Afonso; Garcia, Maria Lúcia Bueno; Paceli, Renato Batista; Prado, Gustavo Faibischew; Terra-Filho, Mario; do Nascimento Saldiva, Paulo Hilário; de Paula Santos, Ubiratan

    2015-07-01

    The effects of air pollution on health are associated with the amount of pollutants inhaled, which depends on the environmental concentration and the inhaled air volume. It has not been clear whether statistical models of the relationship between heart rate and ventilation obtained using laboratory cardiopulmonary exercise tests (CPET) can be applied to an external group to estimate ventilation. To develop and evaluate a model to estimate respiratory ventilation based on heart rate for assessing the inhaled load of pollutants in field studies. Sixty non-smoking men, 43 public street workers (public street group) and 17 employees of the Forest Institute (park group), performed a maximum cardiopulmonary exercise test (CPET). Regression equation models were constructed with the heart rate and natural logarithm of minute ventilation data obtained on CPET. Ten individuals were chosen randomly (public street group) and were used for external validation of the models (test group). All subjects also underwent heart rate registration and particulate matter (PM2.5) monitoring for a 24-hour period. For the public street group, the median difference between estimated and observed data was 0.5 (CI 95% -0.2 to 1.4) l/min and for the park group it was 0.2 (CI 95% -0.2 to 1.2) l/min. In the test group, estimated values were smaller than the ones observed in the CPET, with a median difference of -2.4 (CI 95% -4.2 to -1.8) l/min. The mixed-model estimates suggest that this model is suitable for situations in which heart rate is around 120-140 bpm. The mixed effect model is suitable for ventilation estimation, with good accuracy when applied to homogeneous groups, suggesting that, in this case, the model could be used in field studies to estimate ventilation. A small but significant difference in the median of the external validation estimates was observed, suggesting that the applicability of the model to external groups needs further evaluation. Copyright © 2015 Elsevier B.V. All rights reserved.
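
    The core of the approach is a log-linear regression of minute ventilation on heart rate, fitted from CPET data and then applied to field heart-rate records. A minimal sketch with invented numbers:

      import numpy as np

      hr_cpet = np.array([70, 90, 110, 130, 150, 170])  # bpm (synthetic CPET data)
      ve_cpet = np.array([10, 16, 26, 42, 68, 105])     # minute ventilation, L/min

      b1, b0 = np.polyfit(hr_cpet, np.log(ve_cpet), 1)  # ln(VE) = b0 + b1 * HR

      hr_field = np.array([88, 124, 131])               # samples from a 24-h record
      ve_est = np.exp(b0 + b1 * hr_field)
      print("estimated ventilation (L/min):", ve_est.round(1))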

  18. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.

  19. The Estimate of Atmospheric Boundary Layer Height Above a Coniferous Forest During BEARPEX 2007 and 2009

    NASA Astrophysics Data System (ADS)

    Choi, W.; McKay, M.; Weber, R.; Goldstein, A. H.; Baker, B. M.; Faloona, I. C.

    2009-12-01

    The atmospheric boundary layer (ABL) height (zi) is an extremely important parameter for interpreting field observations of reactive trace gases and understanding air quality at the local or regional scale. Despite its importance, zi is often crudely estimated for atmospheric chemistry or air pollution studies due to limited resources and the difficulty of measuring its altitude. In this study, zi over complex terrain (a coniferous forest in the California Sierra Nevada) is estimated based on the power spectra and the integral length scale of horizontal winds obtained from a three-axis sonic anemometer during the BEARPEX (Biosphere Effects on Aerosol and Photochemistry Experiment) 2007 and 2009. Estimated zi shows very good agreement with observations which were obtained from the balloon tether sonde (2007) and radio sonde (2009) measurements under unstable conditions (z/L<0). The behavior of zi under stable conditions (z/L>0), including the evolution and breakdown of the nocturnal boundary layer over the forest is also presented. Finally, significant directional wind shear was consistently observed during 2009 with winds backing from the prevailing surface west-southwesterlies (anabatic cross-valley circulation) to consistent southerlies just above the ABL. We show that this is the result of a thermal wind driven by the potential temperature gradient aligned upslope. The resultant wind flow pattern can modify the conventional model of transport along the Sacramento urban plume and has implications for California central valley basin flushing characteristics.

  20. Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng

    2017-10-01

    A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
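
    The simplest realization of this fusion idea is a first-order complementary filter, which takes the accelerometer-based tilt at low frequencies and the integrated gyroscope rate at high frequencies; the paper's Tikhonov-regularized, FIR-implemented frequency-domain fusion is a more refined version of the same principle. All signal and noise parameters below are invented.

      import numpy as np

      fs, fc = 200.0, 0.5                        # sample rate, crossover (Hz)
      dt = 1.0 / fs
      alpha = 1.0 / (1.0 + 2 * np.pi * fc * dt)  # complementary-filter constant

      t = np.arange(0, 20, dt)
      tilt_true = 0.02 * np.sin(2 * np.pi * 0.8 * t)              # rad
      rng = np.random.default_rng(8)
      gyro = np.gradient(tilt_true, dt) + 0.002 * rng.normal(size=t.size) + 0.001
      tilt_acc = tilt_true + 0.01 * rng.normal(size=t.size)       # noisy accel tilt

      est = np.zeros_like(t)
      for k in range(1, t.size):
          # high-pass the integrated gyro, low-pass the accelerometer tilt
          est[k] = alpha * (est[k - 1] + gyro[k] * dt) + (1 - alpha) * tilt_acc[k]

      print("RMS error (fused):     ", np.sqrt(np.mean((est - tilt_true) ** 2)))
      print("RMS error (accel only):", np.sqrt(np.mean((tilt_acc - tilt_true) ** 2)))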

  1. Estimating Total-Test Scores from Partial Scores in a Matrix Sampling Design.

    ERIC Educational Resources Information Center

    Sachar, Jane; Suppes, Patrick

    1980-01-01

    The present study compared six methods, two of which utilize the content structure of items, to estimate total-test scores using 450 students and 60 items of the 110-item Stanford Mental Arithmetic Test. Three methods yielded fairly good estimates of the total-test score. (Author/RL)

  2. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases, the conventional model still has to be used, in which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and used in trip production modelling. Therefore, investigating trip production modelling practice in Indonesia and trying to formulate a better modelling method for ensuring model quality is necessary. The research results are presented as follows. Statistics provides a method for calculating the span of predicted values at a certain confidence level for linear regression, called the Confidence Interval of the Predicted Value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These findings lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having both a good R2 value and a good Confidence Interval of the Predicted Value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be developed further and tested.
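
    The proposed quality measure can be computed directly from the classical regression formulas; the sketch below fits a toy trip-production model and reports the 95% interval of a predicted value (synthetic data, illustrative variable choice).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      x = rng.uniform(1, 6, 30)                            # e.g. household size
      y = 1.2 + 2.0 * x + rng.normal(scale=0.8, size=30)   # trips per day

      n = x.size
      sxx = np.sum((x - x.mean()) ** 2)
      b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
      b0 = y.mean() - b1 * x.mean()
      s = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))  # residual std error

      x0 = 4.0
      se_pred = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
      tcrit = stats.t.ppf(0.975, n - 2)
      yhat = b0 + b1 * x0
      print(f"predicted trips at x0={x0}: {yhat:.2f} "
            f"+/- {tcrit * se_pred:.2f} (95% prediction interval)")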

  3. Matching the best viewing angle in depth cameras for biomass estimation based on poplar seedling geometry.

    PubMed

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-06-04

    In energy crops for biomass production a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor and determines its best viewing angle for estimating plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground-truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with 45°, 90° and -45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results due to the fact that the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass

  4. Development of good modelling practice for physiologically based pharmacokinetic models for use in risk assessment: The first steps

    EPA Science Inventory

    The increasing use of tissue dosimetry estimated using pharmacokinetic models in chemical risk assessments in multiple countries necessitates the need to develop internationally recognized good modelling practices. These practices would facilitate sharing of models and model eva...

  5. What Are Good Universities?

    ERIC Educational Resources Information Center

    Connell, Raewyn

    2016-01-01

    This paper considers how we can arrive at a concept of the good university. It begins with ideas expressed by Australian Vice-Chancellors and in the "league tables" for universities, which essentially reproduce existing privilege. It then considers definitions of the good university via wish lists, classic texts, horror lists, structural…

  6. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  7. Land motion estimates from GPS at tide gauges: a geophysical evaluation

    NASA Astrophysics Data System (ADS)

    Bouin, M. N.; Wöppelmann, G.

    2010-01-01

    Space geodesy applications have mainly been limited to horizontal deformations due to a number of restrictions on the vertical component accuracy. Monitoring vertical land motion is nonetheless of crucial interest in observations of long-term sea level change or postglacial rebound measurements. Here, we present a global vertical velocity field obtained with more than 200 permanent GPS stations, most of them colocated with tide gauges (TGs). We used a state of the art, homogeneous processing strategy to ensure that the reference frame was stable throughout the observation period of almost 10 yr. We associate realistic uncertainties to our vertical rates, taking into account the time-correlation noise in the time-series. The results are compared with two independent geophysical vertical velocity fields: (1) vertical velocity estimates using long-term TG records and (2) postglacial model predictions from the ICE-5G (VM2) adjustment. The quantitative agreement of the GPS vertical velocities with the 'internal estimates' of vertical displacements using the TG record is very good, with a mean difference of -0.13 ± 1.64 mm yr-1 on more than 100 sites. For 84 per cent of the GPS stations considered, the vertical velocity is confirmed by the TG estimate to within 2 mm yr-1. The overall agreement with the glacial isostatic adjustment (GIA) model is good, with discrepancy patterns related either to a local misfit of the model or to active tectonics. For 72 per cent of the sites considered, the predictions of the GIA model agree with the GPS results to within two standard deviations. Most of the GPS velocities showing discrepancies with respect to the predictions of the GIA model are, however, consistent with previously published space geodesy results. We, in turn, confirm the value of 1.8 ± 0.5 mm yr-1 for the 20th century average global sea level rise, and conclude that GPS is now a robust tool for vertical land motion monitoring which is accurate at least at 1 mm yr-1.

  8. A new hydrological model for estimating extreme floods in the Alps

    NASA Astrophysics Data System (ADS)

    Receanu, R. G.; Hertig, J.-A.; Fallot, J.-M.

    2012-04-01

    Protection against flooding is very important for a country like Switzerland, with its varied topography and many rivers and lakes. Because of the potential danger caused by extreme precipitation, the structural and functional safety of large dams must be guaranteed to withstand the passage of an extreme flood. We introduce a new distributed hydrological model to calculate the PMF from a PMP which is spatially and temporally distributed using clouds. This model has permitted the estimation of extreme floods based on the distributed PMP, taking into account the specifics of alpine catchments, in particular the small size of the basins, the complex topography, the large lakes, snowmelt and glaciers. This is an important evolution compared to other models described in the literature, as they mainly use a uniform distribution of extreme precipitation over the whole watershed. This paper presents the results of calculations with the developed rainfall-runoff model, using measured rainfall and comparing the results to observed flood events. The model includes three parts: surface runoff, underground flow and snowmelt. Two Swiss watersheds are studied, for which rainfall data and flow rates are available for a considerably long period, including several episodes of heavy rainfall with high flow events. From these events, several simulations are performed to estimate the input model parameters, such as soil roughness and the average width of rivers, in the case of surface runoff. Following the same procedure, the parameters used in the underground flow simulation are also estimated indirectly, since direct underground flow and exfiltration measurements are difficult to obtain. A sensitivity analysis of the parameters is performed as a first step to define the boundary and initial conditions more precisely. The results for the two alpine basins, validated with the Nash equation, show a good correlation between the simulated and observed flows. This good correlation

  9. LES-based generation of high-frequency fluctuation in wind turbulence obtained by meteorological model

    NASA Astrophysics Data System (ADS)

    Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao

    2017-11-01

    The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is essential for simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoons or tornadoes. However, a fine-grid micro-scale LES cannot be connected directly to a coarse-grid meso-scale WRF. In LES, when the grid is suddenly refined at the interface of nested grids normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delayed generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disasters, the peak wind acting on buildings and structures has to be correctly predicted. In the case of a meteorological model, the velocity fluctuations tend to vary diffusively, lacking the high-frequency components, due to numerical filtering effects. In order to predict the peak value of wind velocity with good accuracy, this paper proposes a LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by a meteorological model.

  10. Estimating Canopy Dark Respiration for Crop Models

    NASA Technical Reports Server (NTRS)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
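
    The bookkeeping itself is one subtraction, sketched below with illustrative round numbers:

      # daily net carbon gain = canopy gross photosynthesis - canopy dark respiration
      p_gross = 40.0  # daily canopy gross photosynthesis (g CH2O m^-2 d^-1, assumed)
      r_dark = 12.0   # daily canopy dark respiration (g CH2O m^-2 d^-1, assumed)
      print("daily net carbon gain:", p_gross - r_dark, "g CH2O m^-2 d^-1")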

  11. Detecting stages of needle penetration into tissues through force estimation at needle tip using fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Kumar, Saurabh; Shrikanth, Venkoba; Amrutur, Bharadwaj; Asokan, Sundarrajan; Bobji, Musuvathi S.

    2016-12-01

    Several medical procedures involve the use of needles. The advent of robotic and robot assisted procedures requires dynamic estimation of the needle tip location during insertion for use in both assistive systems as well as for automatic control. Most prior studies have focused on the maneuvering of solid flexible needles using external force measurements at the base of the needle holder. However, hollow needles are used in several procedures and measurements of forces in proximity of such needles can eliminate the need for estimating frictional forces that have high variations. These measurements are also significant for endoscopic procedures in which measurement of forces at the needle holder base is difficult. Fiber Bragg grating sensors, due to their small size, inert nature, and multiplexing capability, provide a good option for this purpose. Force measurements have been undertaken during needle insertion into tissue mimicking phantoms made of polydimethylsiloxane as well as chicken tissue using an 18-G needle instrumented with FBG sensors. The results obtained show that it is possible to estimate the different stages of needle penetration including partial rupture, which is significant for procedures in which precise estimation of needle tip position inside the organ or tissue is required.

  12. Using nonlinear programming to correct leakage and estimate mass change from GRACE observation and its application to Antarctica

    NASA Astrophysics Data System (ADS)

    Tang, Jingshi; Cheng, Haowen; Liu, Lin

    2012-11-01

    The Gravity Recovery And Climate Experiment (GRACE) mission has been providing high quality observations since its launch in 2002. Over the years, fruitful achievements have been obtained and the temporal gravity field has revealed the ongoing geophysical, hydrological and other processes. These discoveries help the scientists better understand various aspects of the Earth. However, errors exist in high degree and order spherical harmonics, which need to be processed before use. Filtering is one of the most commonly used techniques to smooth errors, yet it attenuates signals and also causes leakage of gravity signal into surrounding areas. This paper reports a new method to estimate the true mass change on the grid (expressed in equivalent water height or surface density). The mass change over the grid can be integrated to estimate regional or global mass change. This method assumes the GRACE-observed apparent mass change is only caused by the mass change on land. By comparing the computed and observed apparent mass change, the true mass change can be iteratively adjusted and estimated. The problem is solved with nonlinear programming (NLP) and yields solutions which are in good agreement with other GRACE-based estimates.
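
    The iterative-adjustment idea can be prototyped in one dimension: with an assumed known smoothing (leakage) operator G mapping the true grid mass change to the GRACE-apparent signal, the grid values are repeatedly corrected until G(m) reproduces the observation. The paper solves this as a nonlinear program with constraints; the fixed-point toy below is only the simplest flavour.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def G(m):
          # stand-in leakage operator: Gaussian smoothing (assumed known)
          return gaussian_filter1d(m, sigma=8.0, mode="wrap")

      x = np.arange(256)
      m_true = np.where(np.abs(x - 128) < 10, -5.0, 0.0)  # localized mass loss
      d = G(m_true)                                       # apparent (observed) signal

      m = d.copy()
      for _ in range(200):
          m += 0.5 * (d - G(m))   # iterative adjustment of the grid values
      print("peak: true", m_true.min(), "apparent", round(d.min(), 2),
            "recovered", round(m.min(), 2))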

  13. The global public good concept: a means of promoting good veterinary governance.

    PubMed

    Eloit, M

    2012-08-01

    At the outset, the concept of a 'public good' was associated with economic policies. However, it has now evolved not only from a national to a global concept (global public good), but also from a concept applying solely to the production of goods to one encompassing societal issues (education, environment, etc.) and fundamental rights, including the right to health and food. Through their actions, Veterinary Services, as defined by the Terrestrial Animal Health Code (Terrestrial Code) of the World Organisation for Animal Health (OIE), help to improve animal health and reduce production losses. In this way they contribute directly and indirectly to food security and to safeguarding human health and economic resources. The organisation and operating procedures of Veterinary Services are therefore key to the efficient governance required to achieve these objectives. The OIE is a major player in global cooperation and governance in the fields of animal and public health through the implementation of its strategic standardisation mission and other programmes for the benefit of Veterinary Services and OIE Member Countries. Thus, the actions of Veterinary Services and the OIE deserve to be recognised as a global public good, backed by public investment to ensure that all Veterinary Services are in a position to apply the principles of good governance and to comply with the international standards for the quality of Veterinary Services set out in the OIE Terrestrial Code (Section 3 on Quality of Veterinary Services) and Aquatic Animal Health Code (Section 3 on Quality of Aquatic Animal Health Services).

  14. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore resort to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand, it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape; on the other hand, systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
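
    To make the class of models concrete, here is a hedged sketch of a regression with single-site and pairwise terms, fit to a sparse sample of genotype-fitness pairs. A binary alphabet and a synthetic landscape replace the RNA landscape used in the paper, and the ridge penalty is an arbitrary choice rather than the authors' model-selection procedure.

      # Fit fitness ~ sum_i beta_i x_i + sum_{i<j} gamma_ij x_i x_j on sampled genotypes.
      import itertools
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(1)
      L = 8                                                # sequence length (illustrative)
      X = rng.integers(0, 2, size=(500, L)).astype(float)  # sparse sample of genotypes

      def features(X):
          # single-site terms plus all pairwise interaction terms
          pairs = [X[:, i] * X[:, j]
                   for i, j in itertools.combinations(range(X.shape[1]), 2)]
          return np.column_stack([X] + pairs)

      Phi = features(X)
      w_true = rng.normal(size=Phi.shape[1])               # synthetic "true" landscape
      y = Phi @ w_true + 0.1 * rng.normal(size=len(X))     # noisy fitness measurements

      model = Ridge(alpha=1.0).fit(Phi, y)
      print("training R^2:", round(model.score(Phi, y), 3))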

  15. 19 CFR 102.12 - Fungible goods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... RULES OF ORIGIN Rules of Origin § 102.12 Fungible goods. When fungible goods of different countries of origin are commingled the country of origin of the goods: (a) Is the countries of origin of those... the origin of the commingled good is not practical, the country or countries of origin may be...

  16. An Evaluation of Total Solar Reflectance and Spectral Band Ratioing Techniques for Estimating Soil Water Content

    NASA Technical Reports Server (NTRS)

    Reginato, R. J.; Vedder, J. F.; Idso, S. B.; Jackson, R. D.; Blanchard, M. B.; Goettelman, R.

    1977-01-01

    For several days in March of 1975, reflected solar radiation measurements were obtained from smooth and rough surfaces of wet, drying, and continually dry Avondale loam at Phoenix, Arizona, with pyranometers located 50 cm above the ground surface and a multispectral scanner flown at a 300-m height. The simple summation of the different band radiances measured by the multispectral scanner proved as good as the pyranometer data for estimating surface soil water content if the multispectral scanner data were standardized with respect to the intensity of incoming solar radiation or the reflected radiance from a reference surface, such as the continually dry soil. Without this means of standardization, multispectral scanner data are most useful in a spectral band ratioing context. Our results indicated that, for the bands used, no significant information on soil water content could be obtained by band ratioing. Thus variability in soil water content should not significantly affect soil-type discrimination based on identification of type-specific spectral signatures. Remote sensing conducted in the 0.4- to 1.0-micron wavelength region of the solar spectrum therefore seems much better suited to identifying crop and soil types than to estimating soil water content.

  17. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation, involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our 'photoionization' method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 A (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.

  18. Estimation of Length-Scales in Soils by MRI

    NASA Technical Reports Server (NTRS)

    Daidzic, N. E.; Altobelli, S.; Alexander, J. I. D.

    2004-01-01

    Soil can best be described as an unconsolidated granular medium that forms a porous structure. The present macroscopic theory of water transport in porous media rests upon the continuum hypothesis that the physical properties of porous media can be associated with continuous, twice-differentiable field variables whose spatial domain is a set of centroids of Representative Elementary Volume (REV) elements. MRI is an ideal technique for estimating various length-scales in porous media. A 0.267 T permanent magnet at NASA GRC was used for this study. 2D and 3D spatially resolved porosity distributions were obtained from the NMR signal strength from each voxel and the spin-lattice relaxation time. Classical spin-warp imaging with Multiple Spin Echoes (MSE) was used to evaluate proton density in each voxel. The initial resolution of 256 x 256 was subsequently reduced by averaging neighboring voxels, and the porosity convergence was observed. A number of engineered "space candidate" soils such as Isolite(trademark), Zeoponics(trademark), Turface(trademark), and Profile(trademark) were used. Glass beads in the size range between 50 microns and 2 mm were used as well. Initial results with saturated porous samples showed a good estimate of the average porosity, consistent with the gravimetric porosity measurements. For Profile(trademark) samples with particle sizes ranging between 0.25 and 1 mm and a characteristic interparticle pore size of 100 microns, the characteristic Darcy scale was estimated to be about delta(sub REV) = 10 mm. Glass-bead porosity showed clear convergence toward a definite REV, which stayed constant throughout a homogeneous sample. Additional information is included in the original extended abstract.

  19. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  20. No correlation between ultrasound placental grading at 31-34 weeks of gestation and a surrogate estimate of organ function at term obtained by stereological analysis.

    PubMed

    Yin, T T; Loughna, P; Ong, S S; Padfield, J; Mayhew, T M

    2009-08-01

    We test the experimental hypothesis that early changes in the ultrasound appearance of the placenta reflect poor or reduced placental function. The sonographic (Grannum) grade of placental maturity was compared to placental function as expressed by the morphometric oxygen diffusive conductance of the villous membrane. Ultrasonography was used to assess the Grannum grade of 32 placentas at 31-34 weeks of gestation. Indications for the scans included a history of previous fetal abnormalities, previous fetal growth problems or suspicion of IUGR. Placentas were classified from grade 0 (most immature) to grade III (most mature). We did not exclude smokers or complicated pregnancies as we aimed to correlate the early appearance of mature placentas with placental function. After delivery, microscopical fields on formalin-fixed, trichrome-stained histological sections of each placenta were obtained by multistage systematic uniform random sampling. Using design-based stereological methods, the exchange surface areas of peripheral (terminal and intermediate) villi and their fetal capillaries and the arithmetic and harmonic mean thicknesses of the villous membrane (maternal surface of villous trophoblast to adluminal surface of vascular endothelium) were estimated. An index of the variability in thickness of this membrane, and an estimate of its oxygen diffusive conductance, were derived secondarily as were estimates of the mean diameters and total lengths of villi and fetal capillaries. Group comparisons were drawn using analysis of variance. We found no significant differences in placental volume or composition or in the dimensions or diffusive conductances of the villous membrane. Subsequent exclusion of smokers did not alter these main findings. Grannum grades at 31-34 weeks of gestation appear not to provide reliable predictors of the functional capacity of the term placenta as expressed by the surrogate measure, morphometric diffusive conductance.

  1. "Act in Good Faith."

    ERIC Educational Resources Information Center

    McKay, Robert B.

    1979-01-01

    It is argued that the Supreme Court's Bakke decision overturning the University of California's minority admissions program is good for those who favor affirmative action programs in higher education. The Supreme Court gives wide latitude for devising programs that take race and ethnic background into account if colleges are acting in good faith.…

  2. [A method for obtaining redshifts of quasars based on wavelet multi-scaling feature matching].

    PubMed

    Liu, Zhong-Tian; Li, Xiang-Ru; Wu, Fu-Chao; Zhao, Yong-Heng

    2006-09-01

    The LAMOST project, the world's largest sky survey project being implemented in China, is expected to obtain 10^5 quasar spectra. The main objective of the present article is to explore methods that can be used to estimate the redshifts of quasar spectra from LAMOST. Firstly, the features of the broad emission lines are extracted from the quasar spectra to overcome the disadvantage of low signal-to-noise ratio. Then the redshifts of quasar spectra can be estimated by using multi-scaling feature matching. The experiment with the 15,715 quasars from the SDSS DR2 shows that the correct rate of redshift estimated by the method is 95.13% within an error range of 0.02. This method was designed to obtain the redshifts of quasar spectra with relative flux and a low signal-to-noise ratio, which is applicable to the LAMOST data and aids the study of quasars, the large-scale structure of the universe, etc.

  3. A Pretty Good Paper about Pretty Good Privacy.

    ERIC Educational Resources Information Center

    McCollum, Roy

    With today's growth in the use of electronic information systems for e-mail, data development and research, and the relative ease of access to such resources, protecting one's data and correspondence has become a great concern. "Pretty Good Privacy" (PGP), an encryption program developed by Phil Zimmermann, may be the software tool that…

  4. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing or...

  5. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing or...

  6. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing or...

  7. Deliberation before determination: the definition and evaluation of good decision making.

    PubMed

    Elwyn, Glyn; Miron-Shatz, Talya

    2010-06-01

    In this article, we examine definitions of suggested approaches to measuring the concept of good decisions, highlight the ways in which they converge, and explain why we have concerns about their emphasis on post-hoc estimations and post-decisional outcomes, their prescriptive concept of knowledge, and their lack of distinction between the process of deliberation and the act of decision determination. There has been a steady trend toward involving patients in decision-making tasks in clinical practice, part of a shift away from paternalism towards the concept of informed choice. An increased understanding of the uncertainties that exist in medicine, arising from a weak evidence base and, in addition, the stochastic nature of outcomes at the individual level, has contributed to shifting the responsibility for decision making from physicians to patients. This has led to increasing use of decision support and communication methods, with the ultimate aim of improving decision making by patients. Interest has therefore developed in attempting to define good decision making and in developing measurement approaches. We ask whether decisions can be judged good or not and, if so, how this goodness might be evaluated. We hypothesize that decisions cannot be measured by reference to their outcomes and offer an alternative means of assessment, which emphasizes the deliberation process rather than the decision's end results. We propose that decision making comprises a pre-decisional process and an act of decision determination, and consider how this model of decision making serves to develop a new approach to evaluating what constitutes a good decision-making process. We proceed to offer an alternative, which parses decisions into the pre-decisional deliberation process, the act of determination and post-decisional outcomes. Evaluating the deliberation process, we propose, should comprise a subjective sufficiency of knowledge, as well as emotional processing and

  8. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model paramet
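
    Of the four approaches compared, the bootstrap variety is the easiest to sketch. The fragment below is an illustration under stated assumptions, not the study's pipeline: a three-parameter Hill function, synthetic concentration-response data, and a plain nonparametric resampling of points, with a percentile interval for the potency parameter.

      # Bootstrap percentile CI for Hill model parameters fit by least squares.
      import numpy as np
      from scipy.optimize import curve_fit

      def hill(c, top, ec50, n):
          return top * c**n / (ec50**n + c**n)

      rng = np.random.default_rng(2)
      conc = np.logspace(-2, 2, 9)
      resp = hill(conc, top=100.0, ec50=1.0, n=1.5) + rng.normal(0, 5, conc.size)

      boot = []
      for _ in range(1000):
          idx = rng.integers(0, conc.size, conc.size)     # resample data points
          try:
              p, _ = curve_fit(hill, conc[idx], resp[idx], p0=[100, 1, 1], maxfev=5000)
              boot.append(p)
          except RuntimeError:
              continue                                    # skip non-converged fits
      boot = np.array(boot)
      lo, hi = np.percentile(boot[:, 1], [2.5, 97.5])     # CI for EC50 (potency)
      print(f"EC50 95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")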

  9. Feedback in Software and a Desktop Manufacturing Context for Learning Estimation Strategies in Middle School

    ERIC Educational Resources Information Center

    Malcolm, Peter

    2013-01-01

    The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…

  10. Evaluation of the MV (CAPON) Coherent Doppler Lidar Velocity Estimator

    NASA Technical Reports Server (NTRS)

    Lottman, B.; Frehlich, R.

    1997-01-01

    The performance of the CAPON velocity estimator for coherent Doppler lidar is determined for typical space-based and ground-based parameter regimes. Optimal input parameters for the algorithm were determined for each regime. For weak signals, performance is described by the standard deviation of the good estimates and the fraction of outliers. For strong signals, the fraction of outliers is zero. Numerical effort was also determined.

  11. Public Goods and Services.

    ERIC Educational Resources Information Center

    Zicht, Barbara, Ed.; And Others

    1982-01-01

    This document includes an introduction to the role of government in the production of public goods and services and 3 brief teaching units. The introduction describes the nature of a mixed economy and points out why most people identify the production of goods and services with private enterprise rather than government. It develops a rationale for…

  12. Productivity and Capital Goods.

    ERIC Educational Resources Information Center

    Zicht, Barbara, Ed.; And Others

    1981-01-01

    Providing teacher background on the concepts of productivity and capital goods, this document presents 3 teaching units about these ideas for different grade levels. The grade K-2 unit, "How Do They Do It?," is designed to provide students with an understanding of how physical capital goods add to productivity. Activities include a field trip to…

  13. Electrochemical estimation of the polyphenol index in wines using a laccase biosensor.

    PubMed

    Gamella, M; Campuzano, S; Reviejo, A J; Pingarrón, J M

    2006-10-18

    The use of a laccase biosensor, under both batch and flow injection (FI) conditions, for a rapid and reliable amperometric estimation of the total content of polyphenolic compounds in wines is reported. The enzyme was immobilized by cross-linking with glutaraldehyde onto a glassy carbon electrode. Caffeic acid and gallic acid were selected as standard compounds for the estimation. Experimental variables such as the enzyme loading, the applied potential, and the pH value were optimized, and different aspects of the operational stability of the laccase biosensor were evaluated. Using batch amperometry at -200 mV, the detection limits obtained were 2.6 x 10^-3 and 7.2 x 10^-4 mg L^-1 for gallic acid and caffeic acid, respectively, which compares advantageously with previous biosensor designs. An extremely simple sample treatment, consisting only of an appropriate dilution of the wine sample with the supporting electrolyte solution (0.1 mol L^-1 citrate buffer of pH 5.0), was needed for the amperometric analysis of red, rosé, and white wines. Good correlations were found when the polyphenol indices obtained with the biosensor (in both the batch and FI modes) for different wine samples were plotted versus the results achieved with the classic Folin-Ciocalteu method. Application of a calibration-transfer chemometric model (multiplicative fitting) showed that the confidence intervals (for a significance level of 0.05) for the slope and intercept values of the amperometric index versus Folin-Ciocalteu index plots (r = 0.997) included unity and zero, respectively. This indicates that the laccase biosensor can be successfully used for the estimation of the polyphenol index in wines when compared with the Folin-Ciocalteu reference method.

  14. Analysis of short pulse laser altimetry data obtained over horizontal path

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Tsai, B. M.; Gardner, C. S.

    1983-01-01

    Recent pulsed measurements of atmospheric delay, obtained by ranging to more realistic targets including a simulated ocean target and an extended plate target, are discussed. These measurements are used to estimate the expected timing accuracy of a correlation receiver system. The experimental work was conducted using a pulsed two-color laser altimeter.

  15. Spectrum-based method to generate good decoy libraries for spectral library searching in peptide identifications.

    PubMed

    Cheng, Chia-Ying; Tsai, Chia-Feng; Chen, Yu-Ju; Sung, Ting-Yi; Hsu, Wen-Lian

    2013-05-03

    As spectral library searching has received increasing attention for peptide identification, constructing good decoy spectra from the target spectra is key to correctly estimating the false discovery rate when searching against a concatenated target-decoy spectral library. Several methods have been proposed to construct decoy spectral libraries. Most of them construct decoy peptide sequences and then generate theoretical spectra accordingly. In this paper, we propose a method, called precursor-swap, which constructs decoy spectral libraries directly at the "spectrum level", without generating decoy peptide sequences, by swapping the precursors of two spectra selected according to a very simple rule. Our spectrum-based method does not require additional effort to deal with ion types (e.g., a, b or c ions), fragmentation mechanisms (e.g., CID or ETD), or unannotated peaks, but preserves many spectral properties. The precursor-swap method is evaluated on different spectral libraries, and the resulting decoy ratios show that it is comparable to other methods. Notably, it is efficient in time and memory usage for constructing decoy libraries. A software tool called Precursor-Swap-Decoy-Generation (PSDG) is publicly available for download at http://ms.iis.sinica.edu.tw/PSDG/.
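
    The spectrum-level construction is simple enough to sketch directly. The fragment below is a hedged illustration: the pairing rule (sort by precursor m/z and swap neighbors at a fixed offset) is a stand-in for the paper's selection rule, and the Spectrum container is invented for the example; fragment peaks are left untouched, which is the point of the method.

      # Build decoy spectra by exchanging precursor m/z values between spectrum pairs.
      from dataclasses import dataclass, replace

      @dataclass(frozen=True)
      class Spectrum:                   # hypothetical container for a library entry
          peptide: str
          precursor_mz: float
          peaks: tuple                  # (m/z, intensity) pairs, left unchanged

      def precursor_swap(library, offset=1):
          lib = sorted(library, key=lambda s: s.precursor_mz)
          decoys = []
          for i in range(0, len(lib) - offset, 2 * offset):
              a, b = lib[i], lib[i + offset]
              decoys.append(replace(a, precursor_mz=b.precursor_mz))
              decoys.append(replace(b, precursor_mz=a.precursor_mz))
          return decoys

      lib = [Spectrum("PEPTIDEK", 650.3, ((100.1, 5.0),)),
             Spectrum("LESSERR", 720.8, ((120.2, 3.0),))]
      print([(d.peptide, d.precursor_mz) for d in precursor_swap(lib)])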

  16. 'She sort of shines': midwives' accounts of 'good' midwifery and 'good' leadership.

    PubMed

    Byrom, Sheena; Downe, Soo

    2010-02-01

    to explore midwives' accounts of the characteristics of 'good' leadership and 'good' midwifery. a phenomenological interview survey. Participants were asked about what made both good and poor midwives and leaders. two maternity departments within National Health Service trusts in the North West of England. qualified midwives, selected by random sampling stratified to encompass senior and junior grades. thematic analysis, carried out manually. ten midwives were interviewed. Sixteen codes and six sub-themes were generated. Across the responses, two clear dimensions (themes) were identified, relating on the one hand to aspects of knowledge, skill and competence (termed 'skilled competence'), and on the other hand to specific personality characteristics (termed 'emotional intelligence'). This study suggests that the ability to act knowledgeably, safely and competently was seen as a basic requirement for both clinical midwives and midwife leaders. The added element which made both the midwife and the leader 'good' was the extent of their emotional capability. this small-scale in-depth study could form the basis for hypothesis generation for larger scale work in this area in future. The findings offer some reinforcement for the potential applicability of theories of transformational leadership to midwifery management and practice. Copyright 2008 Elsevier Ltd. All rights reserved.

  17. Estimation of sensible and latent heat flux from natural sparse vegetation surfaces using surface renewal

    NASA Astrophysics Data System (ADS)

    Zapata, N.; Martínez-Cob, A.

    2001-12-01

    This paper reports a study undertaken to evaluate the feasibility of the surface renewal method to accurately estimate long-term evaporation from the playa and margins of an endorreic salty lagoon (Gallocanta lagoon, Spain) under semiarid conditions. High-frequency temperature readings were taken for two time lags (r) and three measurement heights (z) in order to obtain surface renewal sensible heat flux (HSR) values. These values were compared against eddy covariance sensible heat flux (HEC) values for a calibration period (25-30 July 2000). Error analysis statistics (index of agreement, IA; root mean square error, RMSE; and systematic mean square error, MSEs) showed that the agreement between HSR and HEC improved as measurement height decreased and time lag increased. Calibration factors α were obtained for all analyzed cases. The best results were obtained for the z = 0.9 m (r = 0.75 s) case, for which α = 1.0 was observed. In this case, uncertainty was about 10% in terms of relative error (RE). Latent heat flux values were obtained by solving the energy balance equation for both the surface renewal (LESR) and the eddy covariance (LEEC) methods, using HSR and HEC, respectively, and measurements of net radiation and soil heat flux. For the calibration period, error analysis statistics for LESR were quite similar to those for HSR, although errors were mostly random. LESR uncertainty was less than 9%. The calibration factors were applied to a validation data subset (30 July-4 August 2000) for which meteorological conditions were somewhat different (higher temperatures and wind speed, lower solar and net radiation). Error analysis statistics for both HSR and LESR were quite good for all cases, demonstrating the validity of the calibration factors.
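
    The calibration step reported here amounts to a zero-intercept regression of the eddy covariance flux on the uncalibrated surface renewal flux. The sketch below shows that computation with illustrative numbers; the flux values are invented, not the Gallocanta data.

      # Calibration factor alpha: least-squares slope of HEC on HSR through the origin.
      import numpy as np

      HSR = np.array([45., 80., 120., 160., 200., 240.])   # surface renewal flux, W/m2
      HEC = np.array([50., 78., 125., 155., 205., 235.])   # eddy covariance flux, W/m2

      alpha = (HSR @ HEC) / (HSR @ HSR)    # slope of zero-intercept regression
      H_cal = alpha * HSR                  # calibrated surface renewal flux

      rmse = np.sqrt(np.mean((H_cal - HEC) ** 2))
      print(f"alpha = {alpha:.3f}, RMSE = {rmse:.1f} W/m2")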

  18. Estimating discharge in rivers using remotely sensed hydraulic information

    USGS Publications Warehouse

    Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.

    2005-01-01

    A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed values, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor, and are related to channel type. Surface velocity and width information obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR was also used to estimate discharge for a reach of the Missouri River. Without a calibration function, the estimate was within 72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the accuracy to within 10% of the observed value. Remotely sensed discharge estimates with the accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
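
    Estimators in this family generally take a power-law form in the remotely measured width and map-derived slope, with a multiplicative correction fitted against gauged discharge. The sketch below is a hedged illustration of that structure; the coefficients and data are placeholders, not the paper's calibrated functions.

      # First-guess discharge Q = a * W^b * S^c, then a multiplicative bias correction.
      import numpy as np

      def estimate_q(width_m, slope, a=7.2, b=1.7, c=0.3):
          """Illustrative power-law discharge estimate (m^3/s) from width and slope."""
          return a * width_m**b * slope**c

      widths = np.array([120., 85., 240.])    # water-surface widths from imagery, m
      slopes = np.array([2e-4, 5e-4, 1e-4])   # channel slopes from topographic maps
      q_obs = np.array([950., 410., 2600.])   # gauged discharge (calibration set), m^3/s

      q0 = estimate_q(widths, slopes)
      correction = np.exp(np.mean(np.log(q_obs / q0)))   # fitted multiplicative factor
      print("corrected estimates:", (q0 * correction).round(0))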

  19. WTA estimates using the method of paired comparison: tests of robustness

    Treesearch

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to the sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  20. Deep sea animal density and size estimated using a Dual-frequency IDentification SONar (DIDSON) offshore the island of Hawaii

    NASA Astrophysics Data System (ADS)

    Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.

    2018-01-01

    Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important for understanding mesopelagic ecosystem dynamics and predicting top predators' distribution, DSL composition and density are often estimated from trawls, which may be biased in terms of extrusion, avoidance, and gear-associated effects. Instead, the location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often made in aggregate, without size- or taxon-specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m^3, and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and the average sizes of animals were much larger as well. A mixed model was used to characterize the numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and to density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.

  1. A means to estimate thermal and kinetic parameters of coal dust layer from hot surface ignition tests.

    PubMed

    Park, Haejun; Rangwala, Ali S; Dembsey, Nicholas A

    2009-08-30

    A method to estimate thermal and kinetic parameters of Pittsburgh seam coal subject to thermal runaway is presented, using the standard ASTM E 2021 hot surface ignition test apparatus. The parameters include thermal conductivity (k), activation energy (E), and the coupled term (QA) of heat of reaction (Q) and pre-exponential factor (A), which are required, but rarely known, input values for determining the thermal runaway propensity of a dust material. Four dust layer thicknesses (6.4, 12.7, 19.1 and 25.4 mm) were tested, and a single steady-state temperature profile of the 12.7 mm thick dust layer was used to estimate k, E and QA. k is calculated by equating the heat flux from the hot surface and the heat loss rate on the boundary, assuming negligible heat generation in the coal dust layer at a low hot surface temperature. E and QA are calculated by fitting a numerically estimated steady-state dust layer temperature distribution to the experimentally obtained temperature profile of the 12.7 mm thick layer. The two unknowns, E and QA, are reduced to one through the correlation of E and QA obtained at the criticality of thermal runaway. The estimated k is 0.1 W/m K, matching the previously reported value. E ranges from 61.7 to 83.1 kJ/mol, and the corresponding QA ranges from 1.7 x 10^9 to 4.8 x 10^11 J/kg s. The mean values of E (72.4 kJ/mol) and QA (2.8 x 10^10 J/kg s) were used to predict the critical hot surface temperatures for the other thicknesses, and good agreement is observed between predicted and experimental values.

  2. The prevalence and characterization of self-medication for obtaining pain relief among undergraduate nursing students.

    PubMed

    Souza, Layz Alves Ferreira; da Silva, Camila Damázio; Ferraz, Gisely Carvalho; Sousa, Fátima Aparecida Emm Faleiros; Pereira, Lílian Varanda

    2011-01-01

    This study investigates the prevalence of self-medication to relieve pain among undergraduate nursing students and characterizes the pain and the relief obtained through the medication used. This epidemiological, cross-sectional study was carried out with 211 nursing students from a public university in Goiás, GO, Brazil. A numerical scale (0-10) measured pain intensity and relief. The prevalence of self-medication was 38.8%. The main source and determining factor of this practice were the student him/herself (54.1%) and lack of time to go to a doctor (50%), respectively. The most frequently used analgesic was dipyrone (59.8%), and pain relief was classified as good (Md = 8.5; Max = 10; Min = 0). The prevalence of self-medication was higher than that observed in similar studies. Many students reported that the relief obtained through self-medication was good, a fact that can delay the clarification of a diagnosis and its appropriate treatment.

  3. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...

  4. Can Lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    NASA Astrophysics Data System (ADS)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

    European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of this migration, which takes place at the larval stage, is of primary importance for understanding eel ecology and, hence, its survival. This duration is still a controversial matter, since estimates range from 7 months to more than 4 years depending on the method used. The minimum migration duration estimated from our Lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The Lagrangian model proved sensitive to different conditions such as spatial and temporal resolution, release depth, release area and initial distribution. In general, migration was faster when the release depth was decreased and the model resolution increased. On average, the fastest migration was obtained when only advective horizontal movement was considered. However, faster migration was obtained in some cases when locally oriented random migration was taken into account.

  5. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as a function of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are the call rate, obtained from the literature, and the false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate the density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
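
    The single-sensor logic can be compressed into a few lines of Monte Carlo. The sketch below follows the recipe in the abstract under loudly simplified assumptions: a fixed noise level, spherical spreading plus linear absorption for transmission loss, a logistic detector curve, and invented counts; the false-positive correction is omitted for brevity.

      # Monte Carlo estimate of P(detect a click) for a single fixed hydrophone,
      # then a cue-counting density estimate from a click count over time T.
      import numpy as np

      rng = np.random.default_rng(3)
      N = 100_000
      r_max = 8000.0                              # truncation range, m
      r = r_max * np.sqrt(rng.uniform(size=N))    # ranges uniform over the disc

      SL = rng.normal(200, 5, N)                  # source level, dB (illustrative)
      TL = 20 * np.log10(r) + 0.003 * r           # spreading + ~3 dB/km absorption
      NL = 100.0                                  # noise level, dB (illustrative)
      snr = SL - TL - NL

      p_det = 1 / (1 + np.exp(-(snr - 10.0)))     # detector characterization curve
      P_bar = p_det.mean()                        # average detection probability

      n_clicks, click_rate, T = 4200, 0.8, 6 * 86400   # detected clicks, clicks/s, s
      density = n_clicks / (np.pi * r_max**2 * P_bar * click_rate * T)
      print(f"P(detect) = {P_bar:.3f}, density = {density * 1e6:.2e} animals/km^2")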

  6. 41 CFR 302-7.201 - Is temporary storage in excess of authorized limits and excess valuation of goods and services...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... excess of authorized limits and excess valuation of goods and services payable at Government expense? 302... Government expense? No, charges for excess weight, valuation above the minimum amount, and services obtained... HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E) Actual Expense Method § 302-7.201 Is...

  7. 41 CFR 302-7.201 - Is temporary storage in excess of authorized limits and excess valuation of goods and services...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... excess of authorized limits and excess valuation of goods and services payable at Government expense? 302... Government expense? No, charges for excess weight, valuation above the minimum amount, and services obtained... HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E) Actual Expense Method § 302-7.201 Is...

  8. Estimation in SEM: A Concrete Example

    ERIC Educational Resources Information Center

    Ferron, John M.; Hess, Melinda R.

    2007-01-01

    A concrete example is used to illustrate maximum likelihood estimation of a structural equation model with two unknown parameters. The fitting function is found for the example, as are the vector of first-order partial derivatives, the matrix of second-order partial derivatives, and the estimates obtained from each iteration of the Newton-Raphson…
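
    To make the iteration concrete, here is a generic Newton-Raphson loop of the kind walked through in such an example. The toy quadratic objective and its hand-coded gradient and Hessian are illustrative stand-ins, not the SEM fitting function from the article.

      # Newton-Raphson: repeatedly solve H(theta) @ step = -g(theta) and update.
      import numpy as np

      def newton_raphson(grad, hess, theta0, tol=1e-10, max_iter=50):
          theta = np.asarray(theta0, dtype=float)
          for _ in range(max_iter):
              step = np.linalg.solve(hess(theta), -grad(theta))
              theta = theta + step
              if np.linalg.norm(step) < tol:
                  break
          return theta

      # Toy objective f(a, b) = (a - 1)^2 + 10 * (b - 2)^2, minimum at (1, 2).
      grad = lambda th: np.array([2 * (th[0] - 1), 20 * (th[1] - 2)])
      hess = lambda th: np.diag([2.0, 20.0])
      print(newton_raphson(grad, hess, [0.0, 0.0]))   # -> [1. 2.]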

  9. Obtaining an equivalent beam

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.

    1990-01-01

    In modeling a complex structure, the researcher was faced with a component that would have logical appeal if it were modeled as a beam. The structure was the mast of a robot-controlled gantry crane. The structure up to this point already had a large number of degrees of freedom, so the idea of conserving grid points by modeling the mast as a beam was attractive. The researcher decided to make a separate problem of the mast and model it in three dimensions with plates, then extract the equivalent beam properties by setting up the loading to simulate beam-like deformation and constraints. The results could then be used to represent the mast as a beam in the full model. A comparison was made of properties derived from models with different constraints versus manual calculations. The researcher shows that the three-dimensional model is ineffective in trying to conform to the requirements of an equivalent beam representation. If a full 3-D plate model were used in the complete representation of the crane structure, good results would be obtained. Since the attempt is to economize on the size of the model, a better way to achieve the same results is to use substructuring and condense the mast to equivalent end-boundary and intermediate mass points.

  10. Good Health Before Pregnancy: Preconception Care

    MedlinePlus

    Patient education FAQ from ACOG (FAQ056, April 2017, available in PDF format) on good health before pregnancy and what preconception care involves.

  11. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  12. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  13. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  14. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  15. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  16. LACIE large area acreage estimation. [United States of America

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)

    1979-01-01

    A wheat acreage estimate for a large area is obtained by multiplying its small-grains acreage estimate, as computed by the classification and mensuration subsystem, by the best available ratio of wheat to small-grains acreage obtained from historical data. In the United States, as in other countries with detailed historical data, an additional level of aggregation was required because sample allocation was made at the substratum level. The essential features of the estimation procedure for LACIE countries are described, along with procedures for estimating wheat acreage in the United States.

  17. Markov state models from short non-equilibrium simulations—Analysis and correction of estimation bias

    NASA Astrophysics Data System (ADS)

    Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank

    2017-03-01

    Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, in the 15 years since the inception of MSMs, it has been controversially discussed, and not yet answered, how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: we systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA version 2.3.
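
    For orientation, the standard (uncorrected) estimator that the paper analyzes is just transition counting at a lag followed by row normalization; the OOM correction itself is more involved and is implemented in PyEMMA. The sketch below shows only the baseline step, on toy discrete trajectories.

      # Baseline MSM estimation: count transitions at lag tau, row-normalize.
      import numpy as np

      def estimate_msm(trajs, n_states, lag):
          C = np.zeros((n_states, n_states))
          for traj in trajs:                        # traj: 1D array of state indices
              for t in range(len(traj) - lag):
                  C[traj[t], traj[t + lag]] += 1.0
          return C / C.sum(axis=1, keepdims=True)   # maximum-likelihood row normalization

      rng = np.random.default_rng(4)
      trajs = [rng.integers(0, 3, size=200) for _ in range(50)]   # toy short trajectories
      lag = 5
      T = estimate_msm(trajs, n_states=3, lag=lag)
      eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
      timescales = -lag / np.log(np.clip(eigvals[1:], 1e-12, 0.9999))
      print(T.round(3), timescales.round(2))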

  18. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
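
    The sandwich construction itself is generic: V = inv(A) B inv(A), with A the observed information and B the cross-product of per-observation score vectors, both evaluated at the parameter estimates. The sketch below shows it for logistic regression as a simple stand-in for the CDM likelihood; for brevity, the true parameters stand in for a converged MLE.

      # Sandwich-type covariance: V = inv(A) @ B @ inv(A).
      import numpy as np

      rng = np.random.default_rng(5)
      n, p = 500, 3
      X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
      beta = np.array([0.5, -1.0, 2.0])                # stand-in for the MLE
      y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

      mu = 1 / (1 + np.exp(-X @ beta))
      scores = (y - mu)[:, None] * X                   # per-observation score vectors
      A = X.T @ (X * (mu * (1 - mu))[:, None])         # observed information matrix
      B = scores.T @ scores                            # cross-product of scores

      V = np.linalg.inv(A) @ B @ np.linalg.inv(A)
      print("robust standard errors:", np.sqrt(np.diag(V)).round(4))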

  19. The Improved Estimation of Ratio of Two Population Proportions

    ERIC Educational Resources Information Center

    Solanki, Ramkrishna S.; Singh, Housila P.

    2016-01-01

    In this article, first we obtained the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. Later we suggested the general class of ratio estimators of two population proportions. The usual ratio estimator, Wynn-type estimator, Singh, Singh, and Kaur difference-type…

  20. Heritability of decisions and outcomes of public goods games

    PubMed Central

    Hiraishi, Kai; Shikishima, Chizuru; Yamagata, Shinji; Ando, Juko

    2015-01-01

    Prosociality is one of the most distinctive features of human beings but there are individual differences in cooperative behavior. Employing the twin method, we examined the heritability of cooperativeness and its outcomes on public goods games using a strategy method. In two experiments (Study 1 and Study 2), twin participants were asked to indicate (1) how much they would contribute to a group when they did not know how much the other group members were contributing, and (2) how much they would contribute if they knew the contributions of others. Overall, the heritability estimates were relatively small for each type of decision, but heritability was greater when participants knew that the others had made larger contributions. Using registered decisions in Study 2, we conducted seven Monte Carlo simulations to examine genetic and environmental influences on the expected game payoffs. For the simulated one-shot game, the heritability estimates were small, comparable to those of game decisions. For the simulated iterated games, we found that the genetic influences first decreased, then increased as the numbers of iterations grew. The implication for the evolution of individual differences in prosociality is discussed. PMID:25954213

  1. Empirical Allometric Models to Estimate Total Needle Biomass For Loblolly Pine

    Treesearch

    Hector M. de los Santos-Posadas; Bruce E. Borders

    2002-01-01

    Empirical geometric models based on the cone surface formula were adapted and used to estimate total dry needle biomass (TNB) and live branch basal area (LBBA). The results suggest that the empirical geometric equations produced a good fit and stable parameters when estimating TNB and LBBA. The data used include trees from a 12-year-old spacing study and a set of...

  2. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  3. Variability of dental cone beam CT grey values for density estimations

    PubMed Central

    Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K

    2013-01-01

    Objective: The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods: A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm^-3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. The correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results: Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion: Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge: The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility of grey value calibration was thoroughly investigated. PMID:23255537
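
    Recalibration of a single device reduces to a least-squares line mapping CBCT grey values onto reference CT numbers, after which the residuals give the error figure quoted above. The sketch below uses placeholder insert values, not the study's measurements.

      # Per-device linear recalibration of CBCT grey values against MSCT numbers.
      import numpy as np

      ct_numbers = np.array([-1000., 120., 250., 520., 1100., 2300.])  # MSCT reference
      cbct_grey = np.array([-880., 160., 300., 610., 1210., 2500.])    # one CBCT device

      A = np.column_stack([cbct_grey, np.ones_like(cbct_grey)])
      (slope, intercept), *_ = np.linalg.lstsq(A, ct_numbers, rcond=None)

      recalibrated = slope * cbct_grey + intercept
      print("mean |error| after recalibration:",
            np.mean(np.abs(recalibrated - ct_numbers)).round(1))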

  4. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator-valued measures (POVMs) and by Bruss and Macchiavello by establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means that the fidelity is independent of input states. However, any optimal estimator with finite POVM for M (> N) copies is universal if it is used for N copies as input.

  5. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    PubMed

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
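
    The classic Bailer-type point estimate referred to above is easy to state: a trapezoidal-weight sum of the group mean concentrations, with a variance built from the group variances. The sketch below implements that baseline with invented numbers; the Bayesian machinery of the paper is not reproduced here.

      # Bailer-type AUC(0-tlast) and its variance under a serial sacrifice design.
      import numpy as np

      t = np.array([0.25, 0.5, 1.0, 2.0, 4.0])        # sacrifice times, h
      c_mean = np.array([12.0, 18.0, 15.0, 8.0, 3.0]) # mean concentration per time
      c_var = np.array([4.0, 6.0, 5.0, 2.5, 1.0])     # sample variance per time
      n = np.array([5, 5, 5, 5, 5])                   # animals per time point

      # Trapezoidal weights: w1=(t2-t1)/2, wi=(t(i+1)-t(i-1))/2, wm=(tm-t(m-1))/2
      w = np.empty_like(t)
      w[0] = (t[1] - t[0]) / 2
      w[1:-1] = (t[2:] - t[:-2]) / 2
      w[-1] = (t[-1] - t[-2]) / 2

      auc = np.sum(w * c_mean)
      auc_se = np.sqrt(np.sum(w**2 * c_var / n))
      print(f"AUC(0-tlast) = {auc:.2f}, SE = {auc_se:.2f}")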

  6. Qualities of a good Singaporean psychiatrist: Qualitative differences between psychiatrists and patients.

    PubMed

    Tor, Phern-Chern; Tan, Jacinta O A

    2015-06-01

    Pilot studies in Singapore established four themes (personal values, professional, relationship, academic-executive) relating to the qualities of a good psychiatrist, and suggested potential differences of opinion between patients and psychiatrists. We sought to explore differences between patients and psychiatrists regarding the qualities of a good psychiatrist. Qualitative analysis of interviews using a modified grounded theory approach with 21 voluntary psychiatric inpatients and 18 psychiatrists. One hundred thirty-one separate qualities emerged from the data. The qualities of a good psychiatrist were viewed in the context of motivations, functions, methods, and results obtained, mirroring the themes established in the pilot studies. Patients and psychiatrists mostly concurred on the qualities of a good psychiatrist, with 62.6% of the qualities emerging from both groups. However significant differences existed. Patient-specific qualities included proof of altruistic motives, diligence, clinical competence, and positive results. What the psychiatrist represented to patients in relation to gender, culture, and clinical prestige also mattered to patients. Psychiatrist-specific qualities related to societal (e.g. public protection) and professional concerns (e.g. boundary issues). The results of this study demonstrate that patients and psychiatrists have different views about the qualities of a good psychiatrist. Patients may expect proof of care, diligence, and competence from the psychiatrist, along with positive results. In addition, psychiatrists should be mindful of what they represent to patients and how that can impact the doctor-patient relationship. © 2014 Wiley Publishing Asia Pty Ltd.

  7. Glucose-6-phosphate dehydrogenase deficiency and the use of primaquine: top-down and bottom-up estimation of professional costs.

    PubMed

    Peixoto, Henry Maia; Brito, Marcelo Augusto Mota; Romero, Gustavo Adolfo Sierra; Monteiro, Wuelton Marcelo; Lacerda, Marcus Vinícius Guimarães de; Oliveira, Maria Regina Fernandes de

    2017-10-05

    The aim of this study was to assess whether the top-down method, based on the average value identified in the Brazilian Hospitalization System (SIH/SUS), is a good estimator of the cost of health professionals per patient, using the bottom-up method for comparison. The study was set in the context of hospital care offered to patients with glucose-6-phosphate dehydrogenase (G6PD) deficiency suffering a severe adverse effect from the use of primaquine, in the Brazilian Amazon. The top-down method, based on spending on SIH/SUS professional services as a proxy for this cost, yielded R$60.71; the bottom-up method, based on the salaries of the physician (R$30.43), nurse (R$16.33), and nursing technician (R$5.93), estimated a total cost of R$52.68. The difference was only R$8.03, which shows that the amounts paid by the Hospital Inpatient Authorization (AIH) are close to the estimates obtained by the bottom-up technique for the professionals directly involved in the care.
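
    The arithmetic behind the comparison is simple enough to check directly; a minimal sketch using the figures reported above (note that the rounded salary components sum to R$52.69, while the paper reports R$52.68):

        # Bottom-up: sum of per-patient professional salaries (R$).
        bottom_up = {"physician": 30.43, "nurse": 16.33, "nursing technician": 5.93}
        top_down = 60.71   # SIH/SUS professional-services spending per patient (R$)

        total = sum(bottom_up.values())
        print(f"bottom-up total: R${total:.2f}")              # R$52.69 (paper: R$52.68)
        print(f"difference:      R${top_down - total:.2f}")   # paper reports R$8.03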

  8. One-pot synthesis of fluorescent nitrogen-doped carbon dots with good biocompatibility for cell labeling.

    PubMed

    Zhang, Zhengwei; Yan, Kun; Yang, Qiulian; Liu, Yanhua; Yan, Zhengyu; Chen, Jianqiu

    2017-12-01

    Here we report an easy and economical hydrothermal carbonization approach to synthesizing fluorescent nitrogen-doped carbon dots (N-CDs), developed using citric acid and triethanolamine as the precursors. The synthesis conditions were optimized to obtain N-CDs with superior fluorescence performance. The as-prepared N-CDs are monodisperse spherical nanoparticles with good water solubility, and they exhibit strong fluorescence, favourable photostability, and excitation-wavelength-dependent behavior. Furthermore, the in vitro cytotoxicity and cellular labeling of the N-CDs were investigated using rat glomerular mesangial cells. The results showed that the N-CDs have lower cytotoxicity and better biosafety than ZnSe quantum dots, although both targeted the cells successfully. Considering their admirable photostability, low toxicity, and good biocompatibility, the as-obtained N-CDs could have potential applications in biosensors, cellular imaging, and other fields. Copyright © 2017 John Wiley & Sons, Ltd.

  9. A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.

    1993-01-01

    Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model, based upon one previously developed and tested with earth satellite temperature data, will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures; the latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as in more theoretical studies.
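
    The residual idea can be written compactly; the LaTeX sketch below uses a generic spherical-coordinate form of the zonal momentum equation (the notation and exact form are my assumptions, not taken from the abstract). With every other term evaluated from the temperature-derived wind and geopotential fields, the small-scale drag X falls out as the residual:

        X \;=\; \frac{\partial u}{\partial t} + \mathbf{v}\cdot\nabla u
                - \left(f + \frac{u\tan\phi}{a}\right)v
                + \frac{1}{a\cos\phi}\,\frac{\partial \Phi}{\partial \lambda}

    where u and v are the zonal and meridional winds, \Phi is the geopotential, f the Coriolis parameter, a the planetary radius, and \phi and \lambda latitude and longitude.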

  10. Analysing the Zenith Tropospheric Delay Estimates in On-line Precise Point Positioning (PPP) Services and PPP Software Packages.

    PubMed

    Mendez Astudillo, Jorge; Lau, Lawrence; Tang, Yu-Ting; Moore, Terry

    2018-02-14

    As Global Navigation Satellite System (GNSS) signals travel through the troposphere, a tropospheric delay occurs due to a change in the refractive index of the medium. The Precise Point Positioning (PPP) technique can achieve centimeter/millimeter positioning accuracy with only one GNSS receiver. The Zenith Tropospheric Delay (ZTD) is estimated alongside the position unknowns in PPP. Estimated ZTD can be very useful for meteorological applications; an example is the estimation of water vapor content in the atmosphere from the estimated ZTD. PPP is implemented with different algorithms and models in online services and software packages. In this study, a performance assessment with analysis of ZTD estimates from three PPP online services and three software packages is presented. The main contribution of this paper is to show the accuracy of ZTD estimation achievable in PPP. The analysis also gives GNSS users and researchers insight into how the choice of processing algorithm affects PPP ZTD estimation. Observation data from eight whole days at a total of nine International GNSS Service (IGS) tracking stations spread over the northern hemisphere, the equatorial region, and the southern hemisphere are used in this analysis. The PPP ZTD estimates are compared with the ZTD obtained from the IGS tropospheric product of the same days. The estimates of two of the three online PPP services show good agreement (<1 cm) with the IGS ZTD values at the northern and southern hemisphere stations. The results also show that the online PPP services perform better than the selected PPP software packages at all stations.
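
    As a rough illustration of the comparison carried out in the paper, the sketch below computes the bias and RMS of the differences between a PPP ZTD series and the IGS tropospheric product at the same epochs; the numbers are invented, and the real analysis covers eight days at nine stations.

        import numpy as np

        # Hypothetical ZTD series (metres) from one PPP service and from the
        # IGS tropospheric product at the same epochs (illustrative values).
        ztd_ppp = np.array([2.451, 2.448, 2.455, 2.460, 2.452])
        ztd_igs = np.array([2.449, 2.450, 2.452, 2.458, 2.455])

        diff = ztd_ppp - ztd_igs
        print(f"bias: {diff.mean() * 1000:+.1f} mm")
        print(f"RMS:  {np.sqrt((diff ** 2).mean()) * 1000:.1f} mm")  # <10 mm: 'good agreement'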

  11. Estimating Willingness-to-Pay for health insurance among rural poor in India by reference to Engel's law.

    PubMed

    Binnendijk, Erika; Dror, David M; Gerelle, Eric; Koren, Ruth

    2013-01-01

    Community-Based Health Insurance (CBHI) (a.k.a. micro health insurance) is a contributory health insurance among rural poor in developing countries. As CBHI schemes typically function with no subsidy income, the schemes' expenditures cannot exceed their premium income. A good estimate of Willingness-To-Pay (WTP) among the target population affiliating on a voluntary basis is therefore essential for package design. Previous estimates of WTP reported materially and significantly different WTP levels across locations (even within one state), making it necessary to base estimates on household surveys; this is time-consuming and expensive. This study seeks to identify a coherent anchor for local estimation of WTP without having to rely on household surveys in each CBHI implementation. Using data collected in 2008-2010 among rural poor households in six locations in India (7,874 households in total), we found that in all locations WTP expressed as a percentage of income decreases with household income. This is reminiscent of Engel's law on food expenditures. We checked several possible anchors: overall income, discretionary income, and food expenditures. We compared WTP expressed as a percentage of these anchors by calculating the Coefficient of Variation (for inter-community variation) and concentration indices (for intra-community variation). The Coefficient of Variation was 0.36, 0.43, and 0.50 for WTP as a percentage of food expenditures, overall income, and discretionary income, respectively. In all locations the concentration index for WTP as a percentage of food expenditures was the lowest. Thus, food expenditures had the most consistent relationship with WTP within each location and across the six locations. These findings indicate that, like food, health insurance is considered a necessity good even by people with very low income and no prior experience with health insurance. We conclude that the level of WTP could be estimated based on each community's food expenditures, and that this
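
    The inter-community comparison reduces to a coefficient of variation of the location-level WTP shares. A minimal sketch, with invented shares for six locations (the paper's intra-community concentration indices are omitted):

        import numpy as np

        # Hypothetical WTP as a share of food expenditures in six locations.
        wtp_share_food = np.array([0.045, 0.060, 0.052, 0.048, 0.070, 0.058])

        def coeff_of_variation(x: np.ndarray) -> float:
            """Inter-community variation: sample std over mean of the shares."""
            return x.std(ddof=1) / x.mean()

        print(f"CV: {coeff_of_variation(wtp_share_food):.2f}")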

  12. A space-time ensemble Kalman filter for state and parameter estimation of groundwater transport models

    NASA Astrophysics Data System (ADS)

    Briseño, Jessica; Herrera, Graciela S.

    2010-05-01

    Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To obtain estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to achieve a good match between them manually. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate the hydraulic conductivity together with hydraulic head and contaminant concentration, and to apply it to a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: Gaussian simulation (SGSim) and the Latin Hypercube Sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant concentration (C) realizations for each one of the conductivity realizations. With these realizations the means of ln K, h and C are obtained; for h and C, the mean is calculated in space and time, together with the h-ln K-C cross-covariance matrix in space and time. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data of the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered; in an analogous way, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean for each one
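
    The analysis step of such a filter can be sketched generically. The code below is a plain stochastic ensemble Kalman filter update on an augmented state stacking ln K, h and C over the estimation points and times; it is a minimal sketch of the EnKF idea, not the authors' space-time implementation, and all names and dimensions are assumptions.

        import numpy as np

        def enkf_update(ensemble, H, y_obs, obs_var, rng):
            """One stochastic EnKF analysis step.

            ensemble : (n_state, n_members) augmented state [ln K, h, C]
            H        : (n_obs, n_state) observation operator
            y_obs    : (n_obs,) observed h/C values
            """
            n_m = ensemble.shape[1]
            A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
            HA = H @ A
            Pxy = A @ HA.T / (n_m - 1)                            # state-obs covariance
            Pyy = HA @ HA.T / (n_m - 1) + obs_var * np.eye(len(y_obs))
            K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
            # Perturbed observations, one draw per ensemble member.
            y_pert = y_obs[:, None] + rng.normal(0.0, np.sqrt(obs_var),
                                                 (len(y_obs), n_m))
            return ensemble + K @ (y_pert - H @ ensemble)

        rng = np.random.default_rng(1)
        ens = rng.normal(size=(6, 100))       # toy augmented state, 100 members
        H = np.eye(2, 6)                      # observe the first two components
        ens_a = enkf_update(ens, H, np.array([0.5, -0.3]), obs_var=0.01, rng=rng)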

  13. Group differences in adult simple arithmetic: good retrievers, not-so-good retrievers, and perfectionists.

    PubMed

    Hecht, Steven A

    2006-01-01

    We used the choice/no-choice methodology in two experiments to examine patterns of strategy selection and execution in groups of undergraduates. Comparisons between choice and no-choice trials revealed three groups. Some participants (good retrievers) were consistently able to use retrieval to solve almost all arithmetic problems. Other participants (perfectionists) successfully used retrieval substantially less often in choice-allowed trials than when strategy choices were prohibited. Not-so-good retrievers retrieved correct answers less often than the other participants in both the choice-allowed and no-choice conditions. No group differences emerged with respect to the time needed to search and access answers from long-term memory; however, not-so-good retrievers were consistently slower than the other subgroups at executing fact-retrieval processes that are peripheral to memory search and access. Theoretical models of simple arithmetic, such as the Strategy Choice and Discovery Simulation (Shrager & Siegler, 1998), should be updated to include the existence of both perfectionist and not-so-good retriever adults.

  14. Sustainable cooperation based on reputation and habituation in the public goods game.

    PubMed

    Liu, Yan; Chen, Tong

    2017-10-01

    Reputation can promote cooperation in the public goods game, and players' cooperative behavior is not purely economically rational; habituation influences their behavior as well. One's habituation can be formed by repeated behaviors in daily life and is affected by habitual preference. We aim to investigate sustainable cooperation based on reputation and habit formation. To better investigate the impacts of reputation and habitual preference on the evolution and sustainability of cooperation, we introduce three types of agents into our spatial public goods game. Through numerical simulations, we find that a larger habitual preference makes cooperation easier to emerge and maintain. Additionally, we find that a moderate number of agents who want to obtain more reputation (ICs) is best for the sustainability of cooperation. Finally, we observe that the variation in donations of ICs can greatly influence the equilibrium of the public goods game. When ICs reduce their donations, a proper contribution is better at maintaining cooperative behaviors. Copyright © 2017 Elsevier B.V. All rights reserved.
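
    For readers unfamiliar with the underlying game, a standard public goods group payoff looks like the sketch below (a generic formulation; the paper's reputation and habituation dynamics are layered on top of this and are not reproduced here).

        def group_payoffs(strategies, r, cost=1.0):
            """Payoffs in one public-goods group: each cooperator pays `cost`,
            the pot is multiplied by the synergy factor r and shared equally."""
            n = len(strategies)                  # group size
            pot = r * cost * sum(strategies)     # strategies: 1=cooperate, 0=defect
            share = pot / n
            return [share - cost * s for s in strategies]

        # Three cooperators and two defectors with synergy factor r = 3.
        print(group_payoffs([1, 1, 0, 0, 1], r=3.0))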

  15. Estimating Price Elasticity using Market-Level Appliance Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, K. Sydny

    This report provides an update to and expansion upon our 2008 LBNL report "An Analysis of the Price Elasticity of Demand for Appliances," in which we estimated an average relative price elasticity of -0.34 for major household appliances (Dale and Fujita 2008). Consumer responsiveness to price change is a key component of energy efficiency policy analysis; these policies influence consumer purchases through price both explicitly and implicitly. However, few studies address appliance demand elasticity in the U.S. market, and public data sources are generally insufficient for rigorous estimation. Therefore, analysts have relied on a small set of outdated papers focused on limited appliance types, assuming long-term elasticities estimated for other durables (e.g., vehicles) decades ago are applicable to current and future appliance purchasing behavior. We aim to partially rectify this problem in the context of appliance efficiency standards by revisiting our previous analysis, utilizing data released over the last ten years, and identifying additional estimates of durable goods price elasticities in the literature. Reviewing the literature, we find the following ranges of market-level price elasticities: -0.14 to -0.42 for appliances; -0.30 to -1.28 for automobiles; -0.47 to -2.55 for other durable goods. Brand price elasticities are substantially higher for these product groups, with most estimates -2.0 or more elastic. Using market-level shipments, sales value, and efficiency level data for 1989-2009, we run various iterations of a log-log regression model, arriving at a recommended range of short-run appliance price elasticity between -0.4 and -0.5, with a default value of -0.45.
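
    In a log-log specification, ln Q = a + b ln P, the slope b is read directly as the price elasticity. A minimal sketch with invented shipment and price data (the report's actual model includes efficiency levels and other covariates):

        import numpy as np

        # Hypothetical market-level average real price (P) and shipments (Q).
        P = np.array([520.0, 505.0, 490.0, 470.0, 455.0, 440.0])   # $/unit
        Q = np.array([4.00, 4.05, 4.10, 4.17, 4.24, 4.31])         # million units

        b, a = np.polyfit(np.log(P), np.log(Q), 1)    # slope first, then intercept
        print(f"estimated price elasticity: {b:.2f}") # ~ -0.45 here, by construction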

  16. A Computational Approach to Investigate Properties of Estimators

    ERIC Educational Resources Information Center

    Caudle, Kyle A.; Ruth, David M.

    2013-01-01

    Teaching undergraduates the basic properties of an estimator can be difficult. Most definitions are easy enough to comprehend, but difficulties often lie in gaining a "good feel" for these properties and why one property might be preferred over another. Simulations which involve visualization of these properties can…

  17. What Good Are Warfare Models?

    DTIC Science & Technology

    1981-05-01

    PROFESSIONAL PAPER 306 / May 1981. What Good Are Warfare Models? Thomas E. Anger, Center for Naval Analyses. [Scanned cover-page and report-documentation text; only a fragment of the body is recoverable:] "...least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies. It is not surprising, however, that"

  18. Sediment yield estimation in mountain catchments of the Camastra reservoir, southern Italy: a comparison among different empirical methods

    NASA Astrophysics Data System (ADS)

    Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco

    2013-04-01

    Sedimentary budget estimation is an important topic for both the scientific and social communities, because it is crucial to understanding both the dynamics of orogenic belts and many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimates of sediment yield or denudation rates in southern-central Italy are generally obtained by simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the measured suspended sediment yield at the outlets of several drainage basins, or through the use of models based on the sediment delivery ratio or on soil loss equations. In this work, we perform a study of catchment dynamics and an estimation of sediment yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimation has been carried out through both an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspended sediment yield; Ciccacci et al., 1980) and the application of the RUSLE (Renard et al., 1997) and USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a reliable difference between the RUSLE and USPED methods and the estimation based on the Tu index; a critical analysis of the results has been carried out, considering also the present-day spatial distribution of erosion, transport and depositional processes in relation to the maps obtained from the application of those different empirical methods. The studied catchments drain an artificial reservoir (the Camastra dam), for which a detailed evaluation of the amount of historical sediment storage has been collected. Sediment yield estimates obtained by means of the empirical methods have been compared and checked against historical data of sediment accumulation measured in the artificial reservoir of the Camastra dam. The validation of such estimations of sediment yield at the scale of large catchments
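
    For reference, the RUSLE estimate at a single point is just a product of factors, A = R * K * LS * C * P; the paper applies this (and USPED) spatially over whole catchments. The parameter values below are invented for illustration:

        # RUSLE sketch: A = R * K * LS * C * P  (illustrative values only).
        R, K, LS, C, P = 1800.0, 0.030, 2.5, 0.20, 1.0
        # rainfall erosivity, soil erodibility, slope length-steepness,
        # cover management, support practice
        A = R * K * LS * C * P
        print(f"soil loss: {A:.1f} t/ha/yr")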

  19. A Photometrically Detected Forming Cluster of Galaxies at Redshift 1.6 in the GOODS Field

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Salimbeni, S.; Trevese, D.; Grazian, A.; Pentericci, L.; Fiore, F.; Fontana, A.; Giallongo, E.; Santini, P.; Cristiani, S.; Nonino, M.; Vanzella, E.

    2007-12-01

    We report the discovery of a localized overdensity at z~1.6 in the GOODS-South field, presumably a poor cluster in the process of formation. The three-dimensional galaxy density has been estimated on the basis of well-calibrated photometric redshifts from the multiband photometric GOODS-MUSIC catalog using the (2+1)-dimensional technique. The density peak is embedded in the larger scale overdensity of galaxies known to exist at z=1.61 in the area. The properties of the member galaxies are compared to those of the surrounding field, and we find that the two populations are significantly different, supporting the reality of the structure. The reddest galaxies, once evolved according to their best-fit models, have colors consistent with the red sequence of lower redshift clusters. The estimated total mass M_200 of the cluster is in the range 1.3×10^14 to 5.7×10^14 M_solar, depending on the assumed bias factor b. An upper limit for the 2-10 keV X-ray luminosity, based on the 1 Ms Chandra observations, is L_X = 0.5×10^43 erg s^-1, suggesting that the cluster has not yet reached virial equilibrium.

  20. Soil hydraulic properties estimate based on numerical analysis of disc infiltrometer three-dimensional infiltration curve

    NASA Astrophysics Data System (ADS)

    Latorre, Borja; Peña-Sancho, Carolina; Angulo-Jaramillo, Rafaël; Moret-Fernández, David

    2015-04-01

    Measurement of soil hydraulic properties is of paramount importance in fields such as agronomy, hydrology and soil science. Based on an analysis of the Haverkamp et al. (1994) model, the aim of this paper is to present a technique to estimate the soil hydraulic properties (sorptivity, S, and hydraulic conductivity, K) from full-time cumulative infiltration curves. The method (NSH) was validated by means of 12 synthetic infiltration curves generated with HYDRUS-3D from known soil hydraulic properties. The K values used to simulate the synthetic curves were compared to those estimated with the proposed method. A procedure to identify and remove the effect of the contact sand layer on the cumulative infiltration curve was also developed. A sensitivity analysis was performed using the water level measurement as the uncertainty source. Finally, the procedure was evaluated using different infiltration times and data noise. Since a good correlation (R² = 0.98) was obtained between the K used in HYDRUS-3D to model the infiltration curves and that estimated by the NSH method, it can be concluded that this technique is robust enough to estimate the soil hydraulic conductivity from complete infiltration curves. The numerical procedure to detect and remove the influence of the contact sand layer on the K and S estimates proved robust and efficient. An effect of infiltration-curve noise on the K estimate was observed, whose uncertainty increased with increasing noise. Finally, the results showed that infiltration time is an important factor in estimating K: lower values of K, or smaller uncertainty, require longer infiltration times.
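
    The fitting idea can be sketched with a two-term Philip-type approximation, I(t) = S*sqrt(t) + c*K*t, in place of the full Haverkamp et al. (1994) model that the NSH method analyses; the data, the constant c, and the starting values below are all assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def infiltration(t, S, K, c=0.6):
            """Two-term cumulative infiltration approximation (m)."""
            return S * np.sqrt(t) + c * K * t

        # Synthetic 'observed' curve: known S and K plus measurement noise.
        t = np.linspace(10, 3600, 50)                                # s
        rng = np.random.default_rng(0)
        I_obs = infiltration(t, S=2e-3, K=1e-5) + rng.normal(0, 5e-5, t.size)

        # With p0 of length 2, only S and K are fitted; c keeps its default.
        (S_fit, K_fit), _ = curve_fit(infiltration, t, I_obs, p0=[1e-3, 1e-6])
        print(f"S = {S_fit:.2e} m s^-0.5, K = {K_fit:.2e} m s^-1")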