
Sample records for ability estimation methods

  1. A Method of Estimating Item Characteristic Functions Using the Maximum Likelihood Estimate of Ability

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1977-01-01

    A method of estimating item characteristic functions is proposed in which a set of test items, whose operating characteristics are known and which give a constant test information function over a wide range of ability, is used. The method is based on maximum likelihood estimation procedures. (Author/JKS)
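
    A minimal sketch of the maximum likelihood ability estimation underlying the method, assuming a two-parameter logistic model with hypothetical item parameters (the paper works with general known operating characteristics, not this particular model):

    ```python
    import numpy as np

    def mle_theta(responses, a, b, n_iter=20):
        """Newton-Raphson maximum likelihood estimate of ability (2PL model).

        responses : 0/1 array of item scores
        a, b      : arrays of item discrimination and difficulty parameters
        """
        theta = 0.0
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(correct | theta)
            grad = np.sum(a * (responses - p))          # d log-likelihood / d theta
            info = np.sum(a**2 * p * (1 - p))           # Fisher information
            theta += grad / info                        # Fisher scoring step
        return theta

    # Hypothetical five-item test and one mixed response pattern
    a = np.array([1.0, 1.2, 0.8, 1.5, 1.0])
    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    print(mle_theta(np.array([1, 1, 1, 0, 0]), a, b))
    ```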

  2. Ability of geometric morphometric methods to estimate a known covariance matrix.

    PubMed

    Walker, J A

    2000-12-01

    Landmark-based morphometric methods must estimate the amounts of translation, rotation, and scaling (i.e., nuisance) parameters to remove nonshape variation from a set of digitized figures. Errors in estimates of these nuisance parameters will be reflected in the covariance structure of the coordinates, such as the residuals from a superimposition, or any linear combination of the coordinates, such as the partial warp and standard uniform scores. A simulation experiment was used to compare the ability of the generalized resistant fit (GRF) and a relative warp analysis (RWA) to estimate known covariance matrices with various correlations and variance structures. Random covariance matrices were perturbed so as to vary the magnitude of the average correlation among coordinates, the number of landmarks with excessive variance, and the magnitude of the excessive variance. The covariance structure was applied to random figures with between 6 and 20 landmarks. The results show the expected performance of GRF and RWA across a broad spectrum of conditions. The performance of both GRF and RWA depended most strongly on the number of landmarks. RWA performance decreased slightly when one or a few landmarks had excessive variance. GRF performance peaked when approximately 25% of the landmarks had excessive variance. In general, both RWA and GRF performed better at estimating the direction of the first principal axis of the covariance matrix than the structure of the entire covariance matrix. RWA tended to outperform GRF when more than approximately 75% of the coordinates had excessive variance. When fewer than approximately 75% of the coordinates had excessive variance, the relative performance of RWA and GRF depended on the magnitude of the excessive variance: when the landmarks with excessive variance had standard deviations of at least four times the minimum (σ ≥ 4σ_min), GRF regularly outperformed RWA.

  3. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
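
    A brief sketch of the conventional Bayesian (expected a posteriori) ability estimate that such a method would approximate, assuming a 2PL response model, a standard normal prior, and hypothetical item parameters:

    ```python
    import numpy as np

    def eap_theta(responses, a, b, n_points=61):
        """Expected a posteriori ability estimate by grid quadrature."""
        theta = np.linspace(-4, 4, n_points)                        # quadrature nodes
        prior = np.exp(-0.5 * theta**2)                             # N(0,1), unnormalised
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))         # nodes x items
        like = np.prod(np.where(responses == 1, p, 1 - p), axis=1)  # likelihood per node
        post = like * prior
        return np.sum(theta * post) / np.sum(post)                  # posterior mean

    a = np.array([1.0, 1.2, 0.8])    # hypothetical discriminations
    b = np.array([-0.5, 0.0, 0.5])   # hypothetical difficulties
    print(eap_theta(np.array([1, 0, 1]), a, b))
    ```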

  4. A clinical evaluation of the ability of the Dentobuff method to estimate buffer capacity of saliva.

    PubMed

    Wikner, S; Nedlich, U

    1985-01-01

    The power of a colourimetric method to estimate the buffer capacity of saliva (Dentobuff) was compared with an electrometric method in 220 adults. The methods correlated well, but Dentobuff frequently underestimated high buffer values, which was considered to be of minor practical importance. Dentobuff identified groups with low, intermediate, and high buffer capacity as well as the electrometric method did.

  5. Ability Estimation for Conventional Tests.

    ERIC Educational Resources Information Center

    Kim, Jwa K.; Nicewander, W. Alan

    1993-01-01

    Bias, standard error, and reliability of five ability estimators were evaluated using Monte Carlo estimates of the unknown conditional means and variances of the estimators. Results indicate that estimates based on Bayesian modal, expected a posteriori, and weighted likelihood estimators were reasonably unbiased with relatively small standard…

  6. Estimation of the binding ability of main transport proteins of blood plasma with liver cirrhosis by the fluorescent probe method

    NASA Astrophysics Data System (ADS)

    Korolenko, E. A.; Korolik, E. V.; Korolik, A. K.; Kirkovskii, V. V.

    2007-07-01

    We present results from an investigation of the binding ability of the main transport proteins (albumin, lipoproteins, and α-1-acid glycoprotein) of blood plasma from patients at different stages of liver cirrhosis by the fluorescent probe method. We used the hydrophobic fluorescent probes anionic 8-anilinonaphthalene-1-sulfonate, which interacts in blood plasma mainly with albumin; cationic Quinaldine red, which interacts with α-1-acid glycoprotein; and neutral Nile red, which redistributes between lipoproteins and albumin in whole blood plasma. We show that the binding ability of albumin and α-1-acid glycoprotein to negatively charged and positively charged hydrophobic metabolites, respectively, increases in the compensation stage of liver cirrhosis. As the pathology process deepens and transitions into the decompensation stage, the transport abilities of albumin and α-1-acid glycoprotein decrease whereas the binding ability of lipoproteins remains high.

  7. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contribute toward number acuity. The visual estimation judgments of typically developing adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtering of unessential visual information. These surround masking results may help explain the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  8. Combining Climatic Projections and Dispersal Ability: A Method for Estimating the Responses of Sandfly Vector Species to Climate Change

    PubMed Central

    Fischer, Dominik; Moeller, Philipp; Thomas, Stephanie M.; Naucke, Torsten J.; Beierkuhnlein, Carl

    2011-01-01

    Background In the Old World, sandfly species of the genus Phlebotomus are known vectors of Leishmania, Bartonella and several viruses. Recent sandfly catches and autochthonous cases of leishmaniasis hint at spreading tendencies of the vectors towards Central Europe. However, studies addressing the potential future distribution of sandflies in the light of a changing European climate are missing. Methodology Here, we modelled bioclimatic envelopes using MaxEnt for five species with proven or assumed vector competence for Leishmania infantum, which are either predominantly located in (south-)western (Phlebotomus ariasi, P. mascittii and P. perniciosus) or south-eastern Europe (P. neglectus and P. perfiliewi). The determined bioclimatic envelopes were transferred to two climate change scenarios (A1B and B1) for Central Europe (Austria, Germany and Switzerland) using data of the regional climate model COSMO-CLM. We detected the most likely way of natural dispersal ("least-cost path") for each species and hence determined the accessibility of potential future climatically suitable habitats by integrating landscape features, projected changes in climatic suitability and wind speed. Results and Relevance Results indicate that the Central European climate will become increasingly suitable especially for those vector species with a current south-western focus of distribution. In general, the highest suitability of Central Europe is projected for all species in the second half of the 21st century, except for P. perfiliewi. Nevertheless, we show that sandflies will hardly be able to occupy their climatically suitable habitats entirely, due to their limited natural dispersal ability. A northward spread of species with a south-eastern focus of distribution may be constrained but not completely avoided by the Alps. Our results can be used to install specific monitoring systems in the projected risk zones of potential sandfly establishment. This is urgently needed for adaptation…

  9. Estimation abilities of large numerosities in Kindergartners

    PubMed Central

    Mejias, Sandrine; Schiltz, Christine

    2013-01-01

    The approximate number system (ANS) is thought to be a building block for the elaboration of formal mathematics. However, little is known about how this core system develops and whether it can be influenced by external factors at a young age (before the child enters formal numeracy education). The purpose of this study was to examine numerical magnitude representations of 5–6 year old children at 2 different moments of Kindergarten, considering children's early number competence as well as schools' socio-economic index (SEI). This study investigated estimation abilities of large numerosities using symbolic and non-symbolic output formats (8–64). In addition, we assessed symbolic and non-symbolic early number competence (1–12) at the end of the 2nd (N = 42) and the 3rd (N = 32) Kindergarten grade. By letting children freely produce estimates we observed surprising estimation abilities at a very young age (from 5 years on), extending far beyond children's explicit symbolic knowledge. Moreover, the time of testing had an impact on ANS accuracy, since 3rd Kindergarteners were more precise in both estimation tasks. Additionally, children who presented better exact symbolic knowledge were also those with the most refined ANS. However, this was true only for 3rd Kindergarteners, who were a few months from receiving math instruction. In a similar vein, higher SEI positively impacted only the oldest children's estimation abilities, whereas it played a role for exact early number competence already in 2nd and 3rd Kindergarteners. Our results support the view that approximate numerical representations are linked to exact number competence in young children before the start of formal math education and might thus serve as building blocks for mathematical knowledge. Since this core number system was also sensitive to external components such as the SEI, it can most probably be targeted and refined through specific educational strategies from preschool on.

  10. Time Estimation Abilities of College Students with ADHD

    ERIC Educational Resources Information Center

    Prevatt, Frances; Proctor, Briley; Baker, Leigh; Garrett, Lori; Yelland, Sherry

    2011-01-01

    Objective: To evaluate the time estimation abilities of college students with ADHD on a novel, complex task that approximated academically oriented activities. Method: In total, 20 college students with ADHD were compared to a sample of 20 non-ADHD students. Both groups completed a task, and scores were obtained for time to complete the task, errors…

  11. Estimating Premorbid Cognitive Abilities in Low-Educated Populations

    PubMed Central

    Apolinario, Daniel; Brucki, Sonia Maria Dozzi; Ferretti, Renata Eloah de Lucena; Farfel, José Marcelo; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Jacob-Filho, Wilson

    2013-01-01

    Objective To develop an informant-based instrument that would provide a valid estimate of premorbid cognitive abilities in low-educated populations. Methods A questionnaire was drafted by focusing on the premorbid period with a 10-year time frame. The initial pool of items was submitted to classical test theory and a factorial analysis. The resulting instrument, named the Premorbid Cognitive Abilities Scale (PCAS), is composed of questions addressing educational attainment, major lifetime occupation, reading abilities, reading habits, writing abilities, calculation abilities, use of widely available technology, and the ability to search for specific information. The validation sample was composed of 132 older Brazilian adults from the following three demographically matched groups: normal cognitive aging (n = 72), mild cognitive impairment (n = 33), and mild dementia (n = 27). The scores of a reading test and a neuropsychological battery were adopted as construct criteria. Post-mortem inter-informant reliability was tested in a sub-study with two relatives from each deceased individual. Results All items presented good discriminative power, with corrected item-total correlation varying from 0.35 to 0.74. The summed score of the instrument presented high correlation coefficients with global cognitive function (r = 0.73) and reading skills (r = 0.82). Cronbach's alpha was 0.90, showing optimal internal consistency without redundancy. The scores did not decrease across the progressive levels of cognitive impairment, suggesting that the goal of evaluating the premorbid state was achieved. The intraclass correlation coefficient was 0.96, indicating excellent inter-informant reliability. Conclusion The instrument developed in this study has shown good properties and can be used as a valid estimate of premorbid cognitive abilities in low-educated populations. The applicability of the PCAS, both as an estimate of premorbid intelligence and cognitive…
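
    A sketch of the two classical test theory statistics reported above (Cronbach's alpha and corrected item-total correlations), computed on simulated scores rather than the PCAS validation sample:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_subjects, n_items) score matrix."""
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    def corrected_item_total(items):
        """Correlation of each item with the sum of the remaining items."""
        total = items.sum(axis=1)
        return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                         for j in range(items.shape[1])])

    scores = np.random.default_rng(0).integers(0, 5, size=(132, 8))  # simulated data
    print(cronbach_alpha(scores))
    print(corrected_item_total(scores))
    ```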

  12. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)
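
    For reference, the generic shape of a quasi-optimal estimate of this kind (a standard Céa-type bound in an abstract norm, not the paper's specific result):

    ```latex
    \| u - u_h \|_{V} \;\le\; C \inf_{v_h \in V_h} \| u - v_h \|_{V}
    ```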

  13. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
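
    A minimal parametric-bootstrap sketch of such a corrected SE, assuming a 2PL model and treating the item calibration error as independent normal noise on each parameter estimate (all values hypothetical; the paper's resampling is derived from the calibration data):

    ```python
    import numpy as np

    def bootstrap_se_theta(responses, a_hat, b_hat, se_a, se_b, n_boot=1000, seed=0):
        """SE of the ML ability estimate with item calibration error propagated."""
        rng = np.random.default_rng(seed)
        grid = np.linspace(-4, 4, 161)

        def mle(a, b):
            # grid-search maximum likelihood estimate of ability
            p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))
            ll = (responses * np.log(p) + (1 - responses) * np.log1p(-p)).sum(axis=1)
            return grid[np.argmax(ll)]

        thetas = [mle(np.abs(rng.normal(a_hat, se_a)),   # redraw item parameters
                      rng.normal(b_hat, se_b))
                  for _ in range(n_boot)]
        return np.std(thetas, ddof=1)

    a_hat = np.array([1.0, 1.2, 0.8, 1.5])
    b_hat = np.array([-1.0, 0.0, 0.5, 1.0])
    print(bootstrap_se_theta(np.array([1, 1, 0, 0]), a_hat, b_hat, se_a=0.1, se_b=0.15))
    ```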

  14. Self-estimation of ability among skiers and snowboarders in alpine skiing resorts.

    PubMed

    Sulheim, Steinar; Ekeland, Arne; Bahr, Roald

    2007-05-01

    Skiing ability is thought to be an important risk factor for injuries, but the best method to classify skiing ability is not known. The objective of this study was to validate five different questions designed to self-report skiing ability for ski injury surveillance. To this end, 512 alpine skiers, Telemark skiers, snowboarders and skiboarders were asked to self-estimate their skiing ability using five different questions based on skiing skill, piste difficulty, turning technique, skiing experience and falling frequency, each with four categories. The participants then made a test run to assess their skiing ability. Observed and self-reported skiing ability were compared using kappa statistics. The correlation between observed and self-reported skiing ability was low to fair, with kappa values of 0.34 for skiing skill, 0.33 for piste difficulty, 0.38 for turning technique, 0.26 for experience and 0.16 for falling frequency. However, the sensitivity and specificity of each of the questions in discriminating between individuals in the poorest skiing ability category on the test and the rest of the group were relatively good (skiing skill: sensitivity 75%, specificity 91%; piste difficulty: 68%, 96%; turning technique: 75%, 91%; experience: 75%, 90%; falling frequency: 61%, 97%). The results show that the capacity to self-assess skiing ability is limited, but estimates based on turning technique or skiing skill seem to be the best methods for epidemiological studies of injuries in snow sports.
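
    A sketch of the agreement statistics used in the study (Cohen's kappa, plus sensitivity/specificity for flagging the poorest-ability category), on hypothetical category data:

    ```python
    import numpy as np

    def cohen_kappa(x, y, k):
        """Unweighted Cohen's kappa for two ratings on categories 0..k-1."""
        conf = np.zeros((k, k))
        for i, j in zip(x, y):
            conf[i, j] += 1
        n = conf.sum()
        po = np.trace(conf) / n                  # observed agreement
        pe = conf.sum(0) @ conf.sum(1) / n**2    # agreement expected by chance
        return (po - pe) / (1 - pe)

    rng = np.random.default_rng(0)
    observed = rng.integers(0, 4, 512)                             # test-run category
    reported = np.clip(observed + rng.integers(-1, 2, 512), 0, 3)  # noisy self-report
    print(cohen_kappa(observed, reported, 4))

    poorest_obs, poorest_rep = observed == 0, reported == 0
    sens = (poorest_obs & poorest_rep).sum() / poorest_obs.sum()
    spec = (~poorest_obs & ~poorest_rep).sum() / (~poorest_obs).sum()
    print(sens, spec)
    ```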

  15. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  16. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
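
    One simple estimator of the kind compared in the report (a sliding-window FFT peak), with a small Monte Carlo check of its error on a synthetic noisy chirp; the window length, hop and noise level are illustrative choices:

    ```python
    import numpy as np

    def sliding_fft_frequency(signal, fs, win=256, hop=32):
        """Instantaneous-frequency estimate from the peak of a sliding-window FFT."""
        freqs = np.fft.rfftfreq(win, d=1 / fs)
        times, f_inst = [], []
        for start in range(0, len(signal) - win, hop):
            seg = signal[start:start + win] * np.hanning(win)   # windowed segment
            f_inst.append(freqs[np.argmax(np.abs(np.fft.rfft(seg)))])
            times.append((start + win / 2) / fs)
        return np.array(times), np.array(f_inst)

    # Monte Carlo check of the estimator's error, in the spirit of the report's
    # a posteriori error verification (synthetic chirp: f(t) = 500 + 500 t Hz)
    fs = 1e4
    t = np.arange(0, 1, 1 / fs)
    errs = []
    for _ in range(20):
        x = np.sin(2 * np.pi * (500 * t + 250 * t**2)) + 0.3 * np.random.randn(t.size)
        times, f = sliding_fft_frequency(x, fs)
        errs.append(np.mean(np.abs(f - (500 + 500 * times))))
    print("mean |error| (Hz):", np.mean(errs))
    ```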

  17. An effective method for incoherent scattering radar's detecting ability evaluation

    NASA Astrophysics Data System (ADS)

    Lu, Ziqing; Yao, Ming; Deng, Xiaohua

    2016-06-01

    Ionospheric incoherent scatter radar (ISR), which is used to detect ionospheric electrons and ions, generally has megawatt-class transmission power and an antenna aperture on the order of a hundred meters. The crucial purpose of this detection technology is to obtain ionospheric parameters by acquiring the autocorrelation function and power spectrum of the echoes from the target ionospheric plasma. Because the ISR's echoes are very weak, owing to the small radar cross section of its target, estimating detecting ability is instructive and meaningful for ISR system design. In this paper, we evaluate detecting ability through the signal-to-noise ratio (SNR). The soft-target radar equation is derived in a form applicable to ISR, and data from the International Reference Ionosphere model are used to simulate the SNR of echoes, which is then compared with the SNR measured by the European Incoherent Scatter Scientific Association and the Advanced Modular Incoherent Scatter Radar. The simulation shows good consistency with the measured SNR. For ISR, this paper presents the first comparison between calculated SNR and radar measurements; detecting ability can be improved by increasing SNR. This effective method for evaluating ISR detecting ability provides a basis for radar system design.

  18. Ability Estimates That Order Individuals with Consistent Philosophies.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    Latent trait models introduced the concept of the latent trait, or ability, as distinct from the test score. There is a recent tendency to treat the test score as though it were a substitute for ability, largely because the test score is a convenient way to place individuals in order. F. Samejima (1969) has shown that, in general, the amount of…

  19. Effects of Scale Transformation and Test Termination Rule on the Precision of Ability Estimates in CAT. ACT Research Report Series.

    ERIC Educational Resources Information Center

    Yi, Qing; Wang, Tianyou; Ban, Jae-Chun

    Error indices (bias, standard error of estimation, and root mean square error) obtained on different scales of measurement under different test termination rules in a computerized adaptive test (CAT) context were examined. Four ability estimation methods were studied: (1) maximum likelihood estimation (MLE); (2) weighted likelihood estimation…

  20. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
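
    A sketch of the Jeffreys modal estimator by direct grid maximisation, assuming the 2PL model (for which Jeffreys' prior is proportional to the square root of the test information); the item parameters are hypothetical:

    ```python
    import numpy as np

    def jeffreys_modal(responses, a, b):
        """Maximise log-likelihood + 0.5*log(test information) over a grid."""
        grid = np.linspace(-4, 4, 801)
        p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))
        loglik = (responses * np.log(p) + (1 - responses) * np.log1p(-p)).sum(axis=1)
        info = (a**2 * p * (1 - p)).sum(axis=1)     # test information I(theta)
        return grid[np.argmax(loglik + 0.5 * np.log(info))]

    a = np.array([1.0, 1.3, 0.7, 1.1])
    b = np.array([-0.8, -0.2, 0.3, 0.9])
    print(jeffreys_modal(np.array([1, 0, 1, 0]), a, b))
    ```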

  1. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described, relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are gross cloudiness, accurate transparency information, and seeing. A key figure of merit is the network duty cycle, the fraction of time the Sun is visible to the network; alternative methods for computing this duty cycle are discussed. The duty cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  2. Development of the WAIS-III general ability index estimate (GAI-E).

    PubMed

    Lange, Rael T; Schoenberg, Mike R; Chelune, Gordon J; Scott, James G; Adams, Russell L

    2005-02-01

    The WAIS-III General Ability Index (GAI; Tulsky, Saklofske, Wilkins, & Weiss, 2001) is a recently developed, 6-subtest measure of global intellectual functioning. However, clinical use of the GAI is currently limited by the absence of a method to estimate premorbid functioning as measured by this index. The purpose of this study was to develop regression equations to estimate GAI scores from demographic variables and WAIS-III subtest performance. Participants were those subjects in the WAIS-III standardization sample who had complete demographic data (N=2,401); they were randomly divided into two groups. The first group (n=1,200) was used to develop the formulas (i.e., the Development group) and the second group (n=1,201) was used to validate the prediction algorithms (i.e., the Validation group). Demographic variables included age, education, ethnicity, gender and region of country. Subtest variables included vocabulary, information, picture completion, and matrix reasoning raw scores. Ten regression algorithms designed to estimate GAI were generated. The GAI-Estimate (GAI-E) algorithms accounted for 58% to 82% of the variance. The standard error of estimate ranged from 6.44 to 9.57. The correlations between actual and estimated GAI ranged from r=.76 to r=.90. These algorithms provided accurate estimates of GAI in the WAIS-III standardization sample. Implications for estimating GAI in patients with known or suspected neurological dysfunction are discussed and future research is proposed.
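
    A sketch of fitting and evaluating one regression algorithm of this kind on synthetic data (the actual equations use the WAIS-III standardization sample and a fuller set of demographic predictors; everything below is illustrative):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 1200
    X = np.column_stack([
        rng.integers(16, 90, n),    # age
        rng.integers(8, 21, n),     # years of education
        rng.integers(0, 67, n),     # vocabulary raw score
        rng.integers(0, 27, n),     # matrix reasoning raw score
    ])
    gai = 40 + 0.5 * X[:, 2] + 1.0 * X[:, 3] + rng.normal(0, 7, n)  # synthetic criterion

    model = LinearRegression().fit(X, gai)
    pred = model.predict(X)
    see = np.sqrt(np.mean((gai - pred) ** 2))   # standard error of estimate
    r = np.corrcoef(gai, pred)[0, 1]            # actual vs. estimated correlation
    print(see, r)
    ```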

  3. Comparing Different Approaches of Bias Correction for Ability Estimation in IRT Models. Research Report. ETS RR-08-13

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; Zhang, Jinming

    2008-01-01

    The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…

  4. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…

  5. Career Interests and Self-Estimated Abilities of Young Adults with Disabilities

    ERIC Educational Resources Information Center

    Turner, Sherri; Unkefer, Lesley Craig; Cichy, Bryan Ervin; Peper, Christine; Juang, Ju-Ping

    2011-01-01

    The purpose of this study was to ascertain vocational interests and self-estimated work-relevant abilities of young adults with disabilities. Results showed that young adults with both low incidence and high incidence disabilities have a wide range of interests and self-estimated work-relevant abilities that are comparable to those in the general…

  6. Unwed Fathers’ Ability to Pay Child Support: New Estimates Accounting for Multiple-Partner Fertility

    PubMed Central

    Sinkewicz, Marilyn; Garfinkel, Irwin

    2009-01-01

    We present new estimates of unwed fathers’ ability to pay child support. Prior research relied on surveys that drastically undercounted nonresident unwed fathers and provided no link to their children who lived in separate households. To overcome these limitations, previous research assumed assortative mating and that each mother partnered with one father who was actually eligible to pay support and had no other child support obligations. Because the Fragile Families and Child Wellbeing Study contains data on couples, multiple-partner fertility, and a rich array of other previously unmeasured characteristics of fathers, it is uniquely suited to address the limitations of previous research. We also use an improved method of dealing with missing data. Our findings suggest that previous research overestimated the aggregate ability of unwed nonresident fathers to pay child support by 33% to 60%. PMID:21305392

  7. A Study of Frequency Estimation Equipercentile Equating When There Are Large Ability Differences. Research Report. ETS RR-09-45

    ERIC Educational Resources Information Center

    Guo, Hongwen; Oh, Hyeonjoo J.

    2009-01-01

    In operational equating, frequency estimation (FE) equipercentile equating is often excluded from consideration when the old and new groups have a large ability difference. This convention may, in some instances, cause the exclusion of one competitive equating method from the set of methods under consideration. In this report, we study the…

  8. Is Bayesian Estimation Proper for Estimating the Individual's Ability? Research Report 80-3.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    The effect of prior information in Bayesian estimation is considered, mainly from the standpoint of objective testing. In the estimation of a parameter belonging to an individual, the prior information is, in most cases, the density function of the population to which the individual belongs. Bayesian estimation was compared with maximum likelihood…

  9. Nonlinear Regression Methods for Estimation

    DTIC Science & Technology

    2005-09-01

    …accuracy when the geometric dilution of precision (GDOP) causes collinearity, which in turn brings about poor position estimates. The main goal is… Many measurements are needed to wash out the measurement noise. Furthermore, the measurement arrangement's geometry (GDOP) strongly impacts the achievable accuracy. (Recovered index terms: Gauss-Newton algorithm; geometric dilution of precision, see GDOP; initial parameter estimate; iterative least squares, see ILS; Kalman filtering.)

  10. An improved method of monopulse estimation in PD radar

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Guo, Peng; Lei, Peng; Wei, Shaoming

    2011-10-01

    Monopulse estimation is an angle measurement method with high data rate, measurement precision and anti-jamming ability, since the angle information of the target is obtained by comparing echoes received in two or more simultaneous antenna beams. However, the data rate of this method decreases due to coherent integration when applied in pulse Doppler (PD) radar. This paper presents an improved method of monopulse estimation in PD radar in which the received echoes are selected by shifting before coherent integration, detection and angle measurement. It can increase the data rate while maintaining angle measurement precision. The validity of this method is verified by theoretical analysis and simulation results.
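
    A toy sketch of the sum/difference (monopulse) ratio estimate with coherent integration before the angle measurement; the monopulse slope k, noise level and pulse count are illustrative assumptions:

    ```python
    import numpy as np

    def monopulse_angle(sum_beam, diff_beam, k):
        """Off-boresight angle from the real part of the difference/sum ratio;
        k is the calibrated monopulse slope."""
        return np.real(diff_beam / sum_beam) / k

    N, true_angle, k = 64, 0.2, 1.8                 # angle in beamwidths
    rng = np.random.default_rng(0)
    noise = 0.5 * (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N)))
    sum_echo = (1.0 + noise[0]).mean()              # coherent integration of N pulses
    diff_echo = (k * true_angle + noise[1]).mean()
    print(monopulse_angle(sum_echo, diff_echo, k))  # should be near 0.2
    ```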

  11. Methods of Estimating Strategic Intentions

    DTIC Science & Technology

    1982-05-01

    1. …of events, coding categories. 2. Weighting Data: policy capturing, Bayesian methods, correlation and variance analysis. 3. Characterizing Data: memory aids, fuzzy sets, factor analysis. 4. Assessing Covariations: actuarial models, backcasting, bootstrapping. 5. Cause and Effect Assessment: causal search, causal analysis, search trees, stepping analysis, hypothesis, regression analysis. 6. Predictions: backcasting, bootstrapping, decision…

  12. A Longitudinal Analysis of Estimation, Counting Skills, and Mathematical Ability across the First School Year

    ERIC Educational Resources Information Center

    Muldoon, Kevin; Towse, John; Simms, Victoria; Perra, Oliver; Menzies, Victoria

    2013-01-01

    In response to claims that the quality (and in particular linearity) of children's mental representation of number acts as a constraint on number development, we carried out a longitudinal assessment of the relationships between number line estimation, counting, and mathematical abilities. Ninety-nine 5-year-olds were tested on 4 occasions at 3…

  13. Effect of Rasch Calibration on Ability and DIF Estimation in Computer-Adaptive Tests.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; And Others

    1995-01-01

    In a simulation study of ability estimation and differential item functioning (DIF) estimation in computerized adaptive tests, Rasch-based DIF statistics were highly correlated with generating DIF, but the DIF statistics tended to be slightly smaller than in the three-parameter logistic model analyses. (SLD)

  14. TH-SCORE: A Program for Obtaining Ability Estimates under Different Psychometric Models.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Lorenzo, Urbano

    1998-01-01

    A program for obtaining ability estimates and their standard errors under a variety of psychometric models is documented. The general models considered are (1) classical test theory; (2) item factor analysis for continuous censored responses; and (3) unidimensional and multidimensional item response theory graded response models. (SLD)

  15. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  16. Cross-Validation of the Quick Word Test as an Estimator of Adult Mental Ability

    ERIC Educational Resources Information Center

    Grotelueschen, Arden; McQuarrie, Duncan

    1970-01-01

    This report provides additional evidence that the Quick Word Test (Level 2, Form AM) is valid for estimating adult mental ability as defined by the Wechsler Adult Intelligence Scale. The validation sample is also described to facilitate use of the conversion table developed in the cross-validation analysis. (Author/LY)

  17. Brief Report: Use of DQ for Estimating Cognitive Ability in Young Children with Autism

    ERIC Educational Resources Information Center

    Delmolino, Lara M.

    2006-01-01

    The utility of Developmental Quotients (DQ) from the Psychoeducational Profile--Revised (PEP-R) to estimate cognitive ability in young children with autism was assessed. DQ scores were compared to scores from the Stanford-Binet Intelligence Scales--Fourth Edition (SB-FE) for 27 preschool students with autism. Overall and domain DQs on the PEP-R…

  18. Trace Metals Monitoring In Water: Ability of Dgt Measurements For The Estimation of Bioavailability.

    NASA Astrophysics Data System (ADS)

    Gilbin, R.; Bakkaus, E.; Tusseau-Vuillemin, M.-H.

    The European Water Framework Directive points out the need for characterisation and monitoring of river waters in order to review the impact of human activity. Concerning trace metals pollution, it is now well established that the analysis of total concentrations does not provide a good estimation of aquatic ecosystems' exposure. Trace metals bioavailability depends on their speciation, i.e. their distribution among different forms (oxidation state, complexation with various ligands). Among these species, only the species reactive at cell surfaces are regarded as bioavailable (hydrated metallic ions and kinetically labile metal complexes). In this context, trace metals bioavailability was theorised by the formulation of the 'Free Ion Activity Model' and the 'Biotic Ligand Model'. However, analytical methods used for the estimation of the labile fraction of trace metals generally require a delicate calibration and are not easily usable for field studies. Recently, a new technique for the measurement of effective hazardous metal concentrations was developed: Diffusion Gradients in Thin Films (Davison and Zhang, 1994). It allows some of the difficulties related to the traditional techniques to be avoided, especially for in situ studies and monitoring; several in situ studies with this technique gave encouraging results. But prior to proposing this approach for wide use in trace metals monitoring in rivers, we still have to validate the technique by laboratory studies, by model simulations and by enlarged experience in field studies. The aim of our work was to compare the experimental measurement by the DGT method with the measurement of biological effects by bioassays (acute toxicity) and the evaluation of free ion concentrations by chemical modelling (MINEQL+). Bioavailability of trace metals (Cu, Cd) in water was studied in the presence of characterised ligands (inorganic ligands, citrate, EDTA, standard humic substances). The results obtained should allow…

  19. Estimating Turbulent Surface Fluxes from Small Unmanned Aircraft: Evaluation of Current Abilities

    NASA Astrophysics Data System (ADS)

    de Boer, G.; Lawrence, D.; Elston, J.; Cassano, J. J.; Mack, J.; Wildmann, N.; Nigro, M. A.; Ivey, M.; Wolfe, D. E.; Muschinski, A.

    2014-12-01

    Heat transfer between the atmosphere and Earth's surface represents a key component to understanding Earth's energy balance, making it important in understanding and simulating climate. Arguably, the oceanic air-sea interface and polar sea-ice-air interface are amongst the most challenging in which to measure these fluxes. This difficulty results partially from challenges associated with infrastructure deployment on these surfaces and partially from an inability to obtain spatially representative values over a potentially inhomogeneous surface. Traditionally, sensible (temperature) and latent (moisture) fluxes are estimated using one of several techniques. A preferred method involves eddy-correlation, where the cross-correlation between anomalies in vertical motion (w) and temperature (T) or moisture (q) is used to estimate heat transfer. High-frequency measurements of these quantities can be derived using tower-mounted instrumentation. Such systems have historically been deployed over land surfaces or on ships and buoys to calculate fluxes at the air-land or air-sea interface, but such deployments are expensive and challenging to execute, resulting in a lack of spatially diverse measurements. A second ("bulk") technique involves the observation of horizontal windspeed, temperature and moisture at a given altitude over an extended time period in order to estimate the surface fluxes. Small Unmanned Aircraft Systems (sUAS) represent a unique platform from which to derive these fluxes. These sUAS can be small (~1 m), lightweight (~700 g), low cost (~$2000) and relatively easy to deploy to remote locations and over inhomogeneous surfaces. We will give an overview of the ability of sUAS to provide measurements necessary for estimating surface turbulent fluxes. This discussion is based on flights in the vicinity of the 1000 ft Boulder Atmospheric Observatory (BAO) tower, and over the US Department of Energy facility at Oliktok Point, Alaska. We will present initial comparisons…
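
    For reference, a minimal sketch of the eddy-correlation estimate described above, computing sensible heat flux from high-rate vertical wind and temperature series (constant air density and heat capacity assumed; the data here are synthetic):

    ```python
    import numpy as np

    def sensible_heat_flux(w, T, rho=1.2, cp=1005.0):
        """H = rho * cp * mean(w'T') from vertical wind (m/s) and temperature (K)."""
        return rho * cp * np.mean((w - w.mean()) * (T - T.mean()))

    rng = np.random.default_rng(0)
    w = 0.4 * rng.standard_normal(36000)                      # synthetic 10 Hz wind, 1 h
    T = 293.0 + 0.1 * w + 0.2 * rng.standard_normal(36000)    # correlated temperature
    print(sensible_heat_flux(w, T), "W/m^2")
    ```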

  20. The Effects of Answer Copying on the Ability Level Estimates of Cheater Examinees in Answer Copying Pairs

    ERIC Educational Resources Information Center

    Zopluoglu, Cengiz; Davenport, Ernest C., Jr.

    2011-01-01

    The purpose of this study was to examine the effects of answer copying on the ability level estimates of cheater examinees in answer copying pairs. The study generated answer copying pairs for each of 1440 conditions, source ability (12) x cheater ability (12) x amount of copying (10). The average difference between the ability level estimates…

  1. Estimation of work capacity and work ability among plantation workers in South India

    PubMed Central

    Anbazhagan, Suguna; Ramesh, Naveen; Surekha, A; Fathima, Farah N.; Melina; Anjali

    2016-01-01

    Background: Work capacity is the ability to perform real physical work, and work ability is the result of the interaction between the worker and his or her work: how good the worker is at present and in the near future, and how able he or she is to do the work given its demands and his or her health and mental resources. Objective: To assess work capacity and work ability, and to study the factors associated with them, among workers at a tea plantation in South India. Materials and Methods: A cross-sectional study was conducted at a tea plantation in Annamalai, South India, from March to May 2015. Data were collected using a structured interview schedule comprising three parts: sociodemographic data, a work ability questionnaire, and a work capacity assessment. Results: Of the 199 subjects who participated in the study, the largest group [90 (45.3%)] was in the age group of 46–55 years, and 128 (64.3%) were females. Of the 199 workers, 12.6% had poor aerobic capacity (by the Harvard Step Test), 88.4% had an endurance of more than 1 h, and 70.9% had better work productivity and energetic efficiency; as voluntary activity, workers spent most of their time on household chores. Of the 199 workers assessed, only 9.6% had good work ability. There was a negative correlation between work ability and body mass index (BMI). Conclusion: Our study found 12.6% of workers with poor aerobic capacity and 9.6% of workers with good work ability. Periodic health examinations and other screening procedures should be made routine in the workplace to improve work ability and capacity. PMID:28194080

  2. Standard methods for spectral estimation and prewhitening

    SciTech Connect

    Stearns, S.D.

    1986-07-01

    A standard FFT periodogram-averaging method for power spectral estimation is described in detail, with examples that the reader can use to verify his own software. The parameters that must be specified in order to repeat a given spectral estimate are listed. A standard technique for prewhitening is also described, again with repeatable examples and a summary of the parameters that must be specified.
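
    A repeatable example in the spirit of the report, using Welch's periodogram averaging (scipy) plus a simple first-order prewhitening step; the segment length, overlap, window and test signal are exactly the parameters that would need to be reported for the estimate to be repeatable:

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 60 * t) + np.random.randn(t.size)   # tone in white noise

    # Periodogram averaging (Welch's method)
    f, pxx = welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)

    # First-order prewhitening: remove the lag-1 correlation before estimation
    a1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    x_white = x[1:] - a1 * x[:-1]
    f_w, pxx_w = welch(x_white, fs=fs, window="hann", nperseg=1024, noverlap=512)
    ```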

  3. Source estimation methods for atmospheric dispersion

    NASA Astrophysics Data System (ADS)

    Shankar Rao, K.

    Both forward and backward transport modeling methods are being developed for characterization of sources in atmospheric releases of toxic agents. Forward modeling methods, which describe the atmospheric transport from sources to receptors, use forward-running transport and dispersion models or computational fluid dynamics models which are run many times, and the resulting dispersion field is compared to observations from multiple sensors. Forward modeling methods include Bayesian updating and inference schemes using stochastic Monte Carlo or Markov Chain Monte Carlo sampling techniques. Backward or inverse modeling methods use only one model run in the reverse direction from the receptors to estimate the upwind sources. Inverse modeling methods include adjoint and tangent linear models, Kalman filters, and variational data assimilation, among others. This survey paper discusses these source estimation methods and lists the key references. The need for assessing uncertainties in the characterization of sources using atmospheric transport and dispersion models is emphasized.

  4. Estimating Ability with Three Item Response Models when the Models are Wrong and Their Parameters are Inaccurate.

    ERIC Educational Resources Information Center

    Jones, Douglas H.; And Others

    How accurately ability is estimated when the test model does not fit the data is considered. To address this question, this study investigated the accuracy of the maximum likelihood estimator of ability for the one-, two- and three-parameter logistic (PL) models. The models were fitted into generated item characteristic curves derived from the…

  5. The Sensitivity of Parameter Estimates to the Latent Ability Distribution. Research Report. ETS RR-11-40

    ERIC Educational Resources Information Center

    Xu, Xueli; Jia, Yue

    2011-01-01

    Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…

  6. [Medicolegal aspects of driving ability and discussion of study methods].

    PubMed

    Berghaus, G

    2008-06-01

    Medicolegal aspects of driving ability primarily concern patients themselves, because they are responsible when driving in traffic while under drug treatment. Pain patients taking analgesic medication prescribed by a doctor do not commit an offence, insofar as they are able to drive. A doctor's main duty consists of informing the patient about the way a given disease or drug intake affects driving ability. Patients have the duty to inform themselves about the drug they are taking and to assess their driving ability each time before they drive a car.

  7. Some Critical Observations of the Test Information Function as a Measure of Local Accuracy in Ability Estimation.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1994-01-01

    Using the constant information model, constant amounts of test information, and a finite interval of ability, simulated data were produced for 8 ability levels and 20 numbers of test items. Analyses suggest that it is desirable to consider modifying test information functions when they measure accuracy in ability estimation. (SLD)

  8. A simple method to estimate interwell autocorrelation

    SciTech Connect

    Pizarro, J.O.S.; Lake, L.W.

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
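
    Two of the three semivariogram models considered (the truncated fractal form is omitted here), written out as code with illustrative range and sill values:

    ```python
    import numpy as np

    def spherical(h, range_, sill):
        """Spherical semivariogram: rises to the sill at h = range_."""
        h = np.minimum(np.asarray(h, float), range_)
        return sill * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)

    def exponential(h, range_, sill):
        """Exponential semivariogram: approaches the sill asymptotically."""
        return sill * (1 - np.exp(-np.asarray(h, float) / range_))

    print(spherical([0.5, 1.0, 2.0], range_=1.0, sill=1.0))
    print(exponential([0.5, 1.0, 2.0], range_=1.0, sill=1.0))
    ```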

  9. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  10. Karatsuba's method for estimating Kloosterman sums

    NASA Astrophysics Data System (ADS)

    Korolev, M. A.

    2016-08-01

    Using Karatsuba's method, we obtain estimates for Kloosterman sums modulo a prime, in which the number of terms is less than an arbitrarily small fixed power of the modulus. These bounds refine similar results obtained earlier by Bourgain and Garaev. Bibliography: 16 titles.

  11. A Flexible Method of Estimating Luminosity Functions

    NASA Astrophysics Data System (ADS)

    Kelly, Brandon C.; Fan, Xiaohui; Vestergaard, Marianne

    2008-08-01

    We describe a Bayesian approach to estimating luminosity functions. We derive the likelihood function and posterior probability distribution for the luminosity function, given the observed data, and we compare the Bayesian approach with maximum likelihood by simulating sources from a Schechter function. For our simulations confidence intervals derived from bootstrapping the maximum likelihood estimate can be too narrow, while confidence intervals derived from the Bayesian approach are valid. We develop our statistical approach for a flexible model where the luminosity function is modeled as a mixture of Gaussian functions. Statistical inference is performed using Markov chain Monte Carlo (MCMC) methods, and we describe a Metropolis-Hastings algorithm to perform the MCMC. The MCMC simulates random draws from the probability distribution of the luminosity function parameters, given the data, and we use a simulated data set to show how these random draws may be used to estimate the probability distribution for the luminosity function. In addition, we show how the MCMC output may be used to estimate the probability distribution of any quantities derived from the luminosity function, such as the peak in the space density of quasars. The Bayesian method we develop has the advantage that it is able to place accurate constraints on the luminosity function even beyond the survey detection limits, and that it provides a natural way of estimating the probability distribution of any quantities derived from the luminosity function, including those that rely on information beyond the survey detection limits.
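
    A minimal random-walk Metropolis-Hastings core of the kind the authors describe, shown on a toy two-parameter posterior rather than the paper's mixture-of-Gaussians luminosity function:

    ```python
    import numpy as np

    def metropolis(log_post, theta0, prop_sd, n_steps=20000, seed=1):
        """Random-walk Metropolis sampler for an unnormalised log-density."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, float)
        lp = log_post(theta)
        samples = []
        for _ in range(n_steps):
            prop = theta + prop_sd * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
                theta, lp = prop, lp_prop
            samples.append(theta.copy())
        return np.array(samples)

    # Toy posterior: Gaussian likelihood with unknown mean and log-sigma, flat prior
    data = np.random.default_rng(0).normal(1.0, 2.0, size=200)
    def log_post(th):
        mu, log_sd = th
        return -data.size * log_sd - 0.5 * np.sum((data - mu) ** 2) / np.exp(2 * log_sd)

    draws = metropolis(log_post, theta0=[0.0, 0.0], prop_sd=0.1)
    print(draws[5000:].mean(axis=0))   # posterior means after burn-in
    ```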

  12. Effects of linguistic complexity and accommodations on estimates of ability for students with learning disabilities.

    PubMed

    Cawthon, Stephanie W; Kaye, Alyssa D; Lockhart, L Leland; Beretvas, S Natasha

    2012-06-01

    Many students with learning disabilities (SLD) participate in standardized assessments using test accommodations such as extended time, having the test items read aloud, or taking the test in a separate setting. Yet there are also aspects of the test items themselves, particularly the language demand, which may contribute to the effects of test accommodations. This study entailed an analysis of linguistic complexity (LC) and accommodation use for SLD in grade four on 2005 National Assessment of Educational Progress (NAEP) reading and mathematics items. The purpose of this study was to investigate (a) the effects of test item LC on reading and mathematics item difficulties for SLD; (b) the impact of accommodations (presentation, response, setting, or timing) on estimates of student ability, after controlling for LC effects; and (c) the impact of differential facet functioning (DFF), a person-by-item-descriptor interaction, on estimates of student ability, after controlling for LC and accommodations' effects. For both reading and mathematics, the higher an item's LC, the more difficult it was for SLD. After controlling for differences due to accommodations, LC was not a significant predictor of mathematics items' difficulties, but it remained a significant predictor for reading items. There was no effect of accommodations on mathematics item performance, but for reading items, students who received presentation and setting accommodations scored lower than those who did not. No significant LC-by-accommodation interactions were found for either subject area, indicating that the effect of LC did not depend on the type of accommodation received.

  13. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.

  14. Computer Adaptive Practice of Maths Ability Using a New Item Response Model for on the Fly Ability and Difficulty Estimation

    ERIC Educational Resources Information Center

    Klinkenberg, S.; Straatemeier, M.; van der Maas, H. L. J.

    2011-01-01

    In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of…
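
    A sketch of the basic Elo update behind such a system; the Maths Garden scoring rule additionally incorporates response time, which this simplification omits:

    ```python
    def elo_update(ability, difficulty, score, k=0.4):
        """One paired update of a student rating and an item rating.

        score: observed result (1 = correct, 0 = incorrect)."""
        expected = 1.0 / (1.0 + 10 ** (difficulty - ability))   # P(correct)
        ability += k * (score - expected)
        difficulty -= k * (score - expected)    # item moves opposite to the student
        return ability, difficulty

    print(elo_update(ability=0.0, difficulty=0.3, score=1))
    ```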

  15. Implicit solvent methods for free energy estimation

    PubMed Central

    Decherchi, Sergio; Masetti, Matteo; Vyalov, Ivan; Rocchia, Walter

    2014-01-01

    Solvation is a fundamental contribution in many biological processes and especially in molecular binding. Its estimation can be performed by means of several computational approaches. The aim of this review is to give an overview of existing theories and methods to estimate solvent effects, with a specific focus on the category of implicit solvent models and their use in molecular dynamics. In many of these models, the solvent is considered a continuous homogeneous medium, while the solute can be represented in atomic detail and at different levels of theory. Despite their degree of approximation, implicit methods are still widely employed due to their trade-off between accuracy and efficiency. Their derivation is rooted in the statistical mechanics and integral equations disciplines, some of the related details being provided here. Finally, methods that combine implicit solvent models and molecular dynamics simulation are briefly described. PMID:25193298

  16. Developing Writing-Reading Abilities through Semiglobal Methods

    ERIC Educational Resources Information Center

    Macri, Cecilia; Bocos, Musata

    2013-01-01

    This research was intended to underline the importance of the semi-global strategies used within thematic projects for developing writing/reading abilities in first grade pupils. Four different coordinates were chosen to be the main variables of this research: the level of phonological awareness, the degree in which writing-reading…

  17. New Testing Methods to Assess Technical Problem-Solving Ability.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    Tests to assess problem-solving ability being provided for the Air Force are described, and some details on the development and validation of these computer-administered diagnostic achievement tests are discussed. Three measurement approaches were employed: (1) sequential problem solving; (2) context-free assessment of fundamental skills and…

  18. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2009-12-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control law in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the variation of the inclination angle using the fixed trace method, a recursive least squares technique. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, separated by 10 s stationary intervals, with the neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle was measured by an optical motion capture system. Three conditions were introduced to simulate unstable standing postures: 1) an eyes-open posture as the healthy condition, 2) an eyes-closed posture to simulate visual impairment, and 3) a one-legged posture to simulate lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements were subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
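
    A sketch of recursive least squares tracking the time-varying gains; a constant forgetting factor stands in for the paper's fixed-trace variant, and the signals and true gains (KP = 300, KD = 30) are synthetic:

    ```python
    import numpy as np

    def rls(phi, y, lam=0.98, delta=100.0):
        """Recursive least squares with forgetting factor lam for y_t = phi_t . theta,
        returning the estimate history (here theta = [KP, KD])."""
        theta = np.zeros(phi.shape[1])
        P = delta * np.eye(phi.shape[1])
        history = []
        for phi_t, y_t in zip(phi, y):
            k = P @ phi_t / (lam + phi_t @ P @ phi_t)   # gain vector
            theta = theta + k * (y_t - phi_t @ theta)   # estimate update
            P = (P - np.outer(k, phi_t @ P)) / lam      # covariance update
            history.append(theta.copy())
        return np.array(history)

    rng = np.random.default_rng(0)
    angle = 0.1 * np.sin(np.linspace(0, 10, 1000))              # synthetic inclination
    vel = np.gradient(angle, 0.01)                              # angular velocity
    torque = 300 * angle + 30 * vel + rng.normal(0, 0.1, 1000)  # PD torque + noise
    print(rls(np.column_stack([angle, vel]), torque)[-1])       # final [KP, KD] estimate
    ```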

  20. Clustering method for estimating principal diffusion directions

    PubMed Central

    Nazem-Zadeh, Mohammad-Reza; Jafari-Khouzani, Kourosh; Davoodi-Bojd, Esmaeil; Jiang, Quan; Soltanian-Zadeh, Hamid

    2012-01-01

    Diffusion tensor magnetic resonance imaging (DTMRI) is a non-invasive tool for the investigation of white matter structure within the brain. However, the traditional tensor model is unable to characterize anisotropies of orders higher than two in heterogeneous areas containing more than one fiber population. To resolve this issue, high angular resolution diffusion imaging (HARDI) with a large number of diffusion encoding gradients is used along with reconstruction methods such as Q-ball. Using HARDI data, the fiber orientation distribution function (ODF) on the unit sphere is calculated and used to extract the principal diffusion directions (PDDs). Fast and accurate estimation of PDDs is a prerequisite for tracking algorithms that deal with fiber crossings. In this paper, the PDDs are defined as the directions around which the ODF data are concentrated. Estimates of the PDDs based on this definition are less sensitive to noise than previous approaches. A clustering approach to estimating the PDDs is proposed, an extension of fuzzy c-means clustering developed for the orientation of points on a sphere. The minimum description length (MDL) principle is used to estimate the number of PDDs. Using both simulated and real diffusion data, the proposed method has been evaluated and compared with several previous protocols. Experimental results show that the proposed clustering algorithm is more accurate, more resistant to noise, and faster than some techniques currently in use. PMID:21642005
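
    The abstract names a fuzzy c-means extension for points on a sphere; the sketch below is a rough, assumed rendering of that idea (the antipodal-symmetry handling and ODF weighting are guesses, not the authors' algorithm).

    ```python
    import numpy as np

    def spherical_fuzzy_cmeans(dirs, odf, n_clusters, m=2.0, n_iter=50, seed=0):
        """Fuzzy c-means for directions on the unit sphere.

        dirs : (N, 3) unit vectors sampling the sphere
        odf  : (N,) ODF amplitudes used to weight each direction
        Antipodal symmetry of diffusion is respected by using |dot products|.
        Returns cluster centers (candidate PDDs) and memberships.
        """
        rng = np.random.default_rng(seed)
        centers = dirs[rng.choice(len(dirs), n_clusters, replace=False)].copy()
        for _ in range(n_iter):
            cos_sim = np.abs(dirs @ centers.T)                    # (N, K)
            dist = np.arccos(np.clip(cos_sim, 0.0, 1.0)) + 1e-9   # angular distance
            ratio = dist[:, :, None] / dist[:, None, :]
            u = 1.0 / np.sum(ratio ** (2.0 / (m - 1.0)), axis=2)  # fuzzy memberships
            for k in range(n_clusters):
                signs = np.sign(dirs @ centers[k])                # flip to one hemisphere
                signs[signs == 0] = 1.0
                v = ((u[:, k] ** m) * odf * signs) @ dirs         # weighted mean direction
                centers[k] = v / (np.linalg.norm(v) + 1e-12)
        return centers, u
    ```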

  1. Child survivorship estimation: methods and data analysis.

    PubMed

    Feeney, G

    1991-01-01

    "The past 20 years have seen extensive elaboration, refinement, and application of the original Brass method for estimating infant and child mortality from child survivorship data. This experience has confirmed the overall usefulness of the methods beyond question, but it has also shown that...estimates must be analyzed in relation to other relevant information before useful conclusions about the level and trend of mortality can be drawn.... This article aims to illustrate the importance of data analysis through a series of examples, including data for the Eastern Malaysian state of Sarawak, Mexico, Thailand, and Indonesia. Specific maneuvers include plotting completed parity distributions and 'time-plotting' mean numbers of children ever born from successive censuses. A substantive conclusion of general interest is that data for older women are not so widely defective as generally supposed."

  2. A method for estimating soil moisture availability

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1985-01-01

    A method for estimating values of soil moisture based on measurements of infrared surface temperature is discussed. A central element in the method is a boundary layer model. Although it has been shown that soil moistures determined by this method using satellite measurements do correspond in a coarse fashion to the antecedent precipitation, the accuracy and exact physical interpretation (with respect to ground water amounts) are not well known. This area of ignorance, which currently impedes the practical application of the method to problems in hydrology, meteorology and agriculture, is largely due to the absence of corresponding surface measurements. Preliminary field measurements made over France have led to the development of a promising vegetation formulation (Taconet et al., 1985), which has been incorporated in the model. It is necessary, however, to test the vegetation component, and the entire method, over a wide variety of surface conditions and crop canopies.

  3. Comparative yield estimation via shock hydrodynamic methods

    SciTech Connect

    Attia, A.V.; Moran, B.; Glenn, L.A.

    1991-06-01

    Shock time-of-arrival (TOA) data (CORRTEX) from recent underground nuclear explosions in saturated tuff were used to estimate yield via the simulated explosion-scaling method. The sensitivity of the derived yield to uncertainties in the measured shock Hugoniot, release adiabats, and gas porosity is the main focus of this paper. In this method for determining yield, we assume a point-source explosion in an infinite homogeneous material. The constitutive model for the rock is formulated using laboratory experiments on core samples taken prior to the explosion. Results show that increasing gas porosity from 0% to 2% causes a 15% increase in yield per ms/kt^(1/3). 6 refs., 4 figs.

  4. Improvement of Source Number Estimation Method for Single Channel Signal

    PubMed Central

    Du, Bolun; He, Yunze

    2016-01-01

    Source number estimation methods for single channel signals are investigated and improvements for each method are suggested in this work. Firstly, the single channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin’s disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR, but cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, yet its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of both original methods is substantially improved. PMID:27736959
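
    A minimal sketch of the delay-embedding plus MDL pipeline the abstract outlines follows; the diagonal-loading constant, delay choice, and function names are illustrative assumptions, not the paper's code.

    ```python
    import numpy as np

    def delay_embed(x, p, delay=1):
        """Convert a single-channel series into p pseudo-channels by delays."""
        n = len(x) - (p - 1) * delay
        return np.stack([x[i * delay : i * delay + n] for i in range(p)])

    def mdl_source_count(X, loading=0.01):
        """Wax-Kailath MDL source-number estimate with diagonal loading.

        X : (p, N) pseudo-multichannel data from delay_embed.
        """
        p, N = X.shape
        R = X @ X.conj().T / N                               # sample covariance
        R += loading * np.trace(R).real / p * np.eye(p)      # diagonal loading
        eigs = np.sort(np.linalg.eigvalsh(R))[::-1]
        mdl = np.empty(p)
        for k in range(p):
            tail = eigs[k:]
            geo, ari = np.exp(np.mean(np.log(tail))), np.mean(tail)
            mdl[k] = -N * (p - k) * np.log(geo / ari) + 0.5 * k * (2 * p - k) * np.log(N)
        return int(np.argmin(mdl))

    # e.g. n_sources = mdl_source_count(delay_embed(received_signal, p=8))
    ```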

  5. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that sound decisions can be made. Because of the long lead times required to develop space hardware, cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.

  6. Estimation of phenotypic variability in symbiotic nitrogen fixation ability of common bean under drought stress using (15)N natural abundance in grain.

    PubMed

    Polania, Jose; Poschenrieder, Charlotte; Rao, Idupulapati; Beebe, Stephen

    2016-09-01

    Common bean (Phaseolus vulgaris L.) is the most important food legume, cultivated by small farmers and usually exposed to unfavorable conditions with minimum use of inputs. Drought and low soil fertility, especially phosphorus and nitrogen (N) deficiencies, are major limitations to bean yield in smallholder systems. Beans can derive part of their required N from the atmosphere through symbiotic nitrogen fixation (SNF). Drought stress severely limits the SNF ability of plants. The main objectives of this study were to: (i) test and validate the use of (15)N natural abundance in grain to quantify phenotypic differences in SNF ability for its implementation in breeding programs of common bean with bush growth habit aiming to improve SNF, and (ii) quantify phenotypic differences in SNF under drought to identify superior genotypes that could serve as parents. Field studies were conducted at CIAT-Palmira, Colombia, using a set of 36 bean genotypes belonging to the Middle American gene pool, evaluated in two seasons with two levels of water supply (irrigated and drought stress). We used the (15)N natural abundance method to compare SNF ability estimated from shoot tissue sampled at the mid-pod filling growth stage vs. grain tissue sampled at harvest. Our results showed a positive and significant correlation between nitrogen derived from the atmosphere (%Ndfa) estimated using shoot tissue at mid-pod filling and %Ndfa estimated using grain tissue at harvest. Both methods showed phenotypic variability in SNF ability under both drought and irrigated conditions, and a significant reduction in SNF ability was observed under drought stress. We suggest that the method of estimating Ndfa using grain tissue (Ndfa-G) could be applied in bean breeding programs to improve SNF ability. Using this method of Ndfa-G, we identified four bean lines (RCB 593, SEA 15, NCB 226 and BFS 29) that combine greater SNF ability with greater grain yield under drought stress; these could serve as potential parents.
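
    For reference, the (15)N natural abundance estimate of %Ndfa is conventionally computed as shown below (a standard textbook form, stated here as background rather than the authors' exact equation; B denotes the δ15N of the legume when fully dependent on N2 fixation):

    \[
    \%\mathrm{Ndfa} \;=\; \frac{\delta^{15}\mathrm{N}_{\mathrm{ref}} - \delta^{15}\mathrm{N}_{\mathrm{legume}}}{\delta^{15}\mathrm{N}_{\mathrm{ref}} - B} \times 100
    \]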

  7. New Measurement Methods of Network Robustness and Response Ability via Microarray Data

    PubMed Central

    Tu, Chien-Ta; Chen, Bor-Sen

    2013-01-01

    “Robustness”, the network ability to maintain systematic performance in the face of intrinsic perturbations, and “response ability”, the network ability to respond to external stimuli or transduce them to downstream regulators, are two important complementary system characteristics that must be considered when discussing biological system performance. However, at present, these features cannot be measured directly for all network components in an experimental procedure. Therefore, we present two novel systematic measurement methods – Network Robustness Measurement (NRM) and Response Ability Measurement (RAM) – to estimate the network robustness and response ability of a gene regulatory network (GRN) or protein-protein interaction network (PPIN) based on the dynamic network model constructed by the corresponding microarray data. We demonstrate the efficiency of NRM and RAM in analyzing GRNs and PPINs, respectively, by considering aging- and cancer-related datasets. When applied to an aging-related GRN, our results indicate that such a network is more robust to intrinsic perturbations in the elderly than in the young, and is therefore less responsive to external stimuli. When applied to a PPIN of fibroblast and HeLa cells, we observe that the network of cancer cells possesses better robustness than that of normal cells. Moreover, the response ability of the PPIN calculated from the cancer cells is lower than that from healthy cells. Accordingly, we propose that generalized NRM and RAM methods represent effective tools for exploring and analyzing different systems-level dynamical properties via microarray data. Making use of such properties can facilitate prediction and application, providing useful information on clinical strategy, drug target selection, and design specifications of synthetic biology from a systems biology perspective. PMID:23383119

  8. Effects of Test Length and Sample Size on the Estimates of Precision of Latent Ability Scores

    DTIC Science & Technology

    1979-03-01

    [OCR fragments from the report body and documentation page; only partial content is recoverable.] The fragments cite Birnbaum on latent trait models, identify the document as a paper presented at an AERA-NCME symposium entitled "Explorations of Latent Trait Models as a Means of Solving Practical Problems," and note that one advantage of latent trait models is the possibility of specifying a target information curve and then selecting items from an item pool to produce a test with…

  9. The Measurement of Human Time Estimating Ability Using a Modified Jerison Device.

    DTIC Science & Technology

    1984-12-01

    [OCR fragments; only partial content is recoverable.] In the feedback experiment and the window reduction experiment, the TEA tester was placed on a table in front of the subject, who was allowed to position it anywhere that would accommodate a simple movement of the dominant hand. The verbal method of estimation was employed, and elapsed time was judged…

  10. Expanding the WAIS-III Estimate of Premorbid Ability for Canadians (EPAC).

    PubMed

    Lange, Rael T; Schoenberg, Mike R; Saklofske, Donald H; Woodward, Todd S; Brickell, Tracey A

    2006-07-01

    Since the release of the Canadian WAIS-III normative data in 2001 (Wechsler, 2001), the clinical application of these norms has been limited by the absence of a method to estimate premorbid functioning. However, Lange, Schoenberg, Woodward, and Brickell (2005) recently developed regression algorithms that estimate premorbid FSIQ, VIQ and PIQ scores for use with the Canadian WAIS-III norms. The purpose of this study was to expand work by Lange and colleagues by developing regression algorithms to estimate premorbid GAI (Saklofske et al., 2005), VCI, and POI scores. Participants were the Canadian WAIS-III standardization sample (n = 1,105). The sample was randomly divided into two groups (Development and Validation group). Using the Development group, a total of 14 regression algorithms were generated to estimate GAI, VCI, and POI scores by combining subtest performance (i.e., Vocabulary, Information, Matrix Reasoning, and Picture Completion) with demographic variables (i.e., age, education, ethnicity, region of the country, and gender). The algorithms accounted for a maximum of 77% of the variance in GAI, 78% of the variance in VCI, and 63% of the variance in POI. In the Validation Group, correlations between predicted and obtained scores were high (GAI = .70 to .88; VCI = .87 to .88; POI = .71 to .80). Evaluation of prediction errors revealed that the majority of estimated GAI, VCI, and POI scores fell within a 95% CI band (93.5% to 97.0%) and within 10 points of obtained index scores (72.3% to 85.6%) depending on the subtests used. These algorithms provide a promising means for estimating premorbid GAI, VCI, and POI scores using the Canadian WAIS-III norms.
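
    As a toy illustration of the development/validation workflow the abstract describes, the sketch below fits a demographics-plus-subtests regression on synthetic data and checks 95% band coverage. Every variable, coefficient, and sample value is invented for illustration and has no relation to the actual EPAC algorithms.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1105                                   # size of the standardization sample
    # Invented stand-ins for demographic predictors and subtest raw scores
    age = rng.integers(16, 90, n).astype(float)
    edu = rng.integers(8, 20, n).astype(float)
    vocab = rng.normal(40, 10, n)
    matrix = rng.normal(15, 5, n)
    gai = 40 + 0.8 * vocab + 1.2 * matrix + 0.3 * edu - 0.05 * age + rng.normal(0, 6, n)

    X = np.column_stack([age, edu, vocab, matrix])
    X_dev, X_val, y_dev, y_val = train_test_split(X, gai, test_size=0.5, random_state=1)

    model = LinearRegression().fit(X_dev, y_dev)          # "Development group"
    pred = model.predict(X_val)                           # "Validation group"
    resid_sd = np.std(y_dev - model.predict(X_dev), ddof=X.shape[1] + 1)
    coverage = np.mean(np.abs(y_val - pred) <= 1.96 * resid_sd)
    print(f"R^2 = {model.score(X_dev, y_dev):.2f}, 95% band coverage = {coverage:.1%}")
    ```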

  11. A cross-sectional study of mathematics achievement, estimation skills, and academic self-perception in students of varying ability.

    PubMed

    Montague, Marjorie; van Garderen, Delinda

    2003-01-01

    This study investigated students' mathematics achievement, estimation ability, use of estimation strategies, and academic self-perception. Students with learning disabilities (LD), average achievers, and intellectually gifted students (N = 135) in fourth, sixth, and eighth grade participated in the study. They were assessed to determine their mathematics achievement, ability to estimate discrete quantities, knowledge and use of estimation strategies, and perception of academic competence. The results indicated that the students with LD performed significantly lower than their peers on the math achievement measures, as expected, but viewed themselves to be as academically competent as the average achievers did. Students with LD and average achievers scored significantly lower than gifted students on all estimation measures, but they differed significantly from one another only on the estimation strategy use measure. Interestingly, even gifted students did not seem to have a well-developed understanding of estimation and, like the other students, did poorly on the first estimation measure. The accuracy of their estimates seemed to improve, however, when students were asked open-ended questions about the strategies they used to arrive at their estimates. Although students with LD did not differ from average achievers in their estimation accuracy, they used significantly fewer effective estimation strategies. Implications for instruction are discussed.

  12. Demographic estimation methods for plants with dormancy

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2004-01-01

    Demographic studies in plants appear simple because unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as 0VFVVF000. A zero means that an individual was not seen in a year and a letter denotes its state for years when it was seen aboveground. V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as 0VF00F000 may then occur where the meaning of zeroes becomes ambiguous. A zero can either mean a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared among each other. In our talk and in Kéry et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kéry et al., submitted) and Cypripedium reginae (Kéry & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620

  13. The Study on Educational Technology Abilities Evaluation Method

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Traditional evaluation methods often fail to measure what a test is actually intended to measure, so the test results cannot serve as a sound basis for evaluation, and conclusions drawn from them carry little weight. The system described here makes full use of the technical means of educational technology and is grounded in educational and psychological theory. Taking the evaluation of primary and secondary school teachers' educational technology abilities as its goal, and building on defined evaluation objects and evaluation tools, it uses a variety of evaluation methods to establish an evaluation system from multiple angles.

  14. Efficient resampling methods for nonsmooth estimating functions

    PubMed Central

    ZENG, DONGLIN

    2009-01-01

    Summary We propose a simple and general resampling strategy to estimate variances for parameter estimators derived from nonsmooth estimating functions. This approach applies to a wide variety of semiparametric and nonparametric problems in biostatistics. It does not require solving estimating equations and is thus much faster than the existing resampling procedures. Its usefulness is illustrated with heteroscedastic quantile regression and censored data rank regression. Numerical results based on simulated and real data are provided. PMID:17925303

  15. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of the stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter and each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than Monte Carlo (MC) when used to estimate the mean, standard deviation, and 0.99 percentile of the four stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. NESSUS is therefore an important reliability tool offering a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
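
    A small sketch of the kind of LHS-versus-MC comparison described, using scipy's quasi-Monte Carlo module on a made-up scalar response; the response function and sample size are illustrative assumptions, not the NESSUS test cases.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.stats import qmc

    def estimate_params(u, response):
        """Mean, std, and 0.99 percentile of a response to a standard-normal input."""
        u = np.clip(u, 1e-12, 1 - 1e-12)     # keep the inverse CDF finite
        y = response(stats.norm.ppf(u))
        return y.mean(), y.std(ddof=1), np.percentile(y, 99)

    response = lambda x: x**2 + 2 * x        # made-up stochastic response
    n = 1000
    u_mc = np.random.default_rng(1).random(n)                  # plain Monte Carlo
    u_lhs = qmc.LatinHypercube(d=1, seed=1).random(n).ravel()  # Latin hypercube
    print("MC :", estimate_params(u_mc, response))
    print("LHS:", estimate_params(u_lhs, response))
    ```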

  16. On the Ability of Ascends to Constrain Fossil Fuel, Ocean and High Latitude Emissions: Flux Estimation Experiments

    NASA Astrophysics Data System (ADS)

    Crowell, S.; Kawa, S. R.; Hammerling, D.; Moore, B., III; Rayner, P. J.

    2014-12-01

    In Hammerling et al., 2014 (H14) the authors demonstrated a geostatistical method for mapping satellite estimates of column integrated CO2 mixing ratio, denoted XCO2, that incorporates the spatial variability in satellite-measured XCO2 as well as measurement precision. The goal of the study was to determine whether the Active Sensing of CO2 over Nights, Days and Seasons (ASCENDS) mission would be able to detect changes in XCO2 given changes in the underlying fluxes for different levels of instrument precision. Three scenarios were proposed: a flux-neutral shift in fossil fuel emissions from Europe to China; a permafrost melting event; and interannual variability in the Southern Oceans. The conclusions of H14 were modest but favorable for detectability in each case by ASCENDS given enough observations and sufficient precision. These signal detection experiments suggest that ASCENDS observations, together with a chemical transport model and data assimilation methodology, would be sufficient to provide quality estimates of the underlying surface fluxes, so long as the ASCENDS observations are precise enough. In this work, we present results that bridge the gap between the previous signal detection work by Hammerling et al. (2014) and the ability of transport models to recover flux perturbations from ASCENDS observations utilizing the TM5-4DVAR data assimilation system. In particular, we will explore the space of model and observational uncertainties that will yield useful scientific information in each of the flux perturbation scenarios. This work will give a sense of the ability of ASCENDS to answer some of the foremost questions in carbon cycle science today. References: Hammerling, D., Kawa, S., Schaefer, K., and Michalak, A. (2014). Detectability of CO2 flux signals by a space-based lidar mission. Submitted.

  17. Current methods for estimating the rate of photorespiration in leaves.

    PubMed

    Busch, F A

    2013-07-01

    Photorespiration is a process that competes with photosynthesis, in which Rubisco oxygenates, instead of carboxylates, its substrate ribulose 1,5-bisphosphate. The photorespiratory metabolism associated with the recovery of 3-phosphoglycerate is energetically costly and results in the release of previously fixed CO2. The ability to quantify photorespiration is gaining importance as a tool to help improve plant productivity in order to meet the increasing global food demand. In recent years, substantial progress has been made in the methods used to measure photorespiration. Current techniques are able to measure multiple aspects of photorespiration at different points along the photorespiratory C2 cycle. Six different methods used to estimate photorespiration are reviewed, and their advantages and disadvantages discussed.

  18. Refinement of a Bias-Correction Procedure for the Weighted Likelihood Estimator of Ability. Research Report. ETS RR-07-23

    ERIC Educational Resources Information Center

    Zhang, Jinming; Lu, Ting

    2007-01-01

    In practical applications of item response theory (IRT), item parameters are usually estimated first from a calibration sample. After treating these estimates as fixed and known, ability parameters are then estimated. However, the statistical inferences based on the estimated abilities can be misleading if the uncertainty of the item parameter…

  19. Brain correlates of non-symbolic numerosity estimation in low and high mathematical ability children.

    PubMed

    Kovas, Yulia; Giampietro, Vincent; Viding, Essi; Ng, Virginia; Brammer, Michael; Barker, Gareth J; Happé, Francesca G E; Plomin, Robert

    2009-01-01

    Previous studies have implicated several brain areas as subserving numerical approximation. Most studies have examined brain correlates of adult numerical approximation and have not considered individual differences in mathematical ability. The present study examined non-symbolic numerical approximation in two groups of 10-year-olds: Children with low and high mathematical ability. The aims of this study were to investigate the brain mechanisms associated with approximate numerosity in children and to assess whether individual differences in mathematical ability are associated with differential brain correlates during the approximation task. The results suggest that, similarly to adults, multiple and distributed brain areas are involved in approximation in children. Despite equal behavioral performance, there were differences in the brain activation patterns between low and high mathematical ability groups during the approximation task. This suggests that individual differences in mathematical ability are reflected in differential brain response during approximation.

  20. Using optimal estimation method for upper atmospheric Lidar temperature retrieval

    NASA Astrophysics Data System (ADS)

    Zou, Rongshi; Pan, Weilin; Qiao, Shuai

    2016-07-01

    Conventional ground-based Rayleigh lidar temperature retrievals use the integration technique, which must abandon temperatures retrieved at the greatest heights because a seeding value is assumed in order to initialize the integration at the highest altitude. Here we suggest the use of a method that can incorporate information from various sources to improve the quality of the retrieval. This approach inverts the lidar equation via the optimal estimation method (OEM), based on Bayesian theory together with a Gaussian statistical model. It presents several advantages over the conventional technique: 1) the possibility of incorporating information from multiple heterogeneous sources; 2) diagnostic information about retrieval quality; and 3) the ability to determine the vertical resolution and the maximum height up to which the retrieval is largely independent of the a priori profile. This paper compares one-hour temperature profiles retrieved using the conventional and optimal estimation methods at Golmud, Qinghai province, China. The OEM results agree better with the SABER profile than the conventional results do; in some regions the retrieved temperature is much lower than the SABER profile, a result quite different from previous studies, and further work is needed to explain this. The success of applying the OEM to temperature retrieval supports its use as a retrieval framework in large synthetic observation systems that include various active remote sensing instruments, incorporating all available measurement information into the model and analyzing groups of measurements simultaneously to improve the results.
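
    For reference, a one-step linear OEM retrieval in the standard Rodgers formulation might look like the sketch below. It is an assumed, simplified illustration; the paper's forward model and covariance choices are not given in the abstract.

    ```python
    import numpy as np

    def linear_oem(y, K, x_a, S_a, S_e):
        """One-step optimal estimation retrieval for a linear forward model y = K x + e.

        y   : measurement vector          K   : Jacobian / forward-model matrix
        x_a : a priori state              S_a : a priori covariance
        S_e : measurement-error covariance
        """
        S_e_inv = np.linalg.inv(S_e)
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))  # posterior covariance
        G = S_hat @ K.T @ S_e_inv                  # gain matrix
        x_hat = x_a + G @ (y - K @ x_a)            # retrieved state, e.g. temperature profile
        A = G @ K                                  # averaging kernel: resolution and a priori
                                                   # dependence diagnostics
        return x_hat, S_hat, A
    ```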

  1. The Confounding Effects of Ability, Item Difficulty, and Content Balance within Multiple Dimensions on the Estimation of Unidimensional Thetas

    ERIC Educational Resources Information Center

    Matlock, Ki Lynn

    2013-01-01

    When test forms that have equal total test difficulty and number of items vary in difficulty and length within sub-content areas, an examinee's estimated score may vary across equivalent forms, depending on how well his or her true ability in each sub-content area aligns with the difficulty of items and number of items within these areas.…

  2. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  3. Statistical methods of estimating mining costs

    USGS Publications Warehouse

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
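
    The abstract does not write out Taylor's Rule; one common statement of it (an assumption here, not necessarily the exact form the USGS reestimated) is

    \[
    L \;\approx\; 0.2\,T^{1/4}\qquad \text{(mine life $L$ in years, ore tonnage $T$ in tonnes),}
    \]

    which corresponds to an operating rate of roughly $T^{3/4}/70$ tonnes per day under an assumed 350 operating days per year.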

  4. Comparison of Measured and Estimated Cognitive Ability in Older Adolescents with and without ADHD

    ERIC Educational Resources Information Center

    Miller, Carlin J.; Marks, David J.; Halperin, Jeffrey M.

    2005-01-01

    Premorbid intellectual function estimation is a crucial part of patient evaluation following a traumatic brain injury (TBI), especially in individuals with ADHD who are at higher risk for TBI compared to their non-ADHD peers. This study investigates the value of using regression-based estimates of intelligence for concurrently predicting measured…

  5. A QUALITATIVE METHOD TO ESTIMATE HSI DISPLAY COMPLEXITY

    SciTech Connect

    Jacques Hugo; David Gertman

    2013-04-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  6. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  7. How Smart Do You Think You Are? A Meta-Analysis on the Validity of Self-Estimates of Cognitive Ability

    ERIC Educational Resources Information Center

    Freund, Philipp Alexander; Kasten, Nadine

    2012-01-01

    Individuals' perceptions of their own level of cognitive ability are expressed through self-estimates. They play an important role in a person's self-concept because they facilitate an understanding of how one's own abilities relate to those of others. People evaluate their own and other persons' abilities all the time, but self-estimates are also…

  8. Analytic Study of the Tadoma Method: Language Abilities of Three Deaf-Blind Subjects.

    ERIC Educational Resources Information Center

    Chomsky, Carol

    1986-01-01

    The linguistic abilities of three adult deaf-blind subjects who acquired language through the Tadoma method (involves monitoring a speaker's articulatory motions by placing a hand on his face) were examined. The subjects' English language abilities were excellent, suggesting that the tactile sense is adequate in highly trained Tadoma users in…

  9. Nutrient Estimation Using Subsurface Sensing Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This report investigates the use of precision management techniques for measuring soil conductivity on feedlot surfaces to estimate nutrient value for crop production. An electromagnetic induction soil conductivity meter was used to collect apparent soil electrical conductivity (ECa) from feedlot p...

  10. Evaluating Methods for Estimating Program Effects

    ERIC Educational Resources Information Center

    Reichardt, Charles S.

    2011-01-01

    I define a treatment effect in terms of a comparison of outcomes and provide a typology of all possible comparisons that can be used to estimate treatment effects, including comparisons that are relatively unknown in both the literature and practice. I then assess the relative merit, worth, and value of all possible comparisons based on the…

  11. New High Throughput Methods to Estimate Chemical ...

    EPA Pesticide Factsheets

    EPA has made many recent advances in high throughput bioactivity testing. However, concurrent advances in rapid, quantitative prediction of human and ecological exposures have been lacking, despite the clear importance of both measures for a risk-based approach to prioritizing and screening chemicals. A recent report by the National Research Council of the National Academies, Exposure Science in the 21st Century: A Vision and a Strategy (NRC 2012) laid out a number of applications in chemical evaluation of both toxicity and risk in critical need of quantitative exposure predictions, including screening and prioritization of chemicals for targeted toxicity testing, focused exposure assessments or monitoring studies, and quantification of population vulnerability. Despite these significant needs, for the majority of chemicals (e.g. non-pesticide environmental compounds) there are no or limited estimates of exposure. For example, exposure estimates exist for only 7% of the ToxCast Phase II chemical list. In addition, the data required for generating exposure estimates for large numbers of chemicals is severely lacking (Egeghy et al. 2012). This SAP reviewed the use of EPA's ExpoCast model to rapidly estimate potential chemical exposures for prioritization and screening purposes. The focus was on bounded chemical exposure values for people and the environment for the Endocrine Disruptor Screening Program (EDSP) Universe of Chemicals. In addition to exposure, the SAP

  12. The Ability of Atmospheric Data to Reduce Disagreements in Wetland Methane Flux Estimates over North America

    NASA Astrophysics Data System (ADS)

    Miller, S. M.; Andrews, A. E.; Benmergui, J. S.; Commane, R.; Dlugokencky, E. J.; Janssens-Maenhout, G.; Melton, J. R.; Michalak, A. M.; Sweeney, C.; Worthy, D. E. J.

    2015-12-01

    Existing estimates of methane fluxes from wetlands differ in both magnitude and distribution across North America. We discuss seven different bottom-up methane estimates in the context of atmospheric methane data collected across the US and Canada. In the first component of this study, we explore whether the observation network can even detect a methane pattern from wetlands. We find that the observation network can identify a methane pattern from Canadian wetlands but not reliably from US wetlands. Over Canada, the network can even identify spatial patterns at multi-province scales. Over the US, by contrast, anthropogenic emissions and modeling errors obscure atmospheric patterns from wetland fluxes. In the second component of the study, we then use these observations to reconcile disagreements in the magnitude, seasonal cycle, and spatial distribution of existing estimates. Most existing estimates predict fluxes that are too large, with a seasonal cycle that is too narrow. A model known as LPJ-Bern has a spatial distribution most consistent with atmospheric observations. By contrast, a spatially-constant model outperforms the distribution of most existing flux estimates across Canada. The results presented here provide several pathways to reduce disagreements among existing wetland flux estimates across North America.

  13. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  14. Quantum Estimation Methods for Quantum Illumination

    NASA Astrophysics Data System (ADS)

    Sanz, M.; Las Heras, U.; García-Ripoll, J. J.; Solano, E.; Di Candia, R.

    2017-02-01

    Quantum illumination consists in shining quantum light on a target region immersed in a bright thermal bath with the aim of detecting the presence of a possible low-reflective object. If the signal is entangled with the receiver, then a suitable choice of the measurement offers a gain with respect to the optimal classical protocol employing coherent states. Here, we tackle this detection problem by using quantum estimation techniques to measure the reflectivity parameter of the object, showing an enhancement in the signal-to-noise ratio up to 3 dB with respect to the classical case when implementing only local measurements. Our approach employs the quantum Fisher information to provide an upper bound for the error probability, supplies the concrete estimator saturating the bound, and extends the quantum illumination protocol to non-Gaussian states. As an example, we show how Schrödinger's cat states may be used for quantum illumination.

  16. Nonparametric Estimation by the Method of Sieves.

    DTIC Science & Technology

    1983-07-01

    [OCR fragments; only partial content is recoverable.] The fragments describe reconstruction hardware (a high-speed memory extended with a board of 64k 16-bit words, with programs that reconstruct a 60x60 phantom in about fifteen or twenty minutes) and cite, among others, Budinger, Gullberg, and Huesman on emission computed tomography (chapter 5 of a volume on image reconstruction from projections); a 1977 Phys. Med. Biol. 22, 511-521 paper on section reconstruction; and Kronmal and Tarter on the estimation of probability densities and cumulatives by…

  17. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and a maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated, and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multi-image color displays; (2) spectral-spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  18. Further Explorations of Perceptual Speed Abilities in the Context of Assessment Methods, Cognitive Abilities, and Individual Differences during Skill Acquisition

    ERIC Educational Resources Information Center

    Ackerman, Phillip L.; Beier, Margaret E.

    2007-01-01

    Measures of perceptual speed ability have been shown to be an important part of assessment batteries for predicting performance on tasks and jobs that require a high level of speed and accuracy. However, traditional measures of perceptual speed ability sometimes have limited cost-effectiveness because of the requirements for administration and…

  19. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The proposed method for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method in which the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate parameter sensitivities.
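
    The differencing idea can be illustrated with a short sketch. For simplicity, this toy version re-solves the optimization at perturbed parameter values rather than reusing RQP information as the abstract's method does; the test problem and step size are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def optimum(p):
        """Solve the inner optimization for parameter p (a made-up test problem)."""
        f = lambda x: (x[0] - p) ** 2 + (x[1] - 2.0 * p) ** 2 + x[0] * x[1]
        return minimize(f, x0=np.zeros(2)).x

    def sensitivity(p, h=1e-4):
        """Central-difference estimate of d(x*)/dp, the parameter sensitivity."""
        return (optimum(p + h) - optimum(p - h)) / (2.0 * h)

    print(sensitivity(1.0))
    ```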

  20. The Bias Function of the Maximum Likelihood Estimate of Ability for the Dichotomous Response Level.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1993-01-01

    F. Samejima's approximation for the bias function for the maximum likelihood estimate of the latent trait in the general case where item responses are discrete is explored. Observations are made about the behavior of this bias function for the dichotomous response level in general. Empirical examples are given. (SLD)

  1. Simultaneous Estimation of Overall and Domain Abilities: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Song, Hao

    2009-01-01

    Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…

  2. A Comparison of Learning Disabled and Other Children on the Ability to Make Functional Time Estimates.

    ERIC Educational Resources Information Center

    Dodd, John M.; And Others

    1985-01-01

    A reliable instrument was developed to identify elementary-age children who have difficulty with time estimation, as indicated by choices on a pencil-and-paper test. The instrument was used to compare performances of learning disabled and nondisabled children. Findings provide empirical support for temporal difficulties among learning disabled…

  3. Individual Differences in Time Estimation Related to Cognitive Ability, Speed of Information Processing and Working Memory

    ERIC Educational Resources Information Center

    Fink, A.; Neubauer, A. C.

    2005-01-01

    In experimental time estimation research, it has consistently been found that the more a person is engaged in some kind of demanding cognitive activity within a given period of time, the more the experienced duration of that time interval decreases. However, the role of individual differences has been largely ignored in this field of research. In a…

  4. [Weighted estimation methods for multistage sampling survey data].

    PubMed

    Hou, Xiao-Yan; Wei, Yong-Yue; Chen, Feng

    2009-06-01

    Multistage sampling techniques are widely applied in cross-sectional epidemiological studies, yet methods based on the independence assumption are still used to analyze such complex survey data. This paper introduces the application of weighted estimation methods to complex survey data. A brief overview of the basic theory is given, and a practical analysis illustrates the weighted estimation algorithm on stratified two-stage cluster sampling data. For multistage sampling survey data, weighted estimation can be used to obtain unbiased point estimates and more reasonable variance estimates, and thus to draw proper statistical inferences by correcting for clustering, stratification, and unequal-probability effects.
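
    A minimal sketch of design-weighted point and variance estimation for a stratified two-stage sample follows, using a standard Taylor-linearized "ultimate cluster" variance. This is a textbook approach assumed here for illustration; the paper's exact algorithm is not given in the abstract.

    ```python
    import numpy as np
    import pandas as pd

    def svy_mean(y, w, strata, psu):
        """Weighted mean with a Taylor-linearized 'ultimate cluster' variance
        for stratified multistage samples. Returns (estimate, standard error)."""
        df = pd.DataFrame({"y": y, "w": w, "h": strata, "c": psu})
        wsum = df["w"].sum()
        mean = (df["w"] * df["y"]).sum() / wsum
        df["z"] = df["w"] * (df["y"] - mean)            # linearized score variable
        var = 0.0
        for _, g in df.groupby("h"):                    # loop over strata
            totals = g.groupby("c")["z"].sum()          # PSU totals within the stratum
            n_h = len(totals)
            if n_h > 1:
                var += n_h / (n_h - 1) * ((totals - totals.mean()) ** 2).sum()
        return mean, np.sqrt(var) / wsum
    ```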

  5. Estimation of diversity and combining abilities in Helianthus annuus L. under water stress and normal conditions.

    PubMed

    Saba, M; Khan, F A; Sadaqat, H A; Rana, I A

    2016-10-24

    Sunflower cannot produce high yields under water-limiting conditions. The aim of the present study was to overcome impediments to yield and to develop varieties with high yield potential under water-scarce conditions. Achieving this objective requires identifying parents with desirable traits, which depends mainly on the action of the genes controlling the trait under improvement, combining ability, and the genetic makeup of the parents. Heterosis can also be used to pool desirable genes from genetically divergent varieties, and such divergent parents can be detected by molecular studies. Ten tolerant and five susceptible tester lines were selected, crossed, and tested for genetic diversity using simple sequence repeat primers. We identified two parents (A-10.8 and G-60) that showed maximum (46.7%) genetic dissimilarity. On average, 3.1 alleles per locus were detected across twenty primer pairs. Evaluation of mean values revealed that under stress conditions the mean performance of the genotypes was reduced for all traits under study. Parent A-10.8 was consistently a good general combiner for achene yield per plant under both non-stress and stress conditions. Line A-10.8 in the hybrid A-10.8 x G-60 proved to be a good combiner, showing negative specific combining ability (SCA) effects for plant height and internodal length and positive SCA effects for head weight, achene yield per plant, and membrane stability index. Valuable information on gene action, combining ability, and heterosis was generated that could be used in further breeding programs.
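
    For reference, a textbook line-by-tester decomposition of combining ability from a table of cross means might look like the sketch below. It is an assumed illustration of the standard GCA/SCA definitions, not the authors' exact analysis, and it assumes a balanced design.

    ```python
    import pandas as pd

    def line_x_tester_effects(means: pd.DataFrame):
        """Combining-ability effects from a line x tester table of cross means
        (rows: lines, columns: testers).

        GCA_i  = row mean  - grand mean
        GCA_j  = col mean  - grand mean
        SCA_ij = cell mean - GCA_i - GCA_j - grand mean
        """
        grand = means.values.mean()
        gca_lines = means.mean(axis=1) - grand      # general combining ability, lines
        gca_testers = means.mean(axis=0) - grand    # general combining ability, testers
        sca = means.sub(gca_lines, axis=0).sub(gca_testers, axis=1) - grand
        return gca_lines, gca_testers, sca
    ```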

  6. A Normalized Direct Approach for Estimating the Parameters of the Normal Ogive Three-Parameter Model for Ability Tests.

    ERIC Educational Resources Information Center

    Gugel, John F.

    A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…

  7. A Joint Analytic Method for Estimating Aquitard Hydraulic Parameters.

    PubMed

    Zhuang, Chao; Zhou, Zhifang; Illman, Walter A

    2017-01-10

    The vertical hydraulic conductivity (Kv), elastic (Sske), and inelastic (Sskv) skeletal specific storage of aquitards are three of the most critical parameters in land subsidence investigations. Two new analytic methods are proposed to estimate the three parameters. The first analytic method is based on a new concept of delay time ratio for estimating Kv and Sske of an aquitard subject to long-term stable, cyclic hydraulic head changes at boundaries. The second analytic method estimates the Sskv of the aquitard subject to linearly declining hydraulic heads at boundaries. Both methods are based on analytical solutions for flow within the aquitard, and they are jointly employed to obtain the three parameter estimates. This joint analytic method is applied to estimate the Kv, Sske, and Sskv of a 34.54-m thick aquitard for which the deformation progress has been recorded by an extensometer located in Shanghai, China. The estimated results are then calibrated by PEST (Doherty 2005), a parameter estimation code coupled with a one-dimensional aquitard-drainage model. The Kv and Sske estimated by the joint analytic method are quite close to those estimated via inverse modeling and performed much better in simulating elastic deformation than the estimates obtained from the stress-strain diagram method of Ye and Xue (2005). The newly proposed joint analytic method is an effective tool that provides reasonable initial values for calibrating land subsidence models.

  8. A new parametric method of estimating the joint probability density

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2017-04-01

    We present simple parametric methods that overcome major limitations of the literature on joint/marginal density estimation. In doing so, we do not assume any form of marginal or joint distribution. Furthermore, using our method, a multivariate density can be easily estimated if we know only one of the marginal densities. We apply our methods to financial data.

  9. Advancing Methods for Estimating Cropland Area

    NASA Astrophysics Data System (ADS)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1 to 1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype, which has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.

  10. Development of the WAIS-III estimate of premorbid ability for Canadians (EPAC).

    PubMed

    Lange, Rael T; Schoenberg, Mike R; Woodward, Todd S; Brickell, Tracey A

    2005-12-01

    This study developed regression algorithms for estimating IQ scores using the Canadian WAIS-III norms. Participants were the Canadian WAIS-III standardization sample (n = 1,105). The sample was randomly divided into two groups (Development and Validation groups). The Development group was used to generate 12 regression algorithms for FSIQ and three algorithms each for VIQ and PIQ. Algorithms combined demographic variables with WAIS-III subtest raw scores. The algorithms accounted for 48-78% of the variance in FSIQ, 70-71% in VIQ, and 45-55% in PIQ. In the Validation group, the majority of the sample had predicted IQs that fell within a 95% CI band (FSIQ=92-94%; VIQ=93-95%; PIQ=94-94%). These algorithms yielded reasonably accurate estimates of FSIQ, VIQ, and PIQ in this healthy adult population. It is anticipated that these algorithms will be useful as a means for estimating premorbid IQ scores in a clinical population. However, prior to clinical use, these algorithms must be validated for this purpose.

  11. Estimation of Convective Momentum Fluxes Using Satellite-Based Methods

    NASA Astrophysics Data System (ADS)

    Jewett, C.; Mecikalski, J. R.

    2009-12-01

    as defined by Austin and Houze (1973). However, this method only considers climatological updraft speeds determined from cloud base and cloud top heights. Fortunately, this project also incorporates the unique dataset provided by the spaceborne cloud radar, CloudSat. However, with CloudSat pointing only at nadir, it is limited in its ability to compute a three-dimensional draft tilt. Nevertheless, this instrument can provide critical information toward estimating CMFs. Efforts are currently being made to correlate the Ice Water Content (IWC; from product 2B-CWC-RO) of convective storms to vertical velocities. It is hypothesized that a positive correlation exists between IWC and vertical velocity (Li 2006). With a positive correlation, vertical velocity estimates can be applied to CloudSat data. These vertical velocity estimates will be included in the TRMM algorithm to create a synergistic approach to estimating convective momentum fluxes. This approach also considers the sub-cloud base fluxes from QuikScat data, derived using the divergence along the surface and calculating vertical motion with the continuity equation.

  12. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h
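
    Of the three TSMs, PSCF has the simplest form: the fraction of trajectory endpoints in each grid cell that belong to trajectories arriving with high receptor concentrations. The sketch below is an assumed illustration, with endpoints already binned to grid indices and paired with their trajectory's receptor concentration.

    ```python
    import numpy as np

    def pscf(row, col, conc, threshold, grid_shape):
        """Potential Source Contribution Function on a lat/lon grid.

        row, col : grid indices of every trajectory endpoint
        conc     : receptor concentration associated with each endpoint
                   (repeated along its trajectory)
        Returns m/n per cell, i.e. the fraction of endpoints in the cell that
        came from 'polluted' trajectories (conc > threshold).
        """
        n = np.zeros(grid_shape)
        m = np.zeros(grid_shape)
        for i, j, polluted in zip(row, col, conc > threshold):
            n[i, j] += 1.0
            m[i, j] += float(polluted)
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(n > 0, m / n, np.nan)
    ```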

  13. Rapid Methods for Estimating Navigation Channel Shoaling

    DTIC Science & Technology

    2009-01-01

    of the data dependence and lack of accounting for channel width. Mayor-Mora et al. (1976) developed an analytical method for infilling in a...1.2 cm/day near the end of the monitoring. The predictive expression of Mayor-Mora et al. (1976) is...decision-support or initial planning studies that must be done quickly. Vicente and Uva (1984) present a method based on the assumption that a

  14. Optical method of atomic ordering estimation

    SciTech Connect

    Prutskij, T.; Attolini, G.

    2013-12-04

    It is well known that in semiconductor III-V ternary alloys grown by metal-organic vapor-phase epitaxy (MOVPE), atomically ordered regions form spontaneously during epitaxial growth. This ordering leads to bandgap reduction and to valence band splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs within quaternary semiconductor alloys. While ordering in ternary alloys is widely studied, there have been only a few detailed experimental studies of it in quaternaries, probably because appropriate detection methods have been lacking. Here we propose an optical method to reveal atomic ordering within quaternary alloys by measuring the PL emission polarization.

  15. Seismic Methods of Identifying Explosions and Estimating Their Yield

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.

    2014-12-01

    Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and in accurately estimating their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites including the DPRK and the former Nevada Test Site (now the Nevada National Security Site (NNSS)). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are also exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand the source characteristics. Our goal is to improve our explosion models

  16. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. This type of estimator is called a robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well-developed field of time-domain parameter estimation. The second method uses a type of weighted least-squares fitting to a frequency-domain estimated model. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  17. An assessment of the ability of the obstruction-scaling model to estimate solute diffusion coefficients in hydrogels.

    PubMed

    Hadjiev, Nicholas A; Amsden, Brian G

    2015-02-10

    The ability to estimate the diffusion coefficient of a solute within hydrogels has important application in the design and analysis of hydrogels used in drug delivery, tissue engineering, and regenerative medicine. A number of mathematical models have been derived for this purpose; however, they often rely on fitted parameters and so have limited predictive capability. Herein we assess the ability of the obstruction-scaling model to provide reasonable estimates of solute diffusion coefficients within hydrogels, as well as the assumption that a hydrogel can be represented as an entangled polymer solution of an equivalent concentration. Fluorescein isothiocyanate dextran solutes were loaded into sodium alginate solutions as well as hydrogels of different polymer volume fractions formed from photoinitiated cross-linking of methacrylate sodium alginate. The tracer diffusion coefficients of these solutes were measured using fluorescence recovery after photobleaching (FRAP). The measured diffusion coefficients were then compared to the values predicted by the obstruction-scaling model. The model predictions were within ±15% of the measured values, suggesting that the model can provide useful estimates of solute diffusion coefficients within hydrogels and solutions. Moreover, solutes diffusing in both sodium alginate solutions and hydrogels were demonstrated to experience the same degree of solute mobility restriction given the same effective polymer concentration, supporting the assumption that a hydrogel can be represented as an entangled polymer solution of equivalent concentration.

  18. Effect of methods of evaluation on sealing ability of mineral trioxide aggregate apical plug

    PubMed Central

    Nikhil, Vineeta; Jha, Padmanabh; Suri, Navleen Kaur

    2016-01-01

    Aim: The purpose of the study was to evaluate and compare the sealing ability of mineral trioxide aggregate (MTA) assessed with three different methods. Materials and Methods: Forty single-canal teeth were decoronated, and root canals were enlarged to simulate an immature apex. The samples were randomly divided into Group MD (MTA-angelus mixed with distilled water) and Group MC (MTA-angelus mixed with 2% chlorhexidine), and the apical seal was recorded with the glucose penetration, fluid filtration, and dye penetration methods and compared. Results: The three methods of evaluation gave different results. The glucose penetration method showed that Group MD sealed better than Group MC, but the difference was statistically insignificant (P > 0.05). The fluid filtration method showed Group MC to be superior to Group MD, though the difference was statistically insignificant (P > 0.05). The dye penetration method showed that Group MC sealed statistically better than Group MD. Conclusion: No correlation was found among the results obtained with the three methods of evaluation. Addition of chlorhexidine enhanced the sealing ability of MTA according to the fluid filtration and dye leakage tests, while according to the glucose penetration test it did not. This study showed that relying on the results of only one method of assessing apical seal can be misleading. PMID:27217635

  19. System and method for motor parameter estimation

    DOEpatents

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  20. Research on evaluation methods for water regulation ability of dams in the Huai River Basin

    NASA Astrophysics Data System (ADS)

    Shan, G. H.; Lv, S. F.; Ma, K.

    2016-08-01

    Water environment protection is a global and urgent problem that requires correct and precise evaluation. Evaluation methods have been studied for many years; however, there is a lack of research on methods for assessing the water regulation ability of dams. Evaluating this ability has become a practical and significant research direction because of the global water crisis, a problem compounded by the lack of effective ways to manage a dam's regulation ability. This paper first constructs seven evaluation factors and then develops two evaluation approaches implementing the factors according to the features of the problem. Dams of the Yin Shang ecological control section in the Huai He River basin are selected as an example to demonstrate the method. The results show that the evaluation approaches can produce better and more practical suggestions for dam managers.

  1. A Comparative Study of Distribution System Parameter Estimation Methods

    SciTech Connect

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
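
    State-vector augmentation appends the unknown parameter to the state and lets the filter estimate both jointly. Below is a minimal sketch on a toy scalar system (an extended Kalman filter, since the augmented dynamics become nonlinear); the model, noise levels, and tuning are illustrative assumptions, not the paper's distribution-system setup.

    ```python
    import numpy as np

    # Toy model: x[k+1] = a*x[k] + w,  z[k] = x[k] + v, with parameter a unknown.
    # Augmented state s = [x, a]; f(s) = (a*x, a) with Jacobian [[a, x], [0, 1]].
    rng = np.random.default_rng(1)
    a_true, n_steps = 0.95, 500
    x, zs = 1.0, []
    for _ in range(n_steps):
        x = a_true * x + rng.normal(scale=0.05)
        zs.append(x + rng.normal(scale=0.02))

    s = np.array([0.5, 0.5])              # initial guess for [x, a]
    P = np.eye(2)
    Q = np.diag([0.05**2, 1e-8])          # parameter modeled as nearly constant
    R = 0.02**2
    H = np.array([[1.0, 0.0]])

    for z in zs:
        F = np.array([[s[1], s[0]], [0.0, 1.0]])   # Jacobian at current estimate
        s = np.array([s[1] * s[0], s[1]])          # predict
        P = F @ P @ F.T + Q
        y = z - s[0]                               # measurement residual
        K = P @ H.T / (H @ P @ H.T + R)            # Kalman gain (2x1)
        s = s + K.ravel() * y                      # update
        P = (np.eye(2) - K @ H) @ P

    print(f"estimated a = {s[1]:.3f} (true value {a_true})")
    ```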

  2. Carbon footprint: current methods of estimation.

    PubMed

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing serious global warming and its associated consequences. Following the rule that only what is measurable is manageable, measurement of the greenhouse gas intensity of different products, bodies, and processes is under way worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculation are still evolving, and carbon footprinting is emerging as an important tool for greenhouse gas management. The concept has permeated, and is being commercialized in, all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among studies. There are disagreements in the selection of gases and in the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Because carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications, its standardization at the international level is necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.

  3. Research on the estimation method for Earth rotation parameters

    NASA Astrophysics Data System (ADS)

    Yao, Yibin

    2008-12-01

    In this paper, methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. Two different approaches to estimating ERP are involved: one is the parameter transformation method, and the other is direct adjustment with restrictive conditions. The IGS daily SINEX files produced by GPS tracking stations can be used to estimate ERP, and the parameter transformation method can simplify the process. The results indicate that a systematic error exists in ERP estimated using GPS observations alone. Why this distinct systematic error appears in ERP derived from daily GPS SINEX files, whether it affects the estimation of other parameters, and how large its influence is are questions that need further study.

  4. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance represents the accuracy attained by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
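
    For orientation, here is a compact sketch of the two estimators under a 2PL model: the MLE maximizes the likelihood of the response pattern over ability, while EAP takes the mean of the posterior under a standard-normal prior. Item parameters and responses below are made up for illustration.

    ```python
    import numpy as np

    a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])      # discriminations
    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])    # difficulties
    u = np.array([1, 1, 0, 1, 0])                # observed responses

    theta = np.linspace(-4, 4, 401)
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # P(correct | theta)
    like = np.prod(np.where(u == 1, p, 1 - p), axis=1)

    theta_ml = theta[np.argmax(like)]                      # MLE on a grid
    prior = np.exp(-0.5 * theta**2)                        # N(0,1) up to a constant
    post = like * prior
    theta_eap = np.sum(theta * post) / np.sum(post)        # EAP (posterior mean)

    print(f"ML: {theta_ml:.2f}, EAP: {theta_eap:.2f}")     # EAP shrinks toward 0
    ```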

  5. A TRMM Rainfall Estimation Method Applicable to Land Areas

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R.; Weinman, J.; Dalu, G.

    1999-01-01

    Methods developed to estimate rain rate on a footprint scale over land with the satellite-borne multispectral dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer have met with limited success. Variability of surface emissivity on land and beam filling are commonly cited as the weaknesses of these methods. On the contrary, we contend a more significant reason for this lack of success is that the information content of the spectral and polarization measurements of the SSM/I is limited because of significant redundancy. As a result, the complex nature and vertical distribution of frozen and melting ice particles of different densities, sizes, and shapes cannot be resolved satisfactorily. Extinction in the microwave region due to these complex particles can mask the extinction due to rain drops. For these reasons, theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. To illustrate the weakness of these models, consider as an example the brightness temperature measured by the radiometer in the 85 GHz channel (T85). Models indicate that T85 should be inversely related to the rain rate because of scattering. However, rain rates derived from 15-minute rain gauges on land indicate that this is not true in a majority of footprints. This is also supported by ship-borne radar observations of rain in the Tropical Oceans and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE) region over the ocean. We therefore do not follow the above path of rain retrieval on a footprint scale. Instead, we depend on the limited ability of the microwave radiometer to detect the presence of rain. This capability is useful to determine the rain area in a mesoscale region. We find in a given rain event that this rain area is closely related to the mesoscale-average rain rate

  6. Evaluation of Methods to Estimate Understory Fruit Biomass

    PubMed Central

    Lashley, Marcus A.; Thompson, Jeffrey R.; Chitwood, M. Colter; DePerno, Christopher S.; Moorman, Christopher E.

    2014-01-01

    Fleshy fruit is consumed by many wildlife species and is a critical component of forest ecosystems. Because fruit production may change quickly during forest succession, frequent monitoring of fruit biomass may be needed to better understand shifts in wildlife habitat quality. Yet, designing a fruit sampling protocol that is executable on a frequent basis may be difficult, and knowledge of accuracy within monitoring protocols is lacking. We evaluated the accuracy and efficiency of 3 methods to estimate understory fruit biomass (Fruit Count, Stem Density, and Plant Coverage). The Fruit Count method requires visual counts of fruit to estimate fruit biomass. The Stem Density method uses counts of all stems of fruit producing species to estimate fruit biomass. The Plant Coverage method uses land coverage of fruit producing species to estimate fruit biomass. Using linear regression models under a censored-normal distribution, we determined the Fruit Count and Stem Density methods could accurately estimate fruit biomass; however, when comparing AIC values between models, the Fruit Count method was the superior method for estimating fruit biomass. After determining that Fruit Count was the superior method to accurately estimate fruit biomass, we conducted additional analyses to determine the sampling intensity (i.e., percentage of area) necessary to accurately estimate fruit biomass. The Fruit Count method accurately estimated fruit biomass at a 0.8% sampling intensity. In some cases, sampling 0.8% of an area may not be feasible. In these cases, we suggest sampling understory fruit production with the Fruit Count method at the greatest feasible sampling intensity, which could be valuable to assess annual fluctuations in fruit production. PMID:24819253

  7. A new method for the estimation of the completeness magnitude

    NASA Astrophysics Data System (ADS)

    Godano, C.

    2017-02-01

    The estimation of the magnitude of completeness mc has strong consequences for any statistical analysis of a seismic catalogue and for the evaluation of seismic hazard. Here a new method for its estimation is presented. The goodness of the method has been tested using 10^4 simulated catalogues. The method has then been applied to five experimental seismic catalogues: Greece, Italy, Japan, Northern California and Southern California.

  8. Investigating the Stability of Four Methods for Estimating Item Bias.

    ERIC Educational Resources Information Center

    Perlman, Carole L.; And Others

    The reliability of item bias estimates was studied for four methods: (1) the transformed delta method; (2) Shepard's modified delta method; (3) Rasch's one-parameter residual analysis; and (4) the Mantel-Haenszel procedure. Bias statistics were computed for each sample using all methods. Data were from administration of multiple-choice items from…
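
    Of the four approaches, the Mantel-Haenszel procedure is the most commonly implemented. A minimal sketch, assuming 2x2 tables of group (reference/focal) by item outcome (correct/incorrect) within matched ability strata; the counts are hypothetical.

    ```python
    import numpy as np

    def mantel_haenszel_odds_ratio(tables):
        """Common odds ratio across K score strata.

        tables: list of 2x2 arrays [[A, B], [C, D]] per stratum, where rows are
        reference/focal group and columns are correct/incorrect on the item.
        """
        num = sum(t[0, 0] * t[1, 1] / t.sum() for t in tables)
        den = sum(t[0, 1] * t[1, 0] / t.sum() for t in tables)
        return num / den

    # Hypothetical counts for three ability strata
    strata = [np.array([[30., 10.], [20., 20.]]),
              np.array([[25., 15.], [18., 22.]]),
              np.array([[20., 20.], [12., 28.]])]
    alpha_mh = mantel_haenszel_odds_ratio(strata)
    delta_mh = -2.35 * np.log(alpha_mh)   # ETS delta scale; negative favors reference
    print(alpha_mh, delta_mh)
    ```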

  9. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between
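
    As one concrete example of the methods compared, Beale's ratio estimator corrects the naive flow-weighted ratio for covariance between load and flow. A sketch of one common form of the estimator, assuming daily sampled loads and flows plus a continuous discharge record; all numbers are illustrative.

    ```python
    import numpy as np

    def beale_ratio_load(sample_load, sample_flow, mean_flow_all):
        """Beale's bias-corrected ratio estimator of mean load.

        sample_load, sample_flow : loads and flows on the n sampled days
        mean_flow_all            : mean flow over the whole period, from the
                                   continuous discharge record
        """
        n = len(sample_load)
        ml, mq = np.mean(sample_load), np.mean(sample_flow)
        s_lq = np.cov(sample_load, sample_flow, ddof=1)[0, 1]
        s_qq = np.var(sample_flow, ddof=1)
        correction = (1 + s_lq / (n * ml * mq)) / (1 + s_qq / (n * mq**2))
        return mean_flow_all * (ml / mq) * correction

    rng = np.random.default_rng(2)
    q = rng.lognormal(2.0, 0.5, size=30)            # sampled flows
    l = 0.4 * q**1.2 * rng.lognormal(0, 0.2, 30)    # sampled loads
    print(beale_ratio_load(l, q, mean_flow_all=8.0))
    ```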

  10. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  11. Improving Junior High School Students' Mathematical Analogical Ability Using Discovery Learning Method

    ERIC Educational Resources Information Center

    Maarif, Samsul

    2016-01-01

    The aim of this study was to identify the influence of the discovery learning method on the mathematical analogical ability of junior high school students. The research used a 2x2 factorial design with two-way ANOVA. The population of the research comprised the students of SMPN 13 Jakarta (State Junior High School 13 of Jakarta)…

  12. The Effect of Virtual Language Learning Method on Writing Ability of Iranian Intermediate EFL Learners

    ERIC Educational Resources Information Center

    Khoshsima, Hooshang; Sayadi, Fatemeh

    2016-01-01

    This study aimed at investigating the effect of the virtual language learning method on Iranian intermediate EFL learners' writing ability. The study was conducted with 20 English Translation students at Chabahar Maritime University who were assigned to two groups, control and experimental, after ensuring their homogeneity by administering a TOEFL…

  13. Affect Abilities Training--A Competency Based Method for Counseling Persons with Mental Retardation.

    ERIC Educational Resources Information Center

    Corcoran, James R.

    1982-01-01

    Affect Abilities Training (AAT) illustrates the kinds of concrete methods which can be used to further the affective development of persons with mental retardation. The objective of AAT is to develop those emotional behaviors upon which the individual (and society) place value while decreasing those responses which are counterproductive to…

  14. A Study on the Spatial Abilities of Prospective Social Studies Teachers: A Mixed Method Research

    ERIC Educational Resources Information Center

    Yurt, Eyüp; Tünkler, Vural

    2016-01-01

    This study investigated prospective social studies teachers' spatial abilities. It was conducted with 234 prospective teachers attending Social Studies Teaching departments at Education Faculties of two universities in Central and Southern Anatolia. This study, designed according to the explanatory-sequential design, used a mixed research method,…

  15. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods worsens with inaccurate source number estimation. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing, which relies on multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to multidimensional form, and the data covariance matrix is constructed. The estimation algorithms used in array signal processing can then be utilized. Information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
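
    A sketch of the delay-embedding idea with an MDL decision rule, assuming white noise and a synthetic two-tone signal; this follows the generic array-processing MDL criterion, not the paper's exact smoothing variant.

    ```python
    import numpy as np

    def mdl_source_number(x, p=8, max_k=None):
        """Estimate source number from a single-channel record by delay
        embedding: stack p delayed copies, form a covariance matrix, and
        pick the k minimizing the MDL criterion over its eigenvalues."""
        N = len(x) - p + 1
        X = np.column_stack([x[i:i + N] for i in range(p)])   # N x p delay matrix
        R = X.T @ X / N                                       # covariance estimate
        lam = np.sort(np.linalg.eigvalsh(R))[::-1]
        best_k, best_mdl = 0, np.inf
        for k in range(max_k or p - 1):
            tail = lam[k:]
            gm = np.exp(np.mean(np.log(tail)))                # geometric mean
            am = np.mean(tail)                                # arithmetic mean
            mdl = -N * (p - k) * np.log(gm / am) \
                  + 0.5 * k * (2 * p - k) * np.log(N)
            if mdl < best_mdl:
                best_k, best_mdl = k, mdl
        return best_k

    t = np.arange(4000)
    sig = np.sin(0.1 * t) + 0.7 * np.sin(0.27 * t)            # two sources
    x = sig + 0.1 * np.random.default_rng(3).standard_normal(t.size)
    # Each real sinusoid occupies two eigenvalues, so ~4 is expected here.
    print(mdl_source_number(x))
    ```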

  16. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  17. An automated method of tuning an attitude estimator

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1995-01-01

    Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.

  18. Evaluating maximum likelihood estimation methods to determine the hurst coefficients

    NASA Astrophysics Data System (ADS)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
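
    For orientation, a simpler alternative estimator (not the S-MLE of the abstract) is the aggregated-variance method, which uses the fGn scaling Var(X^(m)) ∝ m^(2H-2) for block means of size m. A minimal sketch on white noise, where H ≈ 0.5 is expected:

    ```python
    import numpy as np

    def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
        """Aggregated-variance estimate of H: the log-log slope of
        block-mean variance versus block size m gives 2H - 2."""
        logs_m, logs_v = [], []
        for m in block_sizes:
            nb = len(x) // m
            means = x[:nb * m].reshape(nb, m).mean(axis=1)
            logs_m.append(np.log(m))
            logs_v.append(np.log(means.var()))
        slope = np.polyfit(logs_m, logs_v, 1)[0]
        return 1 + slope / 2.0

    x = np.random.default_rng(4).standard_normal(2**12)  # H should be ~0.5
    print(hurst_aggvar(x))
    ```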

  19. Evaluating methods for estimating local effective population size with and without migration.

    PubMed

    Gilbert, Kimberly J; Whitlock, Michael C

    2015-08-01

    Effective population size is a fundamental parameter in population genetics, evolutionary biology, and conservation biology, yet its estimation can be fraught with difficulties. Several methods to estimate Ne from genetic data have been developed that take advantage of various approaches for inferring Ne. The ability of these methods to accurately estimate Ne, however, has not been comprehensively examined. In this study, we employ seven of the most cited methods for estimating Ne from genetic data (Colony2, CoNe, Estim, MLNe, ONeSAMP, TMVP, and NeEstimator including LDNe) across simulated datasets with populations experiencing migration or no migration. The simulated population demographies are an isolated population with no immigration, an island-model metapopulation with a sink population receiving immigrants, and an isolation-by-distance stepping stone model of populations. We find considerable variance in the performance of these methods, both within and across demographic scenarios, with some methods performing very poorly. The most accurate estimates of Ne can be obtained by using LDNe, MLNe, or TMVP; however, each of these approaches is outperformed by another in a differing demographic scenario. Knowledge of the approximate demography of the population, as well as the availability of temporal data, largely improves Ne estimates.

  20. A new method for estimating extreme rainfall probabilities

    SciTech Connect

    Harper, G.A.; O'Hara, T.F. ); Morris, D.I. )

    1994-02-01

    As part of an EPRI-funded research program, the Yankee Atomic Electric Company developed a new method for estimating probabilities of extreme rainfall. It can be used, along with other techniques, to improve the estimation of probable maximum precipitation values for specific basins or regions.

  1. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  2. Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence

    ERIC Educational Resources Information Center

    Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.

    2008-01-01

    This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…

  3. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars

    PubMed Central

    Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao

    2016-01-01

    Traditional monopulse angle estimation is mainly based on phase comparison and amplitude comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has been little studied in previous works. As noise in wideband radars has larger bandwidth than in narrowband radars, the challenge lies in accumulating energy from the high resolution range profile (HRRP) of monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate performance similar to the traditional amplitude comparison method, indicating that the proposed method for angle estimation can be adopted. With the proposed method, future radars may only need wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen anti-jamming capability. More importantly, the estimated angle does not become ambiguous at an arbitrary angle, which can significantly extend the estimated angle range in wideband radars. PMID:27271629

  5. Validity of Using Two Numerical Analysis Techniques To Estimate Item and Ability Parameters via MMLE: Gauss-Hermite Quadrature Formula and Mislevy's Histogram Solution.

    ERIC Educational Resources Information Center

    Seong, Tae-Je

    The similarity of item and ability parameter estimations was investigated using two numerical analysis techniques via marginal maximum likelihood estimation (MMLE) with a large simulated data set (n=1,000 examinees) and changing the number of quadrature points. MMLE estimation uses a numerical analysis technique to integrate examinees' abilities…
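
    A sketch of the Gauss-Hermite step in MMLE: the marginal probability of a response pattern integrates the conditional likelihood against a standard-normal ability density, replaced by a weighted sum over quadrature points. Item parameters and responses here are illustrative.

    ```python
    import numpy as np

    a = np.array([1.0, 1.5, 0.7])      # 2PL discriminations (made up)
    b = np.array([-0.5, 0.0, 1.0])     # difficulties (made up)
    u = np.array([1, 0, 1])            # a response pattern

    nodes, weights = np.polynomial.hermite.hermgauss(21)
    theta = np.sqrt(2.0) * nodes       # change of variables for a N(0,1) prior
    w = weights / np.sqrt(np.pi)

    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    like = np.prod(np.where(u == 1, p, 1 - p), axis=1)
    marginal = np.sum(w * like)        # quadrature approximation of the integral
    print(marginal)
    ```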

  6. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods

    PubMed Central

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-01-01

    Background & objectives: Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimation of haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Methods: Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both direct and indirect cyanmethaemoglobin methods, and the results were compared. Results: The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was 59.6 and 78.2 per cent by the direct and indirect methods, respectively. Sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. Interpretation & conclusions: The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared to the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating true haemoglobin level. More studies should be undertaken to establish agreement and a correction factor between the direct and indirect cyanmethaemoglobin methods. PMID:28256465

  7. A method for the estimation of urinary testosterone

    PubMed Central

    Ismail, A. A. A.; Harkness, R. A.

    1966-01-01

    1. A method has been developed for the estimation of testosterone in human urine by using acid hydrolysis followed by a quantitative form of a modified Girard reaction that separates a `conjugated-ketone' fraction from a urine extract; this is followed by column chromatography on alumina and paper chromatography. 2. Comparison of methods of estimation of testosterone in the final fraction shows that estimation by gas–liquid chromatography is more reproducible than by colorimetric methods applied to the same eluates from the paper chromatogram. 3. The mean recovery of testosterone by gas–liquid chromatography is 79·5%, and this method appears to be specific for testosterone. 4. The procedure is relatively rapid. Six determinations can be performed by one worker in 2 days. 5. Results of determinations on human urine are briefly presented. In general, they are similar to earlier estimates, but the maximal values are lower. PMID:5964968

  8. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  9. A bootstrap method for estimating uncertainty of water quality trends

    USGS Publications Warehouse

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS-estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.

  10. Predictive ability of genomic selection models for breeding value estimation on growth traits of Pacific white shrimp Litopenaeus vannamei

    NASA Astrophysics Data System (ADS)

    Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai

    2016-10-01

    Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near to zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
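
    Of the three models compared, RR-BLUP is the simplest to sketch: marker effects are shrunken ridge-regression coefficients, and the GEBV is the genotype matrix times those effects. A minimal illustration with synthetic genotypes and phenotypes; the dimensions and ridge parameter are placeholders, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 200, 1000                                     # individuals, SNP markers
    Z = rng.integers(0, 3, size=(n, m)).astype(float)    # 0/1/2 genotype codes
    Z -= Z.mean(axis=0)                                  # center markers
    true_u = rng.normal(scale=0.05, size=m)              # synthetic marker effects
    y = Z @ true_u + rng.normal(scale=1.0, size=n)       # synthetic phenotype
    y -= y.mean()

    lam = 100.0                                          # ridge (shrinkage) parameter
    u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
    gebv = Z @ u_hat
    print(np.corrcoef(gebv, Z @ true_u)[0, 1])           # predictive check
    ```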

  11. Evapotranspiration: Mass balance measurements compared with flux estimation methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...

  12. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  13. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS update employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  14. A hybrid displacement estimation method for ultrasonic elasticity imaging.

    PubMed

    Chen, Lujie; Housden, R; Treece, Graham; Gee, Andrew; Prager, Richard

    2010-04-01

    Axial displacement estimation is fundamental to many freehand quasistatic ultrasonic strain imaging systems. In this paper, we present a novel estimation method that combines the strengths of quality-guided tracking, multi-level correlation, and phase-zero search to achieve high levels of accuracy and robustness. The paper includes a full description of the hybrid method, in vivo examples to illustrate the method's clinical relevance, and finite element simulations to assess its accuracy. Quantitative and qualitative comparisons are made with leading single- and multi-level alternatives. In the in vivo examples, the hybrid method produces fewer obvious peak-hopping errors, and in simulation, the hybrid method is found to reduce displacement estimation errors by 5 to 50%. With typical clinical data, the hybrid method can generate more than 25 strain images per second on commercial hardware; this is comparable with the alternative approaches considered in this paper.

  15. Methods for Estimation of Market Power in Electric Power Industry

    NASA Astrophysics Data System (ADS)

    Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.

    2012-01-01

    The article addresses the topical issue of the newly arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, a proposal for determining the relevant marketplace that takes into account the specific features of the power system is given, together with a theoretical example of estimating the residual supply index (RSI) in the electricity market.
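
    The RSI itself is simple arithmetic: the share of demand the market could still serve without a given supplier. A minimal sketch with illustrative numbers; values below roughly one indicate the supplier is pivotal.

    ```python
    def residual_supply_index(total_capacity, supplier_capacity, demand):
        """Share of demand the rest of the market could cover without
        this supplier; RSI < 1 means the supplier is pivotal."""
        return (total_capacity - supplier_capacity) / demand

    # Illustrative hour: 10 GW of capacity, 3 GW owned by the supplier, 8 GW load
    print(residual_supply_index(10.0, 3.0, 8.0))  # 0.875 -> supplier is pivotal
    ```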

  16. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  17. Evaluating the ability of Bayesian clustering methods to detect hybridization and introgression using an empirical red wolf data set.

    PubMed

    Bohling, Justin H; Adams, Jennifer R; Waits, Lisette P

    2013-01-01

    Bayesian clustering methods have emerged as a popular tool for assessing hybridization using genetic markers. Simulation studies have shown these methods perform well under certain conditions; however, these methods have not been evaluated using empirical data sets with individuals of known ancestry. We evaluated the performance of two clustering programs, baps and structure, with genetic data from a reintroduced red wolf (Canis rufus) population in North Carolina, USA. Red wolves hybridize with coyotes (C. latrans), and a single hybridization event resulted in introgression of coyote genes into the red wolf population. A detailed pedigree has been reconstructed for the wild red wolf population that includes individuals of 50-100% red wolf ancestry, providing an ideal case study for evaluating the ability of these methods to estimate admixture. Using 17 microsatellite loci, we tested the programs using different training set compositions and varying numbers of loci. structure was more likely than baps to detect an admixed genotype and correctly estimate an individual's true ancestry composition. However, structure was more likely to misclassify a pure individual as a hybrid. Both programs were outperformed by a maximum-likelihood-based test designed specifically for this system, which never misclassified a hybrid (50-75% red wolf) as a red wolf or vice versa. Training set composition and the number of loci both had an impact on accuracy but their relative importance varied depending on the program. Our findings demonstrate the importance of evaluating methods used for detecting admixture in the context of endangered species management.

  18. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. Such spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.

  19. A new class of methods for functional connectivity estimation

    NASA Astrophysics Data System (ADS)

    Lin, Wutu

    Measuring functional connectivity from neural recordings is important in understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between pair-wise correlations and the physiological connections inside the neural network is unclear; therefore, the power of inferring a physiological basis from functional connectivity estimation is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking; and (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, (1) I tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide theoretical support for their superiority, and (3) apply them to simulated fMRI data as an application.

  20. Demographic estimation methods for plants with unobservable life-states

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.; Schaub, M.

    2005-01-01

    Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10-year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or some states, and estimates of demographic rates sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous

  1. A new method for parameter estimation in nonlinear dynamical equations

    NASA Astrophysics Data System (ADS)

    Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao

    2015-01-01

    Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization, and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive, and self-learning features of EM, which are inspired by biological natural selection, mutation, and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic model, the Lorenz equations (Lorenz 1963). The results indicate that the new method provides fast and effective parameter estimation regardless of whether some or all of the parameters of the Lorenz equations are unknown, and that it has a good convergence rate. Because noise is inevitable in observational data, its influence on the performance of the method was also investigated. Strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise, but the precision of the estimates remains acceptable for weaker noise, e.g. an SNR of 20 or 30 dB, indicating that the method has some robustness to noise.
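
    The paper's evolutionary-modelling algorithm is not reproduced here; as a hedged illustration of the same idea (evolutionary search minimizing the mismatch between observed and simulated trajectories), SciPy's differential evolution can stand in:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lorenz(t, state, sigma, rho, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def simulate(params, t_eval, x0=(1.0, 1.0, 1.0)):
    sol = solve_ivp(lorenz, (t_eval[0], t_eval[-1]), x0,
                    t_eval=t_eval, args=tuple(params), rtol=1e-8)
    return sol.y

t_eval = np.linspace(0.0, 2.0, 200)                    # short window: chaos amplifies errors
observed = simulate((10.0, 28.0, 8.0 / 3.0), t_eval)   # "data" from the true parameters

def cost(params):
    return np.mean((simulate(params, t_eval) - observed) ** 2)

result = differential_evolution(cost, bounds=[(5, 15), (20, 35), (1, 4)], seed=0)
print(result.x)  # should approach (10, 28, 8/3)
```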

  2. Comparison of volume estimation methods for pancreatic islet cells

    NASA Astrophysics Data System (ADS)

    Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan

    2016-03-01

    In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important quality control criterion, and the individual islet volume distribution is also of interest -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have a spherical or ellipsoidal shape, as well as a local stereological method, the nucleator, which does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
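
    A hedged sketch of the shape-based estimators described above: from a segmented 2D islet mask, the sphere model uses the equivalent-circle radius, and the ellipsoid model uses the in-plane semi-axes with the out-of-plane semi-axis set equal to the minor axis (that last choice is an assumption for illustration, not necessarily the paper's; the nucleator is not sketched):

```python
import numpy as np

def sphere_volume_from_mask(mask, pixel_size):
    """Assume the islet is a sphere with the same cross-section area."""
    area = mask.sum() * pixel_size ** 2
    r = np.sqrt(area / np.pi)            # equivalent-circle radius
    return 4.0 / 3.0 * np.pi * r ** 3

def ellipsoid_volume(a, b):
    """Assume an ellipsoid with in-plane semi-axes a >= b and an
    out-of-plane semi-axis equal to b (an illustrative assumption)."""
    return 4.0 / 3.0 * np.pi * a * b * b
```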

  3. A Group Contribution Method for Estimating Cetane and Octane Numbers

    SciTech Connect

    Kubic, William Louis

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supply. Often, the physical properties needed to assess the viability of a potential biofuel are not available, and the only reliable information may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in the types of functional groups covered and in their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.

  4. Motion estimation using point cluster method and Kalman filter.

    PubMed

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid-body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect of adding the Kalman filter was noted; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Adding a Kalman filter to the PCT method in the estimation procedure of rigid-body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
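
    A minimal constant-velocity Kalman filter of the kind that could precede the PCT step, smoothing a noisy angle measurement (the state model, matrices, and noise levels are illustrative assumptions, not the paper's values):

```python
import numpy as np

def kalman_smooth_angle(measurements, dt, q=1e-4, r=1e-2):
    """1D constant-velocity Kalman filter over noisy angle samples.

    State: [angle, angular_rate]; q and r are assumed process and
    measurement noise variances.
    """
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only the angle is observed
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = A @ x                            # predict
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R                  # update with output error
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```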

  5. The method of assessment of the grinding wheel cutting ability in the plunge grinding

    NASA Astrophysics Data System (ADS)

    Nadolny, Krzysztof

    2012-09-01

    This article presents a method for comparative assessment of grinding wheel cutting ability in plunge grinding kinematics. The new method facilitates multicriterial assessment of the working conditions of the abrasive grains and bond bridges, as well as of the wear mechanisms of the grinding wheel active surface (GWAS) that occur during grinding, while limiting the range of workshop tests required. The work describes the methodology for assessing grinding wheel cutting ability in a short plunge-grinding test, lasting for example 3 seconds, with a specially shaped grinding wheel. The macrogeometry modification applied in the developed method consists of forming a cone, or several zones of various diameters, on the wheel surface during the dressing cut. An exemplary application of two variants of the method in internal cylindrical plunge grinding of 100Cr6 steel is presented. Grinding wheels with microcrystalline corundum grains and a ceramic bond were assessed. Analysis of the recorded machining results showed greater efficacy for the variant using a grinding wheel with zones of various diameters. The method allows comparative tests of different grinding wheels, with various grinding parameters and different machined materials.

  6. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that apply to the management, control, and improvement of processes and to the analysis of technical measurement results. This paper describes an analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories. For this analysis, cause-and-effect (Ishikawa) diagrams concerning the application of statistical methods to the estimation of measurement results were constructed.

  7. Evaluation of alternative methods for estimating reference evapotranspiration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evapotranspiration is an important component in water-balance and irrigation scheduling models. While the FAO-56 Penman-Monteith method has become the de facto standard for estimating reference evapotranspiration (ETo), it is a complex method requiring several weather parameters. Required weather ...
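
    For context, the FAO-56 Penman-Monteith reference evapotranspiration equation that the entry refers to combines a radiation term and an aerodynamic term; a sketch of the standard daily form (inputs must already be in FAO-56 units, with the vapour-pressure slope and psychrometric constant precomputed):

```python
def eto_fao56(delta, gamma, Rn, G, T, u2, es, ea):
    """FAO-56 Penman-Monteith reference ET (mm/day).

    delta: slope of the vapour-pressure curve (kPa/degC)
    gamma: psychrometric constant (kPa/degC)
    Rn, G: net radiation and soil heat flux (MJ/m2/day)
    T:     mean daily air temperature (degC)
    u2:    wind speed at 2 m height (m/s)
    es, ea: saturation and actual vapour pressure (kPa)
    """
    radiation = 0.408 * delta * (Rn - G)
    aerodynamic = gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    return (radiation + aerodynamic) / (delta + gamma * (1.0 + 0.34 * u2))
```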

  8. Comparison of Methods for Estimating and Testing Latent Variable Interactions.

    ERIC Educational Resources Information Center

    Moulder, Bradley C.; Algina, James

    2002-01-01

    Used simulation to compare structural equation modeling methods for estimating and testing hypotheses about an interaction between continuous variables. Findings indicate that the two-stage least squares procedure exhibited more bias and lower power than the other methods. The Jaccard-Wan procedure (J. Jaccard and C. Wan, 1995) and maximum…

  9. Assessing the sensitivity of methods for estimating principal causal effects.

    PubMed

    Stuart, Elizabeth A; Jo, Booil

    2015-12-01

    The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE) - the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition - is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this article, we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood ('joint') method that assumes the 'exclusion restriction' (ER), and a propensity score-based method that relies on 'principal ignorability.' We detail the assumptions underlying each approach, and assess each method's sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the ER-based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership.
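
    Neither of the paper's two estimators is reproduced here; for orientation, the simplest CACE estimator under randomization and the exclusion restriction is the instrumental-variable (Wald) ratio of intention-to-treat effects, sketched below:

```python
import numpy as np

def cace_wald(y, d, z):
    """IV/Wald estimate of the complier average causal effect.

    y: outcomes, d: treatment actually received (0/1),
    z: randomized assignment (0/1).
    CACE = ITT effect on outcome / ITT effect on treatment receipt.
    """
    y, d, z = map(np.asarray, (y, d, z))
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()  # compliance-rate difference
    return itt_y / itt_d
```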

  10. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  11. Assessment of Methods for Estimating Risk to Birds from ...

    EPA Pesticide Factsheets

    The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16

  12. Precision of two methods for estimating age from burbot otoliths

    USGS Publications Warehouse

    Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.

    2011-01-01

    Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (mean age = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths, the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.

  13. Bounded Self-Weights Estimation Method for Non-Local Means Image Denoising Using Minimax Estimators.

    PubMed

    Nguyen, Minh Phuong; Chun, Se Young

    2017-04-01

    A non-local means (NLM) filter is a weighted average of a large number of non-local pixels with various image intensity values. By averaging many noisy pixels with appropriately chosen weights, NLM filters have been shown to deliver powerful denoising performance with excellent detail preservation. The NLM weight between two different pixels is determined by the similarity of the patches surrounding the two pixels and by a smoothing parameter. Another important factor that influences denoising performance is the self-weight value for the same pixel. The recently introduced local James-Stein type center pixel weight estimation method (LJS) outperforms other existing methods in determining the contribution of the center pixel in the NLM filter. However, the LJS method may produce excessively large self-weight estimates since no upper bound is assumed, and it uses a relatively large local area for estimating the self-weights, which may lead to a strong bias. In this paper, we investigate these issues in the LJS method and propose novel local self-weight estimation methods using direct bounds (LMM-DB) and reparametrization (LMM-RP), based on Baranchik's minimax estimator. Both the LMM-DB and LMM-RP methods were evaluated on a wide range of natural images and a clinical MRI image, with various levels of additive Gaussian noise. Our proposed parameter selection methods yielded an improved bias-variance trade-off, a higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts compared with the classical NLM and LJS methods. Our proposed methods also provide a heuristic way to select a suitable global smoothing parameter that can yield PSNR values close to the optimal ones.
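
    A minimal sketch of the classical NLM weighting that the self-weight question sits on top of: each neighbour's weight decays with the squared patch distance, and the center (self) weight is the quantity the LJS/LMM methods try to estimate well. Here the self-weight is naively set to the maximum neighbour weight, a common heuristic, not the paper's estimator; boundary handling is omitted:

```python
import numpy as np

def nlm_pixel(image, i, j, patch=3, search=7, h=0.1):
    """Denoise pixel (i, j) by classical non-local means.

    Assumes (i, j) is far enough from the image border for all
    patch and search-window slices to be valid.
    """
    p, s = patch // 2, search // 2
    ref = image[i - p:i + p + 1, j - p:j + p + 1]
    weights, values = [], []
    for m in range(i - s, i + s + 1):
        for n in range(j - s, j + s + 1):
            if (m, n) == (i, j):
                continue
            cand = image[m - p:m + p + 1, n - p:n + p + 1]
            d2 = np.mean((ref - cand) ** 2)        # patch similarity
            weights.append(np.exp(-d2 / h ** 2))
            values.append(image[m, n])
    weights.append(max(weights))                   # heuristic self-weight
    values.append(image[i, j])
    w = np.array(weights)
    return np.dot(w, values) / w.sum()
```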

  14. Spectral estimation of plasma fluctuations. I. Comparison of methods

    SciTech Connect

    Riedel, K.S.; Sidorenko, A.; Thomson, D.J.

    1994-03-01

    The relative root mean squared errors (RMSE) of nonparametric methods for spectral estimation are compared for microwave scattering data of plasma fluctuations. These methods reduce the variance of the periodogram estimate by averaging the spectrum over a frequency bandwidth. As the bandwidth increases, the variance decreases, but the bias error increases. The plasma spectra vary by over four orders of magnitude, and therefore using a spectral window is necessary. The smoothed tapered periodogram is compared with adaptive multiple taper methods and hybrid methods. It is found that a hybrid method, which uses four orthogonal tapers and then applies a kernel smoother, performs best. For 300-point data segments, even an optimized smoothed tapered periodogram has a 24% larger relative RMSE than the hybrid method. Two new adaptive multitaper weightings which outperform Thomson's original adaptive weighting are presented.
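
    A hedged sketch of the hybrid estimator's structure as described above: average a few orthogonal DPSS-tapered periodograms, then apply a kernel smoother (here a simple boxcar; NW, K, and the kernel width are assumptions, and the paper's adaptive weightings are not reproduced):

```python
import numpy as np
from scipy.signal.windows import dpss

def hybrid_psd(x, fs, NW=4, K=4, smooth_bins=5):
    """Average K DPSS-tapered periodograms, then kernel-smooth."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                  # shape (K, N)
    eigspec = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = eigspec.mean(axis=0) / fs               # multitaper average
    kernel = np.ones(smooth_bins) / smooth_bins   # boxcar kernel smoother
    psd = np.convolve(psd, kernel, mode="same")
    return np.fft.rfftfreq(N, 1.0 / fs), psd
```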

  15. Improved method for estimating tree crown diameter using high-resolution airborne data

    NASA Astrophysics Data System (ADS)

    Brovkina, Olga; Latypov, Iscander Sh.; Cienciala, Emil; Fabianek, Tomas

    2016-04-01

    Automatic mapping of tree crown size (radius, diameter, or width) from remote sensing can provide a major benefit for practical and scientific purposes, but requires the development of accurate methods. This study presents an improved method for average tree crown diameter estimation at the forest plot level from high-resolution airborne data. The improved method combines a window binarization procedure with a granulometric algorithm, and avoids the complicated crown delineation procedure currently used to estimate crown size. The systematic error in average crown diameter estimates is corrected with the improved method. The method is tested on coniferous, beech, and mixed-species forest plots using airborne images of various spatial resolutions. The absolute (quantitative) accuracy of the improved crown diameter estimates is comparable to or higher than that of current methods for both monospecies and mixed-species plots. The ability of the improved method to produce good estimates of average crown diameter for monocultures and mixed species, to use remote sensing data of various spatial resolutions, and to operate in automatic mode suggests its applicability to a wide range of forest systems.

  16. Analytic study of the Tadoma method: language abilities of three deaf-blind subjects.

    PubMed

    Chomsky, C

    1986-09-01

    This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.

  17. Estimating the ability of plants to plastically track temperature-mediated shifts in the spring phenological optimum.

    PubMed

    Tansey, Christine J; Hadfield, Jarrod D; Phillimore, Albert B

    2017-02-10

    One consequence of rising spring temperatures is that the optimum timing of key life-history events may advance. Where this is the case, a population's fate may depend on the degree to which it is able to track a change in the optimum timing either via plasticity or via adaptation. Estimating the effect that temperature change will have on optimum timing using standard approaches is logistically challenging, with the result that very few estimates of this important parameter exist. Here we adopt an alternative statistical method that substitutes space for time to estimate the temperature sensitivity of the optimum timing of 22 plant species based on >200 000 spatiotemporal phenological observations from across the United Kingdom. We find that first leafing and flowering dates are sensitive to forcing (spring) temperatures, with optimum timing advancing by an average of 3 days °C⁻¹ and plastic responses to forcing between −3 and −8 days °C⁻¹. Chilling (autumn/winter) temperatures and photoperiod tend to be important cues for species with early and late phenology, respectively. For most species, we find that plasticity is adaptive, and for seven species, plasticity is sufficient to track geographic variation in the optimum phenology. For four species, we find that plasticity is significantly steeper than the optimum slope that we estimate between forcing temperature and phenology, and we examine possible explanations for this countergradient pattern, including local adaptation.

  18. Fault detection in electromagnetic suspension systems with state estimation methods

    SciTech Connect

    Sinha, P.K.; Zhou, F.B.; Kutiyal, R.S. . Dept. of Engineering)

    1993-11-01

    High-speed maglev vehicles need a high level of safety that depends on the reliability of the whole vehicle system. There are many ways of attaining high reliability for the system. The conventional method uses redundant hardware with majority-vote logic circuits. Hardware redundancy costs more, weighs more, and occupies more space than analytical redundancy. Analytically redundant systems use parameter identification and state estimation methods based on system models to detect and isolate faults in instruments (sensors), actuators, and components. In this paper the authors use the Luenberger observer to estimate three state variables of the electromagnetic suspension system: position (airgap), vehicle velocity, and vertical acceleration. These estimates are compared with the corresponding sensor outputs for fault detection. In this paper, they consider fault detection and isolation (FDI) of the accelerometer, the sensor which provides the ride-quality information.
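
    A minimal discrete-time Luenberger observer of the type described, with illustrative matrices (the paper's EMS model and observer gains are not reproduced; the system matrices and pole locations below are assumptions):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative discrete-time system x[k+1] = A x + B u, y = C x,
# with state (airgap, velocity, acceleration) -- not the paper's model.
A = np.array([[1.0, 0.01, 0.0],
              [0.0, 1.0, 0.01],
              [0.0, 0.0, 0.95]])
B = np.array([[0.0], [0.0], [0.1]])
C = np.array([[1.0, 0.0, 0.0]])          # only the airgap is measured here

# Observer gain via pole placement on the dual system (A^T, C^T).
L = place_poles(A.T, C.T, [0.5, 0.6, 0.7]).gain_matrix.T

def observer_step(x_hat, u, y):
    """One Luenberger update: predict, then correct with the output error.

    A persistent mismatch between y and C @ x_hat (the residual) can be
    thresholded for fault detection, as in the paper's FDI scheme.
    """
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)
```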

  19. Simplified triangle method for estimating evaporative fraction over soybean crops

    NASA Astrophysics Data System (ADS)

    Silva-Fuzzo, Daniela Fernanda; Rocha, Jansle Vieira

    2016-10-01

    Accurate estimates are emerging with technological advances in remote sensing, and the triangle method has been demonstrated to be a useful tool for the estimation of evaporative fraction (EF). The purpose of this study was to estimate the EF using the triangle method at the regional level. We used data from the Moderate Resolution Imaging Spectroradiometer (MODIS) orbital sensor, namely surface temperature and a vegetation index, for a 10-year period (2002/2003 to 2011/2012) of cropping seasons in the state of Paraná, Brazil. The triangle method showed considerable skill for the EF, and validation of the estimates against observed climatological water balance data gave values >0.8 for Willmott's modified index of agreement "d" and R² values between 0.6 and 0.7 for some counties. The errors were low for all years analyzed, and the tests showed that the estimated data are very close to the observed data. Based on this statistical validation, we can say that the triangle method is a consistent tool; it is useful because it requires only remote sensing images as input, and it can support large-scale agroclimatic monitoring, especially in countries of great territorial extent, such as Brazil, which lack a dense network of meteorological ground stations.
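
    A hedged sketch of the triangle (Ts-VI) idea: within each vegetation-index bin, the warm and cold edges of the temperature scatter define the limits, and EF follows from the pixel's position between them. The percentile-based edge definitions below are assumptions for illustration; published implementations define the edges differently:

```python
import numpy as np

def triangle_ef(ts, vi, n_bins=20):
    """Evaporative fraction from the Ts-VI triangle/trapezoid space."""
    ts, vi = np.ravel(ts), np.ravel(vi)
    ef = np.full(ts.shape, np.nan)
    edges = np.linspace(vi.min(), vi.max(), n_bins + 1)
    for b in range(n_bins):
        sel = (vi >= edges[b]) & (vi <= edges[b + 1])
        if sel.sum() < 10:                    # skip sparse bins
            continue
        t_warm = np.percentile(ts[sel], 99)   # dry (warm) edge of the bin
        t_cold = np.percentile(ts[sel], 1)    # wet (cold) edge of the bin
        ef[sel] = (t_warm - ts[sel]) / (t_warm - t_cold)
    return np.clip(ef, 0.0, 1.0)
```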

  20. A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields

    NASA Astrophysics Data System (ADS)

    Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.

    2014-12-01

    Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law that links reflectivity to rainfall intensity have to date precluded the widespread use of radars for quantitative rainfall estimates in hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity; more importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the mean square error (MSE) of the estimated rainfall by 16% compared to a traditionally fitted Z-R relation. Rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula-based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates than the SIR method, particularly in areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method. The weight for each time period at each location is calculated based on the expected estimation error of each method.

  1. Application of the Marquardt least-squares method to the estimation of pulse function parameters

    NASA Astrophysics Data System (ADS)

    Lundengård, Karl; Rančić, Milica; Javor, Vesna; Silvestrov, Sergei

    2014-12-01

    Application of the Marquardt least-squares method (MLSM) to the estimation of non-linear parameters of functions used for representing various lightning current waveshapes is presented in this paper. Parameters are determined for the Pulse, Heidler's, and DEXP functions representing the first positive, first negative, and subsequent negative stroke currents as given in the IEC 62305-1 Standard Ed. 2, and also for some other fast- and slow-decaying lightning current waveshapes. The results prove the ability of the MLSM to estimate the parameters of functions important in lightning discharge modeling.
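
    As a hedged illustration, the Heidler function (one of the waveshapes mentioned) can be fitted with a Levenberg-Marquardt solver such as the one behind scipy.optimize.curve_fit; the synthetic data, starting values, and the fixed steepness n are assumptions, and the peak-correction factor of the full Heidler form is omitted for simplicity:

```python
import numpy as np
from scipy.optimize import curve_fit

def heidler(t, i0, tau1, tau2, n=10):
    """Heidler lightning-current waveshape (steepness n fixed here;
    peak-correction factor omitted for simplicity)."""
    x = (t / tau1) ** n
    return i0 * x / (1.0 + x) * np.exp(-t / tau2)

t = np.linspace(1e-7, 1e-4, 500)                     # seconds
data = heidler(t, 30e3, 1.8e-6, 95e-6)               # synthetic "measurement"
data += np.random.default_rng(0).normal(0, 300, t.size)

# curve_fit defaults to Levenberg-Marquardt for unconstrained problems
popt, _ = curve_fit(heidler, t, data, p0=(25e3, 1e-6, 80e-6))
print(popt)   # recovered (i0, tau1, tau2)
```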

  2. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
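
    A hedged sketch of the final step only: given minimum-separation samples from such a simulation, fit an analytical density motivated by extreme-value theory and read off a collision probability. The Weibull family, the synthetic samples, and the collision radius below are assumptions for illustration; the paper's exact density is not reproduced:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
# Stand-in for Monte Carlo output: nearest-debris separations (km)
min_distances = rng.weibull(2.0, size=5000) * 50.0

# Fit a Weibull density anchored at zero separation
shape, loc, scale = weibull_min.fit(min_distances, floc=0.0)

# Probability the separation falls below an assumed 0.1 km collision radius
p_collision = weibull_min.cdf(0.1, shape, loc=loc, scale=scale)
print(p_collision)
```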

  3. Estimation Method of Body Temperature from Upper Arm Temperature

    NASA Astrophysics Data System (ADS)

    Suzuki, Arata; Ryu, Kazuteru; Kanai, Nobuyuki

    This paper proposes a method for estimating body temperature by using the relation between the upper arm temperature and the atmospheric temperature. Conventional methods measure at the armpit or orally, because body temperature readings taken from the body surface are influenced by the atmospheric temperature. However, there is a correlation between the body surface temperature and the atmospheric temperature, and by using this correlation the body temperature can be estimated from the body surface temperature. The proposed method enables body temperature to be measured by a temperature sensor embedded in a blood pressure monitor cuff, so that simultaneous measurement of blood pressure and body temperature can be realized. The effectiveness of the proposed method is verified through actual body temperature experiments. The proposed method might contribute to reducing medical staff workloads in home medical care.

  4. Modified cross-validation as a method for estimating parameter

    NASA Astrophysics Data System (ADS)

    Shi, Chye Rou; Adnan, Robiah

    2014-12-01

    Best subsets regression is an effective approach for identifying models that attain their objectives with as few predictors as possible. Subset models may estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors. The question of how to pick the subset size λ depends on the bias-variance trade-off, and there are various methods for picking λ; a common rule is to pick the smallest model that minimizes an estimate of the expected prediction error. Because datasets are often small, repeated K-fold cross-validation is the most widely used method to estimate prediction error and select the model; the data are reshuffled and re-stratified before each round. However, the "one-standard-error" rule of repeated K-fold cross-validation always picks the most parsimonious model. The objective of this research is to modify the existing cross-validation method to avoid overfitting and underfitting models; a modified cross-validation method is proposed. This paper compares existing cross-validation with the modified cross-validation. Our results indicate that the modified cross-validation method is better at submodel selection and evaluation than the other methods.
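
    A minimal sketch of the baseline being modified, repeated K-fold cross-validation for choosing among candidate subset sizes (scikit-learn names; taking the first k columns stands in for a proper best-subset search, and the paper's modified re-stratification scheme is not reproduced):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=80, n_features=10, noise=5.0, random_state=0)

cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
for k in range(1, X.shape[1] + 1):
    scores = cross_val_score(LinearRegression(), X[:, :k], y,
                             scoring="neg_mean_squared_error", cv=cv)
    print(k, -scores.mean())   # pick the subset size minimizing CV error
```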

  5. A review of action estimation methods for galactic dynamics

    NASA Astrophysics Data System (ADS)

    Sanders, Jason L.; Binney, James

    2016-04-01

    We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent (iterative) methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.

  6. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    NASA Astrophysics Data System (ADS)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimating ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches, simulation scenarios were developed: we combined the estimated spatial forest age maps and the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to its plant-physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  7. The deposit size frequency method for estimating undiscovered uranium deposits

    USGS Publications Warehouse

    McCammon, R.B.; Finch, W.I.

    1993-01-01

    The deposit size frequency (DSF) method has been developed as a generalization of the method that was used in the National Uranium Resource Evaluation (NURE) program to estimate the uranium endowment of the United States. The DSF method overcomes difficulties encountered during the NURE program when geologists were asked to provide subjective estimates of (1) the endowed fraction of an area judged favorable (factor F) for the occurrence of undiscovered uranium deposits and (2) the tons of endowed rock per unit area (factor T) within the endowed fraction of the favorable area. Because the magnitudes of factors F and T were unfamiliar to nearly all of the geologists, most geologists responded by estimating the number of undiscovered deposits likely to occur within the favorable area and the average size of these deposits. The DSF method combines factors F and T into a single factor (F·T) that represents the tons of endowed rock per unit area of the undiscovered deposits within the favorable area. Factor F·T, provided by the geologist, is the estimated number of undiscovered deposits per unit area in each of a number of specified deposit-size classes. The number of deposit-size classes and the size interval of each class are based on the data collected from the deposits in known (control) areas. The DSF method affords greater latitude in making subjective estimates than the NURE method and emphasizes more of the everyday experience of exploration geologists. Using the DSF method, new assessments have been made for the "young, organic-rich" surficial uranium deposits in Washington and Idaho and for the solution-collapse breccia pipe uranium deposits in the Grand Canyon region in Arizona and adjacent Utah. © 1993 Oxford University Press.

  8. Estimation of uncertainty for contour method residual stress measurements

    DOE PAGES

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).

  10. Estimating Agricultural Water Use using the Operational Simplified Surface Energy Balance Evapotranspiration Estimation Method

    NASA Astrophysics Data System (ADS)

    Forbes, B. T.

    2015-12-01

    Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.

  11. Preparation of nanocrystalline bredigite powders with apatite-forming ability by a simple combustion method

    SciTech Connect

    Huang Xianghui; Chang Jiang

    2008-06-03

    Nanocrystalline bredigite (Ca₇MgSi₄O₁₆) powders were synthesized by a simple solution combustion method. Phase-pure bredigite powders with particle sizes ranging from 234 to 463 nm could be obtained at a relatively low temperature of 650 °C. The apatite-forming ability of the bredigite powders was examined by soaking them in a simulated body fluid. The compositional and morphological changes of the powders before and after soaking were analyzed by X-ray diffraction and scanning electron microscopy, and the results showed that hydroxyapatite was formed after soaking for 4 days.

  12. Fluorimetric method for simultaneous estimation of cortisol, corticosterone, and testosterone in plasma.

    PubMed Central

    Mattingly, D; Martin, H; Tyler, C

    1989-01-01

    The simultaneous estimation of steroids in plasma was carried out by the assay of cortisol, corticosterone, and testosterone. The method entails separation by means of thin layer chromatography, followed by conversion to a fluorophore and fluorimetric measurement. Its major advantages are its high specificity, its ability to detect unknown substances, and the ease with which it can be performed. The method has acceptable levels of accuracy and precision and the normal values obtained by it compare well with those given by methods in general use. PMID:2738170

  13. Inverse method for estimating shear stress in machining

    NASA Astrophysics Data System (ADS)

    Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.

    2016-01-01

    An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.

  14. Method for estimating spin-spin interactions from magnetization curves

    NASA Astrophysics Data System (ADS)

    Tamura, Ryo; Hukushima, Koji

    2017-02-01

    We develop a method to estimate the spin-spin interactions in the Hamiltonian from the observed magnetization curve by machine learning based on Bayesian inference. In our method, plausible spin-spin interactions are determined by maximizing the posterior distribution, which is the conditional probability of the spin-spin interactions in the Hamiltonian for a given magnetization curve with observation noise. The conditional probability is obtained with the Markov chain Monte Carlo simulations combined with an exchange Monte Carlo method. The efficiency of our method is tested using synthetic magnetization curve data, and the results show that spin-spin interactions are estimated with a high accuracy. In particular, the relevant terms of the spin-spin interactions are successfully selected from the redundant interaction candidates by the l1 regularization in the prior distribution.

  15. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
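
    For orientation, the superelevation back-calculation mentioned above commonly uses the forced-vortex relation, in which velocity follows from the bend's radius of curvature, the cross-channel flow-surface elevation difference, and the flow width. A sketch of that relation as commonly applied (some formulations add an empirical correction factor, which this sketch omits; the input values are illustrative):

```python
import math

def superelevation_velocity(radius_m, delta_h_m, width_m, g=9.81):
    """Forced-vortex estimate of mean debris-flow velocity at a bend.

    radius_m:  radius of curvature of the bend centerline (m)
    delta_h_m: superelevation (flow-surface height difference) across
               the channel (m)
    width_m:   flow width (m)
    """
    return math.sqrt(g * radius_m * delta_h_m / width_m)

print(superelevation_velocity(radius_m=30.0, delta_h_m=1.2, width_m=8.0))
```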

  16. Neural Network Based Method for Estimating Helicopter Low Airspeed

    DTIC Science & Technology

    1996-10-24

    The present invention relates generally to virtual sensors and, more particularly, to a means and method utilizing a neural network for estimating...helicopter airspeed at speeds below about 50 knots using only fixed system parameters (i.e., parameters measured or determined in a reference frame fixed relative to the helicopter fuselage) as inputs to the neural network.

  17. A preliminary comparison of different methods for observer performance estimation

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Brankov, Jovan G.

    2013-03-01

    In medical imaging, image quality is assessed by the degree to which a human observer can correctly perform a given diagnostic task. Therefore image quality is typically quantified using performance measurements from decision/detection theory, such as the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). In this paper we compare five different AUC estimation techniques widely used in the literature, including parametric and non-parametric methods. We compared the methods by equivalence hypothesis testing using a model observer as well as data sets from a previously published human observer study. The main conclusions of this work are: 1) if a small number of images are scored, different AUC estimation methods cannot be told apart due to the large variability in AUC estimates, regardless of whether image scores are reported on a continuous or quantized scale; and 2) if the number of scored images is large and image scores are reported on a continuous scale, all tested AUC estimation methods are statistically equivalent.
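
    One of the standard non-parametric estimators compared in such studies is the Mann-Whitney form of the AUC: the empirical probability that a diseased case scores above a non-diseased one, with ties counted half. A minimal sketch:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """Nonparametric AUC: probability a positive case scores higher
    than a negative one (ties count 1/2)."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    greater = (pos > neg).sum()
    ties = (pos == neg).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```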

  18. Inertial sensor-based methods in walking speed estimation: a systematic review.

    PubMed

    Yang, Shuozhi; Li, Qingguo

    2012-01-01

    Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have gradually been introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to the improved performance and decreasing cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic engines/databases: PubMed, ISI Web of Knowledge, SportDiscus, and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial-sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.

  19. Applications of truncated QR methods to sinusoidal frequency estimation

    NASA Technical Reports Server (NTRS)

    Hsieh, S. F.; Liu, K. J. R.; Yao, K.

    1990-01-01

    Three truncated QR methods are proposed for sinusoidal frequency estimation: (1) truncated QR without column pivoting (TQR), (2) truncated QR with preordered columns, and (3) truncated QR with column pivoting. It is demonstrated that the benefit of truncated SVD for high frequency resolution is achievable under the truncated QR approach at much lower computational cost. Other attractive features of the proposed methods include ease of updating, which is difficult for the SVD method, and numerical stability. Truncated QR methods thus offer efficient ways to identify sinusoids closely clustered in frequency under stationary and nonstationary conditions.

  20. Stress intensity estimates by a computer assisted photoelastic method

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1977-01-01

    Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

  1. Three Different Methods of Estimating LAI in a Small Watershed

    NASA Astrophysics Data System (ADS)

    Speckman, H. N.; Ewers, B. E.; Beverly, D.

    2015-12-01

    Leaf area index (LAI) is a critical input of models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI, most of which are labor intensive, and all often fail to converge on similar estimates. Recent large-scale bark-beetle-induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates due to the lack of differentiation between live and dead branches in dense canopy. Our study aims to quantify LAI using three different methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle-infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI using in-situ hemispherical canopy photographs that were then analyzed with Hemisfer software. The second technique used the Kaufmann (1982) allometrics from forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark-beetle-driven mortality. The final technique used airborne light detection and ranging (LIDAR) first DMS returns to estimate canopy heights and crown areas. LIDAR final returns provided topographical information and were ground-truthed during forest inventories. Once the data were collected, a fractal analysis was conducted comparing the three methods. Species composition was driven by slope position and elevation. Ultimately, the three techniques provided very different estimates of LAI, but each had its advantages: estimates from hemispherical photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR was able to quickly and

  2. Sealing Ability of Orthograde MTA and CEM Cement in Apically Resected Roots Using Bacterial Leakage Method

    PubMed Central

    Moradi, Saeed; Disfani, Reza; Ghazvini, Kiarash; Lomee, Mahdi

    2013-01-01

    Introduction: The aim of this in vitro study was to determine the sealing ability of orthograde ProRoot mineral trioxide aggregate (MTA) and calcium-enriched mixture (CEM) cement as root-end filling materials. Materials and Methods: Fifty-four extracted single-rooted human teeth were used. The samples were randomly divided into 3 experimental groups. In groups A and B, 4 mm of WMTA and CEM cement, respectively, were placed in an orthograde manner and 3 mm of the apices were resected after 24 hours. In group C, the apical 3 mm of each root was resected, the root-end was prepared with ultrasonic tips to a depth of 3 mm, and subsequently filled with MTA. The apical sealing ability was assessed with the bacterial leakage method. Statistical analysis was carried out with the Chi-square test. Results: There were no significant differences in the extent of bacterial leakage between the three experimental groups (P>0.05). Conclusion: Within the limitations of this in vitro study, we conclude that MTA and CEM cement can be placed in an orthograde manner when there is a potential need for root-end surgery. PMID:23922571

  3. A New Method For Cosmological Parameter Estimation From SNIa Data

    NASA Astrophysics Data System (ADS)

    March, Marisa; Trotta, R.; Berkes, P.; Starkman, G. D.; Vaudrevange, P. M.

    2011-01-01

    We present a new methodology to extract constraints on cosmological parameters from SNIa data obtained with the SALT2 lightcurve fitter. The power of our Bayesian method lies in its full exploitation of relevant prior information, which is ignored by the usual chi-square approach. Using realistic simulated data sets we demonstrate that our method outperforms the usual chi-square approach 2/3 of the time while achieving better long-term coverage properties. A further benefit of our methodology is its ability to produce a posterior probability distribution for the intrinsic dispersion of SNe. This feature can also be used to detect hidden systematics in the data.

  4. Intentional Movement Performance Ability (IMPA): a method for robot-aided quantitative assessment of motor function.

    PubMed

    Shin, Sung Yul; Kim, Jung Yoon; Lee, Sanghyeop; Lee, Junwon; Kim, Seung-Jong; Kim, ChangHwan

    2013-06-01

    The purpose of this paper is to propose a new assessment method for evaluating the motor function of patients who suffer from physical weakness after stroke, incomplete spinal cord injury (iSCI), or other diseases. In this work, we use a robotic device to obtain information about the interaction that occurs between the patient and the robot, and use it as a measure for assessing the patient. The Intentional Movement Performance Ability (IMPA) is defined as the root mean square of the interactive torque while the subject performs a given periodic movement with the robot. IMPA is proposed to quantitatively determine the level of the subject's impaired motor function. The method is indirectly tested by asking healthy subjects to lift a barbell to disturb their motor function. The experimental results show that the IMPA has potential for providing proper information on the subject's motor function level.
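
    The IMPA itself is a simple statistic; a sketch of the definition given above, applied to sampled interactive torque over a movement period (variable names are assumptions):

```python
import numpy as np

def impa(interactive_torque):
    """Root mean square of the sampled patient-robot interactive torque."""
    tau = np.asarray(interactive_torque, dtype=float)
    return np.sqrt(np.mean(tau ** 2))
```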

  5. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothesized distributions (log-Pearson Type III and Weibull) had lower mean square errors than did the Box-Cox transformation method or the log-Boughton method, which is based on a fit of plotting positions.
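
    The bootstrap comparison described above can be sketched as follows. This toy version resamples a short, hypothetical record of annual 7-day minimum flows and scores a single empirical-quantile estimator by mean square error; the study itself compared four distribution-based estimators this way:

      import numpy as np

      rng = np.random.default_rng(42)

      def q7_10(sample):
          # Illustrative estimator: empirical 0.1 quantile of the annual
          # 7-day minimum flows (a stand-in for the 7-day, 10-year low flow).
          return np.quantile(sample, 0.1)

      # Hypothetical annual 7-day minimum flows (cfs).
      observed = np.array([12.0, 15.5, 9.8, 20.1, 11.2, 14.7, 8.9, 13.3, 17.6, 10.4])

      full_estimate = q7_10(observed)
      # Resample the record with replacement and measure the spread of the
      # estimator around the full-record estimate.
      boot = [q7_10(rng.choice(observed, size=observed.size, replace=True))
              for _ in range(5000)]
      mse = np.mean((np.array(boot) - full_estimate) ** 2)
      print(full_estimate, mse)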

  6. Methods for Measuring and Estimating Methane Emission from Ruminants

    PubMed Central

    Storm, Ida M. L. D.; Hellwing, Anne Louise F.; Nielsen, Nicolaj I.; Madsen, Jørgen

    2012-01-01

    Simple Summary Knowledge about methods used in quantification of greenhouse gasses is currently needed due to international commitments to reduce the emissions. In the agricultural sector one important task is to reduce enteric methane emissions from ruminants. Different methods for quantifying these emissions are presently being used and others are under development, all with different conditions for application. For scientist and other persons working with the topic it is very important to understand the advantages and disadvantage of the different methods in use. This paper gives a brief introduction to existing methods but also a description of newer methods and model-based techniques. Abstract This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods: Chambers/respiration chambers, SF6 technique and in vitro gas production technique and the newer CO2 methods are described. Model estimations, which are used to calculate national budget and single cow enteric emission from intake and diet composition, are also discussed. Other methods under development such as the micrometeorological technique, combined feeder and CH4 analyzer and proxy methods are briefly mentioned. Methods of choice for estimating enteric methane emission depend on aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge about the disadvantages and advantages are used in the planning of experiments. PMID:26486915

  7. Noninvasive method of estimating human newborn regional cerebral blood flow

    SciTech Connect

    Younkin, D.P.; Reivich, M.; Jaggi, J.; Obrist, W.; Delivoria-Papadopoulos, M.

    1982-12-01

    A noninvasive method of estimating regional cerebral blood flow (rCBF) in premature and full-term babies has been developed. Based on a modification of the ¹³³Xe inhalation rCBF technique, this method uses eight extracranial NaI scintillation detectors and an i.v. bolus injection of ¹³³Xe (approximately 0.5 mCi/kg). Arterial xenon concentration was estimated with an external chest detector. Cerebral blood flow was measured in 15 healthy, neurologically normal premature infants. Using Obrist's method of two-compartment analysis, normal values were calculated for flow in both compartments, relative weight and fractional flow in the first compartment (gray matter), initial slope of gray matter blood flow, mean cerebral blood flow, and initial slope index of mean cerebral blood flow. The application of this technique to newborns, its relative advantages, and its potential uses are discussed.

  8. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes

    PubMed Central

    Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.

    2014-01-01

    Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long-term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL) and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different from regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning, especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062

  9. An aerial survey method to estimate sea otter abundance

    USGS Publications Warehouse

    Bodkin, J.L.; Udevitz, M.S.; Garner, G.W.; Amstrup, Steven C.; Laake, J.L.; Manly, B. F. J.; McDonald, L.L.; Robertson, Donna G.

    1999-01-01

    Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
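
    A minimal sketch of the bias-adjustment arithmetic implied above, assuming a single strip count is corrected by a survey-specific detection probability (estimated from the intensive searches) and expanded by the fraction of area surveyed; all numbers are hypothetical, not from the study:

      # Hypothetical survey values.
      strip_count = 4700          # otters counted on standard strip transects
      detection_prob = 0.62       # survey- and observer-specific sightability
      area_surveyed_frac = 0.32   # fraction of the study area inside the strips

      adjusted_count = strip_count / detection_prob   # sightability correction
      population_estimate = adjusted_count / area_surveyed_frac
      print(round(population_estimate))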

  10. Parameter estimation method for blurred cell images from fluorescence microscope

    NASA Astrophysics Data System (ADS)

    He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin

    2016-10-01

    Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, which results in a low signal-to-noise ratio (SNR) and poor image quality, affecting the accuracy of cell analysis and identification. For quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images, based on the power-law properties of the power spectrum of cell images, is proposed. The circular radon transform (CRT) is used to identify the zero-mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm. Following this, the parameters are optimized through the gradient descent method. Synthetic experiments confirmed that the proposed method effectively increased the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results involving actual microscopic cell images verified the superiority of the proposed parameter estimation method over other methods in terms of qualitative visual quality as well as quantitative gradient and PSNR measures.

  11. Method to Estimate the Dissolved Air Content in Hydraulic Fluid

    NASA Technical Reports Server (NTRS)

    Hauser, Daniel M.

    2011-01-01

    In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen that is in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated
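
    A minimal sketch of the calculation chain described above, with hypothetical Henry's-law and solubility-ratio values (the actual coefficients are derived from a standard method for estimating gas solubility in lubricants):

      p_o2 = 0.18      # measured O2 partial pressure in the fluid (atm), hypothetical
      h_o2 = 0.32      # assumed Henry's-law solubility of O2 (vol gas / vol fluid / atm)
      r_n2_o2 = 1.9    # assumed dissolved-N2 : dissolved-O2 ratio at atmospheric conditions

      v_o2 = h_o2 * p_o2       # dissolved O2, volume of gas per volume of fluid
      v_n2 = v_o2 * r_n2_o2    # dissolved N2 via the assumed solubility ratio
      print(v_o2 + v_n2)       # estimated total dissolved air content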

  12. Dental age estimation using Willems method: A digital orthopantomographic study

    PubMed Central

    Mohammed, Rezwana Begum; Krishnamraju, P. V.; Prasanth, P. S.; Sanghvi, Praveen; Lata Reddy, M. Asha; Jyotsna, S.

    2014-01-01

    In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who fit the study criteria were obtained. Assessment of mandibular teeth (from the central incisor to the second molar in the left quadrant) development was undertaken, and DA was assessed using Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P < 0.001), while for females it was 0.08 ± 1.34 years (P > 0.05). Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to Willems method was 0.39 years, which is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate a mean DA using Willems method, as well as an estimated age range for an individual of unknown CA. PMID:25191076

  13. NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION

    SciTech Connect

    Brown, Robert A.; Soummer, Remi

    2010-05-20

    We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade

  14. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires the accurate measurement of crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, varying from empirical to physically based distributed models, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information required to design or choose the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirement for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
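
    Among the approaches such a review covers, temperature-based empirical formulas are the simplest to illustrate. The sketch below implements the well-known Hargreaves-Samani equation as one example of this class; the inputs are illustrative, and the choice of this particular formula is an assumption, not a recommendation from the paper:

      import math

      def hargreaves_et0(t_mean, t_max, t_min, ra_mm_day):
          # Hargreaves-Samani reference evapotranspiration (mm/day).
          # ra_mm_day: extraterrestrial radiation as its evaporation equivalent.
          return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

      # Illustrative mid-latitude summer day.
      print(hargreaves_et0(t_mean=25.0, t_max=32.0, t_min=18.0, ra_mm_day=16.5))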

  15. pyGMMis: Mixtures-of-Gaussians density estimation method

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Goulding, Andy D.

    2016-11-01

    pyGMMis is a mixtures-of-Gaussians density estimation method that accounts for arbitrary incompleteness in the process that creates the samples, as long as the incompleteness is known over the entire feature space and does not depend on the sample density (missing at random). pyGMMis uses the Expectation-Maximization procedure and generates its best guess of the unobserved samples on the fly. It can also incorporate a uniform "background" distribution as well as independent multivariate normal measurement errors for each of the observed samples, and then recover an estimate of the error-free distribution from which both observed and unobserved samples are drawn. The code automatically segments the data into localized neighborhoods, and is capable of performing density estimation with millions of samples and thousands of model components on machines with sufficient memory.

  16. A generic computerized method for estimate of familial risks.

    PubMed Central

    Colombet, Isabelle; Xu, Yigang; Jaulent, Marie-Christine; Desages, Daniel; Degoulet, Patrice; Chatellier, Gilles

    2002-01-01

    Most guidelines developed for cancer screening and cardiovascular risk management use rules to estimate familial risk. These rules are complex, difficult to memorize, and require collecting a complete pedigree. This paper describes a generic computerized method to estimate familial risks and its implementation in an internet-based application. The program is based on 3 generic models: a model of the family, a model of familial risk, and a display model for the pedigree. The model of the family makes it possible to represent each member of the family and to construct and display a family tree. The model of familial risk is generic and allows easy updating of the program with new diseases or new rules. It was possible to implement guidelines dealing with breast and colorectal cancer and cardiovascular disease prevention. A first evaluation with general practitioners showed that the program was usable. Its impact on the quality of familial risk estimates should be documented further. PMID:12463810

  17. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely the point detector and surface crossing flux tallies. Finally, KDE is applied to accelerate the Monte Carlo fission source iteration for criticality problems. In conventional MC calculations, histograms, which divide the phase space into multiple bins, are used to represent global tallies. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first-order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies at any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated, and they are shown to be superior to conventional histograms as well as to the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies, but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point, but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach, taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
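
    A minimal sketch of the core idea, using a generic Gaussian KDE over scored event positions rather than the dissertation's transport-specific implementation: the resulting density can be evaluated at arbitrary points after the simulation, with no bin structure imposed beforehand.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)

      # Stand-in for tally events: positions where simulated particles scored.
      events = rng.normal(loc=0.0, scale=1.0, size=10_000)

      # The KDE is a smooth, bin-free estimate of the underlying distribution.
      kde = gaussian_kde(events)
      grid = np.linspace(-4.0, 4.0, 9)
      print(kde(grid))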

  18. Estimating Intracranial Volume in Brain Research: An Evaluation of Methods.

    PubMed

    Sargolzaei, Saman; Sargolzaei, Arman; Cabrerizo, Mercedes; Chen, Gang; Goryawala, Mohammed; Pinzon-Ardila, Alberto; Gonzalez-Arias, Sergio M; Adjouadi, Malek

    2015-10-01

    Intracranial volume (ICV) is a standard measure often used in morphometric analyses to correct for head size in brain studies. Inaccurate ICV estimation could introduce bias in the outcome. The current study provides a decision aid for defining protocols for ICV estimation across different subject groups, in terms of the sampling frequencies that can be optimally used on volumetric MRI data and the type of software most suitable for estimating the ICV measure. Four groups of 53 subjects are considered, including adult controls (AC), adults with Alzheimer's disease (AD), pediatric controls (PC), and a group of pediatric epilepsy subjects (PE). Reference measurements were calculated for each subject by manually tracing the intracranial cavity without sub-sampling. The reliability of the reference measurements was assured through intra- and inter-variation analyses. Three well-known, publicly available software packages (FreeSurfer Ver. 5.3.0, FSL Ver. 5.0, and SPM versions 8 and 12) were examined for their ability to automatically estimate ICV across the groups. Results on sub-sampling studies with 95% confidence showed that, in order to keep the accuracy of the inter-leaved slice sampling protocol above 99%, the sampling period cannot exceed 20 mm for AC, 25 mm for PC, 15 mm for AD, and 17 mm for the PE groups. The study incorporates a priori knowledge about the population under study into the automated ICV estimation. Tuning of the parameters in FSL and the use of a proper atlas in SPM showed significant reductions in the systematic bias and the error in ICV estimation via these automated tools. SPM12 with the use of a pediatric template is found to be the more suitable candidate for the PE group. SPM12 and FSL subjected to tuning are the more appropriate tools for the PC group. The random error is minimized by FreeSurfer in the AD group, while SPM8 showed less systematic bias. Across the AC group, both SPM12 and FreeSurfer performed well, but SPM12 reported a lesser amount of systematic bias.

  19. Smeared star spot location estimation using directional integral method.

    PubMed

    Hou, Wang; Liu, Haibo; Lei, Zhihui; Yu, Qifeng; Liu, Xiaochun; Dong, Jing

    2014-04-01

    Image smearing significantly affects the accuracy of attitude determination of most star sensors. To ensure the accuracy and reliability of a star sensor under image smearing conditions, a novel directional integral method is presented for high-precision star spot location estimation to improve the accuracy of attitude determination. Simulations based on the orbit data of the challenging mini-satellite payload satellite were performed. Simulation results demonstrated that the proposed method exhibits high performance and good robustness, which indicates that the method can be applied effectively.

  20. Relative Precision of Ability Estimation in Polytomous CAT: A Comparison under the Generalized Partial Credit Model and Graded Response Model.

    ERIC Educational Resources Information Center

    Wang, Shudong; Wang, Tianyou

    The purpose of this Monte Carlo study was to evaluate the relative accuracy of T. Warm's weighted likelihood estimate (WLE) compared to maximum likelihood estimate (MLE), expected a posteriori estimate (EAP), and maximum a posteriori estimate (MAP), using the generalized partial credit model (GPCM) and graded response model (GRM) under a variety…
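
    Because this record returns to the document's central theme of ability estimation, a minimal sketch of one of the compared estimators (EAP) may be useful. For brevity it uses a dichotomous 2PL model on a fixed quadrature grid rather than the polytomous GPCM/GRM of the study, and all item parameters are hypothetical:

      import numpy as np

      def p_correct(theta, a, b):
          # 2PL item response function.
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      def eap_ability(responses, a, b, n_quad=61):
          # Expected a posteriori ability: posterior mean under a N(0,1)
          # prior, computed on a fixed quadrature grid.
          theta = np.linspace(-4.0, 4.0, n_quad)
          prior = np.exp(-0.5 * theta ** 2)
          like = np.ones_like(theta)
          for u, ai, bi in zip(responses, a, b):
              p = p_correct(theta, ai, bi)
              like *= p if u else 1.0 - p
          post = prior * like
          return np.sum(theta * post) / np.sum(post)

      # Three dichotomous responses; discriminations a and difficulties b.
      print(eap_ability([1, 1, 0], a=[1.2, 0.8, 1.5], b=[-0.5, 0.0, 0.7]))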

  1. Methods of Mmax Estimation East of the Rocky Mountains

    USGS Publications Warehouse

    Wheeler, Russell L.

    2009-01-01

    Several methods have been used to estimate the magnitude of the largest possible earthquake (Mmax) in parts of the Central and Eastern United States and adjacent Canada (CEUSAC). Each method has pros and cons. The largest observed earthquake in a specified area provides an unarguable lower bound on Mmax in the area. Beyond that, all methods are undermined by the enigmatic nature of geologic controls on the propagation of large CEUSAC ruptures. Short historical-seismicity records decrease the defensibility of several methods that are based on characteristics of small areas in most of CEUSAC. Methods that use global tectonic analogs of CEUSAC encounter uncertainties in understanding what 'analog' means. Five of the methods produce results that are inconsistent with paleoseismic findings from CEUSAC seismic zones or individual active faults.

  2. A Statistical Method for Estimating Luminosity Functions Using Truncated Data

    NASA Astrophysics Data System (ADS)

    Schafer, Chad M.

    2007-06-01

    The observational limitations of astronomical surveys lead to significant statistical inference challenges. One such challenge is the estimation of luminosity functions given redshift (z) and absolute magnitude (M) measurements from an irregularly truncated sample of objects. This is a bivariate density estimation problem; we develop here a statistically rigorous method which (1) does not assume a strict parametric form for the bivariate density; (2) does not assume independence between redshift and absolute magnitude (and hence allows evolution of the luminosity function with redshift); (3) does not require dividing the data into arbitrary bins; and (4) naturally incorporates a varying selection function. We accomplish this by decomposing the bivariate density φ(z,M) via log φ(z,M) = f(z) + g(M) + h(z,M,θ), where f and g are estimated nonparametrically and h takes an assumed parametric form. There is a simple way of estimating the integrated mean squared error of the estimator; smoothing parameters are selected to minimize this quantity. Results are presented from the analysis of a sample of quasars.

  3. A Subspace Method for Dynamical Estimation of Evoked Potentials

    PubMed Central

    Georgiadis, Stefanos D.; Ranta-aho, Perttu O.; Tarvainen, Mika P.; Karjalainen, Pasi A.

    2007-01-01

    It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial EP characteristics estimation. Prior information about the phase-locked properties of the EPs is assessed by means of the estimated signal subspace and eigenvalue decomposition. Then, for situations in which dynamic fluctuations from stimulus to stimulus can be expected, this prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation methods (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some components of the EPs, and that the Kalman smoother algorithm is to be preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and EP estimates, by means of independent component analysis applied as a preprocessing step to the multichannel measurements. PMID:18288257

  4. Networked Estimation with an Area-Triggered Transmission Method

    PubMed Central

    Nguyen, Vinh Hao; Suh, Young Soo

    2008-01-01

    This paper is concerned with the networked estimation problem in which sensor data are transmitted over the network. In the event-driven sampling scheme known as level-crossing or send-on-delta, sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-driven sampling generally requires fewer transmissions than time-driven sampling. However, the transmission rate of the send-on-delta method becomes large when the sensor noise is large, since the noise inflates the variation of the sensor data. Motivated by this issue, we propose another event-driven sampling method, called area-triggered, in which sensor data are sent only when the integral of the differences between the current sensor value and the last transmitted one is greater than a given threshold. Through theoretical analysis and simulation results, we show that in certain cases the proposed method not only reduces the data transmission rate but also improves estimation performance in comparison with the conventional event-driven method. PMID:27879742
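
    A minimal sketch contrasting the two triggering rules described above on a synthetic noisy signal; the signal, sampling step, and thresholds are hypothetical:

      import numpy as np

      rng = np.random.default_rng(1)
      dt = 0.01
      t = np.arange(0.0, 10.0, dt)
      signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)  # noisy sensor

      def transmissions(signal, dt, delta, mode):
          last, integral, count = signal[0], 0.0, 0
          for x in signal[1:]:
              if mode == "send-on-delta":
                  trigger = abs(x - last) > delta
              else:  # area-triggered: integral of deviations since last send
                  integral += abs(x - last) * dt
                  trigger = integral > delta
              if trigger:
                  last, integral, count = x, 0.0, count + 1
          return count

      print(transmissions(signal, dt, 0.10, "send-on-delta"))
      print(transmissions(signal, dt, 0.05, "area-triggered"))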

  5. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demand when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution, according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.

  6. Advances in Time Estimation Methods for Molecular Data.

    PubMed

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock was applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data

  7. Vegetation index methods for estimating evapotranspiration by remote sensing

    USGS Publications Warehouse

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-01-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.

  8. Different Donor Cell Culture Methods Can Influence the Developmental Ability of Cloned Sheep Embryos.

    PubMed

    Ma, LiBing; Liu, XiYu; Wang, FengMei; He, XiaoYing; Chen, Shan; Li, WenDa

    2015-01-01

    It was proposed that arresting nuclear donor cells in G0/G1 phase facilitates the development of embryos that are derived from somatic cell nuclear transfer (SCNT). Full confluency or serum starvation is commonly used to arrest in vitro cultured somatic cells in G0/G1 phase. However, it is controversial as to whether these two methods have the same efficiency in arresting somatic cells in G0/G1 phase. Moreover, it is unclear whether the cloned embryos have comparable developmental ability after somatic cells are subjected to one of these methods and then used as nuclear donors in SCNT. In the present study, in vitro cultured sheep skin fibroblasts were divided into four groups: (1) cultured to 70-80% confluency (control group), (2) cultured to full confluency, (3) starved in low serum medium for 4 d, or (4) cultured to full confluency and then further starved for 4 d. Flow cytometry was used to assay the percentage of fibroblasts in G0/G1 phase, and cell counting was used to assay the viability of the fibroblasts. Then, real-time reverse transcription PCR was used to determine the levels of expression of several cell cycle-related genes. Subsequently, the four groups of fibroblasts were separately used as nuclear donors in SCNT, and the developmental ability and the quality of the cloned embryos were compared. The results showed that the percentage of fibroblasts in G0/G1 phase, the viability of fibroblasts, and the expression levels of cell cycle-related genes was different among the four groups of fibroblasts. Moreover, the quality of the cloned embryos was comparable after these four groups of fibroblasts were separately used as nuclear donors in SCNT. However, cloned embryos derived from fibroblasts that were cultured to full confluency combined with serum starvation had the highest developmental ability. The results of the present study indicate that there are synergistic effects of full confluency and serum starvation on arresting fibroblasts in G0/G1 phase

  9. Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids.

    PubMed

    Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz

    2016-02-01

    Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and properly choosing the geometry optimization method suited to the specific structure of the tested compounds. Herein, we examine the influence of the ionic liquids' (ILs) geometry optimization methods on the predictive ability of QSPR models by comparing three models. The models were developed based on the same experimental data on density collected for 66 ionic liquids, but employed molecular descriptors calculated from molecular geometries optimized at three different levels of theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model in which the descriptors were calculated with the ab initio HF/6-311+G* method showed the best predictive capability ([Formula: see text] = 0.87). However, the PM7-based model has comparable values of the quality parameters ([Formula: see text] = 0.84). The obtained results indicate that semi-empirical methods (faster and less expensive in terms of CPU time) can be successfully employed for geometry optimization in QSPR studies of ionic liquids.

  10. Impedance-estimation methods, modeling methods, articles of manufacture, impedance-modeling devices, and estimated-impedance monitoring systems

    SciTech Connect

    Richardson, John G.

    2009-11-17

    An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.

  11. Simple Method for Soil Moisture Estimation from Sentinel-1 Data

    NASA Astrophysics Data System (ADS)

    Gilewski, Paweł Grzegorz; Kedzior, Mateusz Andrzej; Zawadzki, Jaroslaw

    2016-08-01

    In this paper, the authors calculated high resolution volumetric soil moisture (SM) by means of Sentinel-1 data for the Kampinos National Park in Poland and verified the obtained results. To do so, linear regression coefficients (LRC) between in-situ SM measurements and Sentinel-1 radar backscatter values were calculated. Next, the LRC were applied to obtain SM estimates from Sentinel-1 data. Sentinel-1 SM was verified against in-situ measurements and low-resolution SMOS SM estimates using Pearson's linear correlation coefficient. The simple SM retrieval method from radar data used in this study gives better results for meadows and when Sentinel-1 data in VH polarisation are used. Further research should be conducted to prove the usefulness of the proposed method.
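
    A minimal sketch of the regression step described above, with hypothetical backscatter/soil-moisture pairs standing in for the Kampinos measurements:

      import numpy as np

      # Paired samples: radar backscatter (dB) and in-situ soil moisture (m3/m3).
      sigma0_db = np.array([-14.2, -13.1, -12.5, -11.8, -10.9, -10.2])
      sm_insitu = np.array([0.12, 0.16, 0.19, 0.22, 0.27, 0.31])

      # Linear regression coefficients (LRC) from the calibration pairs.
      slope, intercept = np.polyfit(sigma0_db, sm_insitu, deg=1)

      # Apply the LRC to a new backscatter observation.
      print(slope * (-12.0) + intercept)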

  12. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  13. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.

  14. Methods to Develop Inhalation Cancer Risk Estimates for ...

    EPA Pesticide Factsheets

    This document summarizes the approaches and rationale for the technical and scientific considerations used to derive inhalation cancer risks for emissions of chromium and nickel compounds from electric utility steam generating units. The purpose of this document is to discuss the methods used to develop inhalation cancer risk estimates associated with emissions of chromium and nickel compounds from coal- and oil-fired electric utility steam generating units (EGUs) in support of EPA's recently proposed Air Toxics Rule.

  15. A comparative study of six data sources' ability for estimating interstate motor carrier VMT (vehicle miles of travel)

    SciTech Connect

    Hu, P.S.; Wright, T.; Miaou, Shaw-Pin.

    1989-01-01

    Several Federal Government agencies require estimates of vehicle miles of travel (VMT) by interstate commercial trucks. These estimates are essential in determining accident exposure and accident rates for these trucks, and in determining highway investment needs and the allocation of highway costs. VMT estimates are currently based on various nationwide transportation surveys and/or data sources, and the various estimation procedures applied to them do not provide consistent estimates. A summary of evaluation results for these data sources and estimation procedures is presented in this paper. 4 refs., 1 tab.

  16. Improving stochastic estimates with inference methods: Calculating matrix diagonals

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Oppermann, Niels; Enßlin, Torsten A.

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible, but only available as a linear operator in the form of a computer routine, is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or reduce the computational cost of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.
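
    For context, a minimal sketch of the kind of raw matrix-probing estimator such inference methods refine: with random-sign (Rademacher) probes v, the diagonal of A is approximated by averaging v * (A v) over probes; the Wiener-filter smoothing step itself is not shown:

      import numpy as np

      rng = np.random.default_rng(3)

      def probe_diagonal(apply_a, n, n_probes=50):
          # apply_a computes A @ v without forming A explicitly.
          acc = np.zeros(n)
          for _ in range(n_probes):
              v = rng.choice([-1.0, 1.0], size=n)
              acc += v * apply_a(v)   # E[v_i (A v)_i] = A_ii for Rademacher v
          return acc / n_probes

      # Demo with an explicit matrix standing in for an expensive operator.
      A = rng.standard_normal((100, 100))
      est = probe_diagonal(lambda v: A @ v, 100)
      print(np.max(np.abs(est - np.diag(A))))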

  17. Comparison of predictive ability of water solubility QSPR models generated by MLR, PLS and ANN methods.

    PubMed

    Erös, Dániel; Kéri, György; Kövesdi, István; Szántai-Kis, Csaba; Mészáros, György; Orfi, László

    2004-02-01

    ADME/Tox computational screening is one of the hottest topics of modern drug research. About one half of potential drug candidates fail because of poor ADME/Tox properties. Since the experimental determination of water solubility is also time-consuming, reliable computational predictions are needed for the pre-selection of acceptable "drug-like" compounds from diverse combinatorial libraries. Recently, many successful attempts have been made at predicting the water solubility of compounds. A comprehensive review of previously developed water solubility calculation methods is presented here, followed by a description of the solubility prediction method designed and used in our laboratory. We carefully selected 1381 compounds from scientific publications into a unified database and used this dataset in the calculations. The externally validated models were based on calculated descriptors only. The aim of model optimization was to improve the repeated-evaluation statistics of the predictions, and effective descriptor scoring functions were used to facilitate quick generation of multiple linear regression (MLR), partial least squares (PLS) and artificial neural network (ANN) models with optimal predictive ability. The standard error of prediction of the best model, generated with an ANN (with a 39-7-1 network structure), was 0.72 in logS units, while the cross-validated squared correlation coefficient (Q(2)) was better than 0.85. These values give a good chance for successful pre-selection of screening compounds from virtual libraries based on predicted water solubility.

  18. Methods for estimating low-flow statistics for Massachusetts streams

    USGS Publications Warehouse

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
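
    A minimal sketch of the drainage-area ratio method summarized above, including the 0.3-1.5 applicability range stated in the report; the flow and area values are hypothetical:

      def drainage_area_ratio(q_index, area_index_km2, area_ungaged_km2):
          # Transfer a low-flow statistic from an index gage to an ungaged
          # site in proportion to drainage area.
          ratio = area_ungaged_km2 / area_index_km2
          if not 0.3 <= ratio <= 1.5:
              raise ValueError("outside the range where the method is preferred")
          return q_index * ratio

      # 7-day, 10-year low flow at the index station (m3/s), scaled.
      print(drainage_area_ratio(q_index=0.85, area_index_km2=120.0,
                                area_ungaged_km2=90.0))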

  19. The composite method: An improved method for stream-water solute load estimation

    USGS Publications Warehouse

    Aulenbach, Brent T.; Hooper, R.P.

    2006-01-01

    The composite method is an alternative method for estimating stream-water solute loads, combining aspects of two commonly used methods: the regression-model method (which is used by the composite method to predict variations in concentrations between collected samples) and a period-weighted approach (which is used by the composite method to apply the residual concentrations from the regression model over time). The extensive dataset collected at the outlet of the Panola Mountain Research Watershed (PMRW) near Atlanta, Georgia, USA, was used in data analyses for illustrative purposes. A bootstrap (subsampling) experiment (using the composite method and the PMRW dataset along with various fixed-interval and large storm sampling schemes) obtained load estimates for the 8-year study period with a magnitude of the bias of less than 1%, even for estimates that included the fewest number of samples. Precisions were always <2% on a study period and annual basis, and <2% precisions were obtained for quarterly and monthly time intervals for estimates that had better sampling. The bias and precision of composite-method load estimates varies depending on the variability in the regression-model residuals, how residuals systematically deviated from the regression model over time, sampling design, and the time interval of the load estimate. The regression-model method did not estimate loads precisely during shorter time intervals, from annually to monthly, because the model could not explain short-term patterns in the observed concentrations. Load estimates using the period-weighted approach typically are biased as a result of sampling distribution and are accurate only with extensive sampling. The formulation of the composite method facilitates exploration of patterns (trends) contained in the unmodelled portion of the load. Published in 2006 by John Wiley & Sons, Ltd.

  20. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals; log Koc = 0.903* log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
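
    The reported correlation is easy to wrap in a small helper that also enforces the stated applicability range; the wrapper itself is illustrative:

      def log_koc_from_log_kow(log_kow):
          # log Koc = 0.903 * log Kow + 0.094, applicable for
          # 1.7 <= log Kow <= 7.0 per the study.
          if not 1.7 <= log_kow <= 7.0:
              raise ValueError("correlation not applicable outside 1.7-7.0")
          return 0.903 * log_kow + 0.094

      log_koc = log_koc_from_log_kow(4.0)   # moderately hydrophobic chemical
      print(log_koc, 10 ** log_koc)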

  1. Reliability of field methods for estimating body fat.

    PubMed

    Loenneke, Jeremy P; Barnes, Jeremy T; Wilson, Jacob M; Lowery, Ryan P; Isaacs, Melissa N; Pujol, Thomas J

    2013-09-01

    When health professionals measure the fitness levels of clients, body composition is usually estimated. In practice, the reliability of the measurement may be more important than the actual validity, as reliability determines how much change is needed to be considered meaningful. Therefore, the purpose of this study was to determine the reliability of two bioelectrical impedance analysis (BIA) devices (in athlete and non-athlete mode) and compare that to 3-site skinfold (SKF) readings. Twenty-one college students attended the laboratory on two occasions and had their measurements taken in the following order: body mass, height, SKF, Tanita body fat-350 (BF-350), and Omron HBF-306C. There were no significant pairwise differences between Visit 1 and Visit 2 for any of the estimates (P>0.05). The Pearson product correlations ranged from r = 0.933 for BF-350 in athlete mode (A) to r = 0.994 for SKF. The ICCs ranged from 0.93 for BF-350(A) to 0.992 for SKF, and the minimal differences (MDs) ranged from 1.8% for SKF to 5.1% for BF-350(A). The current study found that SKF and HBF-306C(A) were the most reliable (<2%) methods of estimating BF%, with the other methods (BF-350, BF-350(A), HBF-306C) producing minimal differences greater than 2%. In conclusion, the SKF method presented the best reliability because of its low minimal difference, suggesting this method may be the best field method to track changes over time with an experienced tester. However, if technical error is a concern, the practitioner may use the HBF-306C(A) because it had a minimal difference value comparable to SKF.

  2. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Celebi, Mehmet

    1990-01-01

    A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method; a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
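
    A minimal sketch of the criterion described above on synthetic records, assuming small-angle rigid-body kinematics so that the translation at a candidate point is the recorded translation minus offset times rotation; the frequency-integrated coherence is then scanned over candidate offsets:

      import numpy as np
      from scipy.signal import coherence

      rng = np.random.default_rng(7)
      fs, n = 100.0, 4096
      theta = rng.standard_normal(n)              # rotational record (stand-in)
      u0 = 5.0 * theta + rng.standard_normal(n)   # translation 5 m from the CR

      def coherence_integral(offset):
          u_c = u0 - offset * theta               # translation at candidate point
          f, coh = coherence(u_c, theta, fs=fs, nperseg=256)
          return np.trapz(coh, f)                 # frequency-integrated coherence

      offsets = np.linspace(0.0, 10.0, 41)
      scores = [coherence_integral(c) for c in offsets]
      print(offsets[int(np.argmin(scores))])      # should land near 5 m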

  3. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries

    PubMed Central

    Cunningham, Marc; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-01-01

    -based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Conclusions: Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. PMID:26374805

  4. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
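
    A heavily simplified illustration (not the authors' framework) of performing all estimations in 1d-space: decorrelate the data with an orthogonal projection, then model the joint density as a product of 1d kernel density estimates.

```python
import numpy as np
from scipy.stats import gaussian_kde

def product_density_logpdf(X):
    """Fit a density using only 1d estimations: project onto principal
    axes, then take a product of 1d kernel density estimates."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)  # orthonormal axes
    Z = (X - mean) @ Vt.T
    kdes = [gaussian_kde(Z[:, j]) for j in range(Z.shape[1])]
    def logpdf(x):
        z = (np.atleast_2d(x) - mean) @ Vt.T
        # orthogonal change of variables has unit Jacobian
        return sum(np.log(k(z[:, j])) for j, k in enumerate(kdes))
    return logpdf
```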

  5. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed at which the processes used in the software development field have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, while others hold that it can be solved using well-known mathematical methods (e.g., multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in the research area of estimating overall project costs and describes the existing software development process models. The last part proposes a basic mathematical model for genetic programming, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking the software product life cycle as a basis together with the current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and product life cycles, it is concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
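
    A hedged sketch of the genetic-algorithm idea: evolving the coefficients of the basic COCOMO form E = a * KLOC^b against project data. The dataset values, GA operators, and the MMRE fitness below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (KLOC, person-month) pairs standing in for a PROMISE/COCOMO 81 set.
kloc   = np.array([10.0, 46.0, 3.0, 23.0, 90.0, 7.5])
effort = np.array([24.0, 96.0, 7.2, 52.0, 210.0, 18.0])

def fitness(p):
    a, b = p
    pred = a * kloc ** b                             # basic COCOMO form
    return np.mean(np.abs(effort - pred) / effort)   # MMRE, lower is better

pop = rng.uniform([0.5, 0.8], [5.0, 1.5], size=(60, 2))   # chromosomes (a, b)
for _ in range(200):
    fit = np.array([fitness(p) for p in pop])
    best = pop[np.argmin(fit)].copy()
    # tournament selection
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    # blend crossover plus Gaussian mutation
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * rng.permutation(parents)
    children += rng.normal(0.0, 0.05, children.shape)
    children = np.clip(children, [0.01, 0.1], [10.0, 3.0])
    children[0] = best                               # elitism
    pop = children

a, b = min(pop, key=fitness)
print(f"E = {a:.2f} * KLOC^{b:.2f}")
```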

  6. Laser heating method for estimation of carbon nanotube purity

    NASA Astrophysics Data System (ADS)

    Terekhov, S. V.; Obraztsova, E. D.; Lobach, A. S.; Konov, V. I.

    A new method of carbon nanotube purity estimation has been developed on the basis of Raman spectroscopy. The spectra of carbon soot containing different amounts of nanotubes were registered under heating from a probing laser beam with step-by-step increases in power density. The material temperature in the laser spot was estimated from the position of the tangential Raman mode, which demonstrates a linear thermal shift (-0.012 cm-1/K) from its room-temperature position of 1592 cm-1. The rate of the material temperature rise versus the laser power density (the slope of the corresponding graph) appeared to correlate strongly with the nanotube content in the soot. The influence of the experimental conditions on the slope value was excluded via a simultaneous measurement of a reference sample with a high nanotube content (95 vol.%). After calibration (done by comparing the Raman data with transmission electron microscopy data for the nanotube percentage in the same samples), the Raman-based method is able to provide a quantitative purity estimation for any nanotube-containing material.
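
    The temperature estimate and the purity-related slope described above reduce to a few lines; the 293 K room-temperature reference is an assumption.

```python
import numpy as np

T_ROOM = 293.0      # K, assumed room temperature
POS_ROOM = 1592.0   # cm^-1, tangential G-mode position at room temperature
SLOPE = -0.012      # cm^-1 per K, linear thermal shift reported above

def temperature_from_g_mode(position_cm1):
    """Material temperature in the laser spot from the G-mode position."""
    return T_ROOM + (np.asarray(position_cm1) - POS_ROOM) / SLOPE

def heating_slope(power_density, g_mode_positions):
    """Rate of temperature rise vs. laser power density: the quantity that
    correlates with nanotube content according to the abstract."""
    T = temperature_from_g_mode(g_mode_positions)
    return np.polyfit(np.asarray(power_density), T, 1)[0]
```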

  7. A method to estimate groundwater depletion from confining layers

    USGS Publications Warehouse

    Konikow, L.F.; Neuzil, C.E.

    2007-01-01

    Although depletion of storage in low-permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.

  8. Estimation of regionalized compositions: A comparison of three methods

    USGS Publications Warehouse

    Pawlowsky, V.; Olea, R.A.; Davis, J.C.

    1995-01-01

    A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
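
    For reference, a minimal sketch of the additive-log-ratio transformation mentioned above (the choice of the last part as divisor and the eps guard are conventions assumed here); kriging is performed on the transformed scores and estimates are then back-transformed.

```python
import numpy as np

def alr(X, eps=1e-12):
    """Additive log-ratio transform: map compositions (rows of positive
    parts summing to a constant) to unconstrained R^(D-1), using the
    last part as the divisor."""
    X = np.asarray(X, dtype=float)
    return np.log((X[:, :-1] + eps) / (X[:, -1:] + eps))

def alr_inv(Y, total=1.0):
    """Back-transform alr coordinates to compositions summing to `total`."""
    expY = np.exp(np.asarray(Y, dtype=float))
    comp = np.hstack([expY, np.ones((expY.shape[0], 1))])
    return total * comp / comp.sum(axis=1, keepdims=True)
```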

  9. Intensity estimation method of LED array for visible light communication

    NASA Astrophysics Data System (ADS)

    Ito, Takanori; Yendo, Tomohiro; Arai, Shintaro; Yamazato, Takaya; Okada, Hiraku; Fujii, Toshiaki

    2013-03-01

    This paper focuses on a road-to-vehicle visible light communication (VLC) system using an LED traffic light as the transmitter and a camera as the receiver. The traffic light is composed of a hundred LEDs on a two-dimensional plane. In this system, data are sent as two-dimensional brightness patterns by controlling each LED of the traffic light individually, and they are received as images by the camera. However, neighboring LEDs in the received image merge when the receiver is far from the transmitter (too few pixels per LED) and/or when the image is blurred by camera defocus. As a result, the bit error rate (BER) increases because the intensities of individual LEDs are misrecognized. To solve this problem, we propose a method that estimates the intensity of the LEDs by solving the inverse problem of the communication channel characteristic from the transmitter to the receiver. The proposed method is evaluated by BER characteristics obtained by computer simulation and experiments. The results show that the proposed method estimates intensity more accurately than conventional methods, especially when the received image is strongly blurred and the number of pixels is small.
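
    A hedged sketch of the inverse-problem formulation: with known LED centers and an assumed Gaussian point-spread function, the blurred image is linear in the LED intensities and can be inverted by least squares. The PSF model and lstsq recovery are illustrative assumptions, not the paper's exact channel model.

```python
import numpy as np

def estimate_led_intensities(image, centers, sigma):
    """Recover per-LED intensities from a blurred, low-resolution image by
    solving image = H @ x, where column k of H is the point-spread
    function of LED k sampled on the pixel grid."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    H = np.column_stack([
        np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2)).ravel()
        for cx, cy in centers
    ])
    x, *_ = np.linalg.lstsq(H, image.ravel(), rcond=None)
    return x  # estimated LED intensities; threshold to recover the bits
```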

  10. Estimating return on investment in translational research: methods and protocols.

    PubMed

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  11. GPS receiver CODE bias estimation: A comparison of two methods

    NASA Astrophysics Data System (ADS)

    McCaffrey, Anthony M.; Jayachandran, P. T.; Themens, D. R.; Langley, R. B.

    2017-04-01

    The Global Positioning System (GPS) is a valuable tool in the measurement and monitoring of ionospheric total electron content (TEC). To obtain accurate GPS-derived TEC, satellite and receiver hardware biases, known as differential code biases (DCBs), must be estimated and removed. The Center for Orbit Determination in Europe (CODE) provides monthly averages of receiver DCBs for a significant number of stations in the International Global Navigation Satellite Systems Service (IGS) network. A comparison of the monthly receiver DCBs provided by CODE with DCBs estimated using the minimization of standard deviations (MSD) method on both daily and monthly time intervals is presented. Calibrated TEC obtained using CODE-derived DCBs is accurate to within 0.74 TEC units (TECU) in differenced slant TEC (sTEC), while calibrated sTEC using MSD-derived DCBs is accurate to within 1.48 TECU.
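
    A minimal sketch of the MSD idea: scan candidate receiver DCB values and keep the one that minimizes the spread of bias-corrected vertical TEC. The input shapes and the assumption that satellite DCBs are already removed are simplifications.

```python
import numpy as np

def receiver_dcb_msd(stec, mapping, candidates):
    """stec:     array (n_epochs, n_sats) of slant TEC in TECU, still
                 containing the receiver bias
    mapping:     matching array of ionospheric mapping-function values
    candidates:  1d array of trial receiver DCB values in TECU"""
    spreads = []
    for dcb in candidates:
        vtec = (stec - dcb) / mapping      # bias-corrected vertical TEC
        spreads.append(np.nanstd(vtec))    # spread across satellites/epochs
    return candidates[int(np.argmin(spreads))]
```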

  12. A power function method for estimating base flow.

    PubMed

    Lott, Darline A; Stewart, Mark T

    2013-01-01

    Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone, without using basin-specific variables other than basin area. This paper derives a power function for estimating base flow, of the form aQ^b + cQ: an analytical method calibrated against an integrated basin variable, specific conductance, that relates base flow to total discharge and is consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. Advantages of the method are that it is uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It better replicates base flow determined by mass balance methods than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or to empirical basin- and gauge-specific variables. Also, it can be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of the geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods.
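
    A hedged sketch of the calibration idea: derive mass-balance base flow from specific conductance by two-component mixing, then fit the aQ^b + cQ form to it. The end-member conductances SC_bf and SC_ro are user-supplied assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_baseflow(Q, a, b, c):
    # Base flow of the form aQ^b + cQ, as in the abstract above.
    return a * Q ** b + c * Q

def calibrate(Q, SC, SC_bf, SC_ro):
    """Calibrate a, b, c against mass-balance base flow derived from
    specific conductance: SC_bf and SC_ro are assumed end-member
    conductances of base flow and quick flow (runoff)."""
    bf_mass_balance = Q * (SC - SC_ro) / (SC_bf - SC_ro)
    bf_mass_balance = np.clip(bf_mass_balance, 0.0, Q)  # keep 0 <= BF <= Q
    popt, _ = curve_fit(power_baseflow, Q, bf_mass_balance,
                        p0=(1.0, 0.8, 0.1), maxfev=10000)
    return popt
```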

  13. Odor emission rate estimation of indoor industrial sources using a modified inverse modeling method.

    PubMed

    Li, Xiang; Wang, Tingting; Sattayatewa, Chakkrid; Venkatesan, Dhesikan; Noll, Kenneth E; Pagilla, Krishna R; Moschandreas, Demetrios J

    2011-08-01

    Odor emission rates are commonly measured in the laboratory or occasionally estimated with inverse modeling techniques. A modified inverse modeling approach is used to estimate source emission rates inside of a postdigestion centrifuge building of a water reclamation plant. Conventionally, inverse modeling methods divide an indoor environment into zones on the basis of structural design and estimate source emission rates using models that assume homogeneous distribution of agent concentrations within a zone and experimentally determined link functions to simulate airflows among zones. The modified approach segregates zones as a function of agent distribution rather than building design and identifies near and far fields. Near-field agent concentrations do not satisfy the assumption of homogeneous odor concentrations; far-field concentrations satisfy this assumption and are the only ones used to estimate emission rates. The predictive ability of the modified inverse modeling approach was validated with measured emission rate values; the difference between corresponding estimated and measured odor emission rates is not statistically significant. Similarly, the difference between measured and estimated hydrogen sulfide emission rates is also not statistically significant. The modified inverse modeling approach is easy to perform because it uses odor and odorant field measurements instead of complex chamber emission rate measurements.

  14. Methods of evaluating the spermatogenic ability of male raccoons (Procyon lotor).

    PubMed

    Uno, Taiki; Kato, Takuya; Seki, Yoshikazu; Kawakami, Eiichi; Hayama, Shin-ichi

    2014-01-01

    Feral raccoons (Procyon lotor) have been growing in number in Japan, and they are becoming a problematic invasive species. Consequently, they are commonly captured and killed in pest control programs. For effective population control of feral raccoons, it is necessary to understand their reproductive physiology and ecology. Although the reproductive traits of female raccoons are well known, those of the males are not well understood because specialized knowledge and facilities are required to study them. In this study, we first used a simple evaluation method to assess spermatogenesis and presence of spermatozoa in the tail of the epididymis of feral male raccoons by histologically examining the testis and epididymis. We then evaluated the possibility of using 7 variables (body weight, body length, body mass index, testicular weight, epididymal weight, testicular size, and gonadosomatic index (GSI)) to estimate spermatogenesis and presence of spermatozoa in the tail of the epididymis. GSI and body weight were chosen as criteria for spermatogenesis, and GSI was chosen as the criterion for presence of spermatozoa in the tail of the epididymis. Because GSI is calculated from body weight and testicular weight, this model should be able to be used to estimate the reproductive state of male raccoons regardless of season and age when just these two parameters are known. In this study, GSI was demonstrated to be an index of reproductive state in male raccoons. To our knowledge, this is the first report of such a use for GSI in a member of the Carnivora.

  15. A Monte Carlo Simulation Investigating the Validity and Reliability of Ability Estimation in Item Response Theory with Speeded Computer Adaptive Tests

    ERIC Educational Resources Information Center

    Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.

    2010-01-01

    Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…

  16. A novel state of health estimation method of Li-ion battery using group method of data handling

    NASA Astrophysics Data System (ADS)

    Wu, Ji; Wang, Yujie; Zhang, Xu; Chen, Zonghai

    2016-09-01

    In this paper, control theory is applied to assist the estimation of state of health (SoH), a key parameter in battery management. A battery can be treated as a system whose internal state, e.g. SoH, can be observed through certain system output data. Based on the philosophy of human health and athletic ability estimation, variables from a specific process, the constant-current charge subprocess, are obtained to depict battery SoH. These variables are selected according to a differential geometric analysis of the battery terminal voltage curves. Moreover, the relationship between the differential geometric properties and battery SoH is modelled by a group method of data handling (GMDH) polynomial neural network. Thus, battery SoH can be estimated by GMDH with voltage curve properties as inputs. Experiments have been conducted on different types of Li-ion batteries, and the results show that the proposed method is valid for SoH estimation.
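
    A sketch of one GMDH selection layer under stated assumptions: the inputs X would be differential-geometric features extracted from the charge voltage curve and the target y the measured SoH; the quadratic neurons and validation-based ranking follow the generic GMDH recipe, not necessarily the authors' exact network.

```python
import numpy as np
from itertools import combinations

def quad_features(xi, xj):
    # Ivakhnenko polynomial terms for one neuron: 1, xi, xj, xi*xj, xi^2, xj^2
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """Fit a quadratic neuron for every pair of inputs by least squares,
    rank neurons by validation RMSE, keep the best `keep`; layers are
    stacked until validation error stops improving."""
    models = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = quad_features(X_train[:, i], X_train[:, j])
        w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred = quad_features(X_val[:, i], X_val[:, j]) @ w
        rmse = np.sqrt(np.mean((pred - y_val) ** 2))
        models.append((rmse, i, j, w))
    models.sort(key=lambda m: m[0])
    return models[:keep]
```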

  17. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    SciTech Connect

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.

    1994-07-01

    that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The papers provide descriptions of the (non-radiological) atmospheric dispersion modeling that the study uses; review much of the relevant literature on ecological and health effects and on the economic valuation of those impacts; discuss some of the more complex and contentious issues in estimating externalities; and describe a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.

  18. Streamflow-Characteristic Estimation Methods for Unregulated Streams of Tennessee

    USGS Publications Warehouse

    Law, George S.; Tasker, Gary D.; Ladd, David E.

    2009-01-01

    Streamflow-characteristic estimation methods for unregulated rivers and streams of Tennessee were developed by the U.S. Geological Survey in cooperation with the Tennessee Department of Environment and Conservation. Streamflow estimates are provided for 1,224 stream sites. Streamflow characteristics include the 7-consecutive-day, 10-year recurrence-interval low flow, the 30-consecutive-day, 5-year recurrence-interval low flow, the mean annual and mean summer flows, and the 99.5-, 99-, 98-, 95-, 90-, 80-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent flow durations. Estimation methods include regional regression (RRE) equations and the region-of-influence (ROI) method. Both methods use zero-flow probability screening to estimate zero-flow quantiles. A low flow and flow duration (LFFD) computer program (TDECv301) performs zero-flow screening and calculation of nonzero-streamflow characteristics using the RRE equations and ROI method and provides quality measures including the 90-percent prediction interval and equivalent years of record. The U.S. Geological Survey StreamStats geographic information system automates the calculation of basin characteristics and streamflow characteristics. In addition, basin characteristics can be manually input to the stand-alone version of the computer program (TDECv301) to calculate streamflow characteristics in Tennessee. The RRE equations were computed using multivariable regression analysis. The two regions used for this study, the western part of the State (West) and the central and eastern part of the State (Central+East), are separated by the Tennessee River as it flows south to north from Hardin County to Stewart County. The West region uses data from 124 of the 1,224 streamflow sites, and the Central+East region uses data from 893 of the 1,224 streamflow sites. The study area also includes parts of the adjacent States of Georgia, North Carolina, Virginia, Alabama, Kentucky, and Mississippi. Total drainage area, a geology

  19. Probabilistic seismic hazard assessment of Italy using kernel estimation methods

    NASA Astrophysics Data System (ADS)

    Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.

    2013-07-01

    A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity that does not require the definition of seismogenic zones. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell's method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10 % probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.
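
    A hedged sketch of the zone-free idea: smoothing the epicenter catalog directly into an activity-rate density. A fixed-bandwidth Gaussian kernel stands in here for Woo's fractal, magnitude-dependent kernel.

```python
import numpy as np

def activity_rate(grid_xy, epicenters, bandwidth_km, catalog_years):
    """Kernel estimate of seismic activity-rate density (events per year
    per km^2) at grid points, with an isotropic Gaussian kernel."""
    diff = grid_xy[:, None, :] - epicenters[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    k = np.exp(-d2 / (2 * bandwidth_km ** 2)) / (2 * np.pi * bandwidth_km ** 2)
    return k.sum(axis=1) / catalog_years
```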

  20. Analytical method to estimate resin cement diffusion into dentin

    NASA Astrophysics Data System (ADS)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa

    2016-05-01

    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. The evolution of the peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C-O-C, 1113 cm-1) present in the cements and from the mineral content (P-O, 961 cm-1) in dentin, followed sigmoid-shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data with the BF is a relevant contribution for future studies of the adhesive interface.
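
    A hedged sketch of the curve-fitting step: fitting a Boltzmann sigmoid to the 1113 cm-1 intensity profile across the interface. How the authors converted the fitted parameters into the reported zone thickness is not stated here, so the span of roughly 4*dx covering most of the sigmoid transition is used as an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, A1, A2, x0, dx):
    # A1, A2: plateau intensities; x0: interface position; dx: transition width.
    return A2 + (A1 - A2) / (1 + np.exp((x - x0) / dx))

def diffusion_zone_width(depth_um, intensity_1113):
    """Fit the C-O-C (1113 cm-1) profile across the cement-dentin
    interface and report an assumed diffusion-zone thickness of ~4*dx."""
    p0 = (intensity_1113.max(), intensity_1113.min(), np.median(depth_um), 1.0)
    (A1, A2, x0, dx), _ = curve_fit(boltzmann, depth_um, intensity_1113, p0=p0)
    return abs(4 * dx)
```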

  1. Comparative study on parameter estimation methods for attenuation relationships

    NASA Astrophysics Data System (ADS)

    Sedaghati, Farhad; Pezeshk, Shahram

    2016-12-01

    In this paper, the performance, advantages, and disadvantages of various regression methods for deriving the coefficients of an attenuation relationship have been investigated. A database containing 350 records from 85 earthquakes with moment magnitudes of 5-7.6 and Joyner-Boore distances up to 100 km in Europe and the Middle East has been considered. The functional form proposed by Ambraseys et al (2005 Bull. Earthq. Eng. 3 1-53) is selected to compare the chosen regression methods. Statistical tests reveal that although the estimated parameters differ for each method, the overall results are very similar. In essence, the weighted least squares method and one-stage maximum likelihood perform better than the other regression methods considered. Moreover, using a blind weighting matrix or a weighting matrix related to the number of records would not improve the performance of the results. Further, to obtain the true standard deviation, pure error analysis is necessary. Assuming that correlation exists between different records of a specific earthquake, one-stage maximum likelihood with the true variance obtained from pure error analysis is the preferred method for computing the coefficients of a ground motion prediction equation.
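
    A minimal sketch of a one-stage weighted least squares fit for an attenuation relationship; the reduced functional form and the fixed pseudo-depth h are stand-ins for Ambraseys et al. (2005), not the paper's exact model.

```python
import numpy as np

def fit_gmpe_wls(M, R_jb, lnY, weights, h=7.0):
    """Weighted least squares for ln Y = c1 + c2*M + c3*ln(sqrt(Rjb^2 + h^2));
    returns the coefficients and the weighted residual standard deviation."""
    X = np.column_stack([np.ones_like(M), M,
                         np.log(np.sqrt(R_jb ** 2 + h ** 2))])
    W = np.diag(weights)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ lnY)
    resid = lnY - X @ coef
    sigma = np.sqrt(np.sum(weights * resid ** 2) / (len(lnY) - X.shape[1]))
    return coef, sigma
```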

  2. Dental age estimation in Brazilian HIV children using Willems' method.

    PubMed

    de Souza, Rafael Boschetti; da Silva Assunção, Luciana Reichert; Franco, Ademir; Zaroni, Fábio Marzullo; Holderbaum, Rejane Maria; Fernandes, Ângela

    2015-12-01

    The notification of Human Immunodeficiency Virus (HIV) infection in Brazilian children was first reported in 1984. Since that time, more than 21 thousand children have become infected. Approximately 99.6% of the children aged less than 13 years old are vertically infected. In this context, most of the children are abandoned after birth or lose their relatives in the near future, growing up with uncertain identification. The present study aims to estimate the dental age of Brazilian HIV patients against healthy patients matched by age and gender. The sample consisted of 160 panoramic radiographs of male (n: 80) and female (n: 80) patients aged between 4 and 15 years (mean age: 8.88 years), divided into HIV (n: 80) and control (n: 80) groups. The sample was analyzed by three trained examiners using Willems' method (2001). The Intraclass Correlation Coefficient (ICC) was applied to test intra- and inter-examiner agreement, and Student's paired t-test was used to determine the age association between HIV and control groups. Intra-examiner (ICC: from 0.993 to 0.997) and inter-examiner (ICC: from 0.991 to 0.995) agreement tests indicated high reproducibility of the method between the examiners (P<0.01). Willems' method revealed a discrete statistical overestimation in the HIV (2.86 months; P=0.019) and control (1.90 months; P=0.039) groups. However, analysis stratified by gender indicated that the overestimation was concentrated only in male HIV (3.85 months; P=0.001) and control (2.86 months; P=0.022) patients. These statistically significant differences are not clinically relevant, since only a few months of discrepancy are detected when applying Willems' method to a Brazilian HIV sample, making the method highly recommendable for dental age estimation of both HIV-positive and healthy children of unknown age.

  3. A method to determine the ability of drugs to diffuse through the blood-brain barrier.

    PubMed Central

    Seelig, A; Gottschlich, R; Devant, R M

    1994-01-01

    A method has been devised for predicting the ability of drugs to cross the blood-brain barrier. The criteria depend on the amphiphilic properties of a drug as reflected in its surface activity. The assessment was made with various drugs that either penetrate or do not penetrate the blood-brain barrier. The surface activity of these drugs was quantified by their Gibbs adsorption isotherms in terms of three parameters: (i) the onset of surface activity, (ii) the critical micelle concentration, and (iii) the surface area requirement of the drug at the air/water interface. A calibration diagram is proposed in which the critical micelle concentration is plotted against the concentration required for the onset of surface activity. Three different regions are easily distinguished in this diagram: a region of very hydrophobic drugs which fail to enter the central nervous system because they remain adsorbed to the membrane, a central area of less hydrophobic drugs which can cross the blood-brain barrier, and a region of relatively hydrophilic drugs which do not cross the blood-brain barrier unless applied at high concentrations. This diagram can be used to predict reliably the central nervous system permeability of an unknown compound from a simple measurement of its Gibbs adsorption isotherm. PMID:8278409

  4. A novel method testing the ability to imitate composite emotional expressions reveals an association with empathy.

    PubMed

    Williams, Justin H G; Nicolson, Andrew T A; Clephan, Katie J; de Grauw, Haro; Perrett, David I

    2013-01-01

    Social communication relies on intentional control of emotional expression. Its variability across cultures suggests important roles for imitation in developing control over enactment of subtly different facial expressions and therefore skills in emotional communication. Both empathy and the imitation of an emotionally communicative expression may rely on a capacity to share both the experience of an emotion and the intention or motor plan associated with its expression. Therefore, we predicted that facial imitation ability would correlate with empathic traits. We built arrays of visual stimuli by systematically blending three basic emotional expressions in controlled proportions. Raters then assessed accuracy of imitation by reconstructing the same arrays using photographs of participants' attempts at imitations of the stimuli. Accuracy was measured as the mean proximity of the participant photographs to the target stimuli in the array. Levels of performance were high, and rating was highly reliable. More empathic participants, as measured by the empathy quotient (EQ), were better facial imitators and, in particular, performed better on the more complex, blended stimuli. This preliminary study offers a simple method for the measurement of facial imitation accuracy and supports the hypothesis that empathic functioning may utilise motor control mechanisms which are also used for emotional expression.

  5. A Novel Method Testing the Ability to Imitate Composite Emotional Expressions Reveals an Association with Empathy

    PubMed Central

    Williams, Justin H. G.; Nicolson, Andrew T. A.; Clephan, Katie J.; de Grauw, Haro; Perrett, David I.

    2013-01-01

    Social communication relies on intentional control of emotional expression. Its variability across cultures suggests important roles for imitation in developing control over enactment of subtly different facial expressions and therefore skills in emotional communication. Both empathy and the imitation of an emotionally communicative expression may rely on a capacity to share both the experience of an emotion and the intention or motor plan associated with its expression. Therefore, we predicted that facial imitation ability would correlate with empathic traits. We built arrays of visual stimuli by systematically blending three basic emotional expressions in controlled proportions. Raters then assessed accuracy of imitation by reconstructing the same arrays using photographs of participants’ attempts at imitations of the stimuli. Accuracy was measured as the mean proximity of the participant photographs to the target stimuli in the array. Levels of performance were high, and rating was highly reliable. More empathic participants, as measured by the empathy quotient (EQ), were better facial imitators and, in particular, performed better on the more complex, blended stimuli. This preliminary study offers a simple method for the measurement of facial imitation accuracy and supports the hypothesis that empathic functioning may utilise motor control mechanisms which are also used for emotional expression. PMID:23626756

  6. Ability of LANDSAT-8 Oli Derived Texture Metrics in Estimating Aboveground Carbon Stocks of Coppice Oak Forests

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sohrabi, H.

    2016-06-01

    The role of forests as carbon reservoirs has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since field measurement of aboveground carbon stocks is a destructive, costly, and time-consuming activity, aerial and satellite remote sensing techniques have gained much attention in this field. Although the use of aerial data for predicting aboveground carbon stocks has proved highly accurate, there are challenges related to high acquisition costs, small area coverage, and limited availability of these data. These challenges are more critical for non-commercial forests located in low-income countries. The Landsat program provides repeated acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of texture metrics derived from multispectral Landsat 8 Operational Land Imager (OLI) data for quantifying aboveground carbon stocks of coppice Oak forests in the Zagros Mountains, Iran. We used four window sizes (3×3, 5×5, 7×7, and 9×9) and four offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured, and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from the derived metrics. Results showed that, in general, larger windows for deriving texture metrics resulted in models with better fit. In addition, the usefulness of the spectral bands for deriving texture metrics in the regression models was ranked as b4>b3>b2>b5. The best offset was [1,-1]. Among the different metrics, mean and entropy entered most of the regression models. Overall, different models based on derived texture metrics
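
    A hedged sketch of the texture extraction for one window using scikit-image (version 0.19 or later assumed): the four unit offsets above map roughly to distance 1 at angles 0, 45, 90, and 135 degrees, and entropy, mean, and variance are computed from the co-occurrence matrix directly, since graycoprops does not provide them (inverse difference is omitted here).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def window_texture(band, levels=32):
    """GLCM texture metrics for one (quantized) window of a Landsat band."""
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean()
             for p in ('contrast', 'dissimilarity', 'homogeneity',
                       'correlation', 'ASM')}
    p = glcm.mean(axis=(2, 3))                 # average over the four offsets
    feats['entropy'] = -np.sum(p * np.log(p + 1e-12))
    i = np.arange(levels)
    feats['mean'] = np.sum(p * i[:, None])
    feats['variance'] = np.sum(p * (i[:, None] - feats['mean']) ** 2)
    return feats
```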

  7. Rainfall estimation by inverting SMOS soil moisture estimates: A comparison of different methods over Australia

    NASA Astrophysics Data System (ADS)

    Brocca, Luca; Pellarin, Thierry; Crow, Wade T.; Ciabatta, Luca; Massari, Christian; Ryu, Dongryeol; Su, Chun-Hsu; Rüdiger, Christoph; Kerr, Yann

    2016-10-01

    Remote sensing of soil moisture has reached a level of maturity and accuracy for which the retrieved products can be used to improve hydrological and meteorological applications. In this study, the soil moisture product from the Soil Moisture and Ocean Salinity (SMOS) satellite is used for improving satellite rainfall estimates obtained from the Tropical Rainfall Measuring Mission multisatellite precipitation analysis product (TMPA) using three different "bottom up" techniques: SM2RAIN, Soil Moisture Analysis Rainfall Tool, and Antecedent Precipitation Index Modification. The implementation of these techniques aims at improving the well-known "top down" rainfall estimate derived from TMPA products (version 7) available in near real time. Ground observations provided by the Australian Water Availability Project are considered as a separate validation data set. The three algorithms are calibrated against the gauge-corrected TMPA reanalysis product, 3B42, and used for adjusting the TMPA real-time product, 3B42RT, using SMOS soil moisture data. The study area covers the entire Australian continent, and the analysis period ranges from January 2010 to November 2013. Results show that all the SMOS-based rainfall products improve the performance of 3B42RT, even at daily time scale (differently from previous investigations). The major improvements are obtained in terms of estimation of accumulated rainfall with a reduction of the root-mean-square error of more than 25%. Also, in terms of temporal dynamic (correlation) and rainfall detection (categorical scores) the SMOS-based products provide slightly better results with respect to 3B42RT, even though the relative performance between the methods is not always the same. The strengths and weaknesses of each algorithm and the spatial variability of their performances are identified in order to indicate the ways forward for this promising research activity. Results show that the integration of bottom up and top down approaches
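
    Of the three bottom-up techniques, SM2RAIN has a particularly compact core: rainfall is estimated by inverting the soil-water balance. A minimal sketch follows; Z, a, and b are site-calibrated parameters, and evapotranspiration during rainfall is neglected, per the method's usual simplification.

```python
import numpy as np

def sm2rain(sm, dt_days, Z, a, b):
    """Estimate rainfall (mm/day) from relative soil saturation (0..1):
    p(t) ~ Z*dS/dt + a*S^b, where the power term models drainage."""
    sm = np.asarray(sm, dtype=float)
    dS = np.gradient(sm, dt_days)       # dS/dt per day
    p = Z * dS + a * sm ** b
    return np.clip(p, 0.0, None)        # no negative rainfall
```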

  8. Some Features of the Sampling Distribution of the Ability Estimate in Computerized Adaptive Testing According to Two Stopping Rules.

    ERIC Educational Resources Information Center

    Blais, Jean-Guy; Raiche, Gilles

    This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics make it possible to judge the meaning that can be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…

  9. The Mayfield method of estimating nesting success: A model, estimators and simulation results

    USGS Publications Warehouse

    Hensler, G.L.; Nichols, J.D.

    1981-01-01

    Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
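
    A minimal sketch of the Mayfield estimator and the asymptotic quantities discussed above; the variance form var(s) = s(1-s)/exposure is the standard m.l.e.-based one, and the example numbers are hypothetical.

```python
import math

def mayfield(losses, exposure_days, nest_period_days, z=1.96):
    """Mayfield daily survival rate, an approximate confidence interval,
    and nest success over the full nesting period."""
    s = 1.0 - losses / exposure_days        # daily survival m.l.e.
    var = s * (1.0 - s) / exposure_days     # asymptotic variance
    half = z * math.sqrt(var)
    return {
        "daily_survival": s,
        "ci": (s - half, s + half),
        "nest_success": s ** nest_period_days,
    }

# Example: 12 losses in 480 exposure-days, 25-day nesting period.
print(mayfield(12, 480.0, 25))
```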

  10. Improved methods of estimating critical indices via fractional calculus

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, S. K.; Bhattacharyya, K.

    2002-05-01

    Efficiencies of certain methods for the determination of critical indices from power-series expansions are shown to be considerably improved by a suitable implementation of fractional differentiation. In the context of the ratio method (RM), kinship of the modified strategy with the ad hoc 'shifted' RM is established and the advantages are demonstrated. Further, in the course of the estimation of critical points, significant improvement of the convergence properties of diagonal Padé approximants is observed on several occasions by invoking this concept. Test calculations are performed on (i) various Ising spin-1/2 lattice models for susceptibility series attended with a ferromagnetic phase transition, (ii) complex model situations involving confluent and antiferromagnetic singularities and (iii) the chain-generating functions for self-avoiding walks on triangular, square and simple cubic lattices.
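
    For orientation, the unmodified RM estimates that fractional differentiation refines are the standard textbook ones (not specific to this paper):

```latex
\[
r_n = \frac{a_n}{a_{n-1}}
    = \frac{1}{x_c}\left[1 + \frac{\gamma-1}{n} + O\!\left(n^{-2}\right)\right]
\quad\text{for } f(x)=\sum_n a_n x^n \sim A\,(1 - x/x_c)^{-\gamma},
\]
\[
\frac{1}{x_c} \approx n\,r_n - (n-1)\,r_{n-1},
\qquad
\gamma \approx 1 + n\,(x_c\,r_n - 1).
\]
```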

  11. A method for obtaining time-periodic Lp estimates

    NASA Astrophysics Data System (ADS)

    Kyed, Mads; Sauer, Jonas

    2017-01-01

    We introduce a method for showing a priori Lp estimates for time-periodic, linear partial differential equations set in a variety of domains such as the whole space, the half space and bounded domains. The method is generic and can be applied to a wide range of problems. We demonstrate it on the heat equation. The main idea is to replace the time axis with a torus in order to reformulate the problem on a locally compact abelian group and to employ Fourier analysis on this group. As a by-product, maximal Lp regularity for the corresponding initial-value problem follows without the notion of R-boundedness. Moreover, we introduce the concept of a time-periodic fundamental solution.

  12. Probabilistic seismic loss estimation via endurance time method

    NASA Astrophysics Data System (ADS)

    Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.

    2017-01-01

    Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.

  13. Method of Estimating Continuous Cooling Transformation Curves of Glasses

    NASA Technical Reports Server (NTRS)

    Zhu, Dongmei; Zhou, Wancheng; Ray, Chandra S.; Day, Delbert E.

    2006-01-01

    A method is proposed for estimating the critical cooling rate and continuous cooling transformation (CCT) curve from isothermal TTT data of glasses. The critical cooling rates and CCT curves for a group of lithium disilicate glasses containing different amounts of Pt as nucleating agent, estimated through this method, are compared with the experimentally measured values. By analysis of the experimental and calculated data for the lithium disilicate glasses, a simple relationship was found between the amount crystallized in the glasses during continuous cooling, X, and the undercooling, ΔT: X = A R^-4 exp(B ΔT), where ΔT is the temperature difference between the theoretical melting point of the glass composition and the temperature in question, R is the cooling rate, and A and B are constants. The relation between the amounts of crystallization during continuous cooling and during an isothermal hold can be expressed as X_cT/X_iT = (4/B)^4 ΔT^-4, where X_cT stands for the amount crystallized in a glass during continuous cooling for a time t when the temperature reaches T, and X_iT is the amount crystallized during an isothermal hold at temperature T for the same time t.
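
    The two closed-form relations above translate directly into code; the constants A and B are glass-specific and must be fitted from TTT data.

```python
import numpy as np

def crystallized_fraction_cooling(R, dT, A, B):
    # X = A * R**-4 * exp(B * dT): amount crystallized after continuous
    # cooling at rate R down to an undercooling dT.
    return A * R ** -4.0 * np.exp(B * dT)

def cooling_to_isothermal_ratio(dT, B):
    # X_cT / X_iT = (4/B)**4 * dT**-4: continuous-cooling vs. isothermal
    # crystallization at the same temperature and time.
    return (4.0 / B) ** 4 * dT ** -4.0
```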

  14. Study on color difference estimation method of medicine biochemical analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important example. Urine test paper shows a corresponding color for each detection target and degree of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, enabling further analysis and diagnosis. Color is a three-dimensional psychophysical variable, whereas reflectance is a one-dimensional variable; therefore, color-difference estimation in urine testing can offer better precision and convenience than the conventional one-dimensional reflectance test, supporting an accurate diagnosis. A digital camera can easily capture an image of the urine test paper and thus makes the urine biochemical analysis convenient. In the experiment, the color image of the urine test paper was taken with a consumer color digital camera and saved on a computer running simple color-space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Each test sample is graded according to quantitative color detection. The images from every test are saved on the computer, so the whole course of the illness can be monitored. This method can also be used in other biochemical analyses in medicine that involve color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
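
    A minimal sketch of the RGB -> XYZ -> L*a*b* chain and a CIE76 color difference; the sRGB/D65 conversion constants are standard values assumed here, since the record does not specify the camera characterization.

```python
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.9505, 1.0, 1.089])

def srgb_to_lab(rgb):
    """sRGB values in 0..1 -> XYZ -> CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M_SRGB_TO_XYZ @ lin
    t = xyz / WHITE_D65
    f = np.where(t > (6/29) ** 3, np.cbrt(t), t / (3 * (6/29) ** 2) + 4/29)
    return np.array([116 * f[1] - 16,          # L*
                     500 * (f[0] - f[1]),      # a*
                     200 * (f[1] - f[2])])     # b*

def delta_e(rgb_sample, rgb_reference):
    # CIE76 color difference between the test-paper patch and the standard.
    return np.linalg.norm(srgb_to_lab(rgb_sample) - srgb_to_lab(rgb_reference))
```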

  15. Evaluation of non-destructive methods for estimating biomass in marshes of the upper Texas, USA coast

    USGS Publications Warehouse

    Whitbeck, M.; Grace, J.B.

    2006-01-01

    The estimation of aboveground biomass is important in the management of natural resources. Direct measurements by clipping, drying, and weighing of herbaceous vegetation are time-consuming and costly. Therefore, non-destructive methods for efficiently and accurately estimating biomass are of interest. We compared two non-destructive methods, visual obstruction and light penetration, for estimating aboveground biomass in marshes of the upper Texas, USA coast. Visual obstruction was estimated using the Robel pole method, which primarily measures the density and height of the canopy. Light penetration through the canopy was measured using a Decagon light wand, with readings taken above the vegetation and at the ground surface. Clip plots were also taken to provide direct estimates of total aboveground biomass. Regression relationships between estimated and clipped biomass were significant using both methods. However, the light penetration method was much more strongly correlated with clipped biomass under these conditions (R2 value 0.65 compared to 0.35 for the visual obstruction approach). The primary difference between the two methods in this situation was the ability of the light-penetration method to account for variations in plant litter. These results indicate that light-penetration measurements may be better for estimating biomass in marshes when plant litter is an important component. We advise that, in all cases, investigators should calibrate their methods against clip plots to evaluate applicability to their situation. © 2006, The Society of Wetland Scientists.

  16. Method for estimating road salt contamination of Norwegian lakes

    NASA Astrophysics Data System (ADS)

    Kitterød, Nils-Otto; Wike Kronvall, Kjersti; Turtumøygaard, Stein; Haaland, Ståle

    2013-04-01

    Consumption of road salt in Norway, used to improve winter road conditions, has tripled during the last two decades, and there is a need to quantify limits for optimal use of road salt to avoid further environmental harm. The purpose of this study was to implement a methodology to estimate chloride concentration in any given water body in Norway. This goal is feasible if the complexity of solute transport in the landscape is simplified. The idea was to keep computations as simple as possible in order to increase the spatial resolution of the input functions. The first simplification we made was to treat all roads exposed to regular salt application as steady-state sources of sodium chloride. This is valid if new road salt is applied before previous contamination is removed through precipitation. The main reasons for this assumption are the significant retention capacities of vegetation, organic matter, and soil. The second simplification we made was that the groundwater table is close to the surface. This assumption is valid for the major part of Norway, which means that topography is sufficient to delineate the catchment area at any location in the landscape. Given these two assumptions, we applied spatial functions of mass load (mass NaCl per time unit) and conditional estimates of normal water balance (volume of water per time unit) to calculate steady-state chloride concentration along the lake perimeter. The spatial resolution of mass load and estimated concentration along the lake perimeter was 25 m x 25 m, while the water balance had 1 km x 1 km resolution. The method was validated for a limited number of Norwegian lakes, and estimation results have been compared to observations. Initial results indicate significant overlap between measurements and estimations, but only for lakes where road salt is the major contributor to chloride contamination. For lakes in catchments with high subsurface transmissivity, the groundwater table is not necessarily following the

  17. New Methods for Estimating Seasonal Potential Climate Predictability

    NASA Astrophysics Data System (ADS)

    Feng, Xia

    This study develops two new statistical approaches to assess the seasonal potential predictability of observed climate variables. One is the univariate analysis of covariance (ANOCOVA) model, a combination of an autoregressive (AR) model and analysis of variance (ANOVA). It has the advantage of taking into account the uncertainty of the estimated parameters due to sampling errors in statistical tests, which is often neglected in AR-based methods, and of accounting for daily autocorrelation, which is not considered in traditional ANOVA. In the ANOCOVA model, the seasonal signals arising from external forcing are tested for being identical or not, to assess whether any interannual variability that may exist is potentially predictable. The bootstrap is an attractive alternative method that requires no hypothesized model and is available no matter how mathematically complicated the parameter estimator. This method builds up the empirical distribution of the interannual variance from resamplings drawn with replacement from the given sample, in which the only predictability in seasonal means arises from weather noise. These two methods are applied to temperature and water cycle components, including precipitation and evaporation, to measure the extent to which the interannual variance of seasonal means exceeds the unpredictable weather noise, and they are compared with previous methods, including Leith-Shukla-Gutzler (LSG), Madden, and Katz. The potential predictability of temperature from the ANOCOVA model, bootstrap, LSG, and Madden exhibits a pronounced tropical-extratropical contrast, with much larger predictability in the tropics, dominated by El Nino/Southern Oscillation (ENSO), than in higher latitudes, where strong internal variability lowers predictability. Bootstrap tends to display the highest predictability of the four methods, ANOCOVA lies in the middle, while LSG and Madden appear to generate lower predictability. Seasonal precipitation from ANOCOVA, bootstrap, and Katz, resembling that

  18. A Test of the Passalacqua Age at Death Estimation Method Using the Sacrum.

    PubMed

    Colarusso, Tara

    2016-01-01

    A test of the accuracy of the Passalacqua (J Forensic Sci, 5, 2009, 255) sacrum method in a forensic context was performed on a sample of 153 individuals from the J.C.B. Grant Skeletal Collection. The Passalacqua (J Forensic Sci, 5, 2009, 255) method assesses seven traits of the sacrum using a 7-digit coding system. An accuracy of 97.3% was achieved using the Passalacqua (J Forensic Sci, 5, 2009, 255) method to estimate adult skeletal age. On average each age estimate differed by 12.87 years from the known age. The method underestimated the age of individuals by an average of 4.3 years. An intra-observer error of 6.6% suggests that the method can be performed with precision. Correlation and regression analysis found that the sacral traits used in the Passalacqua (J Forensic Sci, 5, 2009, 255) method did not have a strong relationship with age or an ability to strongly predict age. Overall, the method was not practical for use in a forensic context due to the broad age ranges, despite the high accuracy and low intra-observer error.

  19. Estimation Of Rheological Law By Inverse Method From Flow And Temperature Measurements With An Extrusion Die

    NASA Astrophysics Data System (ADS)

    Pujos, Cyril; Regnier, Nicolas; Mousseau, Pierre; Defaye, Guy; Jarny, Yvon

    2007-05-01

    Simulation quality is determined by knowledge of the model parameters. Yet rheological models for polymers are often not very accurate, since viscosity measurements are made under approximations such as homogeneous temperature and empirical corrections such as the Bagley correction. Furthermore, rheological behavior is often described by mathematical laws such as the Cross or Carreau-Yasuda models, whose parameters are fitted to viscosity values obtained from corrected experimental data and are not appropriate for every polymer. To correct these shortcomings, a table-like rheological model is proposed. This choice makes the estimation of model parameters easier, since each parameter has the same order of magnitude. Because no mathematical shape is imposed on the model, the estimation process is appropriate for any polymer. The proposed method consists in minimizing the quadratic norm of the difference between calculated variables and measured data. In this study, an extrusion die is simulated in order to provide reference values of the temperature along the extrusion channel, the pressure, and the flow. These data characterize the thermal transfer and flow phenomena in which viscosity is involved. Furthermore, the different natures of the data allow viscosity to be estimated over a large range of shear rates. The estimated rheological model improves the agreement between measurements and simulation: for numerical cases, the error on the flow becomes less than 0.1% for non-Newtonian rheology. This method couples measurements and simulation, constitutes a very accurate means of determining rheology, and improves the predictive ability of the model.

  20. Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods

    USGS Publications Warehouse

    Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.

    2002-01-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
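
    Among the listed methods, the chloride mass balance is compact enough to sketch; the steady-state form and the example numbers below are illustrative assumptions, not the study's values.

```python
def recharge_chloride_mass_balance(P_mm_yr, cl_precip_mg_l, cl_pore_mg_l,
                                   dry_deposition_mg_l_eq=0.0):
    """Steady-state chloride mass balance: R = P * Cl_input / Cl_porewater,
    with precipitation P in mm/yr; dry deposition can be folded into the
    effective chloride input."""
    cl_in = cl_precip_mg_l + dry_deposition_mg_l_eq
    return P_mm_yr * cl_in / cl_pore_mg_l

# Hypothetical arid-site numbers: 170 mm/yr precipitation, 0.4 mg/L
# effective chloride input, 20 mg/L pore-water chloride -> 3.4 mm/yr.
print(recharge_chloride_mass_balance(170.0, 0.4, 20.0))
```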

  1. Estimating recharge at Yucca Mountain, Nevada, USA: comparison of methods

    SciTech Connect

    Flint, A. L.; Flint, L. E.; Kwicklis, E. M.; Fabryka-Martin, J. T.; Bodvarsson, G. S.

    2001-11-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface. [References: 57]

  2. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    NASA Astrophysics Data System (ADS)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm are, however, controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
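
    The comparison described above can be mimicked on a toy problem: the sketch below contrasts a Monte Carlo uncertainty estimate with the linearized covariance sigma^2 (J^T J)^(-1) evaluated at the optimum. The exponential model is a hypothetical stand-in for the finite element diffraction solver, and all values are illustrative.

```python
# Minimal sketch: Monte Carlo vs. linearized (Gauss-Newton) parameter
# uncertainties for a toy nonlinear least-squares problem. The model below is
# a stand-in, not the FEM diffraction solver of the paper.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
p_true = np.array([1.0, 3.0])
sigma = 0.01                                   # measurement uncertainty

def model(p):                                  # toy "efficiencies"
    return p[0] * np.exp(-p[1] * x)

def fit(y):
    return least_squares(lambda p: model(p) - y, x0=[0.5, 1.0])

# Linearized covariance at the optimum: sigma^2 * (J^T J)^{-1}
sol = fit(model(p_true) + sigma * rng.standard_normal(x.size))
J = sol.jac
cov_lin = sigma**2 * np.linalg.inv(J.T @ J)

# Monte Carlo: refit many noisy data sets and inspect the spread of estimates
draws = np.array([fit(model(p_true) + sigma * rng.standard_normal(x.size)).x
                  for _ in range(500)])
print("linearized std :", np.sqrt(np.diag(cov_lin)))
print("Monte Carlo std:", draws.std(axis=0))
```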

  3. Estimation of Anthocyanin Content of Berries by NIR Method

    SciTech Connect

    Zsivanovits, G.; Ludneva, D.; Iliev, A.

    2010-01-21

    Anthocyanin contents of fruits were estimated by a VIS spectrophotometer and compared with spectra measured by a NIR spectrophotometer (600-1100 nm, step 10 nm). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol using NIR is easier, faster and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, built from a monochromator and peripherals by Bentham Instruments Ltd (GB), and a photometric camera created at the Canning Research Institute were used. An important feature of this camera is that it allows simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed by CAMO Unscrambler 9.1 software with the PCA, PLS and PCR methods. Based on the analyzed spectra, calibrations sensitive to quality and quantity were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.
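
    As a schematic of this kind of chemometric calibration, the sketch below fits a PLS regression relating simulated NIR spectra (600-1100 nm in 10 nm steps, as above) to anthocyanin content and cross-validates it. The band shape, noise level, and sample values are hypothetical stand-ins for real fruit spectra.

```python
# Minimal sketch: PLS calibration of simulated NIR spectra against anthocyanin
# content. All spectral and concentration data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
wavelengths = np.arange(600, 1101, 10)            # 600-1100 nm, step 10 nm
n_samples = 60
conc = rng.uniform(0.1, 2.0, n_samples)           # "true" anthocyanin, arbitrary units

# Simulated spectra: one concentration-dependent band plus noise
band = np.exp(-0.5 * ((wavelengths - 700) / 40.0) ** 2)
X = conc[:, None] * band + 0.05 * rng.standard_normal((n_samples, wavelengths.size))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, conc, cv=10).ravel()
print(f"cross-validated r = {np.corrcoef(conc, pred)[0, 1]:.3f}")
```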

  4. Empirical evaluation of the ability to learn a calorie counting system and estimate portion size and food intake.

    PubMed

    Martin, Corby K; Anton, Stephen D; York-Crowe, Emily; Heilbronn, Leonie K; VanSkiver, Claudia; Redman, Leanne M; Greenway, Frank L; Ravussin, Eric; Williamson, Donald A

    2007-08-01

    The aim of this study was to determine if: (1) participants could learn the HMR Calorie System by testing if their use of the system was more accurate after training; and (2) estimated portion size and food intake improved with training. A secondary aim was to use PACE (photographic assessment of calorie estimation) to assess if participants learned the HMR system. The PACE consists of pictures of foods, the energy content of which is known. A within-subjects design was used to test the aims of this study. Participants were 44 overweight (BMI 25-30) adults who were trained to use the HMR system to estimate portion size and the amount of food eaten. The PACE was also used to quantify accuracy at using the HMR system. Training resulted in more accurate estimation of food intake, use of the HMR system and estimated portion size when presented with food. Additionally, training resulted in significantly more accurate use of the HMR system when measured with PACE. It is concluded that people can learn the HMR Calorie System and improve the accuracy of portion size and food intake estimates. The PACE is a useful assessment tool to test if participants learn a calorie counting system.

  5. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    Fry method is a graphical technique that uses the relative movement of material points, typically the grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to the packing density; (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations; (iii) estimation of strain in each distorted point set by the Fry method; and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain centre distributions show stronger clustering and a greater degree of anisotropy as the packing density decreases. As compared to the degree of sorting alone, a
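
    For readers unfamiliar with the construction, the sketch below builds the core of a Fry plot for a synthetic point set deformed by a known strain; the central vacancy of the plotted vectors approximates the strain ellipse. The point count, strain ratio, and uniform (rather than anticlustered) point generation are simplifications for illustration only.

```python
# Minimal sketch of the core Fry construction: plot all pairwise separation
# vectors between grain centres; the central vacancy approximates the strain
# ellipse. Point data here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 400
pts = rng.uniform(0, 100, (n, 2))                # uniform points (anticlustering ignored)

# Apply a known constant-area homogeneous strain (ratio 2:1) to the point set
R = 2.0
F = np.array([[np.sqrt(R), 0.0], [0.0, 1.0 / np.sqrt(R)]])
deformed = pts @ F.T

# Fry plot: all vectors between distinct point pairs, centred on the origin
diff = deformed[:, None, :] - deformed[None, :, :]
fry = diff[~np.eye(n, dtype=bool)]               # drop zero self-vectors
print("Fry points:", fry.shape)                  # inspect the central vacancy graphically
```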

  6. An extended stochastic method for seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.

    2015-12-01

    In this contribution, we developed an extended stochastic technique for seismic hazard assessment. The technique builds on the stochastic method of Boore (2003), "Simulation of ground motion using the stochastic method", Pure Appl. Geophys. 160:635-676. The essential purpose of the extended stochastic technique is to simulate ground motion in order to minimize the consequences of future earthquakes. The first step of this technique is defining the seismic sources which most strongly affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources. This is followed by estimating the ground motion using an empirical attenuation relationship. Finally, the site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this technique at Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta cities to predict the ground motion. It was also applied at Cairo, Zagazig and Damietta cities to estimate the maximum peak ground acceleration at actual soil conditions. In addition, 0.5, 1, 5, 10 and 20% damping median response spectra are estimated using the extended stochastic simulation technique. The highest calculated acceleration values at bedrock conditions are found at Suez city, with a value of 44 cm s-2. These acceleration values decrease towards the north of the study area, reaching 14.1 cm s-2 at Damietta city. This agrees with, and is comparable to, the results of previous seismic hazard studies in northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.

  7. A practical method of estimating energy expenditure during tennis play.

    PubMed

    Novas, A M P; Rowbottom, D G; Jenkins, D G

    2003-03-01

    This study aimed to develop a practical method of estimating energy expenditure (EE) during tennis. Twenty-four elite female tennis players first completed a tennis-specific graded test in which five different intensity levels were applied randomly. Each intensity level was intended to simulate a "game" of singles tennis and comprised six 14 s periods of activity alternated with 20 s of active rest. Oxygen consumption (VO2) and heart rate (HR) were measured continuously and each player's rating of perceived exertion (RPE) was recorded at the end of each intensity level. The rate of energy expenditure (EE(VO2)) during the test was calculated using the sum of VO2 during play and the 'O2 debt' during recovery, divided by the duration of the activity. There were significant individual linear relationships between EE(VO2) and RPE, and between EE(VO2) and HR (r >= 0.89 and r >= 0.93; p < 0.05). On a second occasion, six players completed a 60-min singles tennis match during which VO2, HR and RPE were recorded; EE(VO2) was compared with EE predicted from the previously derived RPE and HR regression equations. Analysis found that EE(VO2) was overestimated by EE(RPE) (92 +/- 76 kJ x h(-1)) and EE(HR) (435 +/- 678 kJ x h(-1)), but the error of estimation for EE(RPE) (t = -3.01; p = 0.03) was less than 5%, whereas for EE(HR) it was 20.7%. The results of the study show that RPE can be used to estimate the energetic cost of playing tennis.
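
    The practical use of such individual calibrations amounts to fitting a per-player regression during the graded test and evaluating it at the match RPE. A minimal sketch, with entirely hypothetical numbers:

```python
# Minimal sketch: derive an individual RPE-to-energy-expenditure regression
# from graded-test data, then predict match EE from match RPE.
# All numbers are illustrative, not the study's data.
import numpy as np

# Graded test for one player: RPE and measured EE (kJ/h) at five intensity levels
rpe_test = np.array([ 8, 10, 12, 14, 16])
ee_test  = np.array([900, 1150, 1400, 1700, 2000])   # hypothetical EE(VO2) values

slope, intercept = np.polyfit(rpe_test, ee_test, 1)  # individual linear fit

rpe_match = 13                                       # RPE reported during a match
print(f"predicted EE: {slope * rpe_match + intercept:.0f} kJ/h")
```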

  8. Estimates of tropical bromoform emissions using an inversion method

    NASA Astrophysics Data System (ADS)

    Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.

    2014-01-01

    Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remain uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region with an inversion process, to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.

  9. Child Mortality Estimation 2013: An Overview of Updates in Estimation Methods by the United Nations Inter-Agency Group for Child Mortality Estimation

    PubMed Central

    Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen

    2014-01-01

    Background In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954

  10. How reliable are the methods for estimating repertoire size?

    PubMed

    Botero, Carlos A; Mudge, Andrew E; Koltz, Amanda M; Hochachka, Wesley M; Vehrencamp, Sandra L

    2008-12-01

    Quantifying signal repertoire size is a critical first step towards understanding the evolution of signal complexity. However, counting signal types can be so complicated and time consuming when repertoire size is large that this trait is often estimated rather than measured directly. We studied how three common methods for repertoire size quantification (i.e., simple enumeration, curve-fitting and capture-recapture analysis) are affected by sample size and presentation style, using simulated repertoires of known sizes. As expected, estimation error decreased with increasing sample size and varied among presentation styles. More surprisingly, for all but one of the presentation styles studied, curve-fitting and capture-recapture analysis yielded errors of similar or greater magnitude than the errors researchers would make by simply assuming that the number of types in an incomplete sample is the true repertoire size. Our results also indicate that studies based on incomplete samples are likely to yield incorrect rankings of individuals and spurious correlations with other parameters regardless of the technique of choice. Finally, we argue that biological receivers face difficulties in quantifying repertoire size similar to those faced by human observers, and we explore some of the biological implications of this hypothesis.
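
    The curve-fitting idea can be illustrated in a few lines: simulate sampling from a repertoire of known size, fit an accumulation curve to the cumulative count of new types, and compare its asymptote with simple enumeration. The sketch below assumes uniform use of song types, which is a simplification of the presentation styles studied in the paper.

```python
# Minimal sketch: estimate repertoire size from an incomplete sample by fitting
# an exponential accumulation curve, and compare with simple enumeration.
# Repertoire and sampling are simulated; values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
true_R = 40                                    # known repertoire size
sample = rng.integers(0, true_R, 120)          # 120 recorded songs, types uniform

# Cumulative number of distinct types after each recorded song
seen, cum = set(), []
for s in sample:
    seen.add(int(s))
    cum.append(len(seen))
n = np.arange(1, len(cum) + 1)

def accum(n, R):                               # expected distinct types, uniform use
    return R * (1 - (1 - 1.0 / R) ** n)

(R_hat,), _ = curve_fit(accum, n, cum, p0=[len(seen)])
print(f"enumeration: {len(seen)}, curve-fit estimate: {R_hat:.1f}, true: {true_R}")
```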

  11. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that the Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
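
    The classification step can be sketched generically: fit one mixture model per class and label pixels by comparing log-likelihoods. The sketch below uses scikit-learn's fixed-component GaussianMixture as a stand-in for FJ-GMMs (which select the number of components automatically), and random features as stand-ins for Gabor filter bank responses.

```python
# Minimal sketch: two-class GMM likelihood-ratio labelling of pixel features.
# Features are random stand-ins for the Gabor filter bank responses of the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical training features for the two classes (valid vs. occluded)
valid_feats    = rng.normal(0.0, 1.0, (500, 4))
occluded_feats = rng.normal(2.0, 1.5, (500, 4))

gmm_valid = GaussianMixture(n_components=3, random_state=0).fit(valid_feats)
gmm_occl  = GaussianMixture(n_components=3, random_state=0).fit(occluded_feats)

# Classify new pixels: valid where the valid-class log-likelihood is higher
pixels = rng.normal(1.0, 1.5, (1000, 4))
mask = gmm_valid.score_samples(pixels) > gmm_occl.score_samples(pixels)
print(f"fraction labelled valid: {mask.mean():.2f}")
```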

  12. Estimation in Latent Trait Models.

    ERIC Educational Resources Information Center

    Rigdon, Steven E.; Tsutakawa, Robert K.

    Estimation of ability and item parameters in latent trait models is discussed. When both ability and item parameters are considered fixed but unknown, the method of maximum likelihood for the logistic or probit models is well known. Discussed are techniques for estimating ability and item parameters when the ability parameters or item parameters…
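
    When the item parameters are treated as known, the maximum likelihood ability estimate for a logistic model can be found by Fisher scoring on the log-likelihood. The sketch below is a generic two-parameter logistic illustration with made-up item values, not the specific procedures discussed in the paper.

```python
# Minimal sketch: maximum likelihood estimation of ability (theta) in a 2PL
# logistic model with known item parameters, via Fisher scoring.
# Item parameters and responses are illustrative.
import numpy as np

a = np.array([1.0, 1.5, 0.8, 1.2, 2.0])     # discriminations (known)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # difficulties (known)
u = np.array([1, 1, 1, 0, 0])               # observed 0/1 responses

theta = 0.0
for _ in range(20):                          # Fisher scoring iterations
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    grad = np.sum(a * (u - p))               # dlogL/dtheta
    info = np.sum(a**2 * p * (1 - p))        # Fisher information
    theta += grad / info
print(f"theta_hat = {theta:.3f}")
```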

  13. [Methods for the estimation of the renal function].

    PubMed

    Fontseré Baldellou, Néstor; Bonal I Bastons, Jordi; Romero González, Ramón

    2007-10-13

    Chronic kidney disease is one of the pathologies with the greatest incidence and prevalence in today's health systems. The ambulatory application of methods that allow suitable detection, monitoring and stratification of renal function is of crucial importance. Given the imprecision of serum creatinine alone, a set of predictive equations for the estimation of the glomerular filtration rate has been developed. Nevertheless, it is essential for the physician to know their limitations: situations of normal renal function or hyperfiltration, certain associated pathologies, and extremes of nutritional status and age. In these cases, isotopic techniques for measuring renal function are preferable.
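
    Two widely published creatinine-based estimates of the kind discussed above are the Cockcroft-Gault creatinine clearance and the 4-variable MDRD eGFR. The sketch below is for illustration only, not clinical use; constants follow the commonly published forms, and current guidelines should be consulted for validated equations.

```python
# Minimal sketch: two common creatinine-based estimates of renal function.
# Illustrative only; consult current clinical guidelines before any use.
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    # Creatinine clearance, mL/min; 0.85 correction factor for women
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_egfr(age, scr_mg_dl, female, black):
    # 4-variable MDRD (IDMS-traceable constant 175), mL/min/1.73 m^2
    egfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(f"CG  : {cockcroft_gault(60, 70, 1.2, female=False):.0f} mL/min")
print(f"MDRD: {mdrd_egfr(60, 1.2, female=False, black=False):.0f} mL/min/1.73m2")
```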

  14. Some methods of computing platform transmitter terminal location estimates

    NASA Astrophysics Data System (ADS)

    Hoisington, C. M.

    A position estimation algorithm was developed to track a humpback whale tagged with an ARGOS platform after a transmitter deployment failure and the whale's diving behavior precluded standard methods. The algorithm is especially useful where a transmitter location program exists; it determines the classical Keplerian elements from the ARGOS spacecraft position vectors included with the probationary file messages. A minimum of three distinct messages is required. Once the spacecraft orbit is determined, the whale is located using standard least squares regression techniques. Experience suggests that in instances where circumstances inherent in the experiment yield message data unsuitable for the standard ARGOS reduction (message data may be too sparse, span an insufficient period, or include variable-length messages), System ARGOS can still provide much valuable location information if the user is willing to accept the increased location uncertainties.

  15. Flotation kinetics: Methods for estimating distribution of rate constants

    SciTech Connect

    Chander, S.; Polat, M.

    1995-12-31

    Many models have been suggested in the past to obtain a satisfactory fit to flotation data. Of these, first-order kinetics models with a distribution of flotation rate constants are the most common. A serious limitation of these models is that the type of the distribution must be presupposed. Methods to overcome this limitation are discussed, and a procedure is suggested for estimating the actual distribution of flotation rate constants. It is demonstrated that the classical first-order model fits the data well when applied to coal flotation in narrow size-specific gravity intervals. When applied to material fractionated on the basis of size alone, three-parameter models, modified from their two-parameter analogs such as the rectangular, sinusoidal, and triangular distributions, gave the most reliable results.

  16. Application of throughfall methods to estimate dry deposition of mercury

    SciTech Connect

    Lindberg, S.E.; Owens, J.G.; Stratton, W.

    1992-12-31

    Several dry deposition methods for mercury (Hg) are being developed and tested in our laboratory. These include big-leaf and multilayer resistance models, micrometeorological methods such as Bowen ratio gradient approaches, laboratory controlled plant chambers, and throughfall. We have previously described our initial results using modeling and gradient methods. Throughfall may be used to estimate Hg dry deposition if some simplifying assumptions are met. We describe here the application and initial results of throughfall studies at the Walker Branch Watershed forest, and discuss the influence of certain assumptions on the interpretation of the data. Throughfall appears useful in that it can place a lower bound on dry deposition under field conditions. Our preliminary throughfall data indicate net dry deposition rates to a pine canopy that increase significantly from winter to summer, as previously predicted by our resistance model. Atmospheric data suggest that rainfall washoff of fine-aerosol dry deposition at this site is not sufficient to account for all of the Hg in net throughfall. Potential additional sources include dry-deposited gas-phase compounds, soil-derived coarse aerosols, and oxidation reactions at the leaf surface.

  17. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528

  18. A new rapid method for rockfall energies and distances estimation

    NASA Astrophysics Data System (ADS)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric

    2016-04-01

    Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope, and they are generally based on a probabilistic rockfall modelling approach that allows the uncertainties associated with the rockfall phenomenon to be taken into account. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78,400 rockfall scenarios generated by systematically varying the input parameters that are likely to affect the block trajectory, its energy and its distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed empirical laws to be found that relate impact energies

  19. Inverse method for estimating respiration rates from decay time series

    NASA Astrophysics Data System (ADS)

    Forney, D. C.; Rothman, D. H.

    2012-09-01

    Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
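
    The regularized inversion can be sketched with a non-negative least-squares fit augmented by a Tikhonov penalty. The sketch below is in that spirit but is not the authors' Matlab implementation: the rate grid, regularization strength, and synthetic lognormal "data" are all hypothetical.

```python
# Minimal sketch: recover a distribution of first-order decay rates from a
# mass-loss time series by Tikhonov-regularized non-negative least squares.
# Synthetic data; details differ from the paper's method.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 10, 60)                       # years
k_grid = np.logspace(-2, 1, 50)                  # candidate decay rates, 1/yr

# Synthetic data: lognormal distribution of rates plus noise
rng = np.random.default_rng(6)
w_true = np.exp(-0.5 * ((np.log(k_grid) - np.log(0.3)) / 0.8) ** 2)
w_true /= w_true.sum()
A = np.exp(-np.outer(t, k_grid))                 # kernel: remaining mass per pool
y = A @ w_true + 0.005 * rng.standard_normal(t.size)

# Tikhonov regularization: augment the system with lam * I and a zero target
lam = 0.1
A_aug = np.vstack([A, lam * np.eye(k_grid.size)])
y_aug = np.concatenate([y, np.zeros(k_grid.size)])
w_hat, _ = nnls(A_aug, y_aug)
print("recovered mean log-rate:", np.sum(w_hat * np.log(k_grid)) / w_hat.sum())
```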

  20. Inverse method for estimating respiration rates from decay time series

    NASA Astrophysics Data System (ADS)

    Forney, D. C.; Rothman, D. H.

    2012-03-01

    Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.

  1. Evaluation of Load Estimation Methods and Sampling Strategies by Confidence Intervals in Estimating Solute Flux from a Small Forested Catchment

    NASA Astrophysics Data System (ADS)

    Tada, A.; Tanakamaru, H.

    2008-12-01

    Total mass flux (load) from a catchment is a basic factor in evaluating chemical weathering or in TMDL implementation. Many combinations of load estimation methods and sampling strategies have been tested to obtain unbiased flux estimates. To utilize such flux estimates in policy or scientific applications, information on the uncertainty of the estimates should also be provided. Giving an interval estimate of total flux may be a desirable solution. Total solute flux from a small, undisturbed forested catchment (12.8 ha) during 10 months was calculated based on high-temporal-resolution data and used to validate 95% confidence intervals (CIs) of flux estimates. Water quality data (sodium, potassium, and chloride concentrations) were collected every 15 minutes during 10 months in 2004 by an on-site monitoring system using the FIP (flow injection potentiometry) method with ion-selective electrodes. Water quantity data (the flow rate) were measured continuously by a V-notch weir at the catchment outlet. Flux estimates and 95% CIs were calculated for the three indices with 41 methods: sample average, flow-weighted average, the Beale ratio estimator, the rating curve method with simple linear regression between flux and flow rate, and nine regression models in the USGS Load Estimator (Loadest). Smearing estimates, MVUE estimates, and estimates by the composite method were also evaluated for the nine regression models in Load Estimator. Two sampling strategies were tested: periodic sampling (daily and weekly) and flow-stratified sampling. After the data were sorted in ascending order of flow rate, five strata were configured so that each stratum contained the same number of data points. The performance of the 95% CIs was evaluated by the rate at which the true flux value fell within them, which should approach 0.95. A simple bootstrap method was adopted to construct the CIs with 2,000 bootstrap
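
    A bootstrap CI of the sort being validated can be sketched in a few lines. The data, the flow-concentration relation, and the simple mean-flux estimator below are hypothetical stand-ins for the catchment data and the 41 estimation methods of the study.

```python
# Minimal sketch: percentile bootstrap confidence interval for a load estimate.
# Synthetic daily samples; all values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 300
flow = rng.lognormal(0.0, 0.8, n)                 # flow rate, arbitrary units
conc = 2.0 + 0.5 * flow + rng.normal(0.0, 0.3, n) # concentration, flow-dependent

def load_estimate(q, c):                          # average instantaneous flux
    return np.mean(q * c)

boot = []
for _ in range(2000):                             # 2,000 bootstrap resamples
    idx = rng.integers(0, n, n)
    boot.append(load_estimate(flow[idx], conc[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"load = {load_estimate(flow, conc):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```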

  2. Effects of Using Invention Learning Approach on Inventive Abilities: A Mixed Method Study

    ERIC Educational Resources Information Center

    Wongkraso, Paisan; Sitti, Somsong; Piyakun, Araya

    2015-01-01

    This study aims to enhance inventive abilities for secondary students by using the Invention Learning Approach. Its activities focus on creating new inventions based on the students' interests by using constructional tools. The participants were twenty secondary students who took an elective science course that provided instructional units…

  3. A Method for Estimation of Death Tolls in Disastrous Earthquake

    NASA Astrophysics Data System (ADS)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls are among the most important items of earthquake damage and loss. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, it not only makes emergency programs and disaster management more effective but also supplies critical information for planning and managing disaster response and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, types and usage habits of buildings, distribution of population and socio-economic conditions, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is at present the greatest in the world. Meanwhile, complete seismic data are readily available from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake happens. It therefore becomes possible to estimate earthquake death tolls in Taiwan based on this preliminary information. Firstly, we form the arithmetic mean of the three components of the peak ground acceleration (PGA) to give a PGA Index for each seismic station, according to the mainshock data of the Chi-Chi earthquake. To supply the distribution of iso-seismic intensity contours in all districts, and to resolve the problem of districts containing no seismic station, the PGA Index and the geographical coordinates of the individual stations were combined using the kriging interpolation method and GIS software. The population density depends on

  4. A Novel Parameter Estimation Method for Boltzmann Machines.

    PubMed

    Takenouchi, Takashi

    2015-11-01

    We propose a novel estimator for a specific class of probabilistic models on discrete spaces, such as the Boltzmann machine. The proposed estimator is derived from minimization of a convex risk function and can be constructed without calculating the normalization constant, whose computational cost is of exponential order. We investigate statistical properties of the proposed estimator, such as consistency and asymptotic normality, in the framework of the estimating function. Small experiments show that the proposed estimator can attain performance comparable to the maximum likelihood estimator at a much lower computational cost and is applicable to high-dimensional data.

  5. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

    In exoskeletal robots, quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, equipped with torque sensors in the hip and knee joints. The proposed method was evaluated on 10 healthy participants during body-weight-supported gait training. The experimental results show that the muscular torque can be estimated accurately from the torque sensor measurements under both relaxed and activated muscle conditions.
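
    The subtraction at the heart of such a method is muscular torque = measured joint torque minus modelled limb dynamics. The single-joint sketch below illustrates this; the limb parameters are illustrative placeholders, not the identified user-specific values of the paper.

```python
# Minimal sketch: muscular torque as measured torque minus modelled limb
# dynamics, for a single revolute joint. Parameters are illustrative only.
import numpy as np

def limb_dynamics_torque(theta, dtheta, ddtheta, m, l, I):
    g = 9.81
    inertial = I * ddtheta                 # inertia about the joint
    gravity  = m * g * l * np.sin(theta)   # gravitational torque, CoM at distance l
    # No Coriolis/centrifugal term arises for a single joint; dtheta would
    # enter through friction or multi-joint coupling in a fuller model.
    return inertial + gravity

# One sample: measured joint torque and limb state (hypothetical values)
tau_measured = 18.0                        # N.m, from the joint torque sensor
theta, dtheta, ddtheta = 0.4, 1.0, 2.5     # rad, rad/s, rad/s^2
tau_muscle = tau_measured - limb_dynamics_torque(theta, dtheta, ddtheta,
                                                 m=4.0, l=0.25, I=0.35)
print(f"estimated muscular torque: {tau_muscle:.1f} N.m")
```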

  6. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability to reliably estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  7. Bounded Influence Propagation τ-Estimation: A New Robust Method for ARMA Model Estimation

    NASA Astrophysics Data System (ADS)

    Muma, Michael; Zoubir, Abdelhak M.

    2017-04-01

    A new robust and statistically efficient estimator for ARMA models, called the bounded influence propagation (BIP) τ-estimator, is proposed. The estimator incorporates an auxiliary model, which prevents the propagation of outliers. Strong consistency and asymptotic normality of the estimator for ARMA models that are driven by independently and identically distributed (iid) innovations with symmetric distributions are established. To analyze the infinitesimal effect of outliers on the estimator, the influence function is derived and computed explicitly for an AR(1) model with additive outliers. To obtain estimates for the AR(p) model, a robust Durbin-Levinson type and a forward-backward algorithm are proposed. An iterative algorithm to robustly obtain ARMA(p,q) parameter estimates is also presented. The problem of finding a robust initialization is addressed, which for orders p+q>2 is a non-trivial matter. Numerical experiments are conducted to compare the finite sample performance of the proposed estimator to existing robust methodologies for different types of outliers, both in terms of average and of worst-case performance, as measured by the maximum bias curve. To illustrate the practical applicability of the proposed estimator, a real-data example of outlier cleaning for R-R interval plots derived from electrocardiographic (ECG) data is considered. The proposed estimator is not limited to biomedical applications, but is also useful in any real-world problem whose observations can be modeled as an ARMA process disturbed by outliers or impulsive noise.

  8. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
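
    The simplest version of this idea can be illustrated under a homogeneous Poisson assumption, where squared point-to-nearest-plant distances are exponentially distributed and yield the unbiased estimator lambda_hat = (n - 1) / (pi * sum of r_i^2). The sketch below is this Poisson baseline, not the paper's order-statistics estimator, and all numbers are synthetic.

```python
# Minimal sketch: plant density from point-to-nearest-plant distances under a
# homogeneous Poisson assumption. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(8)
lam_true = 0.05                                  # plants per unit area
side = 200.0
plants = rng.uniform(0, side, (rng.poisson(lam_true * side**2), 2))

n = 50                                           # random sample points
pts = rng.uniform(20, side - 20, (n, 2))         # keep away from edges
d2 = ((pts[:, None, :] - plants[None, :, :]) ** 2).sum(-1).min(axis=1)

lam_hat = (n - 1) / (np.pi * d2.sum())           # unbiased under Poisson
print(f"true density {lam_true}, estimate {lam_hat:.4f}")
```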

  9. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  10. Methods for Predicting Job-Ability Requirements: I. Ability Requirements as a Function of Changes in the Characteristics of an Auditory Signal Identification Task.

    ERIC Educational Resources Information Center

    Wheaton, George R.; And Others

    The relationship between variations in an auditory signal identification task and consequent changes in the abilities related to identification performance was investigated. Characteristics of the signal identification task were manipulated by varying signal duration and signal-to-noise ratio. Subjects received a battery of reference ability tests…

  11. An ultrasonic guided wave method to estimate applied biaxial loads

    NASA Astrophysics Data System (ADS)

    Shi, Fan; Michaels, Jennifer E.; Lee, Sang Jun

    2012-05-01

    Guided waves propagating in a homogeneous plate are known to be sensitive to both temperature changes and applied stress variations. Here we consider the inverse problem of recovering homogeneous biaxial stresses from measured changes in phase velocity at multiple propagation directions using a single mode at a specific frequency. Although there is no closed form solution relating phase velocity changes to applied stresses, prior results indicate that phase velocity changes can be closely approximated by a sinusoidal function with respect to angle of propagation. Here it is shown that all sinusoidal coefficients can be estimated from a single uniaxial loading experiment. The general biaxial inverse problem can thus be solved by fitting an appropriate sinusoid to measured phase velocity changes versus propagation angle, and relating the coefficients to the unknown stresses. The phase velocity data are obtained from direct arrivals between guided wave transducers whose direct paths of propagation are oriented at different angles. This method is applied and verified using sparse array data recorded during a fatigue test. The additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of transducer pairs. Results show that applied stresses can be successfully recovered from the measured changes in guided wave signals.
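
    The inversion step reduces to fitting a sinusoid in 2*theta to the measured phase velocity changes and mapping its coefficients back to the stresses through acoustoelastic constants calibrated in a uniaxial test. The sketch below follows that logic with entirely hypothetical constants and data.

```python
# Minimal sketch: recover biaxial stresses from dv(theta) = A + B*cos(2*theta).
# Acoustoelastic constants and "measurements" are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

# Calibration (hypothetical): a uniaxial load sigma along 0 deg gives
# dv(theta) = (c1 + c2*cos(2*theta)) * sigma
c1, c2 = -2.0e-5, 5.0e-5                           # per MPa, from the uniaxial test

theta = np.deg2rad(np.arange(0, 180, 15))          # propagation directions
sx, sy = 80.0, 30.0                                # "unknown" stresses, MPa
rng = np.random.default_rng(9)
# Superpose the two principal stresses; cos(2(theta-90deg)) = -cos(2*theta)
dv = c1 * (sx + sy) + c2 * (sx - sy) * np.cos(2 * theta)
dv += 1e-4 * rng.standard_normal(theta.size)       # measurement noise

def sinusoid(th, A, B):
    return A + B * np.cos(2 * th)

(A, B), _ = curve_fit(sinusoid, theta, dv)
# Invert: A = c1*(sx + sy), B = c2*(sx - sy)
s_sum, s_diff = A / c1, B / c2
print(f"sx = {(s_sum + s_diff) / 2:.1f} MPa, sy = {(s_sum - s_diff) / 2:.1f} MPa")
```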

  12. Effect of Rasch Calibration on Ability and DIF Estimation in Computer-Adaptive Tests. Research Report RR-94-32.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; And Others

    A previous simulation study of methods for assessing item functioning (DIF) in computer-adaptive tests (CATs) showed that modified versions of the Mantel-Haenszel and standardization methods work well with CAT data. In that study, data were generated using the three-parameter logistic (3PL) model, and this same model was assumed in obtaining item…

  13. Novel method of channel estimation for WCDMA downlink

    NASA Astrophysics Data System (ADS)

    Sheng, Bin; You, XiaoHu

    2001-10-01

    A novel scheme for channel estimation is proposed in this paper for the WCDMA downlink, where a pilot channel is transmitted simultaneously with a data traffic channel. The proposed scheme exploits channel information in both the pilot and data traffic channels by combining channel estimates from the two channels. Computer simulations demonstrate that the performance of the Rake receiver is clearly improved.

  14. Effects of Mathematics Integration in a Teaching Methods Course on Mathematics Ability of Preservice Agricultural Education Teachers

    ERIC Educational Resources Information Center

    Stripling, Christopher T.; Roberts, T. Grady

    2014-01-01

    The purpose of this study was to determine the effects of incorporating mathematics teaching and integration strategies (MTIS) in a teaching methods course on preservice agricultural teachers' mathematics ability. The research design was quasi-experimental and utilized a nonequivalent control group. The MTIS treatment had a positive effect on the…

  15. Software Effort Estimation Accuracy: A Comparative Study of Estimations Based on Software Sizing and Development Methods

    ERIC Educational Resources Information Center

    Lafferty, Mark T.

    2010-01-01

    The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…

  16. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become an urgent issue because of increasing immigration, in order to estimate accurately the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the stages of development of the periodontal ligament of the third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis considered the lower third molars. The results provided by the different observers were then compared in order to quantify the interobserver error. Results showed that the interobserver error varies between 43 and 57% for the right lower third molar (M48) and between 23 and 49% for the left lower third molar (M38). A chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply by personnel without adequate training, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.

  17. Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation

    PubMed Central

    Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Baca, Arnold; Markovic, Goran

    2016-01-01

    The study aimed to evaluate the test–retest reliability of a newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m s-1; SQ: 2.92±1.0 vs. 2.93±1.0 m s-1; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70–0.88), and the coefficients of variation were low (CV = 5.3–5.4%). Finally, all three 356-SST variables identify, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men’s soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations. PMID:26812247

  18. Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation.

    PubMed

    Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Baca, Arnold; Markovic, Goran

    2016-01-01

    The study aimed to evaluate the test-retest reliability of a newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m s-1; SQ: 2.92±1.0 vs. 2.93±1.0 m s-1; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70-0.88), and the coefficients of variation were low (CV = 5.3-5.4%). Finally, all three 356-SST variables identify, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men's soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations.

  19. Comparison of methods for the estimation of measurement uncertainty for an analytical method for sulphonamides.

    PubMed

    Dabalus Islam, M; Schweikert Turcu, M; Cannavan, A

    2008-12-01

    A simple and inexpensive liquid chromatographic method for the determination of seven sulphonamides in animal tissues was validated. The measurement uncertainty of the method was estimated using two approaches: a 'top-down' approach based on in-house validation data, which used either repeatability data or intra-laboratory reproducibility; and a 'bottom-up' approach, which included repeatability data from spiking experiments. The decision limits (CCalpha) applied in the European Union were calculated for comparison. The bottom-up approach was used to identify critical steps in the analytical procedure, which comprised extraction, concentration, hexane-wash and HPLC-UV analysis. Six replicates of porcine kidney were fortified at the maximum residue limit (100 microg kg(-1)) at three different stages of the analytical procedure, extraction, evaporation, and final wash/HPLC analysis, to provide repeatability data for each step. The uncertainties of the gravimetric and volumetric measurements were estimated and integrated in the calculation of the total combined uncertainties by the bottom-up approach. Estimates for systematic error components were included in both approaches. Combined uncertainty estimates for the seven compounds using the 'top-down' approach ranged from 7.9 to 12.5% (using reproducibility) and from 5.4 to 9.5% (using repeatability data) and from 5.1 to 9.0% using the bottom-up approach. CCalpha values ranged from 105.6 to 108.5 microg kg(-1). The major contributor to the combined uncertainty for each analyte was identified as the extraction step. Since there was no statistical difference between the uncertainty values obtained by either approach, the analyst would be justified in applying the 'top-down' estimation using method validation data, rather than performing additional experiments to obtain uncertainty data.
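
    The arithmetic of a bottom-up budget is a root-sum-of-squares combination of independent components, expanded with a coverage factor. A minimal sketch with entirely illustrative component values, not the paper's:

```python
# Minimal sketch: combining independent relative standard uncertainties by
# root-sum-of-squares and expanding with coverage factor k = 2.
# Component values are illustrative, not the study's results.
import math

components = {                 # relative standard uncertainties, %
    "extraction":   6.0,
    "evaporation":  2.5,
    "HPLC-UV":      2.0,
    "volumetric":   0.8,
    "gravimetric":  0.3,
}
u_combined = math.sqrt(sum(u**2 for u in components.values()))
U_expanded = 2 * u_combined    # coverage factor k = 2 (~95% confidence)
print(f"combined u = {u_combined:.1f} %, expanded U (k=2) = {U_expanded:.1f} %")
```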

  20. [Preparation method of stalk environmental biomaterial and its sorption ability for polycyclic aromatic hydrocarbons in water].

    PubMed

    He, Jiao; Kong, Huo-Liang; Han, Jin; Gao, Yan-Zheng

    2011-01-01

    Soybean, sesame and corn stalks were pyrolyzed and charred for 8 h at 300-700 degrees C to obtain stalk environmental biomaterials. The BET specific surface areas and the methylene blue and iodine adsorption capacities of the stalk environmental biomaterials were determined. The sorption efficiency of these materials for a single polycyclic aromatic hydrocarbon (PAH) and for mixed PAHs was investigated. The BET specific surface areas of the stalk biomaterials increased, and their sorption of methylene blue and iodine was enhanced, with increasing treatment temperature. The obtained stalk biomaterials could effectively remove PAHs from water. For instance, 91.28%, 89.01% and 99.66% of the naphthalene, acenaphthene and phenanthrene in 32 mL of water were removed by 0.01 g of biomaterial obtained from soybean stalk at 700 degrees C. The removal efficiencies of the biomaterials for mixed PAHs in water were in the order phenanthrene > naphthalene > acenaphthene. However, the sorption abilities of the produced stalk biomaterials differed significantly, following the order corn > soybean > sesame for the removal of naphthalene and acenaphthene, and soybean > corn > sesame for phenanthrene removal. Results of this work provide insight into the reuse of crop stalks and open a new perspective on treating organically polluted water with biomaterials.

  1. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
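
    As a hedged illustration of the variational idea, the sketch below fits a toy one-pool carbon model, a stand-in for DALEC rather than DALEC itself, by minimizing a weighted least-squares cost over synthetic observations; scipy's internal numerical gradient stands in for an adjoint-derived gradient, and the parameter bounds play the role of the "ecological common sense" constraints:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy one-pool carbon model: the pool receives a fixed fraction of GPP
    # and loses carbon at a first-order turnover rate.
    def run_model(params, gpp, c0, dt=1.0):
        alloc, turnover = params
        c = np.empty(len(gpp))
        state = c0
        for i, g in enumerate(gpp):
            state = state + dt * (alloc * g - turnover * state)
            c[i] = state
        return c

    def cost(params, gpp, c0, obs, obs_var):
        # Weighted least-squares misfit, the core of a 4DVAR-style cost function
        return 0.5 * np.sum((run_model(params, gpp, c0) - obs) ** 2 / obs_var)

    rng = np.random.default_rng(0)
    gpp = 5 + 2 * np.sin(np.linspace(0, 4 * np.pi, 200))  # synthetic forcing
    truth = run_model((0.45, 0.02), gpp, c0=100.0)
    obs = truth + rng.normal(0, 1.0, truth.size)          # noisy observations

    res = minimize(cost, x0=(0.3, 0.05), args=(gpp, 100.0, obs, 1.0),
                   bounds=[(0.0, 1.0), (1e-4, 0.5)])      # "ecological" bounds
    print(res.x)  # recovered (allocation fraction, turnover rate)
    ```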

  2. Analytic Method to Estimate Particle Acceleration in Flux Ropes

    NASA Technical Reports Server (NTRS)

    Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.

    2015-01-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.

  3. Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation

    PubMed Central

    Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

    2013-01-01

    This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
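
    Both two-stage estimators build on the standard Kalman recursion, extending it to separate the fault (stiffness-reduction) term from the structural state. A minimal generic recursion, with placeholder matrices rather than the paper's shear-frame model, looks like this:

    ```python
    import numpy as np

    # Minimal Kalman filter recursion: the core on which the two-stage
    # estimators (OTSKE/RTSKF) build by additionally separating a bias/fault
    # term (the stiffness reduction) from the structural state.
    def kalman_step(x, P, z, F, H, Q, R):
        x_pred = F @ x                       # state prediction
        P_pred = F @ P @ F.T + Q             # covariance prediction
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x_pred + K @ (z - H @ x_pred)    # measurement update
        P = (np.eye(len(x)) - K @ H) @ P_pred
        return x, P
    ```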

  4. Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study

    ERIC Educational Resources Information Center

    Suero, Manuel; Privado, Jesús; Botella, Juan

    2017-01-01

    A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters d' and C of signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, d' and c. Those methods have been mostly assessed by…
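
    For context, the point estimators and one classic analytic (delta-method) variance for d' are easy to state; the sketch below uses the Gourevitch and Galanter (1967) approximation, one of the approaches such simulation studies typically benchmark, with made-up trial counts:

    ```python
    import numpy as np
    from scipy.stats import norm

    # Point estimates of d' and C, plus the classic analytic variance of d'.
    def dprime_with_var(hits, misses, fas, crs):
        n_signal, n_noise = hits + misses, fas + crs
        H, F = hits / n_signal, fas / n_noise            # hit/false-alarm rates
        zH, zF = norm.ppf(H), norm.ppf(F)
        d = zH - zF                                       # sensitivity d'
        c = -0.5 * (zH + zF)                              # criterion C
        # Gourevitch & Galanter (1967) variance approximation for d'
        var_d = (H * (1 - H) / (n_signal * norm.pdf(zH) ** 2)
                 + F * (1 - F) / (n_noise * norm.pdf(zF) ** 2))
        return d, c, var_d

    print(dprime_with_var(hits=80, misses=20, fas=30, crs=70))
    ```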

  5. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

    In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the eastern coast of the UK. The amplitudes of the tidal constituents at these two stations are not the same. Moreover, unlike the Workington record, the Bandar Abbas tidal record is not an equispaced time series. The analysis of the hourly tidal observations in Bandar Abbas and Workington can therefore provide a reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum proposed by the LS-HE method. According to the obtained results, the spectra of these two tidal records contain the components with the largest amplitudes among those expected in this time span, as well as some frequencies new to the list of known constituents. In addition, the power spectra derived from the two aforementioned methods agree on the frequencies with maximum amplitude. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
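
    The core of LS-HE can be sketched briefly: for each trial frequency, fit a sine/cosine pair by least squares, with no requirement that the samples be equispaced, and score the frequency by how much it reduces the residual sum of squares. A minimal version (synthetic gappy record, M2 constituent only) might look like:

    ```python
    import numpy as np

    # Least-squares spectrum for a possibly unevenly spaced tide-gauge record:
    # peaks in the RSS reduction mark tidal constituents.
    def ls_spectrum(t, y, omegas):
        y = y - y.mean()
        power = np.empty(len(omegas))
        for i, w in enumerate(omegas):
            A = np.column_stack([np.cos(w * t), np.sin(w * t)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            power[i] = y @ y - resid @ resid  # RSS reduction at this frequency
        return power

    # Hourly record with gaps still resolves the M2 constituent (~12.42 h)
    t = np.sort(np.random.default_rng(1).choice(np.arange(0, 2000.0), 1500,
                                                replace=False))
    omega_m2 = 2 * np.pi / 12.42
    y = np.cos(omega_m2 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
    omegas = 2 * np.pi / np.linspace(10, 14, 400)  # trial periods in hours
    best = omegas[np.argmax(ls_spectrum(t, y, omegas))]
    print(f"detected period: {2 * np.pi / best:.2f} h")
    ```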

  6. Dynamic State Estimation Utilizing High Performance Computing Methods

    SciTech Connect

    Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw

    2009-03-18

    The state estimation tools which are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper presents an overview of the Kalman filtering process and then focuses on the implementation of the prediction component on multiple processors.

  7. Life Estimation Method for Optical Disk and Data Migration Method for Digitally Recorded Media

    NASA Astrophysics Data System (ADS)

    Watanabe, Atsumi

    Results of a study on a lifetime estimation method for DVD disks are described. The study was performed by the Digital Content Association of Japan (DCAj) under commission from The Mechanical Social Systems Foundation, with subsidies from JKA's industry promotional funds raised from KEIRIN RACE. DVD disks for which quality control is well performed have lifetimes of 50-100 years or more. A data migration method for digitally recorded media is also described: an error check is requested every 3 years, and if the measured error rate is larger than a determined value, immediate data migration to new media is requested.

  8. Can the gradient method improve our ability to predict soil respiration?

    NASA Astrophysics Data System (ADS)

    Phillips, Claire; Nickerson, Nicholas; Risk, Dave

    2015-04-01

    Soil surface flux measurements integrate respiration across steep vertical gradients of soil texture, moisture, temperature, and carbon substrates. Although there are benefits to integrating complex soil processes in a single surface measure, i.e. for constructing soil carbon budgets, one serious drawback of studying only surface respiration is the difficulty in generating predictive relationships from environmental drivers. For example, the relationship between depth-integrated soil respiration and temperature measured at a single discrete depth (apparent temperature sensitivity) can bear little resemblance to the temperature sensitivity of soil respiration within soil layers (actual temperature sensitivity). Here we present several examples of how the inferred environmental sensitivity of soil respiration can be improved from observations of CO2 flux profiles in contrast to surface fluxes alone. We present a theoretical approach for estimating the temperature sensitivity of soil respiration in situ, called the weighted heat flux approach, which avoids much of the hysteresis produced by typical respiration-temperature comparisons. The weighted heat flux approach gives more accurate estimates of within-soil temperature sensitivity, and is arguably the most theoretically robust analytical temperature model available. We also show how soil drying influences the effectiveness of the weighted heat flux approach, as well as the relative activity of discrete soil layers and specific soil organisms, such as mycorrhizal fungi. The additional information provided by within-soil flux profiles can improve the fidelity of both probabilistic and mechanistic soil respiration models.
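
    The gradient (flux-profile) method itself reduces to Fick's law applied between measurement depths; production within a layer is the difference of the boundary fluxes. The numbers below are illustrative, and a real application would derive the effective diffusivity from a tortuosity model rather than assume a constant:

    ```python
    # The gradient method in its simplest form: soil CO2 production within a
    # layer is inferred from Fick's law applied to a measured CO2 profile.

    def fick_flux(c_upper, c_lower, dz, Ds):
        """Diffusive CO2 flux (positive upward) between two depths [mol m-2 s-1]."""
        return -Ds * (c_upper - c_lower) / dz

    # CO2 concentrations (mol m-3) at 0.05, 0.15 and 0.30 m depth (illustrative)
    c5, c15, c30 = 0.012, 0.030, 0.046
    Ds = 2.0e-6  # effective diffusivity in soil air (m2 s-1), assumed constant

    flux_top = fick_flux(c5, c15, dz=0.10, Ds=Ds)      # flux across 5-15 cm
    flux_bottom = fick_flux(c15, c30, dz=0.15, Ds=Ds)  # flux across 15-30 cm
    layer_production = flux_top - flux_bottom          # CO2 source within 5-30 cm
    print(f"fluxes: {flux_top:.2e}, {flux_bottom:.2e}; "
          f"production: {layer_production:.2e} mol m-2 s-1")
    ```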

  9. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method

    PubMed Central

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. Twenty-two healthy volunteers were injected with 17–19 mCi of 99mTc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) conventional correction (referred to as the Gates' method), (2) the Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using the conventional method, Buijs method, BgdA subtraction, and BgdB subtraction was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D.) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D. during exercise. The mean estimated myocardial uptake of 99mTc-MIBI was dependent on the correction method used. Comparison among the four methods of background activity correction applied in this study showed that the Buijs method was the most suitable for background correction in myocardial perfusion scans. PMID:26955568

  10. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method.

    PubMed

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. Twenty-two healthy volunteers were injected with 17-19 mCi of (99m)Tc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) conventional correction (referred to as the Gates' method), (2) the Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using the conventional method, Buijs method, BgdA subtraction, and BgdB subtraction was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D.) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D. during exercise. The mean estimated myocardial uptake of (99m)Tc-MIBI was dependent on the correction method used. Comparison among the four methods of background activity correction applied in this study showed that the Buijs method was the most suitable for background correction in myocardial perfusion scans.
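
    The conjugate view calculation underlying both records is the geometric mean of background-corrected anterior and posterior counts, divided by a body-attenuation factor and a camera calibration. A minimal sketch with illustrative numbers (not the study's data):

    ```python
    import math

    # Conjugate view quantification: background correction is applied to the
    # ROI counts before the geometric mean is formed. All numbers illustrative.
    def organ_activity(ant_counts, post_counts, bkg_ant, bkg_post,
                       mu, thickness, calib):
        """Activity in MBq from background-corrected conjugate views."""
        ia = ant_counts - bkg_ant          # background-corrected anterior counts
        ip = post_counts - bkg_post        # background-corrected posterior counts
        geo_mean = math.sqrt(ia * ip)
        attenuation = math.exp(-mu * thickness / 2)  # body-attenuation factor
        return geo_mean / (attenuation * calib)

    # e.g. mu ~ 0.12 cm^-1 for 140 keV photons in tissue, 20 cm body thickness,
    # camera sensitivity 90 counts/s per MBq (hypothetical calibration)
    print(organ_activity(5200, 4300, 700, 600, mu=0.12, thickness=20, calib=90))
    ```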

  11. A Hybrid Method to Estimate Specific Differential Phase and Rainfall With Linear Programming and Physics Constraints

    DOE PAGES

    Huang, Hao; Zhang, Guifu; Zhao, Kun; ...

    2016-10-20

    A hybrid method of combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators based on LP, least squares fitting, and a self-consistent relation among polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.
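
    One of the baseline estimators mentioned above, least squares fitting, is simple to sketch: KDP is half the range derivative of the differential phase, estimated here as the slope of a line fitted over a moving window (the hybrid LP estimator adds monotonicity and physics constraints on top of this idea):

    ```python
    import numpy as np

    # Least-squares KDP estimator: fit a line to the differential phase (PhiDP)
    # profile over a sliding range window and take half the slope.
    def kdp_ls(phi_dp, ranges_km, window=9):
        half = window // 2
        kdp = np.full(phi_dp.shape, np.nan)
        for i in range(half, len(phi_dp) - half):
            r = ranges_km[i - half:i + half + 1]
            p = phi_dp[i - half:i + half + 1]
            slope = np.polyfit(r, p, 1)[0]   # deg/km
            kdp[i] = 0.5 * slope             # KDP = half the range derivative
        return kdp
    ```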

  12. Iterative methods for distributed parameter estimation in parabolic PDE

    SciTech Connect

    Vogel, C.R.; Wade, J.G.

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  13. PST - a new method for estimating PSA source terms

    SciTech Connect

    1996-12-31

    The Parametric Source Term (PST) code has been developed for estimating radioactivity release fractions. The PST code is a framework of equations based on activity transport between volumes in the release pathway from the core, through the vessel, through the containment, and to the environment. The code is fast-running because it obtains exact solutions to differential equations for activity transport in each volume for each time interval. It has successfully been applied to estimate source terms for the six Pressurized Water Reactors (PWRs) that were selected for initial consideration in the Accident Sequence Precursor (ASP) Level 2 model development effort. This paper describes the PST code and the manner in which it has been applied to estimate radioactivity release fractions for the six PWRs initially considered in the ASP Program.

  14. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
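
    A hedged sketch of the simplified estimation: once the Weibull shape parameter is frozen at a constant (the value below is assumed for illustration; the abstract does not state the paper's constant), the scale, and hence the whole distribution, follows from a single summary statistic such as the average lifespan:

    ```python
    from math import gamma, exp

    # With the shape fixed, the Weibull scale follows from the mean lifespan:
    # mean = scale * Gamma(1 + 1/k)  =>  scale = mean / Gamma(1 + 1/k)
    K_SHAPE = 2.6  # assumed constant shape parameter (illustrative value)

    def scale_from_mean(mean_lifespan, k=K_SHAPE):
        return mean_lifespan / gamma(1 + 1 / k)

    def survival(age, scale, k=K_SHAPE):
        """Fraction of cars still in use at a given age (Weibull survival)."""
        return exp(-((age / scale) ** k))

    scale = scale_from_mean(14.0)  # e.g. a 14-year average lifespan
    print(f"scale={scale:.1f} yr, share surviving at 10 yr: {survival(10, scale):.2f}")
    ```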

  15. Issues and advances in research methods on video games and cognitive abilities.

    PubMed

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game training, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and lack of genre differentiation, have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of this data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into what cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process.

  16. Issues and advances in research methods on video games and cognitive abilities

    PubMed Central

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game training, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and lack of genre differentiation, have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of this data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into what cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process. PMID:26483717

  17. PHREATOPHYTE WATER USE ESTIMATED BY EDDY-CORRELATION METHODS.

    USGS Publications Warehouse

    Weaver, H.L.; Weeks, E.P.; Campbell, G.S.; Stannard, D.I.; Tanner, B.D.

    1986-01-01

    Water use was estimated for three phreatophyte communities: a saltcedar community and an alkali-Sacaton grass community in New Mexico, and a greasewood rabbit-brush-saltgrass community in Colorado. These water-use estimates were calculated from eddy-correlation measurements using three different analyses, since the direct eddy-correlation measurements did not satisfy a surface energy balance. The analysis that appears to be most accurate indicated that the saltcedar community used from 58 to 87 cm (23 to 34 in.) of water each year. The other two communities used about two-thirds of this quantity.

  18. Emotions and encounters with healthcare professionals as predictors for the self-estimated ability to return to work: a cross-sectional study of people with heart failure

    PubMed Central

    Söderlund, Anne

    2016-01-01

    Objectives To live with heart failure means that life is delimited. Still, people with heart failure can have a desire to stay active in working life as long as possible. Although a number of factors affect sick leave and rehabilitation processes, little is known about sick leave and vocational rehabilitation concerning people with heart failure. This study aimed to identify emotions and encounters with healthcare professionals as possible predictors for the self-estimated ability to return to work in people on sick leave due to heart failure. Design A population-based cross-sectional study design was used. Setting The study was conducted in Sweden. Data were collected in 2012 from 3 different sources: 2 official registries and 1 postal questionnaire. Participants A total of 590 individuals were included. Statistics Descriptive statistics, correlation analysis and linear multiple regression analysis were used. Results 3 variables, feeling strengthened in the situation (β=−0.21, p=0.02), feeling happy (β=−0.24, p=0.02) and receiving encouragement about work (β=−0.32, p≤0.001), were identified as possible predictive factors for the self-estimated ability to return to work. Conclusions To feel strengthened, happy and to receive encouragement about work can affect the return to work process for people on sick leave due to heart failure. In order to develop and implement rehabilitation programmes to meet these needs, more research is needed. PMID:28186921

  19. Preservice Early Childhood Teachers' Learning of Science in a Methods Course: Examining the Predictive Ability of an Intentional Learning Model

    NASA Astrophysics Data System (ADS)

    Saçkes, Mesut; Trundle, Kathy Cabe

    2014-06-01

    This study investigated the predictive ability of an intentional learning model in the change of preservice early childhood teachers' conceptual understanding of lunar phases. Fifty-two preservice early childhood teachers who were enrolled in an early childhood science methods course participated in the study. Results indicated that the use of metacognitive strategies facilitated preservice early childhood teachers' use of deep-level cognitive strategies, which in turn promoted conceptual change. Also, preservice early childhood teachers with high motivational beliefs were more likely to use cognitive and metacognitive strategies. Thus, they were more likely to engage in conceptual change. The results provided evidence that the hypothesized model of intentional learning has high predictive ability in explaining the change in preservice early childhood teachers' conceptual understandings from the pre- to the post-interviews. Implications for designing a science methods course for preservice early childhood teachers are provided.

  20. COMPARISON OF METHODS FOR ESTIMATING GROUND-WATER PUMPAGE FOR IRRIGATION.

    USGS Publications Warehouse

    Frenzel, Steven A.

    1985-01-01

    Ground-water pumpage for irrigation was measured at 32 sites on the eastern Snake River Plain in southern Idaho during 1983. Pumpage at these sites also was estimated by three commonly used methods, and pumpage estimates were compared to measured values to determine the accuracy of each estimate. Statistical comparisons of estimated and metered pumpage using an F-test showed that only estimates made using the instantaneous discharge method were not significantly different (alpha = 0.01) from metered values. Pumpage estimates made using the power consumption method reflect variability in pumping efficiency among sites. Pumpage estimates made using the crop-consumptive use method reflect variability in water-management practices. Pumpage estimates made using the instantaneous discharge method reflect variability in discharges at each site during the irrigation season.

  1. Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations

    ERIC Educational Resources Information Center

    Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.

    2016-01-01

    Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…

  2. Assessment of in silico methods to estimate aquatic species sensitivity

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...

  3. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    Wang, Tianyou; Brennan, Robert L.

    2009-01-01

    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  4. Pain from the life cycle perspective: Evaluation and Measurement through psychophysical methods of category estimation and magnitude estimation 1

    PubMed Central

    Sousa, Fátima Aparecida Emm Faleiros; da Silva, Talita de Cássia Raminelli; Siqueira, Hilze Benigno de Oliveira Moura; Saltareli, Simone; Gomez, Rodrigo Ramon Falconi; Hortense, Priscilla

    2016-01-01

    Abstract Objective: to describe acute and chronic pain from the perspective of the life cycle. Methods: participants: 861 people in pain. The Multidimensional Pain Evaluation Scale (MPES) was used. Results: in the category estimation method, the highest descriptor of chronic pain for children/adolescents was "Annoying" and for adults "Uncomfortable"; the highest descriptor of acute pain for children/adolescents was "Complicated" and for adults "Unbearable". In the magnitude estimation method, the highest descriptor of chronic pain was "Desperate" and of acute pain "Terrible". Conclusions: the MPES is a reliable scale; it can be applied during different stages of development. PMID:27556875

  5. Knowledge, Skills, and Abilities for Entry-Level Business Analytics Positions: A Multi-Method Study

    ERIC Educational Resources Information Center

    Cegielski, Casey G.; Jones-Farmer, L. Allison

    2016-01-01

    It is impossible to deny the significant impact from the emergence of big data and business analytics on the fields of Information Technology, Quantitative Methods, and the Decision Sciences. Both industry and academia seek to hire talent in these areas with the hope of developing organizational competencies. This article describes a multi-method…

  6. Variability in Reading Ability Gains as a Function of Computer-Assisted Instruction Method of Presentation

    ERIC Educational Resources Information Center

    Johnson, Erin Phinney; Perry, Justin; Shamir, Haya

    2010-01-01

    This study examines the effects on early reading skills of three different methods of presenting material with computer-assisted instruction (CAI): (1) learner-controlled picture menu, which allows the student to choose activities, (2) linear sequencer, which progresses the students through lessons at a pre-specified pace, and (3) mastery-based…

  7. Ability, Demography, Learning Style, and Personality Trait Correlates of Student Preference for Assessment Method

    ERIC Educational Resources Information Center

    Furnham, Adrian; Christopher, Andrew; Garwood, Jeanette; Martin, Neil G.

    2008-01-01

    More than 400 students from four universities in America and Britain completed measures of learning style preference, general knowledge (as a proxy for intelligence), and preference for examination method. Learning style was consistently associated with preferences: surface learners preferred multiple choice and group work options, and viewed…

  8. Flood frequency estimation by hydrological continuous simulation and classical methods

    NASA Astrophysics Data System (ADS)

    Brocca, L.; Camici, S.; Melone, F.; Moramarco, T.; Tarpanelli, A.

    2009-04-01

    In recent years, the effects of flood damage have motivated the development of new, complex methodologies for simulating the hydrologic/hydraulic behaviour of river systems, which are fundamental for territorial planning as well as for floodplain management and risk analysis. The assessment of flood-prone areas can be carried out through various procedures that are usually based on the estimation of the peak discharge for an assigned probability of exceedance. For ungauged or scarcely gauged catchments this is not straightforward, as the limited availability of historical peak flow data introduces considerable uncertainty into the flood frequency analysis. A possible solution to overcome this problem is the application of hydrological simulation studies in order to generate long synthetic discharge time series. For this purpose, new methodologies based on the stochastic generation of rainfall and temperature data have recently been proposed. The inferred information can be used as input for a continuous hydrological model to generate a synthetic time series of peak river flow and, hence, the flood frequency distribution at a given site. In this study, stochastic rainfall data have been generated via the Neyman-Scott Rectangular Pulses (NSRP) model, which is characterized by a flexible structure in which the model parameters broadly relate to underlying physical features observed in rainfall fields, and which is capable of preserving the statistical properties of a rainfall time series over a range of time scales. The peak river flow time series have been generated through a continuous hydrological model aimed at flood prediction and developed for the purpose (hereinafter named MISDc) (Brocca, L., Melone, F., Moramarco, T., Singh, V.P., 2008. A continuous rainfall-runoff model as tool for the critical hydrological scenario assessment in natural channels. In: M. Taniguchi, W.C. Burnett, Y. Fukushima, M. Haigh, Y. Umezawa (Eds), From headwater to the ocean
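
    The closing step of such a continuous-simulation chain is fitting a frequency distribution to the synthetic annual maxima. As a stand-in for whatever distribution the study adopts, the sketch below fits a Gumbel distribution by moments and reads off the T-year peak discharge:

    ```python
    import numpy as np

    # Fit a Gumbel distribution to synthetic annual maxima by the method of
    # moments and return the T-year quantile (peak discharge).
    def gumbel_quantile(annual_max, return_period):
        m, s = np.mean(annual_max), np.std(annual_max, ddof=1)
        beta = s * np.sqrt(6) / np.pi          # Gumbel scale (moments)
        mu = m - 0.5772 * beta                 # Gumbel location
        p = 1 - 1 / return_period
        return mu - beta * np.log(-np.log(p))  # T-year peak discharge

    rng = np.random.default_rng(3)
    synthetic_peaks = rng.gumbel(loc=120, scale=35, size=500)  # 500 synthetic years
    print(f"Q100 = {gumbel_quantile(synthetic_peaks, 100):.0f} m3/s")
    ```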

  9. New Analysis Methods Estimate a Critical Property of Ethanol Fuel Blends

    SciTech Connect

    2016-03-01

    To date there have been no adequate methods for measuring the heat of vaporization of complex mixtures. This research developed two separate methods for measuring this key property of ethanol and gasoline blends, including the ability to estimate heat of vaporization at multiple temperatures. Methods for determining heat of vaporization of gasoline-ethanol blends by calculation from a compositional analysis and by direct calorimetric measurement were developed. Direct measurement produced values for pure compounds in good agreement with literature. A range of hydrocarbon gasolines were shown to have heat of vaporization of 325 kJ/kg to 375 kJ/kg. The effect of adding ethanol at 10 vol percent to 50 vol percent was significantly larger than the variation between hydrocarbon gasolines (E50 blends at 650 kJ/kg to 700 kJ/kg). The development of these new and accurate methods allows researchers to begin to both quantify the effect of fuel evaporative cooling on knock resistance, and exploit this effect for combustion of hydrocarbon-ethanol fuel blends in high-efficiency SI engines.
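
    The calculation-based method can be illustrated with a mass-weighted mixing rule. The sketch below uses approximate literature values (ethanol roughly 920 kJ/kg; a mid-range hydrocarbon gasoline from the band quoted above) and hypothetical densities to convert volume to mass fractions; it roughly reproduces the reported E50 magnitude:

    ```python
    # Blend heat of vaporization as a mass-weighted average of component HOVs.
    # HOV values are approximate literature figures; the blend is hypothetical.
    ethanol_hov = 920.0   # kJ/kg, approximate
    gasoline_hov = 350.0  # kJ/kg, mid-range for the hydrocarbon gasolines above

    def blend_hov(ethanol_mass_frac):
        return ethanol_mass_frac * ethanol_hov + (1 - ethanol_mass_frac) * gasoline_hov

    # Convert E50 (50 vol%) to a mass fraction using approximate densities
    rho_eth, rho_gas = 0.789, 0.74  # kg/L
    w_eth = 0.5 * rho_eth / (0.5 * rho_eth + 0.5 * rho_gas)
    print(f"E50 HOV ~ {blend_hov(w_eth):.0f} kJ/kg")
    ```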

  10. Effects of instruction on learners' ability to generate an effective pathway in the method of loci.

    PubMed

    Massen, Cristina; Vaterrodt-Plünnecke, Bianca; Krings, Lucia; Hilbig, Benjamin E

    2009-10-01

    One of the most effective mnemonic techniques is the well-known method of loci. Learning and retention, especially of sequentially ordered information, is facilitated by this technique which involves mentally combining salient loci on a well-known path with the material to be learned. There are several variants of this technique that differ in the kind of path that is suggested to the user and it is implicitly assumed that these variants are comparable in effectiveness. The experiments reported in this study were designed to test this assumption. The data of two experiments show that participants who are instructed to generate and apply loci on a route to their work recall significantly more items in a memory test than participants who are instructed to generate and apply loci on a route in their house. These results have practical implications for the instruction and application of the method of loci.

  11. Quantitative estimation of poikilocytosis by the coherent optical method

    NASA Astrophysics Data System (ADS)

    Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.

    2000-05-01

    An investigation of the necessity and required reliability of poikilocytosis determination in hematology has shown that existing techniques suffer from serious shortcomings. To determine the deviation of erythrocyte shape from the normal (rounded) one in blood smears, it is expedient to use an integrative estimate. An algorithm is suggested that is based on the correlation between erythrocyte morphological parameters and properties of the spatial-frequency spectrum of the blood smear. During analytical and experimental research, an integrative form parameter (IFP) was proposed, which characterizes an increase of more than 5% in the relative concentration of cells with changed form and indicates the predominating type of poikilocytes. An algorithm for statistically reliable estimation of the IFP on standard stained blood smears has been developed. To provide quantitative characterization of the morphological features of cells, a form vector has been proposed, and its validity for poikilocyte differentiation was shown.

  12. Evaluation of the ability of antioxidants to counteract lipid oxidation: existing methods, new trends and challenges.

    PubMed

    Laguerre, M; Lecomte, J; Villeneuve, P

    2007-09-01

    Oxidative degradation of lipids, especially that induced by reactive oxygen species (ROS), leads to quality deterioration of foods and cosmetics and could have harmful effects on health. Currently, a very promising way to overcome this is to use vegetable antioxidants for nutritional, therapeutic or food quality preservation purposes. A major challenge is to develop tools to assess the antioxidant capacity and real efficacy of these molecules. Many rapid in vitro tests are now available, but they are often performed in dissimilar conditions and different properties are thus frequently measured. The so-called 'direct' methods, which use oxidizable substrates, seem to be the only ones capable of measuring real antioxidant power. Some oxidizable substrates correspond to molecules or natural extracts exhibiting biological activity, such as lipids, proteins or nucleic acids, while others are model substrates that are not encountered in biological systems or foods. Only lipid oxidation and direct methods using lipid-like substrates will be discussed in this review. The main mechanisms of autoxidation and antioxidation are recapitulated, then the four components of a standard test (oxidizable substrate, medium, oxidation conditions and antioxidant) applied to a single antioxidant or complex mixtures are dealt with successively. The study is focused particularly on model lipids, but also on dietary and biological lipids isolated from their natural environment, including lipoproteins and phospholipidic membranes. Then the advantages and drawbacks of existing methods and new approaches are compared according to the context. Finally, recent trends based on the chemometric strategy are introduced as a highly promising prospect for harmonizing in vitro methods.

  13. Application of Density Estimation Methods to Datasets from a Glider

    DTIC Science & Technology

    2013-09-30

    Küsel, Elizabeth Thorp; Siderius, Martin (Portland State University, Electrical and Computer Engineering Department)

    Fitting the glider with two recording sensors, instead of one, provides the opportunity to investigate other density estimation modalities (Thomas and…).

  14. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
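
    The functional form of these models is ordinary logistic regression. The sketch below shows the shape of such an equation with placeholder coefficients, not values from the report's tables:

    ```python
    import math

    # Logistic model mapping a winter streamflow summary to the probability of
    # exceeding a summer low-flow threshold. Coefficients are placeholders.
    def drought_flow_probability(winter_flow_cfs, b0=-2.1, b1=0.035):
        z = b0 + b1 * winter_flow_cfs
        return 1.0 / (1.0 + math.exp(-z))  # P(summer flow exceeds threshold)

    for q in (20, 60, 120):
        print(q, round(drought_flow_probability(q), 2))
    ```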

  15. Allometric method to estimate leaf area index for row crops

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Leaf area index (LAI) is critical for predicting plant metabolism, biomass production, evapotranspiration, and greenhouse gas sequestration, but direct LAI measurements are difficult and labor intensive. Several methods are available to measure LAI indirectly or calculate LAI using allometric method...

  16. Relative performance of mutual information estimation methods for quantifying the dependence among short and noisy data

    NASA Astrophysics Data System (ADS)

    Khan, Shiraj; Bandyopadhyay, Sharba; Ganguly, Auroop R.; Saigal, Sunil; Erickson, David J., III; Protopopescu, Vladimir; Ostrouchov, George

    2007-08-01

    Commonly used dependence measures, such as linear correlation, cross-correlogram, or Kendall’s τ, cannot capture the complete dependence structure in data unless the structure is restricted to linear, periodic, or monotonic. Mutual information (MI) has been frequently utilized for capturing the complete dependence structure including nonlinear dependence. Recently, several methods have been proposed for the MI estimation, such as kernel density estimators (KDEs), k -nearest neighbors (KNNs), Edgeworth approximation of differential entropy, and adaptive partitioning of the XY plane. However, outstanding gaps in the current literature have precluded the ability to effectively automate these methods, which, in turn, have caused limited adoptions by the application communities. This study attempts to address a key gap in the literature—specifically, the evaluation of the above methods to choose the best method, particularly in terms of their robustness for short and noisy data, based on comparisons with the theoretical MI estimates, which can be computed analytically, as well as with linear correlation and Kendall’s τ. Here we consider smaller data sizes, such as 50, 100, and 1000, and within this study we characterize 50 and 100 data points as very short and 1000 as short. We consider a broader class of functions, specifically linear, quadratic, periodic, and chaotic, contaminated with artificial noise with varying noise-to-signal ratios. Our results indicate KDEs as the best choice for very short data at relatively high noise-to-signal levels whereas the performance of KNNs is the best for very short data at relatively low noise levels as well as for short data consistently across noise levels. In addition, the optimal smoothing parameter of a Gaussian kernel appears to be the best choice for KDEs while three nearest neighbors appear optimal for KNNs. Thus, in situations where the approximate data sizes are known in advance and exploratory data analysis and
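
    For orientation, the simplest member of the estimator families compared above is the plug-in histogram (binned) estimator; KDE and KNN methods refine the same idea with smoother density estimates. A minimal version, including a quadratic dependence that linear correlation misses:

    ```python
    import numpy as np

    # Plug-in binned MI estimator: discretize, form the joint distribution,
    # and sum p(x,y) * log(p(x,y) / (p(x) p(y))).
    def mutual_information(x, y, bins=16):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0  # avoid log(0)
        return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))  # nats

    rng = np.random.default_rng(4)
    x = rng.normal(size=1000)                         # a "short" series per above
    y_linear = x + 0.5 * rng.normal(size=1000)
    y_quadratic = x**2 + 0.5 * rng.normal(size=1000)  # invisible to correlation
    print(mutual_information(x, y_linear), mutual_information(x, y_quadratic))
    ```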

  17. Comparison of Two Parametric Methods to Estimate Pesticide Mass Loads in California's Central Valley

    USGS Publications Warehouse

    Saleh, D.K.; Lorenz, D.L.; Domagalski, J.L.

    2011-01-01

    Mass loadings were calculated for four pesticides in two watersheds with different land uses in the Central Valley, California, by using two parametric models: (1) the Seasonal Wave model (SeaWave), in which a pulse signal is used to describe the annual cycle of pesticide occurrence in a stream, and (2) the Sine Wave model, in which first-order Fourier series sine and cosine terms are used to simulate seasonal mass loading patterns. The models were applied to data collected during water years 1997 through 2005. The pesticides modeled were carbaryl, diazinon, metolachlor, and molinate. Results from the two models show that the ability to capture seasonal variations in pesticide concentrations was affected by pesticide use patterns and the methods by which pesticides are transported to streams. Estimated seasonal loads compared well with results from previous studies for both models. Loads estimated by the two models did not differ significantly from each other, with the exceptions of carbaryl and molinate during the precipitation season, where loads were affected by application patterns and rainfall. However, in watersheds with variable and intermittent pesticide applications, the SeaWave model is more suitable for use on the basis of its robust capability of describing seasonal variation of pesticide concentrations. © 2010 American Water Resources Association. This article is a US Government work and is in the public domain in the USA.

  18. Comparison of two parametric methods to estimate pesticide mass loads in California's Central Valley

    USGS Publications Warehouse

    Saleh, Dina K.; Lorenz, David L.; Domagalski, Joseph L.

    2011-01-01

    Mass loadings were calculated for four pesticides in two watersheds with different land uses in the Central Valley, California, by using two parametric models: (1) the Seasonal Wave model (SeaWave), in which a pulse signal is used to describe the annual cycle of pesticide occurrence in a stream, and (2) the Sine Wave model, in which first-order Fourier series sine and cosine terms are used to simulate seasonal mass loading patterns. The models were applied to data collected during water years 1997 through 2005. The pesticides modeled were carbaryl, diazinon, metolachlor, and molinate. Results from the two models show that the ability to capture seasonal variations in pesticide concentrations was affected by pesticide use patterns and the methods by which pesticides are transported to streams. Estimated seasonal loads compared well with results from previous studies for both models. Loads estimated by the two models did not differ significantly from each other, with the exceptions of carbaryl and molinate during the precipitation season, where loads were affected by application patterns and rainfall. However, in watersheds with variable and intermittent pesticide applications, the SeaWave model is more suitable for use on the basis of its robust capability of describing seasonal variation of pesticide concentrations.
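
    The Sine Wave model's seasonal core is a first-order Fourier pair regressed by ordinary least squares; a minimal sketch (omitting the trend and flow terms a full load model would carry) follows:

    ```python
    import numpy as np

    # First-order Fourier pair in decimal time regressed on (log) load.
    def fit_sine_wave(dec_year, log_load):
        X = np.column_stack([np.ones_like(dec_year),
                             np.sin(2 * np.pi * dec_year),
                             np.cos(2 * np.pi * dec_year)])
        coef, *_ = np.linalg.lstsq(X, log_load, rcond=None)
        return coef  # intercept plus seasonal sine/cosine coefficients

    def predict(dec_year, coef):
        return (coef[0] + coef[1] * np.sin(2 * np.pi * dec_year)
                + coef[2] * np.cos(2 * np.pi * dec_year))
    ```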

  19. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
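
    The reason the equation-error approach reduces to regression is that, with states and controls measured, each model equation is linear in the unknown aerodynamic derivatives. A synthetic pitching-moment example (all signals and coefficients invented for illustration):

    ```python
    import numpy as np

    # Equation-error core: ordinary least squares on a measured moment
    # coefficient against measured states and controls.
    rng = np.random.default_rng(5)
    n = 500
    alpha = rng.normal(0.05, 0.02, n)  # angle of attack (rad)
    q = rng.normal(0.0, 0.05, n)       # nondimensional pitch rate
    de = rng.normal(0.0, 0.1, n)       # elevator deflection (rad)

    true = np.array([0.02, -0.8, -3.5, -1.1])  # Cm0, Cm_alpha, Cm_q, Cm_de
    X = np.column_stack([np.ones(n), alpha, q, de])
    cm = X @ true + rng.normal(0, 0.005, n)    # "measured" moment coefficient

    theta, *_ = np.linalg.lstsq(X, cm, rcond=None)
    print(theta)  # recovered stability and control derivatives
    ```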

  20. An Estimation Method of Waiting Time for Health Service at Hospital by Using a Portable RFID and Robust Estimation

    NASA Astrophysics Data System (ADS)

    Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki

    Patients who see a doctor often have to wait a long time at many hospitals. According to patient questionnaires, long waiting times are the greatest source of dissatisfaction with hospital service. The present paper describes a method for estimating the waiting time for each patient without an electronic medical chart system. The method applies a portable RFID system to data acquisition and robustly estimates the probability distributions of consultation and test times for each doctor, yielding highly accurate waiting-time estimates. We carried out data acquisition at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used for data acquisition in various fields such as marketing, entertainment, and human behavior measurement.
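
    The abstract does not specify the robust estimator used, so the sketch below shows one standard choice: summarizing RFID-derived consultation durations with median/MAD trimming before multiplying by the queue length, so that a single corrupted record barely moves the estimate:

    ```python
    import numpy as np

    # Robust summary of per-doctor service times before the queue multiplication.
    def robust_service_time(durations_min):
        d = np.asarray(durations_min, float)
        med = np.median(d)
        mad = np.median(np.abs(d - med))
        keep = np.abs(d - med) <= 3 * 1.4826 * mad  # trim gross outliers
        return d[keep].mean()

    def waiting_time(queue_length, durations_min):
        return queue_length * robust_service_time(durations_min)

    # A consultation logged at 95 min (e.g. an RFID read error) barely moves it
    print(waiting_time(4, [8, 12, 9, 11, 10, 95]))
    ```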

  1. Comparing the estimation of postpartum hemorrhage using the weighting method and National Guideline with the postpartum hemorrhage estimation by midwives

    PubMed Central

    Golmakani, Nahid; Khaleghinezhad, Khosheh; Dadgar, Selmeh; Hashempor, Majid; Baharian, Nosrat

    2015-01-01

    Introduction: In developing countries, hemorrhage accounts for 30% of maternal deaths. Postpartum hemorrhage has been defined as blood loss of around 500 ml or more after completion of the third stage of labor. Most cases of postpartum hemorrhage occur during the first hour after birth. The most common reason for bleeding in the early hours after childbirth is uterine atony. Blood loss during delivery is usually estimated visually by the midwife and has a high error rate; however, studies have shown that the use of a standard can improve the estimation. The aim of this research was to compare the estimation of postpartum hemorrhage using the weighting method with estimation using the National Guideline. Materials and Methods: This descriptive study was conducted on 112 females in the Omolbanin Maternity Department of Mashhad over a six-month period, from November 2012 to May 2013. Accessible (convenience) sampling was used. The data collection tools were case selection, observation, and interview forms. For postpartum hemorrhage estimation, after the third stage of labor was complete, the quantity of bleeding was estimated in the first and second hours after delivery by the midwife in charge, using the National Guideline for vaginal delivery provided by the Maternal Health Office. After this visual estimation, the sheets under the parturient in the first and second hours after delivery were exchanged and weighed. The data were analyzed using descriptive statistics and the t-test. Results: A significant difference was found between the estimated blood loss based on the weighting method and that using the National Guideline (weighting method 62.68 ± 16.858 cc vs. National Guideline 45.31 ± 13.484 cc in the first hour after delivery, P = 0.000; weighting method 41.26 ± 10.518 vs. National Guideline 30.24 ± 8.439 in the second hour after delivery, P = 0.000). Conclusions

  2. Abilities of helium immobilization by the UO2 surface using the “ab initio” method

    NASA Astrophysics Data System (ADS)

    Dąbrowski, Ludwik; Szuta, Marcin

    2016-09-01

    We present density functional theory calculation results for uranium dioxide crystals with a helium atom incorporated in the octahedral sites of a nanoscale superficial layer of a UO2 fuel element. In order to quantify the capability of helium immobilization, we propose a quantum model of adsorption and desorption, which we compare with the classical Langmuir model. Significant differences between the models persist over a wide temperature range, including high temperatures of the order of 1000 K. Using the proposed method of quantum isotherms, it was established that the octahedral positions near the metal surface are good traps for helium atoms, while at temperatures close to 1089 K the model predicts an intensive release of helium, which is consistent with the experimental results.

  3. Computation of nonparametric convex hazard estimators via profile methods

    PubMed Central

    Jankowski, Hanna K.; Wellner, Jon A.

    2010-01-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females. PMID:20300560
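
    The second stage is a one-dimensional search over the antimode, exploiting quasi-concavity of the profile likelihood. The sketch below uses a golden-section variant of that bisection idea; profile_loglik is a hypothetical stand-in for the inner support-reduction maximisation at a fixed antimode:

    ```python
    # Golden-section search for the maximum of a quasi-concave profile
    # log-likelihood over the antimode. profile_loglik(a) stands in for the
    # inner maximisation (support reduction) at fixed antimode a.
    def maximise_profile(profile_loglik, lo, hi, tol=1e-6):
        gr = (5 ** 0.5 - 1) / 2  # golden ratio step
        x1, x2 = hi - gr * (hi - lo), lo + gr * (hi - lo)
        f1, f2 = profile_loglik(x1), profile_loglik(x2)
        while hi - lo > tol:
            if f1 < f2:                  # maximum lies right of x1
                lo, x1, f1 = x1, x2, f2
                x2 = lo + gr * (hi - lo)
                f2 = profile_loglik(x2)
            else:                        # maximum lies left of x2
                hi, x2, f2 = x2, x1, f1
                x1 = hi - gr * (hi - lo)
                f1 = profile_loglik(x1)
        return (lo + hi) / 2

    # e.g. a toy quasi-concave profile with its maximum near 2.0
    print(maximise_profile(lambda a: -(a - 2.0) ** 2, 0.0, 5.0))
    ```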

  4. New Torque Estimation Method Considering Spatial Harmonics and Torque Ripple Reduction in Permanent Magnet Synchronous Motors

    NASA Astrophysics Data System (ADS)

    Hida, Hajime; Tomigashi, Yoshio; Ueyama, Kenji; Inoue, Yukinori; Morimoto, Shigeo

    This paper proposes a new torque estimation method that takes into account the spatial harmonics of permanent magnet synchronous motors and that is capable of real-time estimation. First, the torque estimation equation of the proposed method is derived. In this method, the torque ripple of a motor can be estimated from the average of the torque calculated by the conventional method (the cross product of the flux linkage and motor current) and the torque calculated from the electric input power to the motor. Next, the effectiveness of the proposed method is verified by simulations in which two kinds of motors with different torque ripple components are considered. The simulation results show that the proposed method estimates the torque ripple more accurately than the conventional method. Further, the effectiveness of the proposed method is verified experimentally. It is shown that the torque ripple is decreased by applying the proposed method to torque control.
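
    A minimal sketch of the averaging idea, not necessarily the authors' exact formulation (variable names are illustrative): the conventional dq cross-product torque is averaged with a torque computed from electric input power, which carries the ripple information:

    ```python
    # Average of the conventional cross-product torque and a power-based torque.
    def torque_estimate(p_pairs, flux_d, flux_q, i_d, i_q,
                        p_elec, p_loss, omega_m):
        t_cross = 1.5 * p_pairs * (flux_d * i_q - flux_q * i_d)  # conventional
        t_power = (p_elec - p_loss) / omega_m                    # from input power
        return 0.5 * (t_cross + t_power)
    ```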

  5. Estimating the prevalence of anaemia: a comparison of three methods.

    PubMed Central

    Sari, M.; de Pee, S.; Martini, E.; Herman, S.; Sugiatmi; Bloem, M. W.; Yip, R.

    2001-01-01

    OBJECTIVE: To determine the most effective method for analysing haemoglobin concentrations in large surveys in remote areas, and to compare two methods (indirect cyanmethaemoglobin and HemoCue) with the conventional method (direct cyanmethaemoglobin). METHODS: Samples of venous and capillary blood from 121 mothers in Indonesia were compared using all three methods. FINDINGS: When the indirect cyanmethaemoglobin method was used the prevalence of anaemia was 31-38%. When the direct cyanmethaemoglobin or HemoCue method was used the prevalence was 14-18%. Indirect measurement of cyanmethaemoglobin had the highest coefficient of variation and the largest standard deviation of the difference between the first and second assessment of the same blood sample (10-12 g/l indirect measurement vs 4 g/l direct measurement). In comparison with direct cyanmethaemoglobin measurement of venous blood, HemoCue had the highest sensitivity (82.4%) and specificity (94.2%) when used for venous blood. CONCLUSIONS: Where field conditions and local resources allow it, haemoglobin concentration should be assessed with the direct cyanmethaemoglobin method, the gold standard. However, the HemoCue method can be used for surveys involving different laboratories or which are conducted in relatively remote areas. In very hot and humid climates, HemoCue microcuvettes should be discarded if not used within a few days of opening the container containing the cuvettes. PMID:11436471

  6. Testing the ability of a proposed geotechnical based method to evaluate the liquefaction potential analysis subjected to earthquake vibrations

    NASA Astrophysics Data System (ADS)

    Abbaszadeh Shahri, A.; Behzadafshar, K.; Esfandiyari, B.; Rajablou, R.

    2010-12-01

    During earthquakes a number of earth dams have suffered severe damage or major displacements as a result of liquefaction, so modeling with computer codes can provide a reliable tool to predict the response of a dam foundation to earthquakes. Such models can be used in the design of new dams or in safety assessments of existing ones. In this paper, on the basis of field and laboratory tests and by combining several software packages, a seismic geotechnical analysis procedure is proposed and verified by comparison with computer model tests and with field and laboratory experience. Verification or validation of the analyses relies on the ability of the applied computer codes. Using the Silakhor earthquake (2006, Ms 6.1), and in order to check the efficiency of the proposed framework, the procedure is applied to the Korzan earth dam of Iran, located in Hamedan Province, to analyze and estimate the liquefaction potential and safety factor. A central contribution of this study is the design and development by the authors of a computer code named "Abbas Converter", with a graphical user interface, which operates as a logical connector function that computes and models the soil profiles. The results confirm the ability of the generated code to evaluate soil behavior under earthquake excitation. The code also facilitates such studies beyond what previous tools have achieved and overcomes the problems encountered.

  7. Preparation of mesoporous silica thin films by photocalcination method and their adsorption abilities for various proteins.

    PubMed

    Kato, Katsuya; Nakamura, Hitomi; Yamauchi, Yoshihiro; Nakanishi, Kazuma; Tomita, Masahiro

    2014-07-01

    Mesoporous silica (MPS) thin film biosensor platforms were established. MPS thin films were prepared from tetraethoxysilane (TEOS) via sol-gel and spin-coating methods using a poly-(ethylene oxide)-block-poly-(propylene oxide)-block-poly-(ethylene oxide) triblock polymer, such as P123 ((EO)20(PO)70(EO)20) or F127 ((EO)106(PO)70(EO)106), as the structure-directing agent. The MPS thin film prepared using P123 as the mesoporous template and treated via vacuum ultraviolet (VUV) irradiation to remove the triblock copolymer had a more uniform pore array than that of the corresponding film prepared via thermal treatment. Protein adsorption and enzyme-linked immunosorbent assay (ELISA) on the synthesized MPS thin films were also investigated. VUV-irradiated MPS thin films adsorbed a smaller quantity of protein A than the thermally treated films; however, the human immunoglobulin G (IgG) binding efficiency was higher on the former. In addition, protein A-IgG specific binding on MPS thin films was achieved without using a blocking reagent; i.e., nonspecific adsorption was inhibited by the uniform pore arrays of the films. Furthermore, VUV-irradiated MPS thin films exhibited high sensitivity for ELISA testing, and cytochrome c adsorbed on the MPS thin films exhibited high catalytic activity and recyclability. These results suggest that MPS thin films are attractive platforms for the development of novel biosensors.

  8. Child mortality estimation: methods used to adjust for bias due to AIDS in estimating trends in under-five mortality.

    PubMed

    Walker, Neff; Hill, Kenneth; Zhao, Fengmin

    2012-01-01

    In most low- and middle-income countries, child mortality is estimated from data provided by mothers concerning the survival of their children using methods that assume no correlation between the mortality risks of the mothers and those of their children. This assumption is not valid for populations with generalized HIV epidemics, however, and in this review, we show how the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) uses a cohort component projection model to correct for AIDS-related biases in the data used to estimate trends in under-five mortality. In this model, births in a given year are identified as occurring to HIV-positive or HIV-negative mothers, the lives of the infants and mothers are projected forward using survivorship probabilities to estimate survivors at the time of a given survey, and the extent to which excess mortality of children goes unreported because of the deaths of HIV-infected mothers prior to the survey is calculated. Estimates from the survey for past periods can then be adjusted for the estimated bias. The extent of the AIDS-related bias depends crucially on the dynamics of the HIV epidemic, on the length of time before the survey that the estimates are made for, and on the underlying non-AIDS child mortality. This simple methodology (which does not take into account the use of effective antiretroviral interventions) gives results qualitatively similar to those of other studies.

  9. An estimation method of the fault wind turbine power generation loss based on correlation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Zhu, Shourang; Wang, Wei

    2017-01-01

    A method for estimating the power generation loss of a faulty wind turbine is proposed in this paper. In this method, the wind speed at the faulty turbine is first estimated, and the estimated generation loss is then obtained by combining this estimate with the turbine's actual output power characteristic curve. For the wind speed estimation, correlation analysis is used: wind speed records from periods when the faulty turbine operated normally are selected, and regression analysis on these records yields the estimated wind speed. Based on this estimation method, the paper presents an implementation in the wind turbine monitoring system and verifies the effectiveness of the proposed method.
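
    A sketch of the two-step procedure as described, with hypothetical data: wind speeds at a correlated reference turbine are regressed against the faulty turbine's speeds from its normal-operation period, and the estimated speeds during the outage are passed through the turbine's output power curve. Station data, power curve points, and sampling interval are all illustrative assumptions.

```python
import numpy as np

# Hypothetical wind speeds (m/s) at a reference turbine and at the faulty
# turbine, recorded while the faulty turbine still operated normally.
ref_normal   = np.array([5.1, 6.3, 7.8, 8.9, 10.2, 11.5])
fault_normal = np.array([4.8, 6.0, 7.5, 8.4,  9.9, 11.0])

# Least-squares regression fault ~ a * ref + b (the correlation step).
a, b = np.polyfit(ref_normal, fault_normal, 1)

# Wind speed at the reference turbine during the outage period.
ref_outage = np.array([6.5, 7.2, 9.0, 10.5])
est_speed = a * ref_outage + b

# Actual output power characteristic curve (illustrative points), interpolated
# to convert estimated wind speed to power (kW).
curve_speed = np.array([3, 5, 7, 9, 11, 13])
curve_power = np.array([0, 120, 480, 1050, 1700, 2000])
est_power = np.interp(est_speed, curve_speed, curve_power)

dt_hours = 1.0  # one sample per hour in this sketch
loss_kwh = est_power.sum() * dt_hours
print(f"estimated generation loss during outage: {loss_kwh:.0f} kWh")
```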

  10. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio

    PubMed Central

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling the biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient are the sampling methods commonly used in biodiversity surveys in estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population’s sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex-ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns. PMID:27441554

  11. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the

  12. Palatine tonsil volume estimation using different methods after tonsillectomy.

    PubMed

    Sağıroğlu, Ayşe; Acer, Niyazi; Okuducu, Hacı; Ertekin, Tolga; Erkan, Mustafa; Durmaz, Esra; Aydın, Mesut; Yılmaz, Seher; Zararsız, Gökmen

    2016-06-15

    This study was carried out to measure the volume of the palatine tonsil in otorhinolaryngology outpatients with complaints of adenotonsillar hypertrophy and chronic tonsillitis who had undergone tonsillectomy. To date, no study in the literature has measured palatine tonsil volume with different methods and compared the results with subjective tonsil size. For this purpose, we used three different methods to measure palatine tonsil volume. The correlation of each parameter with tonsil size was assessed. After tonsillectomy, palatine tonsil volume was measured by the Archimedes, Cavalieri and Ellipsoid methods. Mean right and left palatine tonsil volumes were calculated as 2.63 ± 1.34 cm³ and 2.72 ± 1.51 cm³ by the Archimedes method, 3.51 ± 1.48 cm³ and 3.37 ± 1.36 cm³ by the Cavalieri method, and 2.22 ± 1.22 cm³ and 2.29 ± 1.42 cm³ by the Ellipsoid method, respectively. Excellent agreement was found among the three volume measurement techniques according to Bland-Altman plots. In addition, tonsil grade was correlated significantly with tonsil volume.
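
    The Ellipsoid method referred to above is, in its standard form, V = (π/6)·L·W·H from three orthogonal diameters; the sketch below assumes that form and uses made-up tonsil dimensions, not measurements from the study.

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Ellipsoid method: V = (pi/6) * L * W * H, the standard approximation
    for organ volume from three orthogonal diameters."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

# Illustrative tonsil dimensions (cm), not data from the study.
print(f"{ellipsoid_volume(2.8, 1.6, 1.2):.2f} cm^3")
```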

  13. Numerical method for estimating the size of chaotic regions of phase space

    SciTech Connect

    Henyey, F.S.; Pomphrey, N.

    1987-10-01

    A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs. (LSP)

  14. Method of estimating pulse response using an impedance spectrum

    SciTech Connect

    Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G

    2014-10-21

    Electrochemical Impedance Spectrum data are used to predict the pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave whose period is greater than or equal to the period corresponding to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series is selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain the Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time domain estimate of the voltage using Fourier series analysis.
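
    A sketch of the described procedure, assuming a simple R + RC impedance model in place of measured spectrum data and a symmetric square current waveform; the Fourier coefficients of the pulse are multiplied by the impedance at each harmonic and the voltage response is re-summed. All parameter values are invented for illustration.

```python
import numpy as np

# Stand-in for a measured impedance spectrum: a simple R + RC model evaluated
# at the harmonic frequencies (in practice, measured in-situ data is used).
def z_model(f_hz):
    r0, r1, c1 = 0.010, 0.005, 200.0    # ohms, ohms, farads (assumed)
    return r0 + r1 / (1 + 1j * 2 * np.pi * f_hz * r1 * c1)

period = 10.0      # s; period of the pulse wave, within the measured band
i_amp = 100.0      # A, pulse amplitude
n_harm = 51        # highest harmonic must stay within the measured spectrum

t = np.linspace(0, period, 2000)
v = np.zeros_like(t)

# Fourier series of a symmetric square current waveform (+/- i_amp, 50% duty):
# only odd harmonics, with coefficients 4 * i_amp / (pi * k).
for k in range(1, n_harm + 1, 2):
    f_k = k / period
    i_k = 4 * i_amp / (np.pi * k)       # Fourier coefficient of the pulse
    v_k = i_k * z_model(f_k)            # multiply by impedance at f_k
    v += np.abs(v_k) * np.sin(2 * np.pi * f_k * t + np.angle(v_k))

print(f"peak voltage deviation ~ {1000 * v.max():.1f} mV")
```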

  15. Semi-quantitative method to estimate levels of Campylobacter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor and material intensive requiring either serial dilutions or MPN ...

  16. A history-based method to estimate animal preference

    PubMed Central

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213
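
    A sketch of the recency-weighting idea with hypothetical colour-choice data; the exponential half-life weighting used here is an illustrative assumption, not the authors' published index.

```python
import numpy as np

def preference_index(choices, items, half_life=3.0):
    """Recency-weighted preference score per item.

    choices: list of chosen items, ordered oldest -> newest, one per test.
    The exponential half-life weighting is an illustrative assumption; the
    paper's own index weighs older and recent tests differently but is not
    reproduced here.
    """
    n = len(choices)
    weights = 0.5 ** ((n - 1 - np.arange(n)) / half_life)  # newest weight = 1
    score = {item: 0.0 for item in items}
    for w, c in zip(weights, choices):
        score[c] += w
    total = weights.sum()
    return {item: s / total for item, s in score.items()}

# Illustrative colour-choice history for one fish (oldest to newest).
history = ["blue", "blue", "green", "blue", "yellow", "blue", "blue"]
print(preference_index(history, ["blue", "green", "yellow"]))
```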

  17. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
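
    One way to make the idea concrete is a residual-based "sandwich" covariance, shown below on a toy linear model; this is a standard empirical construction and is not claimed to be the paper's exact reinterpretation. The model, weights, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simple linear measurement model y = A x + noise (two-parameter state).
n = 50
A = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
x_true = np.array([1.0, 2.0])
y = A @ x_true + rng.normal(scale=0.1, size=n)

W = np.eye(n)                                  # measurement weights
N = A.T @ W @ A                                # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)
r = y - A @ x_hat                              # residuals

# Theoretical covariance: trusts the assumed weights to describe the noise.
P_theory = np.linalg.inv(N)

# Residual-based "sandwich" covariance: lets the observed residuals, rather
# than the assumed weights, set the uncertainty in the estimated state.
S = A.T @ W @ np.diag(r**2) @ W @ A
P_emp = np.linalg.inv(N) @ S @ np.linalg.inv(N)

print(np.diag(P_theory), np.diag(P_emp))
```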

  18. Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.

    PubMed

    Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul

    2015-01-01

    Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R2), and the root mean square error to standard deviation ratio (RSR) methods were applied. The error evaluation results show that E1, E2, E3, and E4 sites were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration.
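
    A sketch of the metals-plus-proton acidity calculation underlying the E1/E2 family, using standard molar masses and charges; the paper's exact equations, and the speciation and temperature corrections of E3 and E4, are not reproduced here.

```python
def calculated_acidity(ph, fe_mgl, mn_mgl, al_mgl, alkalinity_mgl_caco3=0.0):
    """Acidity in mg/L as CaCO3 from proton plus metal concentrations.

    Common metals-plus-proton form (the E1/E2 family in the paper's
    terminology): 50 g/eq CaCO3 times the equivalents of H+, Fe(II), Mn(II)
    and Al(III), minus alkalinity (the E2 adjustment). Speciation and
    temperature corrections (E3, E4) are not included in this sketch.
    """
    h_eq  = 1000.0 * 10.0 ** (-ph)   # meq/L of protons
    fe_eq = 2.0 * fe_mgl / 55.85
    mn_eq = 2.0 * mn_mgl / 54.94
    al_eq = 3.0 * al_mgl / 26.98
    return 50.0 * (h_eq + fe_eq + mn_eq + al_eq) - alkalinity_mgl_caco3

# Illustrative AMD sample: pH 3.5, 30 mg/L Fe(II), 10 mg/L Mn(II), 5 mg/L Al.
print(f"{calculated_acidity(3.5, 30.0, 10.0, 5.0):.0f} mg/L as CaCO3")
```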

  19. Effects of Score Discreteness and Estimating Alternative Model Parameters on Power Estimation Methods in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Dunbar, Stephen B.

    2004-01-01

    The primary purpose of this study was to examine relative performance of 2 power estimation methods in structural equation modeling. Sample size, alpha level, type of manifest variable, type of specification errors, and size of correlation between constructs were manipulated. Type 1 error rate of the model chi-square test, empirical critical…

  20. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we have introduced the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful method for predicting the compressive strength of carbon nanotubes.

  1. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  2. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.

  3. Estimation of missing rainfall data using spatial interpolation and imputation methods

    NASA Astrophysics Data System (ADS)

    Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Azman, Muhammad Az-zuhri

    2015-02-01

    This study aims to estimate missing rainfall data at three levels of missingness, namely 5%, 10% and 20%, in order to represent various cases of missing data. In practice, spatial interpolation methods are the first choice for estimating missing data. These methods include the normal ratio (NR), arithmetic average (AA), coefficient of correlation (CC) and inverse distance (ID) weighting methods. The methods consider the distance between the target and the neighbouring stations as well as the correlations between them. An alternative for handling missing data is imputation, the process of replacing missing data with substituted values. A once-common approach is single imputation, which allows parameter estimation. However, single imputation ignores the variability of the estimates, which leads to underestimation of standard errors and confidence intervals. To overcome this underestimation problem, multiple imputation is used, where each missing value is estimated with a distribution of imputations that reflects the uncertainty about the missing data. In this study, a comparison of spatial interpolation methods and the multiple imputation method is presented for estimating missing rainfall data. The performance of the estimation methods is assessed using the similarity index (S-index), mean absolute error (MAE) and coefficient of correlation (R).
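
    Minimal sketches of two of the spatial interpolation methods named above, inverse distance weighting and the normal ratio method, with made-up station data; the distance power and long-term means are illustrative assumptions.

```python
import numpy as np

def inverse_distance(neighbor_values, distances_km, power=2.0):
    """Inverse distance (ID) weighting: nearer stations get larger weights."""
    w = 1.0 / np.asarray(distances_km, dtype=float) ** power
    return float(np.sum(w * np.asarray(neighbor_values)) / np.sum(w))

def normal_ratio(neighbor_values, neighbor_annual_means, target_annual_mean):
    """Normal ratio (NR) method: each neighbor's value is scaled by the ratio
    of the target station's long-term mean to that neighbor's long-term mean,
    and the scaled values are averaged."""
    vals = np.asarray(neighbor_values, dtype=float)
    means = np.asarray(neighbor_annual_means, dtype=float)
    return float(np.mean(target_annual_mean * vals / means))

# Illustrative daily rainfall (mm) at three neighbouring stations.
obs = [12.0, 8.5, 15.2]
print(inverse_distance(obs, distances_km=[10.0, 25.0, 40.0]))
print(normal_ratio(obs, neighbor_annual_means=[1800, 1500, 2100],
                   target_annual_mean=1700))
```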

  4. An S-System Parameter Estimation Method (SPEM) for Biological Networks

    PubMed Central

    Yang, Xinyi; Dent, Jennifer E.

    2012-01-01

    Advances in experimental biology, coupled with advances in computational power, bring new challenges to the interdisciplinary field of computational biology. One such broad challenge lies in the reverse engineering of gene networks, and goes from determining the structure of static networks, to reconstructing the dynamics of interactions from time series data. Here, we focus our attention on the latter area, and in particular, on parameterizing a dynamic network of oriented interactions between genes. By basing the parameterizing approach on a known power-law relationship model between connected genes (S-system), we are able to account for non-linearity in the network, without compromising the ability to analyze network characteristics. In this article, we introduce the S-System Parameter Estimation Method (SPEM). SPEM, a freely available R software package (http://www.picb.ac.cn/ClinicalGenomicNTW/temp3.html), takes gene expression data in time series and returns the network of interactions as a set of differential equations. The methods, which are presented and tested here, are shown to provide accurate results not only on synthetic data, but more importantly on real and therefore noisy by nature, biological data. In summary, SPEM shows high sensitivity and positive predicted values, as well as free availability and expansibility (because based on open source software). We expect these characteristics to make it a useful and broadly applicable software in the challenging reconstruction of dynamic gene networks. PMID:22300319
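
    The S-system form underlying SPEM is dX_i/dt = α_i ∏_j X_j^g_ij − β_i ∏_j X_j^h_ij. The sketch below integrates a hypothetical two-gene instance of that form; the parameters are invented for illustration, not SPEM estimates, and SPEM itself (an R package) is not reproduced.

```python
import numpy as np

def s_system_rhs(x, alpha, beta, g, h):
    """S-system dynamics: dx_i/dt = alpha_i * prod_j x_j**g[i, j]
                                   - beta_i  * prod_j x_j**h[i, j]."""
    prod_g = np.prod(x ** g, axis=1)   # row i: product over j of x_j**g_ij
    prod_h = np.prod(x ** h, axis=1)
    return alpha * prod_g - beta * prod_h

# Illustrative two-gene network (parameters are made up, not fitted by SPEM).
alpha = np.array([1.2, 0.8])
beta  = np.array([0.9, 0.6])
g = np.array([[0.0, -0.5],     # gene 1 is repressed by gene 2
              [0.7,  0.0]])    # gene 2 is activated by gene 1
h = np.array([[0.5, 0.0],
              [0.0, 0.8]])

x = np.array([1.0, 0.5])       # initial expression levels
dt = 0.01
for _ in range(5000):          # forward-Euler integration of the network
    x = x + dt * s_system_rhs(x, alpha, beta, g, h)
print("steady-state expression ~", np.round(x, 3))
```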

  5. A robust and efficient method for estimating enzyme complex abundance and metabolic flux from expression data

    PubMed Central

    Barker, Brandon E.; Smallbone, Kieran; Myers, Christopher R.; Xi, Hongwei; Locasale, Jason W.; Gu, Zhenglong

    2015-01-01

    A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is to increase predictive capability and improve our understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model, and does not provide an objective function for the estimation of fluxes. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. In order to use such a method, accurate measures of enzyme complex abundance are needed, so we first present an algorithm that addresses quantification of complex abundance. Our extensions to prior techniques include the capability to work with large models and significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation, the ability to handle large enzyme complex rules that may incorporate multiple isoforms, and either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS, and can be downloaded from: https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available. FALCON requires use of the COBRA Toolbox, also implemented in MATLAB. PMID:26381164

  6. A method for estimating both the solubility parameters and molar volumes of liquids

    NASA Technical Reports Server (NTRS)

    Fedors, R. F.

    1974-01-01

    Development of an indirect method for estimating the solubility parameters of high-molecular-weight polymers. The proposed method, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
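
    A sketch of the group-additivity calculation, δ = sqrt(ΣE_i / ΣV_i), with a tiny illustrative table of group contributions standing in for the published constants; the three values shown approximate tabulated Fedors contributions but should be checked against the original tables before use.

```python
import math

# Approximate group contributions (cohesive energy in J/mol, molar volume in
# cm^3/mol). Values are illustrative of the published tables, not verbatim.
GROUPS = {
    "CH3": (4710.0, 33.5),
    "CH2": (4940.0, 16.1),
    "OH":  (29800.0, 10.0),
}

def group_additive(groups):
    """Solubility parameter delta = sqrt(sum E_i / sum V_i) and molar volume,
    both obtained purely from the structural formula."""
    e = sum(GROUPS[g][0] * n for g, n in groups.items())
    v = sum(GROUPS[g][1] * n for g, n in groups.items())
    return math.sqrt(e / v), v   # delta in MPa**0.5, V in cm^3/mol

delta, v = group_additive({"CH3": 1, "CH2": 1, "OH": 1})   # ethanol
print(f"ethanol: delta ~ {delta:.1f} MPa^0.5, V ~ {v:.1f} cm^3/mol")
```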

  7. Estimation of the size of the female sex worker population in Rwanda using three different methods.

    PubMed

    Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin

    2015-10-01

    HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture-recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture-recapture method was 3205 (95% confidence interval: 2998-3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916-2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture-recapture, enumeration, and multiplier methods. The capture-recapture and enumeration methods provided similar estimates of female sex worker in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes.
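
    Sketches of two of the size-estimation formulas used: the Lincoln-Petersen capture-recapture estimator and the service multiplier. The numbers are illustrative only, not the Rwandan survey data.

```python
def capture_recapture(n1, n2, m):
    """Lincoln-Petersen estimator: N = n1 * n2 / m, where n1 are tagged on the
    first visit, n2 are counted on the second, and m are seen on both."""
    return n1 * n2 / m

def multiplier(benchmark_count, proportion_reporting):
    """Multiplier method: a benchmark count (e.g., users of a service) divided
    by the proportion of the target population reporting use of it."""
    return benchmark_count / proportion_reporting

# Illustrative numbers only (not the Rwandan survey data).
print(round(capture_recapture(n1=400, n2=450, m=60)))            # ~3000
print(round(multiplier(benchmark_count=500, proportion_reporting=0.22)))
```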

  8. Application of Density Estimation Methods to Datasets from a Glider

    DTIC Science & Technology

    2014-09-30

    OBJECTIVES: The objective of this research is to extend existing methods for cetacean density estimation to datasets collected from a glider, in a study area with known occurrence of many marine mammal species, ranging from pinnipeds to baleen whales and different dolphin species, including humpback and sperm whales.

  9. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…

  10. Effects of Vertical Scaling Methods on Linear Growth Estimation

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Zhao, Yu

    2012-01-01

    Vertical scaling is necessary to facilitate comparison of scores from test forms of different difficulty levels. It is widely used to enable the tracking of student growth in academic performance over time. Most previous studies on vertical scaling methods assume relatively long tests and large samples. Little is known about their performance when…

  11. Validation of visual estimation of portion size consumed as a method for estimating food intake by young Indian children.

    PubMed

    Dhingra, Pratibha; Sazawa, Sunil; Menon, Venugopal P; Dhingra, Usha; Black, Robert E

    2007-03-01

    In this observational study, visual estimation of the portion size consumed was evaluated as a method for estimating food intake, in place of post-weighing. In total, 930 feeding episodes were observed among 128 children aged 12-24 months in which actual intake was available by pre- and post-weighing. For each offering and feeding episode, the portion size consumed was recorded by an independent nutritionist as none, less than half, half or more, or all. Using the pre-weighed offering, intake was estimated by multiplying the recorded portion size by the offered weight. The estimated mean intake was 510.4 kilojoules, compared to an actual intake of 510.7 kilojoules by weighing. Similar results were found with nestum (52.0 vs 56.2 g), bread (3.8 vs 3.7 g), puffed rice (1.7 vs 1.9 g), banana (31.3 vs 24.4 g), and milk (41.6 vs 44.2 mL). Recording the portion size consumed and estimating food intake from it provides a good alternative to the time-consuming and often culturally unacceptable method of post-weighing food after each feeding episode.

  12. A modified method to detect the phagocytic ability of eosinophilic and basophilic haemocytes in the oyster Crassostrea plicatula.

    PubMed

    Lin, Tingting; Zhang, Dong; Lai, Qifang; Sun, Min; Quan, Weimin; Zhou, Kai

    2014-09-01

    The immune defence system of bivalve species largely depends on haemocytes. Haemocytes are generally classified as hyalinocytes (H) or granulocytes (G), and each cell type is further sub-classified as eosinophilic (E) or basophilic (B) haemocytes. Until recently, research on eosinophilic and basophilic haemocytes has primarily focused on their morphologies, dye affinities and intracellular components. Few studies have investigated their phagocytic ability because of the absence of appropriate experimental methods. In this study, we introduce a modified method suitable to detect the phagocytic ability of eosinophilic and basophilic haemocytes. This modified method involves neutral red staining by employing fluorescent microspheres as the phagocytosed medium. Specifically, haemocytes are incubated with fluorescent microspheres and then stained with neutral red. Next, the stained haemocytes are fixed by acetone and are counterstained by propidium iodide. Finally, the haemocytes are observed under a multifunctional microscope to analyse the phagocytic ability by counting the number of eosinophilic or basophilic haemocytes involved in phagocytosis (calculation for phagocytic rate, PR) and the number of phagocytosed microspheres by each eosinophilic or basophilic haemocyte (calculation for phagocytic index, PI). By employing this modified method in the oyster Crassostrea plicatula, we found that the PRs of G and H were very similar to the data obtained by another method, flow cytometry, indicating that this modified method has high accuracy. Additionally, we also found that the PR and PI in E-G were 70.9 ± 7.3% and 1.0 ± 0.2, respectively, which were both significantly higher than those in B-G (53.1 ± 6.4% and 0.7 ± 0.1). The PR and PI in E-H were 16.3 ± 2.8% and 0.2 ± 0.1, respectively, while in B-H, the PR and PI were 13.3 ± 3.6% and 0.2 ± 0.1, respectively, with no significant difference observed. Based on this result, eosinophilic granulocytes are more active

  13. Self- and other-estimates of multiple abilities in Britain and Turkey: a cross-cultural comparison of subjective ratings of intelligence.

    PubMed

    Furnham, Adrian; Arteche, Adriane; Chamorro-Premuzic, Tomas; Keser, Askin; Swami, Viren

    2009-12-01

    This study is part of a programmatic research effort into the determinants of self-assessed abilities. It examined cross-cultural differences in beliefs about intelligence and self- and other-estimated intelligence in two countries at extreme ends of the European continent. In all, 172 British and 272 Turkish students completed a three-part questionnaire where they estimated their parents', partners' and own multiple intelligences (Gardner (10) and Sternberg (3)). They also completed a measure of the 'big five' personality scales and rated six questions about intelligence. The British sample had more experience with IQ tests than the Turks. The majority of participants in both groups did not believe in sex differences in intelligence but did think there were race differences. They also believed that intelligence was primarily inherited. Participants rated their social and emotional intelligence highly (around one standard deviation above the norm). Results suggested that there were more cultural than sex differences in all the ratings, with various interactions mainly due to the British sample differentiating more between the sexes than the Turks. Males rated their overall, verbal, logical, spatial, creative and practical intelligence higher than females. Turks rated their musical, body-kinesthetic, interpersonal and intrapersonal intelligence as well as existential, naturalistic, emotional, creative, and practical intelligence higher than the British. There was evidence of participants rating their fathers' intelligence on most factors higher than their mothers'. Factor analysis of the ten Gardner intelligences yielded two clear factors: cognitive and social intelligence. The first factor was affected by sex but not culture; for the second factor it was the other way round. Regressions showed that five factors predicted overall estimates: sex (male), age (older), test experience (has done tests), extraversion (strong) and openness (strong). Results are discussed in

  14. A wind tunnel evaluation of methods for estimating surface roughness length at industrial facilities

    NASA Astrophysics Data System (ADS)

    Petersen, Ronald L.

    This paper discusses three objective methods for estimating surface roughness length based on the physical dimensions of structures or obstructions at a refinery (or other industrial sites of interest). The three methods are referred to as the Lettau method, simplified Counihan method, and Counihan method. These three methods were evaluated using five wind tunnel databases. The databases consisted of scale models of three refineries and two uniform roughness configurations. Velocity profiles were measured in the wind tunnel over these refinery models and roughness configurations, and were subsequently analyzed to estimate the surface roughness, z0. Seven different methods were used to estimate surface roughness from the velocity profiles and a wide range of z0 estimates was obtained from these methods. Only two of the methods were deemed adequate for estimating surface roughness length for situations with large roughness elements and where a change of roughness has occurred. These two methods were selected to represent 'true' estimates of the surface roughness length for the modeled refineries and roughness configurations. A statistical evaluation of the predicted (Lettau, simplified Counihan and Counihan) and observed surface roughness lengths was then carried out using a statistical analysis program developed by the American Petroleum Institute (API). The results of the evaluation showed that the Lettau method provides a good estimate (within a factor of 0.5-1.5 at the 95% confidence interval) of surface roughness length and one that is better than the other methods tested.
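
    The Lettau method referred to above is commonly written z0 = 0.5·h·s/S; the sketch below assumes that standard form, with made-up obstacle dimensions rather than the refinery data from the paper.

```python
def lettau_z0(mean_height_m, silhouette_area_m2, lot_area_m2):
    """Lettau estimate: z0 = 0.5 * h * s / S, with h the average obstacle
    height, s the mean silhouette (frontal) area of an obstacle, and S the
    ground (lot) area per obstacle."""
    return 0.5 * mean_height_m * silhouette_area_m2 / lot_area_m2

# Illustrative refinery-like obstacles: 12 m tall, 80 m^2 frontal area,
# one obstacle per 2500 m^2 of site.
print(f"z0 ~ {lettau_z0(12.0, 80.0, 2500.0):.2f} m")
```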

  15. Comparison of ready biodegradation estimation methods for fragrance materials.

    PubMed

    Boethling, Robert

    2014-11-01

    Biodegradability is fundamental to the assessment of environmental exposure and risk from organic chemicals. Predictive models can be used to pursue both regulatory and chemical design (green chemistry) objectives, which are most effectively met when models are easy to use and available free of charge. The objective of this work was to evaluate no-cost estimation programs with respect to prediction of ready biodegradability. Fragrance materials, which are structurally diverse and have significant exposure potential, were used for this purpose. Using a database of 222 fragrance compounds with measured ready biodegradability, 10 models were compared on the basis of overall accuracy, sensitivity, specificity, and Matthews correlation coefficient (MCC), a measure of quality for binary classification. The 10 models were VEGA© Non-Interactive Client, START (Toxtree©), Biowin©1-6, and two models based on inductive machine learning. Applicability domain (AD) was also considered. Overall accuracy was ca. 70% and varied little over all models, but sensitivity, specificity and MCC showed wider variation. Based on MCC, the best models for fragrance compounds were Biowin6, VEGA and Biowin3. VEGA performance was slightly better for the <50% of the compounds it identified as having "high reliability" predictions (AD index >0.8). However, removing compounds with one and only one quaternary carbon yielded similar improvement in predictivity for VEGA, START, and Biowin3/6, with a smaller penalty in reduced coverage. Of the nine compounds for which the eight models (VEGA, START, Biowin1-6) all disagreed with the measured value, measured analog data were available for seven, and all supported the predicted value. VEGA, Biowin3 and Biowin6 are judged suitable for ready biodegradability screening of fragrance compounds.

  16. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, Jan G. P. W.; Camps-Valls, Gustau; Moreno, José

    2015-10-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC), collected at the agricultural site of Barrax (Spain), was used to evaluate different retrieval methods on their ability to estimate leaf area index (LAI). With regard to parametric methods, all possible band combinations for several two-band and three-band index formulations and a linear regression fitting function have been evaluated. From a set of over ten thousand indices evaluated, the best performing one was an optimized three-band combination according to (ρ560 - ρ1610 - ρ2190) / (ρ560 + ρ1610 + ρ2190), with a 10-fold cross-validation R2CV of 0.82 (RMSECV: 0.62). This family of methods excels for its fast processing speed, e.g., 0.05 s to calibrate and validate the regression function, and 3.8 s to map a simulated S2 image. With regard to non-parametric methods, 11 machine learning regression algorithms (MLRAs) have been evaluated. This methodological family has the advantage of making use of the full optical spectrum as well as flexible, nonlinear fitting. Particularly kernel-based MLRAs lead to excellent results, with variational heteroscedastic (VH) Gaussian Processes regression (GPR) as the best performing method, with an R2CV of 0.90 (RMSECV: 0.44). Additionally, the model is trained and validated relatively fast (1.70 s) and the processed image (taking 73.88 s) includes associated uncertainty estimates. More challenging is the inversion of a PROSAIL-based radiative transfer model (RTM). After the generation of a look-up table (LUT), a multitude of cost functions and regularization options were evaluated. The best performing cost function is Pearson's χ-square. It led to an R2 of 0.74 (RMSE: 0.80) against the validation dataset. While its validation went fast
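
    A sketch of the best-performing parametric approach: compute the three-band index quoted above and fit a linear regression against LAI. The reflectance and LAI values below are synthetic stand-ins for the SPARC data, generated only to make the example run.

```python
import numpy as np

def three_band_index(r560, r1610, r2190):
    """Best-performing index in the comparison:
    (rho560 - rho1610 - rho2190) / (rho560 + rho1610 + rho2190)."""
    return (r560 - r1610 - r2190) / (r560 + r1610 + r2190)

# Synthetic reflectances and LAI values standing in for the SPARC dataset.
rng = np.random.default_rng(0)
lai = rng.uniform(0.5, 6.0, 40)
r560  = 0.08 + 0.010 * lai + rng.normal(0, 0.003, 40)
r1610 = 0.25 - 0.025 * lai + rng.normal(0, 0.005, 40)
r2190 = 0.15 - 0.015 * lai + rng.normal(0, 0.005, 40)

idx = three_band_index(r560, r1610, r2190)
slope, intercept = np.polyfit(idx, lai, 1)     # linear fitting function
lai_hat = slope * idx + intercept
rmse = float(np.sqrt(np.mean((lai_hat - lai) ** 2)))
print(f"LAI = {slope:.2f} * index + {intercept:.2f}, RMSE = {rmse:.2f}")
```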

  17. A TRMM Rainfall Estimation Method Applicable to Land Areas

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Oki, R.; Weinman, J. A.

    1998-01-01

    Utilizing multi-spectral, dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer measurements, we have developed in this study a method to retrieve the average rain rate, R_fR, in a mesoscale grid box of 2 deg x 3 deg over land. The key parameter of this method is the fractional rain area, f_R, in that grid box, which is determined with the help of a threshold on the 85 GHz scattering depression deduced from the SSM/I data. In order to demonstrate the usefulness of this method, nine months of R_fR retrievals are obtained from SSM/I data over three grid boxes in the Northeastern United States. These retrievals are then compared with the corresponding ground-truth average rain rate, R_g, deduced from 15-minute rain gauges. Based on nine months of rain rate retrievals over three grid boxes, we find that R_fR can explain about 64% of the variance contained in R_g. A similar evaluation of the grid-box-average rain rates R_GSCAT and R_SRL, given by the NASA/GSCAT and NOAA/SRL rain retrieval algorithms, is performed. This evaluation reveals that R_GSCAT and R_SRL can explain only about 42% of the variance contained in R_g. In our method, a threshold on the 85 GHz scattering depression is used primarily to determine the fractional rain area in a mesoscale grid box. Quantitative information pertaining to the 85 GHz scattering depression in the grid box is disregarded. In the NASA/GSCAT and NOAA/SRL methods, on the other hand, this quantitative information is included. Based on the performance of all three methods, we infer that the magnitude of the scattering depression is a poor indicator of rain rate. Furthermore, from maps based on the observations made by SSM/I on land and ocean, we find that there is a significant redundancy in the information content of the SSM/I multi-spectral observations. This leads us to infer that observations of SSM/I at 19 and 37 GHz add only marginal information to that

  18. Feasible methods to estimate disease based price indexes.

    PubMed

    Bradley, Ralph

    2013-05-01

    There is a consensus that statistical agencies should report medical data by disease rather than by service. This study computes price indexes that are necessary to deflate nominal disease expenditures and to decompose their growth into price, treated prevalence and output per patient growth. Unlike previous studies, it uses methods that can be implemented by the Bureau of Labor Statistics (BLS). For the calendar years 2005-2010, I find that these feasible disease based indexes are approximately 1% lower on an annual basis than indexes computed by current methods at BLS. This gives evidence that traditional medical price indexes have not accounted for the more efficient use of medical inputs in treating most diseases.

  19. A new gaze estimation method considering external light.

    PubMed

    Lee, Jong Man; Lee, Hyeon Chang; Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Cho, Chul Woo; Park, Kang Ryoung; Kim, Hyun-Cheol; Cha, Jihun

    2015-03-11

    Gaze tracking systems usually utilize near-infrared (NIR) lights and NIR cameras, and the performance of such systems is mainly affected by external light sources that include NIR components. This is ascribed to the production of additional (imposter) corneal specular reflection (SR) caused by the external light, which makes it difficult to discriminate between the correct SR as caused by the NIR illuminator of the gaze tracking system and the imposter SR. To overcome this problem, a new method is proposed for determining the correct SR in the presence of external light based on the relationship between the corneal SR and the pupil movable area with the relative position of the pupil and the corneal SR. The experimental results showed that the proposed method makes the gaze tracking system robust to the existence of external light.

  20. Data-Driven Method to Estimate Nonlinear Chemical Equivalence

    PubMed Central

    Mayo, Michael; Collier, Zachary A.; Winton, Corey; Chappell, Mark A

    2015-01-01

    There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of “equivalency factors,” which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or “biphasic,” responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are “parallel,” which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach. PMID:26158701
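
    A sketch of the core idea using Hill-type sigmoids (an assumption; the paper also treats biphasic curves): the concentration of chemical B equivalent to a given concentration of chemical A is obtained by inverting B's concentration-response curve at A's response. The printed ratios vary with concentration, which is the paper's point that a single linear equivalency factor only holds for "parallel" curves.

```python
def hill(c, top, ec50, n):
    """Sigmoid concentration-response: response = top * c^n / (ec50^n + c^n)."""
    return top * c**n / (ec50**n + c**n)

def hill_inverse(r, top, ec50, n):
    """Concentration producing response r under the same Hill model."""
    return ec50 * (r / (top - r)) ** (1.0 / n)

# Illustrative chemicals with different slopes (parameters are made up).
A = dict(top=1.0, ec50=10.0, n=1.0)
B = dict(top=1.0, ec50=50.0, n=2.0)

for c1 in [0.1, 1.0, 10.0, 100.0]:
    r = hill(c1, **A)                 # response produced by chemical A
    c2 = hill_inverse(r, **B)         # equivalent concentration of chemical B
    print(f"{c1:7.1f} of A ~ {c2:8.2f} of B   (ratio {c2 / c1:6.2f})")
```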

  1. A Method to Estimate Fabric Particle Penetration Performance

    DTIC Science & Technology

    2014-09-08

    Nomenclature: A, effective swatch area (9.6 cm2); b, span length (0.6096 m); Dp, particle diameter (nm); N, number concentration; P, pressure (millibars); P∞, tunnel static pressure. Test particles were characterized using the ASTM D1895B method and included two controls: Arizona road dust and glass spheres (see Table B-1).

  2. Systematic variational method for statistical nonlinear state and parameter estimation

    NASA Astrophysics Data System (ADS)

    Ye, Jingxin; Rey, Daniel; Kadakia, Nirag; Eldridge, Michael; Morone, Uriel I.; Rozdeba, Paul; Abarbanel, Henry D. I.; Quinn, John C.

    2015-11-01

    In statistical data assimilation one evaluates the conditional expected values, conditioned on measurements, of interesting quantities on the path of a model through observation and prediction windows. This often requires working with very high dimensional integrals in the discrete time descriptions of the observations and model dynamics, which become functional integrals in the continuous-time limit. Two familiar methods for performing these integrals include (1) Monte Carlo calculations and (2) variational approximations using the method of Laplace plus perturbative corrections to the dominant contributions. We attend here to aspects of the Laplace approximation and develop an annealing method for locating the variational path satisfying the Euler-Lagrange equations that comprises the major contribution to the integrals. This begins with the identification of the minimum action path starting with a situation where the model dynamics is totally unresolved in state space, and the consistent minimum of the variational problem is known. We then proceed to slowly increase the model resolution, seeking to remain in the basin of the minimum action path, until a path that gives the dominant contribution to the integral is identified. After a discussion of some general issues, we give examples of the assimilation process for some simple, instructive models from the geophysical literature. Then we explore a slightly richer model of the same type with two distinct time scales. This is followed by a model characterizing the biophysics of individual neurons.

  3. Real-Time Parameter Estimation Method Applied to a MIMO Process and its Comparison with an Offline Identification Method

    SciTech Connect

    Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk

    2009-01-12

    An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulated responses of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.

  4. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.

  5. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles.

    PubMed

    Wu, Zhihong; Lu, Ke; Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy highly depends on the accuracy of machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.

  6. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles

    PubMed Central

    Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy highly depends on the accuracy of machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment. PMID:26114557

  7. EXPERIMENTAL METHODS TO ESTIMATE ACCUMULATED SOLIDS IN NUCLEAR WASTE TANKS

    SciTech Connect

    Duignan, M.; Steeper, T.; Steimke, J.

    2012-12-10

    devices and techniques were very effective for estimating the movement, location, and concentrations of the solids representing plutonium and are expected to perform well at a larger scale. The operation of the techniques and their measurement accuracies will be discussed, as well as the overall results of the accumulated solids test.

  8. Bayesian methods for parameter estimation in effective field theories

    SciTech Connect

    Schindler, M.R.; Phillips, D.R.

    2009-03-15

    We demonstrate and explicate Bayesian methods for fitting the parameters that encode the impact of short-distance physics on observables in effective field theories (EFTs). We use Bayes' theorem together with the principle of maximum entropy to account for the prior information that these parameters should be natural, i.e., O(1) in appropriate units. Marginalization can then be employed to integrate the resulting probability density function (pdf) over the EFT parameters that are not of specific interest in the fit. We also explore marginalization over the order of the EFT calculation, M, and over the variable, R, that encodes the inherent ambiguity in the notion that these parameters are O(1). This results in a very general formula for the pdf of the EFT parameters of interest given a data set, D. We use this formula and the simpler 'augmented χ2' in a toy problem for which we generate pseudo-data. These Bayesian methods, when used in combination with the 'naturalness prior', facilitate reliable extractions of EFT parameters in cases where χ2 methods are ambiguous at best. We also examine the problem of extracting the nucleon mass in the chiral limit, M0, and the nucleon sigma term, from pseudo-data on the nucleon mass as a function of the pion mass. We find that Bayesian techniques can provide reliable information on M0, even if some of the data points used for the extraction lie outside the region of applicability of the EFT.

  9. An Ultrasonic Guided Wave Method to Estimate Applied Biaxial Loads (Preprint)

    DTIC Science & Technology

    2011-11-01

    …orientation are estimated from a sinusoidal fit of ultrasonic data collected from the same spatially distributed array that is being used to detect and…

  10. Limitations of model-fitting methods for lensing shear estimation

    NASA Astrophysics Data System (ADS)

    Voigt, L. M.; Bridle, S. L.

    2010-05-01

    Gravitational lensing shear has the potential to be the most powerful tool for constraining the nature of dark energy. However, accurate measurement of galaxy shear is crucial and has been shown to be non-trivial by the Shear TEsting Programme. Here, we demonstrate a fundamental limit to the accuracy achievable by model-fitting techniques, if oversimplistic models are used. We show that even if galaxies have elliptical isophotes, model-fitting methods which assume elliptical isophotes can have significant biases if they use the wrong profile. We use noise-free simulations to show that on allowing sufficient flexibility in the profile the biases can be made negligible. This is no longer the case if elliptical isophote models are used to fit galaxies made up of a bulge plus a disc, if these two components have different ellipticities. The limiting accuracy is dependent on the galaxy shape, but we find the most significant biases (~1 per cent of the shear) for simple spiral-like galaxies. The implications for a given cosmic shear survey will depend on the actual distribution of galaxy morphologies in the Universe, taking into account the survey selection function and the point spread function. However, our results suggest that the impact on cosmic shear results from current and near future surveys may be negligible. Meanwhile, these results should encourage the development of existing approaches which are less sensitive to morphology, as well as methods which use priors on galaxy shapes learnt from deep surveys.

  11. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    ERIC Educational Resources Information Center

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  12. Parameter estimation technique for boundary value problems by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1988-01-01

    A parameter-estimation technique for boundary-integral equations of the second kind is developed. The output least-squares identification technique using the spline collocation method is considered. The convergence analysis for the numerical method is discussed. The results are applied to boundary parameter estimations for two-dimensional Laplace and Helmholtz equations.

  13. Global mean estimation using a self-organizing dual-zoning method for preferential sampling.

    PubMed

    Pan, Yuchun; Ren, Xuhong; Gao, Bingbo; Liu, Yu; Gao, YunBing; Hao, Xingyao; Chen, Ziyue

    2015-03-01

    Giving an appropriate weight to each sampling point is essential to global mean estimation. The objective of this paper was to develop a global mean estimation method for preferential samples. The procedure was first to zone the study area with a self-organizing dual-zoning method and then to estimate the mean by the stratified sampling method. In this method, the spread of points in both feature and geographical space is considered. The method is tested in a case study on Mn concentrations in Jilin province, China. Six sample patterns are selected to estimate the global mean, and the results are compared with the global means calculated by the direct arithmetic mean method, the polygon method, and the cell method. The results show that the proposed method produces more accurate and stable mean estimates under different feature deviation index (FDI) values and sample sizes. The relative errors of the global mean calculated by the proposed method range from 0.14 to 1.47 %, whereas those of the direct arithmetic mean method are the largest (4.83-8.84 %). At the same time, the mean results calculated by the other three methods are sensitive to the FDI values and sample sizes.
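
    A minimal sketch of the final stratified-mean step described above, assuming zones have already been delineated: each zone's sample mean is weighted by the zone's share of the study area. The zone areas and sample values are hypothetical.

        import numpy as np

        def stratified_mean(samples_by_zone, zone_areas):
            # Weight each zone's sample mean by its share of the study area.
            areas = np.asarray(zone_areas, dtype=float)
            weights = areas / areas.sum()
            zone_means = np.array([np.mean(s) for s in samples_by_zone])
            return float(weights @ zone_means)

        # Three hypothetical zones with preferentially clustered samples
        # (values could be Mn concentrations, mg/kg; areas in km^2).
        print(stratified_mean([[510, 530], [420], [610, 640, 600]],
                              [40.0, 35.0, 25.0]))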

  14. System and Method for Outlier Detection via Estimating Clusters

    NASA Technical Reports Server (NTRS)

    Iverson, David J. (Inventor)

    2016-01-01

    An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out of family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined through analyzing a degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy to interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.

  15. Method and system for non-linear motion estimation

    NASA Technical Reports Server (NTRS)

    Lu, Ligang (Inventor)

    2011-01-01

    A method and system for extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of (a) the first pixel position in the first image and the second pixel position in the second image, and (b) the second pixel position in the second image and the third pixel position in the third image, using a non-linear model, and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
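
    The patent's non-linear model is not spelled out in this record, so the sketch below uses a constant-acceleration (quadratic) trajectory as a simple stand-in: the change between the first two motion vectors is carried forward one frame to predict the fourth pixel position.

        import numpy as np

        def extrapolate_position(p1, p2, p3):
            # v1 = p2 - p1, v2 = p3 - p2; carry the velocity change
            # (v2 - v1) forward one frame to predict the fourth position.
            p1, p2, p3 = map(np.asarray, (p1, p2, p3))
            v1, v2 = p2 - p1, p3 - p2
            v3 = v2 + (v2 - v1)      # extrapolated third motion vector
            return p3 + v3

        print(extrapolate_position((10, 10), (12, 11), (15, 13)))  # [19 16]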

  16. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    NASA Astrophysics Data System (ADS)

    Morimoto, Emi; Namerikawa, Susumu

    The most characteristic trend in bidding and pricing behavior in recent years is the increasing number of bids just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bids it is therefore the difference between the criteria for low-price bidding investigations price and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit price-type estimation method began in 2004, while the accumulated estimation method remains in general use, so two standard estimation methods coexist in Japan. In this study, we performed a statistical analysis of bid information on civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis indicates that bidding and pricing behavior is related to the estimation method used in Japanese public works bids: the two standard estimation methods produce different numbers of bidders (the bid/no-bid decision) and different distributions of bid prices (the markup decision). The comparison of bid-price distributions showed that, for large public works, bids concentrate on the criteria for low-price bidding investigations more under the unit price-type estimation method than under the accumulated estimation method. In addition, the number of bidders for works estimated by the unit-price method tends to increase significantly; the unit-price estimation appears to be one of the factors construction companies weigh when deciding whether to participate in a bidding.

  17. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within + or - 5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub tip ratio, etc. The development and justification of the method selected, and the various methods of analysis are discussed.

  18. Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas

    EIA Publications

    2003-01-01

    This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.

  19. Techniques and methods for estimating abundance of larval and metamorphosed sea lampreys in Great Lakes tributaries, 1995 to 2001

    USGS Publications Warehouse

    Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.

    2003-01-01

    Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.

  20. A Hybrid Method to Estimate Specific Differential Phase and Rainfall With Linear Programming and Physics Constraints

    SciTech Connect

    Huang, Hao; Zhang, Guifu; Zhao, Kun; Giangrande, Scott E.

    2016-10-20

    A hybrid method combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (KDP) and to improve rain estimation. The hybrid KDP estimator and the existing estimators based on LP, least squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event demonstrates the superiority of the hybrid KDP estimator over existing methods.
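
    For orientation, a baseline least-squares KDP estimator of the kind the hybrid method is compared against: KDP is half the range derivative of the differential phase ΦDP, estimated here by a sliding line fit. The window length and synthetic profile are illustrative.

        import numpy as np

        def kdp_least_squares(phidp_deg, gate_km, window=9):
            # KDP = 0.5 * d(PhiDP)/dr, from a sliding least-squares line fit.
            half = window // 2
            kdp = np.full(len(phidp_deg), np.nan)
            for i in range(half, len(phidp_deg) - half):
                r = gate_km[i - half:i + half + 1]
                p = phidp_deg[i - half:i + half + 1]
                kdp[i] = 0.5 * np.polyfit(r, p, 1)[0]   # deg/km
            return kdp

        r = np.arange(0, 20, 0.25)                           # range gates, km
        phidp = 2.0 * r + np.random.default_rng(0).normal(0, 2, r.size)
        print(np.nanmean(kdp_least_squares(phidp, r)))       # ~1 deg/km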

  1. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series of all components of the dynamical equations to estimate the parameters of a single component at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, all of the parameters can be estimated stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme greatly improves the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
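
    The paper's scheme uses an evolutionary algorithm; the sketch below illustrates only the component-by-component staging idea on the Rössler system, with finite-difference derivatives and least squares standing in for the EA. All numerical settings are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Generate a Rössler trajectory with known parameters (a, b, c).
        a, b, c = 0.2, 0.2, 5.7
        f = lambda t, s: [-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)]
        sol = solve_ivp(f, (0, 50), [1.0, 1.0, 1.0],
                        t_eval=np.arange(0, 50, 0.01))
        t, (x, y, z) = sol.t, sol.y

        # Stage 1: a from the y-component alone, dy/dt = x + a*y.
        dydt = np.gradient(y, t)
        a_hat = np.linalg.lstsq(y[:, None], dydt - x, rcond=None)[0][0]

        # Stage 2: b and c from the z-component, dz/dt = b + z*x - c*z.
        dzdt = np.gradient(z, t)
        A = np.column_stack([np.ones_like(z), -z])
        b_hat, c_hat = np.linalg.lstsq(A, dzdt - z * x, rcond=None)[0]
        print(a_hat, b_hat, c_hat)   # close to 0.2, 0.2, 5.7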

  2. Body mass estimates of an exceptionally complete Stegosaurus (Ornithischia: Thyreophora): comparing volumetric and linear bivariate mass estimation methods.

    PubMed

    Brassey, Charlotte A; Maidment, Susannah C R; Barrett, Paul M

    2015-03-01

    Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082-2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses.
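
    A minimal sketch of the volumetric step, assuming a 3D point cloud of the reconstructed skeleton: scipy's ConvexHull gives the minimum convex-hull volume, which is then scaled toward a body volume and multiplied by a tissue density. The inflation factor and density are illustrative assumptions, not the paper's calibrated relationship.

        import numpy as np
        from scipy.spatial import ConvexHull

        def convex_hull_mass(points_m, density=1000.0, inflation=1.2):
            # Minimum convex-hull volume (m^3) of the point cloud, inflated
            # toward a plausible body volume and converted to mass (kg).
            hull = ConvexHull(np.asarray(points_m))
            return hull.volume * inflation * density

        # Hypothetical torso point cloud, coordinates in metres.
        pts = np.random.default_rng(1).uniform(0.0, 1.0, (200, 3))
        print(convex_hull_mass(pts))   # mass implied by a ~1 m^3 hull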

  3. Body mass estimates of an exceptionally complete Stegosaurus (Ornithischia: Thyreophora): comparing volumetric and linear bivariate mass estimation methods

    PubMed Central

    Brassey, Charlotte A.; Maidment, Susannah C. R.; Barrett, Paul M.

    2015-01-01

    Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082–2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses. PMID:25740841

  4. A comparison of methods to estimate photosynthetic light absorption in leaves with contrasting morphology

    PubMed Central

    Olascoaga, Beñat; Mac Arthur, Alasdair; Atherton, Jon; Porcar-Castell, Albert

    2016-01-01

    Accurate temporal and spatial measurements of leaf optical traits (i.e., absorption, reflectance and transmittance) are paramount to photosynthetic studies. These optical traits are also needed to couple radiative transfer and physiological models to facilitate the interpretation of optical data. However, estimating leaf optical traits in leaves with complex morphologies remains a challenge. Leaf optical traits can be measured using integrating spheres, either by placing the leaf sample in one of the measuring ports (External Method) or by placing the sample inside the sphere (Internal Method). However, in leaves with complex morphology (e.g., needles), the External Method presents limitations associated with gaps between the leaves, and the Internal Method presents uncertainties related to the estimation of total leaf area. We introduce a modified version of the Internal Method, which bypasses the effect of gaps and the need to estimate total leaf area, by painting the leaves black and measuring them before and after painting. We assess and compare the new method with the External Method using a broadleaf and two conifer species. Both methods yielded similar leaf absorption estimates for the broadleaf, but absorption estimates were higher with the External Method for the conifer species. Factors explaining the differences between methods, their trade-offs and their advantages and limitations are also discussed. We suggest that the new method can be used to estimate leaf absorption in any type of leaf independently of its morphology, and be used to study further the impact of gap fraction in the External Method. PMID:26843207

  5. Multivariate drought frequency estimation using copula method in Southwest China

    NASA Astrophysics Data System (ADS)

    Hao, Cui; Zhang, Jiahua; Yao, Fengmei

    2017-02-01

    Drought over Southwest China occurs frequently and has an obvious seasonal characteristic. Proper management of regional droughts requires knowledge of the expected frequency or probability of specific climate information. This study utilized k-means classification and copulas to estimate regional drought occurrence probability and return period based on trivariate drought properties, i.e., drought duration, severity, and peak. A drought event was defined as occurring when the 3-month Standardized Precipitation Evapotranspiration Index (SPEI) fell below -0.99, a threshold chosen for the regional climate. The region was then classified into six clusters by the k-means method based on annual and seasonal precipitation and temperature, and marginal probability distributions were established for each drought property in each sub-region. Several copula families were tested, and the Student t copula was recognized as the best fit for integrating drought duration, severity, and peak. The results indicated that a proper classification is important for regional drought frequency analysis and that copulas are useful tools for exploring the associations of correlated drought variables and analyzing drought frequency. The Student t copula was a robust and proper function for drought joint probability and return period analysis, which is important for analyzing and predicting regional drought risks.
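
    A sketch of the return-period arithmetic for a joint exceedance, reduced here to the bivariate case (duration and severity) and with a Gaussian copula standing in for the Student t copula used in the study; the correlation, marginal quantiles, and mean interarrival time are illustrative.

        import numpy as np
        from scipy.stats import norm, multivariate_normal

        def joint_return_period(u, v, rho, mean_interarrival_yr):
            # P(U > u, V > v) = 1 - u - v + C(u, v) under the copula C;
            # return period = mean drought interarrival / exceedance prob.
            cop = multivariate_normal(mean=[0.0, 0.0],
                                      cov=[[1.0, rho], [rho, 1.0]])
            C_uv = cop.cdf(np.array([norm.ppf(u), norm.ppf(v)]))
            return mean_interarrival_yr / (1.0 - u - v + C_uv)

        # Duration and severity both at their 90th percentiles, correlation
        # 0.7, droughts arriving on average every 1.5 years.
        print(joint_return_period(0.9, 0.9, 0.7, 1.5))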

  6. Ability of combined Near-Infrared Spectroscopy-Intravascular Ultrasound (NIRS-IVUS) imaging to detect lipid core plaques and estimate cap thickness in human autopsy coronary arteries

    NASA Astrophysics Data System (ADS)

    Grainger, S. J.; Su, J. L.; Greiner, C. A.; Saybolt, M. D.; Wilensky, R. L.; Raichlen, J. S.; Madden, S. P.; Muller, J. E.

    2016-03-01

    The ability to determine plaque cap thickness during catheterization is thought to be of clinical importance for plaque vulnerability assessment. While methods to compositionally assess cap integrity are in development, a method utilizing currently available tools to measure cap thickness is highly desirable. NIRS-IVUS is a commercially available dual imaging method in current clinical use that may provide cap thickness information to the skilled reader; however, this is as yet unproven. Ten autopsy hearts (n=15 arterial segments) were scanned with the multimodality NIRS-IVUS catheter (TVC Imaging System, Infraredx, Inc.) to identify lipid core plaques (LCPs). Skilled readers made predictions of cap thickness over regions of chemogram LCP, using NIRS-IVUS. Artery segments were perfusion fixed and cut into 2 mm serial blocks. Thin sections stained with Movat's pentachrome were analyzed for cap thickness at LCP regions. Block level predictions were compared to histology, as classified by a blinded pathologist. Within 15 arterial segments, 117 chemogram blocks were found by NIRS to contain LCP. Utilizing NIRS-IVUS, chemogram blocks were divided into 4 categories: thin capped fibroatheromas (TCFA), thick capped fibroatheromas (ThCFA), pathological intimal thickening (PIT)/lipid pool (no defined cap), and calcified/unable to determine cap thickness. Sensitivities/specificities for thin cap fibroatheromas, thick cap fibroatheromas, and PIT/lipid pools were 0.54/0.99, 0.68/0.88, and 0.80/0.97, respectively. The overall accuracy rate was 70.1% (including 22 blocks unable to predict, p = 0.075). In the absence of calcium, NIRS-IVUS imaging provided predictions of cap thickness over LCP with moderate accuracy. The ability of this multimodality imaging method to identify vulnerable coronary plaques requires further assessment in both larger autopsy studies and clinical studies in patients undergoing NIRS-IVUS imaging.

  7. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
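
    The water-balance route reduces to a one-line residual at the annual scale: ET = P - Q, with net storage change assumed negligible. A minimal sketch with hypothetical basin numbers:

        def basin_et_water_balance(precip_mm, discharge_m3, basin_area_km2):
            # ET = P - Q as an annual residual; net storage change assumed ~0.
            runoff_mm = discharge_m3 / (basin_area_km2 * 1e6) * 1000.0
            return precip_mm - runoff_mm

        # 850 mm of rain and 6e8 m^3 of discharge over a 1200 km^2 basin.
        print(basin_et_water_balance(850.0, 6e8, 1200.0))   # -> 350.0 mm ET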

  8. A stochastic approximation algorithm with Markov chain Monte-Carlo method for incomplete data estimation problems.

    PubMed

    Gu, M G; Kong, F H

    1998-06-23

    We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte-Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
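
    A minimal sketch of the stochastic-approximation idea on the simplest incomplete-data case, right-censored normal observations with known variance: at each Robbins-Monro step the censored values are imputed from the current conditional distribution (direct truncated-normal draws stand in for the Markov chain Monte-Carlo step) and the score is followed with decreasing gains. All settings are illustrative.

        import numpy as np
        from scipy.stats import truncnorm

        rng = np.random.default_rng(0)
        sigma, cutoff = 1.0, 1.5
        data = rng.normal(1.0, sigma, 500)
        observed = data[data < cutoff]          # fully observed values
        n_cens = int(np.sum(data >= cutoff))    # only "y >= cutoff" known

        mu = 0.0
        n = len(observed) + n_cens
        for k in range(1, 2001):
            # Impute censored values given the current mu (direct truncated-
            # normal draws stand in for the MCMC step of the general method).
            a = (cutoff - mu) / sigma
            imputed = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                                    size=n_cens, random_state=rng)
            score = (np.sum(observed - mu) + np.sum(imputed - mu)) / sigma**2
            mu += (1.0 / k) * score / n         # decreasing Robbins-Monro gains
        print(mu)    # approaches the censored-data MLE of the mean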

  9. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia

    PubMed Central

    Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers’ estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method

  10. Ischemic Stroke Detection System with a Computer-Aided Diagnostic Ability Using an Unsupervised Feature Perception Enhancement Method

    PubMed Central

    Tyan, Yeu-Sheng; Wu, Ming-Chi; Chin, Chiun-Li; Kuo, Yu-Liang; Lee, Ming-Sian; Chang, Hao-Yan

    2014-01-01

    We propose an ischemic stroke detection system with a computer-aided diagnostic ability using a four-step unsupervised feature perception enhancement method. In the first step, known as preprocessing, we use a cubic curve contrast enhancement method to enhance image contrast. In the second step, we use a series of methods to extract the brain tissue image area identified during preprocessing. To detect abnormal regions in the brain images, we propose using an unsupervised region growing algorithm to segment the brain tissue area. The brain is centered on a horizontal line and the white matter of the brain's inner ring is split into eight regions. In the third step, we use a coinciding regional location method to find the hybrid area of locations where a stroke may have occurred in each cerebral hemisphere. Finally, we make corrections and mark the stroke area with red color. In the experiment, we tested the system on 90 computed tomography (CT) images from 26 patients, and, with the assistance of two radiologists, we proved that our proposed system has computer-aided diagnostic capabilities. Our results show an increased stroke diagnosis sensitivity of 83% in comparison to 31% when radiologists use conventional diagnostic images. PMID:25610453
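
    The record does not specify the cubic curve used in the first (contrast enhancement) step, so the sketch below uses the common S-shaped cubic y = 3x² - 2x³ on normalized intensities as an illustrative choice.

        import numpy as np

        def cubic_contrast_enhance(img):
            # Normalize to [0, 1], then apply the S-shaped cubic
            # y = 3x^2 - 2x^3: mid-tones are steepened, extremes compressed.
            x = (img - img.min()) / (img.max() - img.min() + 1e-12)
            return 3.0 * x**2 - 2.0 * x**3

        ct_slice = np.random.default_rng(0).uniform(0, 255, (64, 64))
        print(cubic_contrast_enhance(ct_slice).std())   # contrast measure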

  11. Ischemic stroke detection system with a computer-aided diagnostic ability using an unsupervised feature perception enhancement method.

    PubMed

    Tyan, Yeu-Sheng; Wu, Ming-Chi; Chin, Chiun-Li; Kuo, Yu-Liang; Lee, Ming-Sian; Chang, Hao-Yan

    2014-01-01

    We propose an ischemic stroke detection system with a computer-aided diagnostic ability using a four-step unsupervised feature perception enhancement method. In the first step, known as preprocessing, we use a cubic curve contrast enhancement method to enhance image contrast. In the second step, we use a series of methods to extract the brain tissue image area identified during preprocessing. To detect abnormal regions in the brain images, we propose using an unsupervised region growing algorithm to segment the brain tissue area. The brain is centered on a horizontal line and the white matter of the brain's inner ring is split into eight regions. In the third step, we use a coinciding regional location method to find the hybrid area of locations where a stroke may have occurred in each cerebral hemisphere. Finally, we make corrections and mark the stroke area with red color. In the experiment, we tested the system on 90 computed tomography (CT) images from 26 patients, and, with the assistance of two radiologists, we proved that our proposed system has computer-aided diagnostic capabilities. Our results show an increased stroke diagnosis sensitivity of 83% in comparison to 31% when radiologists use conventional diagnostic images.

  12. Quantitative Estimation of Trace Chemicals in Industrial Effluents with the Sticklet Transform Method

    SciTech Connect

    Mehta, N C; Scharlemann, E T; Stevens, C G

    2001-04-02

    Application of a novel transform operator, the sticklet transform, to the quantitative estimation of trace chemicals in industrial effluent plumes is reported. The sticklet transform is a superset of the well-known derivative operator and the Haar wavelet, and is characterized by independently adjustable lobe width and separation. Computer simulations demonstrate that the transform can make accurate and robust concentration estimates of multiple chemical species in industrial effluent plumes in the presence of strong clutter background, interferent chemicals, and random noise. This paper addresses the application of the sticklet transform to estimating chemical concentrations in effluent plumes in the presence of atmospheric transmission effects. The transform is shown to retain the ability to yield accurate estimates using on-plume/off-plume measurements that represent atmospheric differentials up to 10% of the full atmospheric attenuation.
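
    A sketch of a sticklet-style kernel consistent with the description above: two opposite-signed lobes whose width w and separation s are independently adjustable. The normalization and exact lobe placement are assumptions for illustration.

        import numpy as np

        def sticklet(n, w, s):
            # Two opposite-signed rectangular lobes of width w, separated
            # by s samples; w and s are independently adjustable.
            k = np.zeros(n)
            c = n // 2
            k[c - s // 2 - w:c - s // 2] = -1.0 / w
            k[c + s // 2:c + s // 2 + w] = +1.0 / w
            return k

        # Synthetic spectrum: sloping clutter background plus a weak
        # absorption feature at x = 0.5.
        x = np.linspace(0, 1, 512)
        spec = 1.0 - 0.3 * x - 0.05 * np.exp(-0.5 * ((x - 0.5) / 0.01) ** 2)
        resp = np.convolve(spec, sticklet(64, w=6, s=12), mode="same")
        inner = slice(40, -40)                      # avoid edge artifacts
        print(x[inner][np.argmax(np.abs(resp[inner]))])   # flanks the feature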

  13. LSimpute: accurate estimation of missing values in microarray data with least squares methods.

    PubMed

    Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge

    2004-02-20

    Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that consistently are more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those produced by our novel methods. The results indicate that on average, the estimates from our best performing LSimpute method are at least as
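
    A single-predictor sketch of the correlation-based least-squares idea: a missing value for gene i is imputed by regressing gene i on its most correlated gene over the arrays where both are observed. The full LSimpute estimators combine several predictors and also use array correlations; the toy matrix is illustrative.

        import numpy as np

        def ls_impute(X, i, j):
            # Impute X[i, j] from the gene most correlated with gene i,
            # fitted by least squares over arrays where both are observed.
            mask_i = ~np.isnan(X[i])
            best, best_r = None, 0.0
            for g in range(X.shape[0]):
                if g == i or np.isnan(X[g, j]):
                    continue
                both = mask_i & ~np.isnan(X[g])
                if both.sum() < 3:
                    continue
                r = np.corrcoef(X[i, both], X[g, both])[0, 1]
                if abs(r) > abs(best_r):
                    best, best_r = g, r
            both = mask_i & ~np.isnan(X[best])
            slope, icept = np.polyfit(X[best, both], X[i, both], 1)
            return slope * X[best, j] + icept

        X = np.array([[1.0, 2.0, np.nan, 4.0],
                      [1.1, 2.1, 3.0, 4.2],
                      [5.0, 1.0, 2.0, 0.5]])
        print(ls_impute(X, 0, 2))   # ~2.9, via the highly correlated gene 1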

  14. Estimating the Optimal Spatial Complexity of a Water Quality Model Using Multi-Criteria Methods

    NASA Astrophysics Data System (ADS)

    Meixner, T.

    2002-12-01

    Discretizing the landscape into multiple smaller units appears to be a necessary step for improving the performance of water quality models. However, there is a need for adequate falsification methods to discern between discretization that improves model performance and discretization that merely adds to model complexity. Multi-criteria optimization methods promise a way to increase the power of model discrimination and a path to better differentiating between good and bad model discretization schemes. This study focuses on the optimal level of spatial discretization of a water quality model, the Alpine Hydrochemical Model of the Emerald Lake watershed in Sequoia National Park, California. The 5 models of the watershed differ in their degree of simplification of the real watershed. The simplest model is a lumped model of the entire watershed. The most complex model represents each of the 5 main soil groups in the watershed with a modeling subunit, as well as having subunits for rock and talus areas. Each of these models was calibrated using stream discharge and three chemical fluxes jointly as optimization criteria with a Pareto optimization routine, MOCOM-UA. After optimization, the 5 models were compared on model criteria not used in calibration, the variability of model parameter estimates, and comparison to the mean of observations as a predictor of stream chemical composition. Based on these comparisons, the results indicate that the model with only 2 terrestrial subunits had the optimal level of model complexity. This result shows that increasing model complexity, even using detailed site-specific data, is not always rewarded with improved model performance. Additionally, this result indicates that the most important geographic element for modeling water quality in alpine watersheds is accurately delineating the boundary between areas of rock and areas containing either

  15. An interior point method for state estimation with current magnitude measurements and inequality constraints

    SciTech Connect

    Handschin, E.; Langer, M.; Kliokys, E.

    1995-12-31

    The possibility of power system state estimation with non-traditional measurement configuration is investigated. It is assumed that some substations are equipped with current magnitude measurements. Unique state estimation is possible, in such a situation, if currents are combined with voltage or power measurements and inequality constraints on node power injections are taken into account. The state estimation algorithm facilitating the efficient incorporation of inequality constraints is developed using an interior point optimization method. Simulation results showing the performance of the algorithm are presented. The method can be used for state estimation in medium voltage subtransmission and distribution networks.
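
    A toy sketch of the ingredients named above: weighted least-squares state estimation from a power-flow and a current-magnitude measurement, with an inequality constraint on an injection, solved by scipy's trust-constr optimizer (a trust-region interior-point method). The one-line network, susceptance, and measurement values are all illustrative.

        import numpy as np
        from scipy.optimize import minimize, LinearConstraint

        b = 10.0                            # line susceptance (p.u.)
        z = np.array([0.52, 0.55])          # measured P flow and |I| proxy
        sig = np.array([0.02, 0.05])        # measurement standard deviations

        def wls(d):
            # Model both measurements from the angle difference d.
            h = np.array([b * d[0], abs(b * d[0])])
            return np.sum(((z - h) / sig) ** 2)

        # Inequality constraint: the injection b*d must not exceed 0.5 p.u.
        cap = LinearConstraint(np.array([[b]]), -np.inf, 0.5)
        sol = minimize(wls, x0=[0.01], method="trust-constr", constraints=[cap])
        print(sol.x[0], b * sol.x[0])       # estimate pinned at the 0.5 cap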

  16. Improving Estimation Performance in Networked Control Systems Applying the Send-on-delta Transmission Method

    PubMed Central

    Nguyen, Vinh Hao; Suh, Young Soo

    2007-01-01

    This paper is concerned with improving the performance of state estimation over a network in which a send-on-delta (SOD) transmission method is used. The SOD method requires that a sensor node transmit data to the estimator node only if its measurement value changes by more than a specified δ value. This method has been explored and applied because of its efficiency in reducing network bandwidth usage. However, when this method is used, the estimator node is not guaranteed to receive data from the sensor nodes at every estimation period. Therefore, we propose a method to reduce estimation error when no sensor data are received. When the estimator node does not receive data from the sensor node, the sensor value is known to lie in a (−δi,+δi) interval around the last transmitted sensor value. This implicit information has been used to improve estimation performance in previous studies. The main contribution of this paper is an algorithm in which the sensor value interval is reduced to (−δi/2,+δi/2) in certain situations. Thus, the proposed algorithm improves the overall estimation performance without any changes to the send-on-delta algorithms of the sensor nodes. Through numerical simulations, we demonstrate the feasibility and the usefulness of the proposed method.
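
    A scalar sketch of the baseline SOD filtering idea: when no packet arrives, the last transmitted value serves as a pseudo-measurement whose noise variance is that of a uniform distribution on (−δ, +δ), namely δ²/3. (The paper's contribution then shrinks this interval to (−δ/2, +δ/2) in certain situations.) The random-walk model and noise levels are illustrative.

        import numpy as np

        def sod_kalman_step(x, P, z, last_sent, delta, q=0.01, r=0.05):
            # Predict with a trivial random-walk model, then update with
            # either the received measurement or the SOD pseudo-measurement.
            x_pred, P_pred = x, P + q
            if z is None:                       # no packet this period
                z_eff, r_eff = last_sent, delta**2 / 3.0
            else:
                z_eff, r_eff = z, r
            K = P_pred / (P_pred + r_eff)
            return x_pred + K * (z_eff - x_pred), (1 - K) * P_pred

        x, P = 0.0, 1.0
        x, P = sod_kalman_step(x, P, 0.9, last_sent=0.9, delta=0.2)   # packet
        x, P = sod_kalman_step(x, P, None, last_sent=0.9, delta=0.2)  # silence
        print(x, P)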

  17. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has been long challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
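
    For concreteness, the widely used (and, per the abstract, long-challenged) DerSimonian-Laird moment estimator that the recommended alternatives would replace; the study effects and variances below are hypothetical.

        import numpy as np

        def dersimonian_laird_tau2(y, v):
            # Moment estimator: tau^2 = max(0, (Q - (k-1)) / c) with
            # c = sum(w) - sum(w^2)/sum(w), w = 1/v, Q = Cochran's Q.
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1.0 / v
            mu_fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - mu_fixed) ** 2)
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            return max(0.0, (Q - (len(y) - 1)) / c)

        # Five hypothetical study effects with within-study variances.
        print(dersimonian_laird_tau2([0.2, 0.5, 0.1, 0.8, 0.4],
                                     [0.04, 0.09, 0.05, 0.12, 0.06]))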

  18. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
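
    A minimal IRDM sketch on a synthetic band-limited impulse response: Schroeder backward integration gives the energy decay curve, a line fit to the early decay gives the decay rate DR in dB/s, and the loss factor follows as η = DR/(27.3·f). The synthetic response and fit range are illustrative.

        import numpy as np

        def irdm_loss_factor(h, fs, f_center):
            # Schroeder backward integration -> energy decay curve in dB,
            # line fit to the first 20 dB of decay -> decay rate DR (dB/s),
            # loss factor eta = DR / (27.3 * f).
            e = np.cumsum(h[::-1] ** 2)[::-1]
            db = 10.0 * np.log10(e / e[0])
            t = np.arange(len(h)) / fs
            early = db > -20.0
            slope = np.polyfit(t[early], db[early], 1)[0]   # negative, dB/s
            return -slope / (27.3 * f_center)

        fs, f0, eta_true = 8000, 500.0, 0.02
        t = np.arange(0.0, 1.0, 1.0 / fs)
        h = np.exp(-np.pi * f0 * eta_true * t) * np.sin(2 * np.pi * f0 * t)
        print(irdm_loss_factor(h, fs, f0))   # recovers ~0.02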

  19. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    When dose-response data are completely separated or quasi-completely separated, the traditional maximum likelihood estimation (MLE) method generates infinite estimates. The bias-reduction (BR) method

  20. Applicability of Demirjian's four methods and Willems method for age estimation in a sample of Turkish children.

    PubMed

    Akkaya, Nursel; Yilanci, Hümeyra Özge; Göksülük, Dinçer

    2015-09-01

    The aim of this study was to evaluate applicability of five dental methods including Demirjian's original, revised, four teeth, and alternate four teeth methods and Willems method for age estimation in a sample of Turkish children. Panoramic radiographs of 799 children (412 females, 387 males) aged between 2.20 and 15.99years were examined by two observers. A repeated measures ANOVA was performed to compare dental methods among gender and age groups. All of the five methods overestimated the chronological age on the average. Among these, Willems method was found to be the most accurate method, which showed 0.07 and 0.15years overestimation for males and females, respectively. It was followed by Demirjian's four teeth methods, revised and original methods. According to the results, Willems method can be recommended for dental age estimation of Turkish children in forensic applications.

  1. Motion estimation using low-band-shift method for wavelet-based moving-picture coding.

    PubMed

    Park, H W; Kim, H S

    2000-01-01

    The discrete wavelet transform (DWT) has several advantages of multiresolution analysis and subband decomposition, which has been successfully used in image processing. However, the shift-variant property is intrinsic due to the decimation process of the wavelet transform, and it makes the wavelet-domain motion estimation and compensation inefficient. To overcome the shift-variant property, a low-band-shift method is proposed and a motion estimation and compensation method in the wavelet-domain is presented. The proposed method has a superior performance to the conventional motion estimation methods in terms of the mean absolute difference (MAD) as well as the subjective quality. The proposed method can be a model method for the motion estimation in wavelet-domain just like the full-search block matching in the spatial domain.

  2. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  3. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    PubMed Central

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent moderated structural equation method, (d) a fully Bayesian approach, and (e) marginal maximum likelihood estimation. Of the 5 estimation methods, it was found that overall the methods based on maximum likelihood estimation and the Bayesian approach performed best in terms of bias, root-mean-square error, standard error ratios, power, and Type I error control, although key differences were observed. Similarities as well as disparities among methods are highlighted and general recommendations articulated. As a point of comparison, all 5 approaches were fit to educational reading data using a reparameterized version of the latent quadratic model. PMID:22429193

  4. Assessing Planning Ability Across the Adult Life Span: Population-Representative and Age-Adjusted Reliability Estimates for the Tower of London (TOL-F).

    PubMed

    Kaller, Christoph P; Debelak, Rudolf; Köstering, Lena; Egle, Johanna; Rahm, Benjamin; Wild, Philipp S; Blettner, Maria; Beutel, Manfred E; Unterrainer, Josef M

    2016-03-01

    Planning ahead the consequences of future actions is a prototypical executive function. In clinical and experimental neuropsychology, disc-transfer tasks like the Tower of London (TOL) are commonly used for the assessment of planning ability. Previous psychometric evaluations have, however, yielded a poor reliability of measuring planning performance with the TOL. Based on theory-grounded task analyses and a systematic problem selection, the computerized TOL-Freiburg version (TOL-F) was developed to improve the task's psychometric properties for diagnostic applications. Here, we report reliability estimates for the TOL-F from two large samples collected in Mainz, Germany (n = 3,770; 40-80 years), and in Vienna, Austria (n = 830; 16-84 years). Results show that planning accuracy on the TOL-F possesses adequate internal consistency and split-half reliability (>0.7), both stable across the adult life span, while the TOL-F covers a broad range of graded difficulty even in healthy adults, making it suitable for both research and clinical application.

  5. Feasibility study of a novel method for real-time aerodynamic coefficient estimation

    NASA Astrophysics Data System (ADS)

    Gurbacki, Phillip M.

    In this work, a feasibility study of a novel technique for the real-time identification of uncertain nonlinear aircraft aerodynamic coefficients has been conducted. The major objective is to investigate the feasibility of a system for parameter identification in a real-time flight environment. Such a system should be able to calculate aerodynamic coefficients and derivative information from typical pilot inputs while ensuring robust, stable, and rapid convergence. The parameter estimator investigated is based upon the nonlinear sliding mode control schema; one of the main advantages of the sliding mode estimator is its ability to guarantee stable and robust convergence. Stable convergence is ensured by choosing a sliding surface and function that satisfy the Lyapunov stability criteria. After a proper sliding surface has been chosen, the nonlinear equations of motion for an F-16 aircraft are substituted into the sliding surface, yielding an estimator capable of identifying a single aircraft parameter. Multiple sliding surfaces are then developed, one for each flight parameter to be identified. Sliding surfaces and parameter estimators have been developed and simulated for the pitching moment, lift force, and drag force coefficients of the F-16 aircraft. Comparing the estimated coefficients with the reference coefficients shows rapid and stable convergence for a variety of pilot inputs. Starting with simple doublet and sine-wave commands, followed by more complicated continuous pilot inputs, the estimated aerodynamic coefficients have been shown to match the actual coefficients with a high degree of accuracy. This estimator is also shown to be superior to model-reference or adaptive estimators: it is able to handle positive and negative estimated parameters and control inputs, and it guarantees Lyapunov stability during convergence. Accurately estimating these aerodynamic parameters in real-time during a flight is essential
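
    A toy sketch of the sliding-mode estimation idea on a scalar stand-in, not the F-16 equations of motion: for ẋ = θu, the observer uses a boundary-layer-smoothed switching term on the state error, and the adaptation law γ·u·e makes a Lyapunov function of the error and parameter mismatch decrease. All gains and the input are illustrative.

        import numpy as np

        # Toy scalar model xdot = theta*u, a stand-in for one aerodynamic
        # derivative. Observer: xhat_dot = theta_hat*u + k*tanh(e/phi), a
        # switching term smoothed by a boundary layer to limit chattering;
        # adaptation theta_hat_dot = gamma*u*e keeps
        # V = e^2/2 + (theta - theta_hat)^2/(2*gamma) decreasing.
        dt, k, phi, gamma = 1e-3, 2.0, 0.05, 50.0
        theta_true, theta_hat = -1.5, 0.0     # hypothetical pitch derivative
        x, x_hat = 0.0, 0.0
        for i in range(20000):                # 20 s of simulated flight data
            u = np.sin(2 * np.pi * 0.5 * i * dt)   # persistently exciting input
            x += theta_true * u * dt               # "true" state propagation
            e = x - x_hat
            x_hat += (theta_hat * u + k * np.tanh(e / phi)) * dt
            theta_hat += gamma * u * e * dt
        print(theta_hat)   # approaches -1.5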

  6. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous
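
    For reference, the traditional Mayfield estimator that the paper's alternatives improve upon: the daily survival rate is 1 minus failures per exposure-day, and period survival is the daily rate raised to the period length. The counts below are hypothetical.

        def mayfield_dsr(exposure_days, failures):
            # Classic Mayfield daily survival rate (DSR).
            return 1.0 - failures / exposure_days

        # Hypothetical counts: 620 nest exposure-days with 31 failures,
        # over a 28-day nesting period.
        dsr = mayfield_dsr(620.0, 31)
        print(dsr, dsr ** 28)   # 0.95 daily, ~0.24 for the full period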

  7. Baroreflex Sensitivity Estimation by the Transfer Function Method Revised: Effect of Changing the Coherence Criterion

    DTIC Science & Technology

    2007-11-02

    In this study we appraised three new criteria for the computation of baroreflex sensitivity (BRS) using the transfer function magnitude (TFM) method. Keywords: baroreflex sensitivity, spectral analysis, transfer function.

  8. Modified periodogram method for estimating the Hurst exponent of fractional Gaussian noise.

    PubMed

    Liu, Yingjun; Liu, Yong; Wang, Kun; Jiang, Tianzi; Yang, Lihua

    2009-12-01

    Fractional Gaussian noise (fGn) is an important and widely used self-similar process, which is mainly parametrized by its Hurst exponent (H) . Many researchers have proposed methods for estimating the Hurst exponent of fGn. In this paper we put forward a modified periodogram method for estimating the Hurst exponent based on a refined approximation of the spectral density function. Generalizing the spectral exponent from a linear function to a piecewise polynomial, we obtained a closer approximation of the fGn's spectral density function. This procedure is significant because it reduced the bias in the estimation of H . Furthermore, the averaging technique that we used markedly reduced the variance of estimates. We also considered the asymptotical unbiasedness of the method and derived the upper bound of its variance and confidence interval. Monte Carlo simulations showed that the proposed estimator was superior to a wavelet maximum likelihood estimator in terms of mean-squared error and was comparable to Whittle's estimator. In addition, a real data set of Nile river minima was employed to evaluate the efficiency of our proposed method. These tests confirmed that our proposed method was computationally simpler and faster than Whittle's estimator.
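
    A baseline log-periodogram sketch of the underlying idea: near f = 0 the fGn spectrum behaves like f^(1-2H), so a line fit of log I(f) on log f over the lowest frequencies yields H = (1 - slope)/2. The cited method refines this with a piecewise-polynomial spectral exponent and averaging; the cutoff fraction here is illustrative.

        import numpy as np

        def periodogram_hurst(x, frac=0.1):
            # Near f = 0 the fGn spectrum ~ f^(1 - 2H): fit a line to the
            # log-periodogram over the lowest frequencies, H = (1 - slope)/2.
            n = len(x)
            f = np.fft.rfftfreq(n)[1:]
            I = np.abs(np.fft.rfft(x - np.mean(x))[1:]) ** 2 / n
            m = max(int(frac * len(f)), 8)
            slope = np.polyfit(np.log(f[:m]), np.log(I[:m]), 1)[0]
            return (1.0 - slope) / 2.0

        # White noise is fGn with H = 0.5.
        print(periodogram_hurst(np.random.default_rng(0).normal(size=4096)))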

  9. Modified periodogram method for estimating the Hurst exponent of fractional Gaussian noise

    NASA Astrophysics Data System (ADS)

    Liu, Yingjun; Liu, Yong; Wang, Kun; Jiang, Tianzi; Yang, Lihua

    2009-12-01

    Fractional Gaussian noise (fGn) is an important and widely used self-similar process, which is mainly parametrized by its Hurst exponent (H) . Many researchers have proposed methods for estimating the Hurst exponent of fGn. In this paper we put forward a modified periodogram method for estimating the Hurst exponent based on a refined approximation of the spectral density function. Generalizing the spectral exponent from a linear function to a piecewise polynomial, we obtained a closer approximation of the fGn’s spectral density function. This procedure is significant because it reduced the bias in the estimation of H . Furthermore, the averaging technique that we used markedly reduced the variance of estimates. We also considered the asymptotical unbiasedness of the method and derived the upper bound of its variance and confidence interval. Monte Carlo simulations showed that the proposed estimator was superior to a wavelet maximum likelihood estimator in terms of mean-squared error and was comparable to Whittle’s estimator. In addition, a real data set of Nile river minima was employed to evaluate the efficiency of our proposed method. These tests confirmed that our proposed method was computationally simpler and faster than Whittle’s estimator.

  10. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
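
    A scalar sketch of an EKF update followed by a preemptive constraint step, with simple clipping of the state to physical bounds standing in for the patent's preemptive-constraining processor; the plant model, noise levels, and bounds are illustrative.

        import numpy as np

        def constrained_ekf_update(x, P, z, f, F, h, H, Q, R, lo, hi):
            # Predict through the plant model, update with the measurement,
            # then constrain the state estimate to its physical bounds.
            x_pred = f(x)
            P_pred = F @ P @ F.T + Q
            y = z - h(x_pred)
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return np.clip(x_new, lo, hi), P_new

        # One step for a slow first-order plant with a direct measurement;
        # the state (e.g., a mass fraction) must stay within [0, 1].
        f = lambda x: 0.95 * x
        F = np.array([[0.95]])
        h = lambda x: x
        H = np.array([[1.0]])
        x, P = np.array([0.5]), np.array([[0.1]])
        x, P = constrained_ekf_update(x, P, np.array([1.4]), f, F, h, H,
                                      Q=np.eye(1) * 1e-3, R=np.eye(1) * 0.05,
                                      lo=0.0, hi=1.0)
        print(x, P)   # the pull toward the out-of-range measurement is clipped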

  11. Parameters estimation using the first passage times method in a jump-diffusion model

    NASA Astrophysics Data System (ADS)

    Khaldi, K.; Meddahi, S.

    2016-06-01

    This paper makes two contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (GPT method), for estimating the parameters of a stochastic jump-diffusion process; (2) it compares, on a time series of gold share prices, the empirical estimation and forecast results obtained with the GPT method against those obtained by the moments method and the FPT method applied to the Merton jump-diffusion (MJD) model.

  12. Statistical study of generalized nonlinear phase step estimation methods in phase-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Langoju, Rajesh; Patil, Abhijit; Rastogi, Pramod

    2007-11-01

    Signal processing methods based on maximum-likelihood theory, discrete chirp Fourier transform, and spectral estimation methods have enabled accurate measurement of phase in phase-shifting interferometry in the presence of nonlinear response of the piezoelectric transducer to the applied voltage. We present the statistical study of these generalized nonlinear phase step estimation methods to identify the best method by deriving the Cramér-Rao bound. We also address important aspects of these methods for implementation in practical applications and compare the performance of the best-identified method with other benchmarking algorithms in the presence of harmonics and noise.

  13. Statistical study of generalized nonlinear phase step estimation methods in phase-shifting interferometry

    SciTech Connect

    Langoju, Rajesh; Patil, Abhijit; Rastogi, Pramod

    2007-11-20

    Signal processing methods based on maximum-likelihood theory, discrete chirp Fourier transform, and spectral estimation methods have enabled accurate measurement of phase in phase-shifting interferometry in the presence of nonlinear response of the piezoelectric transducer to the applied voltage. We present a statistical study of these generalized nonlinear phase step estimation methods to identify the best method by deriving the Cramér-Rao bound. We also address important aspects of these methods for implementation in practical applications and compare the performance of the best-identified method with other benchmarking algorithms in the presence of harmonics and noise.

  14. Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism

    PubMed Central

    Katuwal, Gajendra J.; Baum, Stefi A.; Cahill, Nathan D.; Dougherty, Chase C.; Evans, Eli; Evans, David W.; Moore, Gregory J.; Michael, Andrew M.

    2016-01-01

    Previous studies applying automatic preprocessing methods on structural magnetic resonance imaging (sMRI) report inconsistent neuroanatomical abnormalities in autism spectrum disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). T1-weighted sMRIs of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset were processed using three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except for TIV_SPM vs. TIV_FS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences was opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than the ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, subcortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the

  15. Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism.

    PubMed

    Katuwal, Gajendra J; Baum, Stefi A; Cahill, Nathan D; Dougherty, Chase C; Evans, Eli; Evans, David W; Moore, Gregory J; Michael, Andrew M

    2016-01-01

    Previous studies applying automatic preprocessing methods on structural magnetic resonance imaging (sMRI) report inconsistent neuroanatomical abnormalities in autism spectrum disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). T1-weighted sMRIs of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset were processed using three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except for TIV_SPM vs. TIV_FS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences was opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than the ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, subcortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the

  16. A TRMM Microwave Radiometer Rain Rate Estimation Method with Convective and Stratiform Discrimination

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Weinman, J. A.; Dalu, G.

    1999-01-01

    cases is on average about 15%. Taking advantage of this ability of our retrieval method, one could derive the latent heat input into the atmosphere over the 760-km-wide swath of the TMI radiometer in the tropics.

  17. Summary of methods for calculating dynamic lateral stability and response and for estimating aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Campbell, John P; Mckinney, Marion O

    1952-01-01

    A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.

  18. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model and then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  19. Modal wavefront estimation from its slopes by numerical orthogonal transformation method over general shaped aperture.

    PubMed

    Ye, Jingfei; Wang, Wei; Gao, Zhishan; Liu, Zhiying; Wang, Shuai; Benítez, Pablo; Miñano, Juan C; Yuan, Qun

    2015-10-05

    Wavefront estimation from slope-based sensing metrologies is important in modern optical testing. A numerical orthogonal transformation method is proposed for deriving numerical orthogonal gradient polynomials as basis functions for directly fitting the measured slope data and then converting to the wavefront in a straightforward way in the modal approach. The presented method can be employed for wavefront estimation from slopes over a general shaped aperture. Moreover, the numerical orthogonal transformation method can be applied to wavefront estimation from slope measurements over a dynamically varying aperture. The performance of the numerical orthogonal transformation method is discussed, demonstrated and verified by examples, which indicate that the presented method is valid, accurate and easily implemented for wavefront estimation from slopes.
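
    To make the modal idea concrete, here is a sketch using a plain monomial basis rather than the paper's numerically orthogonalized gradient polynomials; the basis choice and function name are assumptions made for illustration. As usual, the piston term has zero gradient and cannot be recovered from slopes.

```python
import numpy as np

def fit_wavefront_from_slopes(x, y, sx, sy, order=4):
    """Modal least-squares fit of measured slopes over a general aperture.

    Stacks the analytic x- and y-derivatives of each monomial x**i * y**j
    into one design matrix and solves a single least-squares system for
    the modal coefficients.
    """
    terms = [(i, n - i) for n in range(order + 1) for i in range(n + 1)]
    Gx = np.column_stack([i * x**max(i - 1, 0) * y**j for i, j in terms])
    Gy = np.column_stack([j * x**i * y**max(j - 1, 0) for i, j in terms])
    G = np.vstack([Gx, Gy])
    s = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(G, s, rcond=None)
    return terms, coeffs  # wavefront = sum c_ij * x**i * y**j (up to piston)
```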

  20. Simplified Estimating Method for Shock Response Spectrum Envelope of V-Band Clamp Separation Shock

    NASA Astrophysics Data System (ADS)

    Iwasa, Takashi; Shi, Qinzhong

    A simplified method for estimating the shock response spectrum (SRS) envelope at the spacecraft interface near the V-band clamp separation device has been established. The simplified method is based on the pyroshock analysis method with a single-degree-of-freedom (DOF) model proposed in our previous paper. The only parameters required are the geometry of the interface and the tension of the V-band clamp. Using these parameters, a simplified calculation of the SRS magnitude at the knee frequency is newly proposed. By comparing the estimation results with actual pyroshock test results, it was verified that the SRS envelope estimated with the simplified method appropriately covers the pyroshock test data of actual space satellite systems, except for some specific high-frequency responses.

  1. Estimating equilibrium formation temperature by a curve-fitting method and its problems

    SciTech Connect

    Kenso Takai; Masami Hyodo; Shinji Takasugi

    1994-01-20

    Determination of the true formation temperature from measured bottom-hole temperature is important for geothermal reservoir evaluation after completion of well drilling. For estimation of the equilibrium formation temperature, we studied a non-linear least-squares fitting method adapting the Middleton model (Chiba et al., 1988). This method proved to be a simple and relatively reliable way to estimate the equilibrium formation temperature after drilling. As a next step, we are studying the estimation of the equilibrium formation temperature from bottom-hole temperature data measured by MWD (measurement-while-drilling) systems. In this study, we have evaluated the applicability of the non-linear least-squares curve-fitting method and the numerical simulator (GEOTEMP2) for estimating the equilibrium formation temperature while drilling.
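
    The curve-fitting step can be sketched with SciPy's nonlinear least squares; the exponential relaxation model below is a generic stand-in for the Middleton model (whose exact form is not reproduced here), and all data values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def buildup(t, t_eq, d_t, tau):
    """Hypothetical bottom-hole temperature recovery model:
    T(t) = T_eq - dT * exp(-t / tau)."""
    return t_eq - d_t * np.exp(-t / tau)

t_obs = np.array([2.0, 4.0, 8.0, 12.0, 24.0])      # shut-in times, h
T_obs = np.array([61.0, 74.0, 88.0, 95.0, 103.0])  # measured BHT, deg C

# Nonlinear least squares; p0 is a rough initial guess
popt, pcov = curve_fit(buildup, t_obs, T_obs, p0=(110.0, 60.0, 6.0))
print("estimated equilibrium temperature: %.1f degC" % popt[0])
```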

  2. Accuracy of age estimation methods from orthopantomograph in forensic odontology: a comparative study.

    PubMed

    Khorate, Manisha M; Dinkar, A D; Ahmed, Junaid

    2014-01-01

    Changes related to chronological age are seen in both hard and soft tissues. A number of methods for age estimation have been proposed, which can be classified into four categories: clinical, radiological, histological, and chemical analysis. In forensic odontology, age estimation based on tooth development is a universally accepted method. The panoramic radiographs of 500 healthy Goan Indian children (250 boys and 250 girls) aged between 4 and 22.1 years were selected. The modified Demirjian's method (1973/2004), the Acharya AB formula (2011), the Dr Ajit D. Dinkar (1984) regression equation, and the Foti and coworkers (2003) formula (clinical and radiological) were applied for estimation of age. The results of our study show that the Dr Ajit D. Dinkar method is the most accurate, followed by the Acharya Indian-specific formula. Furthermore, by applying all these methods to one regional population, we have attempted to identify the dental age estimation methodology best suited for the Goan Indian population.

  3. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays

    PubMed Central

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches are developed in this work, for incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only, under the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they require neither eigenvalue decomposition (EVD) nor peak searching. In addition, the complexity analysis shows that the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301

  4. Spacecraft Attitude Estimation Integrating the Q-Method into an Extended Kalman Filter

    NASA Astrophysics Data System (ADS)

    Ainscough, Thomas G.

    A new algorithm is proposed that smoothly integrates the nonlinear estimation of the attitude quaternion using Davenport's q-method and the estimation of non-attitude states within the framework of an extended Kalman filter. A modification to the q-method and associated covariance analysis is derived with the inclusion of an a priori attitude estimate. The non-attitude states are updated from the nonlinear attitude estimate based on linear optimal Kalman filter techniques. The proposed filter is compared to existing methods and is shown to be equivalent to second-order in the attitude update and exactly equivalent in the non-attitude state update with the Sequential Optimal Attitude Recursion filter. Monte Carlo analysis is used in numerical simulations to demonstrate the validity of the proposed approach. This filter successfully estimates the nonlinear attitude and non-attitude states in a single Kalman filter without the need for iterations.
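
    For reference, here is a minimal sketch of Davenport's q-method itself, the attitude building block the filter integrates; the vector-first quaternion convention and function name are choices made here, not taken from the dissertation.

```python
import numpy as np

def davenport_q(body_vecs, ref_vecs, weights):
    """Optimal quaternion from weighted vector observations (q-method).

    Builds the attitude profile matrix B and Davenport's 4x4 K matrix;
    the eigenvector of K with the largest eigenvalue is the optimal
    quaternion [x, y, z, w] (sign-ambiguous, as usual).
    """
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = np.trace(B)
    eigvals, eigvecs = np.linalg.eigh(K)
    return eigvecs[:, np.argmax(eigvals)]
```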

  5. Influence of the optimization methods on neural state estimation quality of the drive system with elasticity.

    PubMed

    Orlowska-Kowalska, Teresa; Kaminski, Marcin

    2014-01-01

    The paper deals with the implementation of optimized neural networks (NNs) for state-variable estimation in a drive system with an elastic joint. The signals estimated by the NNs are used in a control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of a closed-loop system, and the precision of state-variable estimation depends on the generalization properties of the NNs. A short review of NN optimization methods is presented, and two techniques, typical of regularization and pruning approaches respectively, are described and tested in detail: Bayesian regularization and Optimal Brain Damage. Simulation results show good precision of both optimized neural estimators over a wide range of changes of the load speed and the load torque, not only for nominal but also for changed parameters of the drive system. The simulation results are verified in a laboratory setup.

  6. Improved method for estimating water solubility from octanol/water partition coefficient

    SciTech Connect

    Meylan, W.; Howard, P.; Boethling, R.

    1994-12-31

    Water solubility (wsol) is a critical property in risk assessments for chemicals. It is often necessary to estimate wsol because measured values are unavailable. However, the most widely used estimation methods predict wsol from the logarithm of the octanol/water partition coefficient (log Kow), via regression equations based on approximately 200 (or fewer) measured values of log Kow. The overall accuracy of these correlations is only about ± one order of magnitude. To update and enhance existing wsol estimation methods, the authors first collected 3,000+ measured values from a variety of sources. The range of chemical structures represented by this data set is much greater than for the older regressions. They then investigated the accuracy of wsol/log Kow correlations for the entire data set and for various chemical classes, as well as the importance of melting point (mp) to the estimate. The results of this investigation include a new regression equation for estimating wsol. This method has been encoded in a computer program that is compatible with other programs in the Estimation Programs Interface (EPI), a program used by OPPT to estimate key properties and fate parameters for existing and Premanufacture Notice (PMN) chemicals. To estimate wsol, the user can enter a measured value of log Kow, or allow the program to estimate log Kow from the chemical's SMILES notation.
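
    The general shape of such correlations can be sketched as follows; the coefficients below are placeholders, not the fitted values from this work.

```python
def estimate_log_wsol(log_kow, mp_c=25.0, a=0.8, b=-1.0, c=-0.01):
    """Estimate log10 water solubility from log Kow.

    Generic form of Kow-based solubility correlations:
        log S = a + b * log Kow + c * (mp - 25)
    with the melting-point term applied only to solids (mp in deg C).
    Coefficients a, b, c are illustrative placeholders.
    """
    return a + b * log_kow + c * max(mp_c - 25.0, 0.0)

print("log10 wsol approx %.2f" % estimate_log_wsol(3.5))  # a liquid, log Kow 3.5
```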

  7. Simultaneous Effects of Allowed Time, Teaching Method, Ability, and Student Assessment of Treatment on Achievement in a High School Biology Course (ISIS).

    ERIC Educational Resources Information Center

    Burkman, Ernest; And Others

    1982-01-01

    Examined effects of teaching method (self-directed, group-directed, teacher-directed), academic ability, student assessment of treatment, and allowed time on achievement in three Individualized Science Instructional System (ISIS) biology minicourses. Results, among others, indicated that individualized instruction favored high-ability students and…

  8. New method to estimate the possibility of natural pregnancy using computer-assisted sperm analysis.

    PubMed

    Isobe, Tetsuya

    2012-12-01

    The World Health Organization (WHO) criteria, which include percent motility and sperm concentration, are the only criteria for evaluating sperm quality and conception ability. However, these criteria are insufficient to evaluate the possibility of natural pregnancy; thus, an index that can directly evaluate this possibility is necessary. A new sperm energy theory without approximation was developed to assess the possibility of natural pregnancy based on mechanical sperm energy. Sperm motility parameters were measured using computer-assisted sperm analysis (CASA) in 129 ejaculated semen samples from 50 men in couples diagnosed with infertility in which no abnormalities had been found in the women (sterile group), and 157 ejaculated semen samples from 57 men who had already fathered children through natural pregnancies (control group). A total of 129 subjects were selected from the control group, in order of sample measurement date, and classified as the fertile group. The sperm energy index (SEI) and mean sperm energy index (MEI) were accurately obtained according to the methods described by the new sperm energy theory: the SEI reflects the total mechanical energy of the sperm in a visual field during CASA measurements, and the MEI reflects the mean mechanical energy of one sperm in a measurement field. All subjects with MEI/SEI > 2 were assigned to the sterile group. The larger the SEI, the higher the probability of predicting fertile subjects: approximately 60% with SEI > 0.5, 70% with SEI > 1, 80% with SEI > 3, and 90% with SEI > 6, in cases where MEI/SEI < 2. The data support the view that this novel method can be used to estimate the possibility of a natural pregnancy.

  9. Comparison of different methods for estimating snowcover in forested, mountainous basins using LANDSAT (ERTS) images. [Washington and Santiam River, Oregon

    NASA Technical Reports Server (NTRS)

    Meier, M. J.; Evans, W. E.

    1975-01-01

    Snow-covered areas on LANDSAT (ERTS) images of the Santiam River basin, Oregon, and other basins in Washington were measured using several operators and methods. Seven methods were used: (1) Snowline tracing followed by measurement with planimeter, (2) mean snowline altitudes determined from many locations, (3) estimates in 2.5 x 2.5 km boxes of snow-covered area with reference to snow-free images, (4) single radiance-threshold level for entire basin, (5) radiance-threshold setting locally edited by reference to altitude contours and other images, (6) two-band color-sensitive extraction locally edited as in (5), and (7) digital (spectral) pattern recognition techniques. The seven methods are compared in regard to speed of measurement, precision, the ability to recognize snow in deep shadow or in trees, relative cost, and whether useful supplemental data are produced.

  10. Evaluation of methods for estimating the effects of vegetation change and climate variability on streamflow

    NASA Astrophysics Data System (ADS)

    Zhao, Fangfang; Zhang, Lu; Xu, Zongxue; Scott, David F.

    2010-03-01

    Changes in vegetation cover can significantly affect streamflow. Two common methods for estimating vegetation effects on streamflow are the paired catchment method and the time trend analysis technique. In this study, the performance of these methods is evaluated using data from paired catchments in Australia, New Zealand, and South Africa. Results show that these methods generally yield consistent estimates of the vegetation effect, and most of the observed streamflow changes are attributable to vegetation change. These estimates are realistic and are supported by the vegetation history. The accuracy of the estimates, however, largely depends on the length of calibration periods or pretreatment periods. For catchments with short or no pretreatment periods, we find that statistically identified prechange periods can be used as calibration periods. Because streamflow also responds to climate variability, in assessing streamflow changes it is necessary to consider the effect of climate in addition to the effect of vegetation. Here, the climate effect on streamflow was estimated using a sensitivity-based method that calculates changes in rainfall and potential evaporation. A unifying conceptual framework, based on the assumption that climate and vegetation are the only drivers for streamflow changes, enables comparison of all three methods. It is shown that these methods provide consistent estimates of vegetation and climate effects on streamflow for the catchments considered. An advantage of the time trend analysis and sensitivity-based methods is that they are applicable to nonpaired catchments, making them potentially useful in large catchments undergoing vegetation change.
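
    The sensitivity-based climate adjustment mentioned above can be sketched with a first-order elasticity formula; the elasticity values in the example are illustrative, not those estimated in the paper.

```python
def climate_effect_on_flow(dP, dE0, P, E0, Q, eps_p=2.0, eps_e=-1.0):
    """First-order climate effect on streamflow:
        dQ_clim = (eps_p * dP / P + eps_e * dE0 / E0) * Q
    where eps_p and eps_e are precipitation and potential-evaporation
    elasticities (illustrative values)."""
    return (eps_p * dP / P + eps_e * dE0 / E0) * Q

# 10% rainfall decline and 5% PET rise on a 300 mm/yr streamflow
print(climate_effect_on_flow(-100.0, 50.0, 1000.0, 1000.0, 300.0))
```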

  11. Handbook for cost estimating. A method for developing estimates of costs for generic actions for nuclear power plants

    SciTech Connect

    Ball, J.R.; Cohen, S.; Ziegler, E.Z.

    1984-10-01

    This document provides overall guidance to assist the NRC in preparing the types of cost estimates required by the Regulatory Analysis Guidelines and to assist in the assignment of priorities in resolving generic safety issues. The Handbook presents an overall cost model that allows the cost analyst to develop a chronological series of activities needed to implement a specific regulatory requirement throughout all applicable commercial LWR power plants and to identify the significant cost elements for each activity. References to available cost data are provided along with rules of thumb and cost factors to assist in evaluating each cost element. A suitable code-of-accounts data base is presented to assist in organizing and aggregating costs. Rudimentary cost analysis methods are described to allow the analyst to produce a constant-dollar, lifetime cost for the requirement. A step-by-step example cost estimate is included to demonstrate the overall use of the Handbook.

  12. The effect of physical parameters of inertial stabilization platform on disturbance rejection ability and its improvement method

    NASA Astrophysics Data System (ADS)

    Mao, Yao; Deng, Chao; Gan, Xun; Tian, Jing

    2015-10-01

    The development of space optical communication requires arcsecond or even higher precision in the tracking performance of the ATP (Acquisition, Tracking and Pointing) system under base disturbance. An ATP system supported by a stabilized reference beam, provided by an inertial stabilization platform with high precision and high bandwidth, can effectively restrain the influence of base angular disturbance on the line of sight. To obtain better disturbance rejection, this paper analyzes the influence of the transfer characteristics and physical parameters of the stabilization platform on disturbance stabilization performance. The analysis shows that the stabilization characteristics of an inertial stabilization platform equal the product of the rejection characteristics of the control loop and the disturbance transfer characteristics of the platform, so improving the isolation characteristics of the platform or extending the control bandwidth can both improve rejection ability. Because the control bandwidth of the LOS stabilization platform is limited by factors such as the mechanical characteristics of the platform and the bandwidth and noise of the sensor, high-frequency disturbance cannot be effectively rejected by the control loop, and its rejection mainly depends on the isolation characteristics of the platform itself. This paper puts forward three methods of improving the isolation characteristics of the platform itself: 1) changing the mechanical structure, such as reducing the elastic coefficient or increasing the moment of inertia of the platform; 2) changing the electrical structure of the platform, such as increasing resistance or adding a current loop; and 3) adding a passive vibration isolator between the inertial stabilization platform and the base. The experimental results show that adding a current loop or a passive vibration isolator can effectively reject high frequency

  13. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
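
    Since the record is truncated, for concreteness here is a sketch of the DerSimonian and Laird moment estimator the abstract refers to; this is the standard textbook form, not code from the article.

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of between-study variance.

    With study effects y_i and within-study variances v_i:
        w_i = 1 / v_i,  Q = sum w_i (y_i - ybar_w)**2,
        tau2 = max(0, (Q - (k - 1)) / (sum w - sum w**2 / sum w)).
    """
    y = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / denom)
```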

  14. The effects of rater bias and assessment method used to estimate disease severity on hypothesis testing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The effects of bias (over- and underestimates) in estimates of disease severity on hypothesis testing using different assessment methods were explored. Nearest percent estimates (NPE), the Horsfall-Barratt (H-B) scale, and two different linear category scales (10% increments, with and without addition...

  15. IN-RESIDENCE, MULTIPLE ROUTE EXPOSURES TO CHLORPYRIFOS AND DIAZINON ESTIMATED BY INDIRECT METHOD MODELS

    EPA Science Inventory

    One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure dist...

  16. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  17. Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles (Final Report)

    EPA Science Inventory

    The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probabi...

  18. Bayesian and Frequentist Methods for Estimating Joint Uncertainty of Freundlich Adsorption Isotherm Fitting Parameters

    EPA Science Inventory

    In this paper, we present methods for estimating Freundlich isotherm fitting parameters (K and N) and their joint uncertainty, which have been implemented into the freeware software platforms R and WinBUGS. These estimates were determined by both Frequentist and Bayesian analyse...
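
    As a concrete starting point for the Frequentist side, a minimal log-log least-squares fit of the Freundlich isotherm is sketched below; the data are hypothetical, and the paper's Bayesian joint-uncertainty analysis is not reproduced.

```python
import numpy as np

def fit_freundlich(c, q):
    """Point estimates of the Freundlich parameters from q = K * C**N,
    via linear regression in log-log space: log q = log K + N log C."""
    N, logK = np.polyfit(np.log(c), np.log(q), 1)
    return np.exp(logK), N

c = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # aqueous concentrations (made up)
q = np.array([1.2, 1.9, 3.1, 5.2, 8.6])   # sorbed amounts (made up)
K, N = fit_freundlich(c, q)
```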

  19. Method for Estimating the Acoustic Pressure in Tissues Using Low-Amplitude Measurements in Water.

    PubMed

    Keravnou, Christina P; Izamis, Maria-Louisa; Averkiou, Michalakis A

    2015-11-01

    The aim of this study was to evaluate a simple, reliable and reproducible method for accurately estimating the acoustic pressure delivered to tissue exposed to ultrasound. Such a method would be useful for therapeutic applications of ultrasound with microbubbles, for example, sonoporation. The method is based on (i) low-amplitude water measurements that are easily made and do not suffer from non-linear propagation effects, and (ii) the attenuation coefficient of the tissue of interest. The range of validity of the extrapolation method for different attenuation and pressure values was evaluated with a non-linear propagation theoretical model. Depending on the specific tissue attenuation, the method produces good estimates of pressures in excess of 10 MPa. Ex vivo machine-perfused pig liver tissue was used to validate the method for source pressures up to 3.5 MPa. The method can be used to estimate the delivered pressure in vivo in diagnostic and therapeutic applications of ultrasound.
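
    A sketch of the extrapolation idea: scale a low-amplitude water measurement linearly with drive level, then derate by the tissue attenuation along the path. Parameter names and the example values are illustrative, not from the study.

```python
def pressure_in_tissue(p_water_low, drive_ratio, alpha_db_cm_mhz, f_mhz, depth_cm):
    """Extrapolate in-situ pressure from a low-amplitude water measurement.

    Assumes linear source scaling (the point of measuring at low
    amplitude) and attenuation p = p0 * 10**(-alpha * f * z / 20),
    with alpha in dB/(cm*MHz).
    """
    p_source = p_water_low * drive_ratio
    atten_db = alpha_db_cm_mhz * f_mhz * depth_cm
    return p_source * 10.0 ** (-atten_db / 20.0)

# 0.2 MPa measured at 1/10 drive; liver-like 0.5 dB/cm/MHz; 1.5 MHz; 5 cm deep
print(pressure_in_tissue(0.2, 10.0, 0.5, 1.5, 5.0))
```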

  20. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    SciTech Connect

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  1. Application of the median method to estimate the kinetic constants of the substrate uncompetitive inhibition equation.

    PubMed

    Valencia, Pedro L; Astudillo-Castro, Carolina; Gajardo, Diego; Flores, Sebastián

    2017-04-07

    In 1974, Eisenthal and Cornish-Bowden published the direct linear plot method, which used the median to estimate Vmax and Km from a set of initial rates as a function of substrate concentrations. The robustness of this non-parametric method was clearly demonstrated by comparing it with the least-squares method. The authors commented that the method cannot readily be generalized to equations of more than two parameters. Unfortunately, this comment has been misread by other authors: statements such as "this method cannot be extended directly to equations with more than two parameters" appear in some publications, and recently a more drastic claim was published: "this method cannot be applied for the analysis of substrate inhibition." These misreadings motivated us to demonstrate the contrary: the median method can be applied to equations with more than two parameters, using the substrate uncompetitive inhibition equation as an example. A computer algorithm was written to evaluate the effect of simulated experimental error in the initial rates on the estimation of Vmax, Km and KS. The error was assigned to different points of the experimental design. Four different KS/Km ratios were analyzed, with the values 10, 100, 1000 and 10,000. The results indicated that the least-squares method was slightly better than the median method in terms of accuracy and variance. However, the presence of outliers affected the estimation of kinetic constants using the least-squares method more severely than the median method. Estimating 1/KS with the median method and inverting was much better than estimating KS directly, mitigating the negative effect of the non-linearity of KS in the kinetic equation. Considering that the median method is free from the assumptions of the least-squares method and the arbitrary assumptions implicit in the linearization methods to estimate the kinetic constants Vmax, Km and KS from the substrate
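
    For orientation, here is a sketch of the classical two-parameter median method (direct linear plot) for v = Vmax·s/(Km + s); the paper's three-parameter extension to the uncompetitive-inhibition equation is not reproduced.

```python
import numpy as np
from itertools import combinations

def median_method_mm(s, v):
    """Eisenthal & Cornish-Bowden median estimates of Vmax and Km.

    Each pair of observations (s_i, v_i), (s_j, v_j) determines one exact
    solution of v = Vmax * s / (Km + s); the estimators are the medians
    over all pairs.
    """
    kms, vmaxs = [], []
    for (si, vi), (sj, vj) in combinations(zip(s, v), 2):
        km = (vj - vi) / (vi / si - vj / sj)
        kms.append(km)
        vmaxs.append(vi * (km + si) / si)
    return np.median(vmaxs), np.median(kms)
```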

  2. Accurate and efficient velocity estimation using Transmission matrix formalism based on the domain decomposition method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong

    2017-03-01

    Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is required to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed times all demonstrate the validity of the proposed DDM.

  3. Estimating Small-area Populations by Age and Sex Using Spatial Interpolation and Statistical Inference Methods

    SciTech Connect

    Qai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L; Bright, Eddie A; Coleman, Phil R

    2006-01-01

    The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.

  4. A novel method for blood volume estimation using trivalent chromium in rabbit models

    PubMed Central

    Baby, Prathap Moothamadathil; Kumar, Pramod; Kumar, Rajesh; Jacob, Sanu S.; Rawat, Dinesh; Binu, V. S.; Karun, Kalesh M.

    2014-01-01

    Background: Blood volume, though important in the management of critically ill patients, is not routinely estimated in clinical practice owing to the labour-intensive, intricate and time-consuming nature of existing methods. Aims: The aim was to compare blood volume estimations using trivalent chromium [51Cr(III)] and the standard Evans blue dye (EBD) method in New Zealand white rabbit models and to establish a correction factor (CF). Materials and Methods: Blood volume estimation in 33 rabbits was carried out using the EBD method, with concentration determined by spectrophotometric assay, followed by blood volume estimation using direct injection of 51Cr(III). Twenty of the 33 rabbits were used to find the CF by dividing the blood volume estimated using EBD by the blood volume estimated using 51Cr(III). The CF was validated in 13 rabbits by multiplying it with the blood volume estimates obtained using 51Cr(III). Results: The mean circulating blood volume of the 33 rabbits was 142.02 ± 22.77 ml (65.76 ± 9.31 ml/kg) using EBD and 195.66 ± 47.30 ml (89.81 ± 17.88 ml/kg) using 51Cr(III). The CF was found to be 0.77. The mean blood volume of the 13 rabbits was 139.54 ± 27.19 ml (66.33 ± 8.26 ml/kg) measured using EBD and 152.73 ± 46.25 ml (71.87 ± 13.81 ml/kg) using 51Cr(III) with the CF (P = 0.11). Conclusions: The estimation of blood volume using 51Cr(III) was comparable to the standard EBD method when the CF was applied. With further research in this direction, we envisage human blood volume estimation using 51Cr(III) finding application in acute clinical settings. PMID:25190922

  5. New method of estimating temperatures near the mesopause region using meteor radar observations

    NASA Astrophysics Data System (ADS)

    Lee, Changsup; Kim, Jeong-Han; Jee, Geonhwa; Lee, Wonseok; Song, In-Sun; Kim, Yong Ha

    2016-10-01

    We present a novel method of estimating temperatures near the mesopause region using meteor radar observations. The method utilizes the linear relationship between the full width at half maximum (FWHM) of the meteor height distribution and the temperature at the meteor peak height. Once the proportionality constant of the linear relationship is determined from independent temperature measurements performed over a specific period of time by the Microwave Limb Sounder (MLS) instrument on board the Aura satellite, the temperature can be estimated continuously according to the measurements of the FWHM alone without additional information. The temperatures estimated from the FWHM are consistent with the MLS temperatures throughout the study period within a margin of 3.0%. Although previous methods are based on temperature gradient or pressure assumptions, the new method does not require such assumptions, which allows us to estimate the temperature at approximately 90 km with better precision.
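
    The calibration step can be sketched as follows: fit the proportionality constant against MLS temperatures over a training period, then apply it to radar-only FWHM measurements. A fit through the origin and the example numbers are assumptions made here, not values from the paper.

```python
import numpy as np

def calibrate_fwhm_temperature(fwhm_train, t_mls_train):
    """Least-squares slope through the origin for T = k * FWHM."""
    return np.sum(fwhm_train * t_mls_train) / np.sum(fwhm_train**2)

k = calibrate_fwhm_temperature(np.array([14.0, 15.2, 16.1]),     # FWHM, km
                               np.array([185.0, 199.0, 212.0]))  # MLS T, K
print(k * 15.5)  # radar-only temperature estimate from a new FWHM
```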

  6. A review and comparison of some commonly used methods of estimating petroleum resource availability

    SciTech Connect

    Herbert, J.H.

    1982-10-01

    The purpose of this pedagogical report is to elucidate the characteristics of the principal methods of estimating the petroleum resource base. Other purposes are to indicate the logical similarities and data requirements of these different methods. The report should serve as a guide for the application and interpretation of the different methods.

  7. Method for estimating crack-extension resistance curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1980-01-01

    A method is presented for estimating the crack extension resistance curve (R curve) from residual strength (maximum load against initial crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information is used to estimate the failure loads of more complicated structures. Numerical differentiation of the residual strength data is required, and the problems that it may present are discussed.

  8. Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures

    DTIC Science & Technology

    2016-08-10

    ...actual measured values of im/clo at 1 m/s, RMSE = 0.013 and MAE = 0.009. This report describes the mathematical methods for estimating the... Keywords: thermal manikin; mathematical model; thermoregulation modeling; predictive modeling; physiological...

  9. Site Effects Estimation by a Transfer-Station Generalized Inversion Method

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbo; Yu, Xiangwei

    2016-04-01

    Site effects are among the essential factors in characterizing strong ground motion and in earthquake engineering design. In this study, the generalized inversion technique (GIT) is applied to estimate site effects, and the GIT is modified to improve its analytical ability. The GIT needs a reference station as a standard: ideally the reference station is located at a rock site, and its site effect is considered constant. For the same earthquake, the record spectrum of a station of interest is divided by that of the reference station, eliminating the source term, so that the site effects and the attenuation can be obtained. In the GIT process, the amount of earthquake data available for analysis is limited to that recorded by the reference station, and the stations whose site effects can be estimated are restricted to those that recorded events in common with the reference station. To overcome this limitation of the GIT, a modified GIT is put forward in this study: the transfer-station generalized inversion method (TSGI). Compared with the GIT, this modified GIT enlarges the data set and increases the number of stations whose site effects can be analyzed, which makes the solution much more stable. To verify the results of the GIT, a non-reference method, the genetic algorithm (GA), is applied to estimate absolute site effects. On April 20, 2013, an earthquake of magnitude MS 7.0 occurred in the Lushan region, China. After this event, several hundred aftershocks with ML<3.0 occurred in this region. The purpose of this paper is to investigate the site effects and Q factor for this area based on the aftershock strong-motion records from the China National Strong Motion Observation Network System. Our results show that when the TSGI is applied instead of the GIT, the total number of events used in the inversion increases from 31 to 54 and the total number of stations whose site effect can be estimated

  10. An Estimation Method for Distribution System Load with Photovoltaic Power Generation based on ICA

    NASA Astrophysics Data System (ADS)

    Yamada, Takayoshi; Ishigame, Atsushi; Genji, Takamu

    A large number of dispersed generations (DGs) are expected to be installed in distribution systems, making state estimation an important problem for stable and reliable system operation. However, it is difficult to estimate the total power of the DGs connected to a load-side system from a metering spot on the distribution line, because only the sum of the active power from the various loads and DGs can be measured at that spot. In this paper, we propose a method for estimating unknown DG outputs connected to a distribution system by analyzing power-flow data measured at one spot using independent component analysis (ICA). Estimation by ICA normally needs as many observations as quantities to be estimated, but observation spots are extremely limited in existing distribution systems. We therefore propose an estimation method that recovers DG outputs and load changes from a single observation by using known information on load power and a priori knowledge of insolation.
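
    A toy separation example with scikit-learn's FastICA is sketched below. Note the paper works from a single metering spot plus prior knowledge of load power and insolation, which is harder than the two-mixture setting used here purely for illustration; all signals are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 24.0, 1440)                         # one day, minutes
pv = np.clip(np.sin((t - 6.0) * np.pi / 12.0), 0, None)  # daylight bell
load = 0.6 + 0.3 * np.sin(t * np.pi / 6.0) + 0.05 * rng.standard_normal(t.size)

# Two observed mixtures of load minus PV injection (synthetic feeders)
X = np.column_stack([0.9 * load - 0.8 * pv, 0.7 * load - 0.3 * pv])

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovers load- and PV-like components
                              # up to scale and sign (usual ICA ambiguity)
```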

  11. A Comparison of South African National HIV Incidence Estimates: A Critical Appraisal of Different Methods

    PubMed Central

    Rehle, Thomas; Johnson, Leigh; Hallett, Timothy; Mahy, Mary; Kim, Andrea; Odido, Helen; Onoya, Dorina; Jooste, Sean; Shisana, Olive; Puren, Adrian; Parekh, Bharat; Stover, John

    2015-01-01

    Background The interpretation of HIV prevalence trends is increasingly difficult as antiretroviral treatment programs expand. Reliable HIV incidence estimates are critical to monitoring transmission trends and guiding an effective national response to the epidemic. Methods and Findings We used a range of methods to estimate HIV incidence in South Africa: (i) an incidence testing algorithm applying the Limiting-Antigen Avidity Assay (LAg-Avidity EIA) in combination with antiretroviral drug and HIV viral load testing; (ii) a modelling technique based on the synthetic cohort principle; and (iii) two dynamic mathematical models, the EPP/Spectrum model package and the Thembisa model. Overall, the different incidence estimation methods were in broad agreement on HIV incidence estimates among persons aged 15-49 years in 2012. The assay-based method produced slightly higher estimates of incidence, 1.72% (95% CI 1.38 – 2.06), compared with the mathematical models, 1.47% (95% CI 1.23 – 1.72) in Thembisa and 1.52% (95% CI 1.43 – 1.62) in EPP/Spectrum, and slightly lower estimates of incidence compared to the synthetic cohort, 1.9% (95% CI 0.8 – 3.1) over the period from 2008 to 2012. Among youth aged 15-24 years, a declining trend in HIV incidence was estimated by all three mathematical estimation methods. Conclusions The multi-method comparison showed similar levels and trends in HIV incidence and validated the estimates provided by the assay-based incidence testing algorithm. Our results confirm that South Africa is the country with the largest number of new HIV infections in the world, with about 1 000 new infections occurring each day among adults aged 15-49 years in 2012. PMID:26230949

  12. One-repetition maximum bench press performance estimated with a new accelerometer method.

    PubMed

    Rontu, Jari-Pekka; Hannula, Manne I; Leskinen, Sami; Linnamo, Vesa; Salmi, Jukka A

    2010-08-01

    The one-repetition maximum (1RM) is an important measure of muscular strength. The purpose of this study was to evaluate a new method of predicting 1RM bench press performance from a submaximal lift. The developed method was evaluated using different load levels (50, 60, 70, 80, and 90% of 1RM). The subjects were active floorball players (n = 22). The new method is based on the assumption that 1RM can be calculated from the submaximal weight and the maximum acceleration of that weight during the lift. The submaximal bench press lift was recorded with a 3-axis accelerometer integrated into a wrist unit and a data acquisition card. The maximum acceleration was calculated from the sensor's measurement data and analyzed on a personal computer with LabView-based software. The estimated 1RM results were compared with the traditionally measured 1RM results of the subjects. A separate estimation equation was developed for each load level; that is, five different estimation equations were used, based on the measured 1RM values of the subjects. The mean (±SD) measured 1RM was 69.86 (±15.72) kg, and the means of the estimated 1RM values were 69.85-69.97 kg. The correlations between measured and estimated 1RM results were high (0.89-0.97; p < 0.001). The differences between the methods were very small (-0.11 to 0.01 kg) and not statistically significant. The results of this study showed promising accuracy for estimating bench press performance from just a single submaximal bench press lift. The estimation accuracy is competitive with other known estimation methods, at least for the current study population.

  13. Structural correlation method for model reduction and practical estimation of patient specific parameters illustrated on heart rate regulation.

    PubMed

    Ottesen, Johnny T; Mehlsen, Jesper; Olufsen, Mette S

    2014-11-01

    We consider the inverse and patient-specific problem of short-term (seconds to minutes) heart rate regulation specified by a system of nonlinear ODEs and corresponding data. We show how a recent method termed the structural correlation method (SCM) can be used for model reduction and for obtaining a set of practically identifiable parameters. The structural correlation method includes two steps: sensitivity and correlation analysis. When combined with an optimization step, it is possible to estimate model parameters, enabling the model to fit dynamics observed in data. This method is illustrated in detail on a model predicting baroreflex regulation of heart rate and applied to analysis of data from a rat and healthy humans. Numerous mathematical models have been proposed for prediction of baroreflex regulation of heart rate, yet most of these have been designed to provide qualitative predictions of the phenomena, though some recent models have been developed to fit observed data. In this study we show that the model put forward by Bugenhagen et al. can be simplified without loss of its ability to predict measured data and to be interpreted physiologically. Moreover, we show that with minimal changes in nominal parameter values the simplified model can be adapted to predict observations from both rats and humans. The use of these methods makes the model suitable for estimation of parameters from individuals, allowing it to be adopted for diagnostic procedures.

  14. Structural correlation method for model reduction and practical estimation of patient specific parameters illustrated on heart rate regulation

    PubMed Central

    Ottesen, Johnny T.; Mehlsen, Jesper; Olufsen, Mette S.

    2014-01-01

    We consider the inverse and patient-specific problem of short-term (seconds to minutes) heart rate regulation specified by a system of nonlinear ODEs and corresponding data. We show how a recent method termed the structural correlation method (SCM) can be used for model reduction and for obtaining a set of practically identifiable parameters. The structural correlation method includes two steps: sensitivity and correlation analysis. When combined with an optimization step, it is possible to estimate model parameters, enabling the model to fit dynamics observed in data. This method is illustrated in detail on a model predicting baroreflex regulation of heart rate and applied to analysis of data from a rat and healthy humans. Numerous mathematical models have been proposed for prediction of baroreflex regulation of heart rate, yet most of these have been designed to provide qualitative predictions of the phenomena, though some recent models have been developed to fit observed data. In this study we show that the model put forward by Bugenhagen et al. (2010) can be simplified without loss of its ability to predict measured data and to be interpreted physiologically. Moreover, we show that with minimal changes in nominal parameter values the simplified model can be adapted to predict observations from both rats and humans. The use of these methods makes the model suitable for estimation of parameters from individuals, allowing it to be adopted for diagnostic procedures. PMID:25050793

  15. One-level prediction-A numerical method for estimating undiscovered metal endowment

    USGS Publications Warehouse

    McCammon, R.B.; Kork, J.O.

    1992-01-01

    One-level prediction has been developed as a numerical method for estimating undiscovered metal endowment within large areas. The method is based on a presumed relationship between a numerical measure of geologic favorability and the spatial distribution of metal endowment. Metal endowment within an unexplored area for which the favorability measure is greater than a favorability threshold level is estimated to be proportional to the area of that unexplored portion. The constant of proportionality is the ratio of the discovered endowment found within a suitably chosen control region, which has been explored, to the area of that explored region. In addition to the estimate of undiscovered endowment, a measure of the error of the estimate is also calculated. One-level prediction has been used to estimate the undiscovered uranium endowment in the San Juan basin, New Mexico, U.S.A. A subroutine to perform the necessary calculations is included. © 1992 Oxford University Press.

  16. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.

  17. Evaluation of methods for estimating the uncertainty of electronic balance measurements

    SciTech Connect

    Clark, J.P.

    2000-06-09

    International and national regulations are requiring testing and calibration laboratories to provide estimates of uncertainty with their measurements. Many balance users have questions about determining weight measurement uncertainty, especially if their quality control programs have provided estimates of measurement system "bias and precision". Part of the problem is the terminology used to describe the quality of weight and mass measurements. Manufacturers' specifications list several performance criteria but do not provide estimates of the "uncertainty" of measurements made using an electronic balance. Several methods for estimating the uncertainty of weight and mass measurements have been described in various publications and regulations in recent years. This paper discusses the terminology used to describe measurement quality, i.e., accuracy, precision, linearity, hysteresis, and measurement uncertainty (MU), the various contributors to MU, and the advantages and limitations of various methods for estimating MU.

  18. Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo

    2016-04-01

    Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
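
    A sketch of the approach: forward-simulate a three-element Windkessel and randomly sample the free parameters while holding the total arterial resistance fixed, keeping the sample with the smallest pressure error. The sampling ranges and the Euler integration are illustrative choices, not details from the paper.

```python
import numpy as np

def wk3_pressure(q, dt, r1, r2, c, p0=80.0):
    """Pressure from flow for the 3-element Windkessel:
    P = R1*Q + Pc, with dPc/dt = Q/C - Pc/(R2*C) (forward Euler)."""
    pc = np.empty_like(q)
    pc[0] = p0
    for i in range(len(q) - 1):
        pc[i + 1] = pc[i] + dt * (q[i] / c - pc[i] / (r2 * c))
    return r1 * q + pc

def monte_carlo_fit(q, p_meas, dt, r_total, n_iter=20000, seed=None):
    """Randomly sample (R1, C) with R1 + R2 = r_total held constant and
    keep the pair minimizing the RMS error against measured pressure."""
    rng = np.random.default_rng(seed)
    best_err, best_params = np.inf, None
    for _ in range(n_iter):
        r1 = rng.uniform(0.02, 0.2) * r_total  # proximal share (illustrative)
        c = rng.uniform(0.1, 3.0)              # compliance range (illustrative)
        p = wk3_pressure(q, dt, r1, r_total - r1, c)
        err = np.sqrt(np.mean((p - p_meas) ** 2))
        if err < best_err:
            best_err, best_params = err, (r1, r_total - r1, c)
    return best_params, best_err
```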

  19. Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel

    SciTech Connect

    Blair, J.; Machorro, E.; Luttman, A.

    2013-03-01

    The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.

  20. Design of a Direction-of-Arrival Estimation Method Used for an Automatic Bearing Tracking System

    PubMed Central

    Guo, Feng; Liu, Huawei; Huang, Jingchang; Zhang, Xin; Zu, Xingshui; Li, Baoqing; Yuan, Xiaobing

    2016-01-01

    In this paper, we introduce a sub-band direction-of-arrival (DOA) estimation method suitable for employment within an automatic bearing tracking system. Inspired by the magnitude-squared coherence (MSC), we extend the MSC to the sub-band level and propose the sub-band magnitude-squared coherence (SMSC) to measure the coherence between the frequency sub-bands of wideband signals. We then design a sub-band DOA estimation method that chooses a sub-band from the wideband signals by SMSC for the bearing tracking system. The simulations demonstrate that the sub-band method offers a good tradeoff between wideband and narrowband methods in terms of estimation accuracy, spatial resolution, and computational cost. The proposed method was also tested in the field with the bearing tracking system and showed good performance. PMID:27455267
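
    The band-selection idea can be sketched with SciPy's coherence estimate: split the spectrum into equal sub-bands and pick the one where the two sensors are most coherent. This follows the spirit of the SMSC criterion; the paper's exact definition is not reproduced here.

```python
import numpy as np
from scipy.signal import coherence

def pick_subband(x1, x2, fs, n_bands=8, nperseg=256):
    """Return the (f_lo, f_hi) sub-band with the highest mean
    magnitude-squared coherence between two sensor signals."""
    f, cxy = coherence(x1, x2, fs=fs, nperseg=nperseg)
    edges = np.linspace(f[0], f[-1], n_bands + 1)
    scores = [cxy[(f >= lo) & (f < hi)].mean()
              for lo, hi in zip(edges[:-1], edges[1:])]
    best = int(np.argmax(scores))
    return edges[best], edges[best + 1]
```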