Ability of geometric morphometric methods to estimate a known covariance matrix.
Walker, J A
2000-12-01
Landmark-based morphometric methods must estimate translation, rotation, and scaling (the nuisance parameters) to remove nonshape variation from a set of digitized figures. Errors in the estimates of these nuisance parameters are reflected in the covariance structure of the coordinates, such as the residuals from a superimposition, or of any linear combination of the coordinates, such as the partial warp and standard uniform scores. A simulation experiment was used to compare the ability of the generalized resistant fit (GRF) and a relative warp analysis (RWA) to estimate known covariance matrices with various correlations and variance structures. Random covariance matrices were perturbed so as to vary the magnitude of the average correlation among coordinates, the number of landmarks with excessive variance, and the magnitude of the excessive variance. The covariance structure was applied to random figures with between 6 and 20 landmarks. The results show the expected performance of GRF and RWA across a broad spectrum of conditions. The performance of both GRF and RWA depended most strongly on the number of landmarks. RWA performance decreased slightly when one or a few landmarks had excessive variance. GRF performance peaked when approximately 25% of the landmarks had excessive variance. In general, both RWA and GRF estimated the direction of the first principal axis of the covariance matrix better than the structure of the entire covariance matrix. RWA tended to outperform GRF when more than approximately 75% of the coordinates had excessive variance. When fewer than 75% of the coordinates had excessive variance, the relative performance of RWA and GRF depended on the magnitude of the excessive variance: when the landmarks with excessive variance had standard deviations at least four times the minimum (sigma >= 4 sigma_min), GRF regularly outperformed RWA. PMID:12116434
ERIC Educational Resources Information Center
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
Evaluation of the ability of the EC tracer method to estimate secondary organic carbon
NASA Astrophysics Data System (ADS)
Day, Melissa C.; Zhang, Minghui; Pandis, Spyros N.
2015-07-01
The elemental carbon (EC) tracer method has often been used to estimate the primary and secondary organic aerosol (OA) fractions using field measurements of organic carbon (OC) and EC. In this observation-based approach, EC is used as a tracer for primary OC (POC), which allows for the estimation of secondary OC (SOC). The accuracy of this approach is evaluated using concentrations generated by PMCAMx, a three-dimensional chemical transport model that simulates the complex processes leading to SOC formation (including evaporation and chemical processing of POC and chemical aging of semivolatile and intermediate volatility organics). The ratio of primary organic to elemental carbon [OC/EC]p is estimated in various locations in the Eastern United States, and is then used to calculate the primary and secondary OC concentrations. To estimate the [OC/EC]p from simulated concentrations, we use both a traditional approach and the high EC edge method, in which only values with the highest EC/OC ratio are used. Both methods perform best on a daily-averaged basis, because of the variability of the [OC/EC]p ratio during the day. The SOC estimated by the EC tracer methods corresponds to the biogenic and anthropogenic SOC formed during the oxidation of volatile organic compounds. On the other hand, the estimated POC corresponds to the sum of the fresh POC, the SOC from oxidation of the evaporated POC and the intermediate volatility organic compounds, and the OC from long-distance transport. With this correspondence, the traditional EC tracer method tends to overpredict primary OC and underpredict secondary OC for the selected urban areas in the eastern United States. The high EC edge method performs better, especially in areas where the primary contribution to OC is smaller. Despite the weaknesses of models like the one used here, the conclusions about the accuracy of observation-based methods like the EC-tracer approach should be relatively robust due to the internal
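The EC tracer split itself is a one-line calculation, POC = [OC/EC]p x EC and SOC = OC - POC; below is a minimal sketch, together with an illustrative "high EC edge" estimate of [OC/EC]p. The fraction of samples retained for the edge is an assumption for illustration, not the paper's exact criterion.

```python
def ec_tracer_split(oc, ec, oc_ec_primary):
    """Split measured organic carbon into primary and secondary fractions
    with the EC tracer method: POC = [OC/EC]p * EC, SOC = OC - POC.
    Inputs are parallel lists of concentrations in consistent units."""
    poc = [oc_ec_primary * e for e in ec]
    soc = [max(o - p, 0.0) for o, p in zip(oc, poc)]  # clip negative SOC
    return poc, soc

def high_ec_edge_ratio(oc, ec, frac=0.1):
    """Estimate [OC/EC]p from the samples with the highest EC/OC ratios
    (the 'high EC edge'), which are most likely primary-dominated.
    frac is the fraction of samples kept -- an illustrative choice."""
    ratios = sorted((e / o, o, e) for o, e in zip(oc, ec) if o > 0)
    n = max(1, int(frac * len(ratios)))
    edge = ratios[-n:]  # highest EC/OC, i.e. lowest OC/EC
    return sum(o for _, o, e in edge) / sum(e for _, o, e in edge)
```

On synthetic data with a known primary ratio and one primary-only sample, the edge estimate recovers the ratio exactly; on real measurements its accuracy depends on how primary-dominated the edge samples truly are, which is what the PMCAMx evaluation probes.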
Velemínská, Jana; Pilný, Ales; Cepek, Miroslav; Kot'ová, Magdaléna; Kubelková, Radka
2013-01-01
Dental development is frequently used to estimate age in many anthropological specializations. The aim of this study was to extract an accurate predictive age system for the Czech population and to discover any differences in the predictive ability of various tooth types and their ontogenetic stability during infancy and adolescence. A cross-sectional panoramic X-ray study was based on assessment of the developmental stages of mandibular teeth (Moorrees et al., 1963) using 1393 individuals aged from 3 to 17 years. Data mining methods, which model nonlinear relationships between the predicted age and the data sets, were used for dental age estimation. Compared with other tested predictive models, the GAME method predicted age with the highest accuracy. Age-interval estimations between the 10th and 90th percentiles ranged from -1.06 to +1.01 years in girls and from -1.13 to +1.20 years in boys. Accuracy was expressed as the RMS error, the average deviation between estimated and chronological age. The predictive value of individual teeth changed during the investigated period from 3 to 17 years. When we evaluated the whole period, the second molars exhibited the best predictive ability. When evaluating partial age periods, we found that the accuracy of biological age prediction declines with increasing age (from 0.52 to 1.20 years in girls and from 0.62 to 1.22 years in boys) and that the predictive importance of tooth types changes, depending on variability and the number of developmental stages in the age interval. GAME is a promising tool for age-interval estimation studies, as it can provide reliable predictive models. PMID:24466642
NASA Astrophysics Data System (ADS)
Korolenko, E. A.; Korolik, E. V.; Korolik, A. K.; Kirkovskii, V. V.
2007-07-01
We present results from an investigation of the binding ability of the main transport proteins (albumin, lipoproteins, and α-1-acid glycoprotein) of blood plasma from patients at different stages of liver cirrhosis by the fluorescent probe method. We used the hydrophobic fluorescent probes anionic 8-anilinonaphthalene-1-sulfonate, which interacts in blood plasma mainly with albumin; cationic Quinaldine red, which interacts with α-1-acid glycoprotein; and neutral Nile red, which redistributes between lipoproteins and albumin in whole blood plasma. We show that the binding ability of albumin and α-1-acid glycoprotein to negatively charged and positively charged hydrophobic metabolites, respectively, increases in the compensation stage of liver cirrhosis. As the pathology process deepens and transitions into the decompensation stage, the transport abilities of albumin and α-1-acid glycoprotein decrease whereas the binding ability of lipoproteins remains high.
Tolerance for error and computational estimation ability.
Hogan, Thomas P; Wyckoff, Laurie A; Krebs, Paul; Jones, William; Fitzgerald, Mark P
2004-06-01
Previous investigators have suggested that the personality variable tolerance for error is related to success in computational estimation. However, this suggestion has not been tested directly. This study examined the relationship between performance on a computational estimation test and scores on the NEO-Five Factor Inventory, a measure of the Big Five personality traits, including Openness, an index of tolerance for ambiguity. Other variables included SAT-I Verbal and Mathematics scores and self-rated mathematics ability. Participants were 65 college students. There was no significant relationship between the tolerance variable and computational estimation performance. There was a modest negative relationship between Agreeableness and estimation performance. The skepticism associated with the negative pole of the Agreeableness dimension may be important to pursue in further understanding of estimation ability. PMID:15362423
Fischer, Dominik; Moeller, Philipp; Thomas, Stephanie M.; Naucke, Torsten J.; Beierkuhnlein, Carl
2011-01-01
Background In the Old World, sandfly species of the genus Phlebotomus are known vectors of Leishmania, Bartonella and several viruses. Recent sandfly catches and autochthonous cases of leishmaniasis hint at spreading tendencies of the vectors towards Central Europe. However, studies addressing the potential future distribution of sandflies in the light of a changing European climate are missing. Methodology Here, we modelled bioclimatic envelopes using MaxEnt for five species with proven or assumed vector competence for Leishmania infantum, which are predominantly located either in (south-) western (Phlebotomus ariasi, P. mascittii and P. perniciosus) or south-eastern Europe (P. neglectus and P. perfiliewi). The determined bioclimatic envelopes were transferred to two climate change scenarios (A1B and B1) for Central Europe (Austria, Germany and Switzerland) using data of the regional climate model COSMO-CLM. We detected the most likely way of natural dispersal (“least-cost path”) for each species and hence determined the accessibility of potential future climatically suitable habitats by integrating landscape features, projected changes in climatic suitability and wind speed. Results and Relevance Results indicate that the Central European climate will become increasingly suitable especially for those vector species with a current south-western focus of distribution. In general, the highest suitability of Central Europe is projected for all species in the second half of the 21st century, except for P. perfiliewi. Nevertheless, we show that sandflies will hardly be able to occupy their climatically suitable habitats entirely, due to their limited natural dispersal ability. A northward spread of species with a south-eastern focus of distribution may be constrained but not completely prevented by the Alps. Our results can be used to install specific monitoring systems in the projected risk zones of potential sandfly establishment. This is urgently needed for adaptation
ERIC Educational Resources Information Center
Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua
2012-01-01
For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
Rosado, Daniel; Usero, José; Morillo, José
2016-01-15
The bioavailable fraction of metals (Zn, Cu, Cd, Mn, Pb, Ni, Fe, and Cr) in sediments of the Huelva estuary and its littoral of influence has been estimated by carrying out the most popular sequential extraction methods (BCR and Tessier) and a biomimetic approach (protease K extraction). Results were compared to enrichment factors found in Arenicola marina. The linear correlation coefficients (R(2)) obtained between the fraction mobilized by the first step of the BCR sequential extraction, by the sum of the first and second steps of the Tessier sequential extraction, and by protease K, and enrichment factors in A. marina, are at their highest for protease K extraction (0.709), followed by the BCR first step (0.507) and the sum of the first and second steps of Tessier (0.465). This observation suggests that protease K represents the bioavailable fraction more reliably than the traditional methods (BCR and Tessier), which have a similar ability. PMID:26656803
Estimating Premorbid Cognitive Abilities in Low-Educated Populations
Apolinario, Daniel; Brucki, Sonia Maria Dozzi; Ferretti, Renata Eloah de Lucena; Farfel, José Marcelo; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Jacob-Filho, Wilson
2013-01-01
Objective To develop an informant-based instrument that would provide a valid estimate of premorbid cognitive abilities in low-educated populations. Methods A questionnaire was drafted by focusing on the premorbid period with a 10-year time frame. The initial pool of items was submitted to classical test theory and a factorial analysis. The resulting instrument, named the Premorbid Cognitive Abilities Scale (PCAS), is composed of questions addressing educational attainment, major lifetime occupation, reading abilities, reading habits, writing abilities, calculation abilities, use of widely available technology, and the ability to search for specific information. The validation sample was composed of 132 older Brazilian adults from the following three demographically matched groups: normal cognitive aging (n = 72), mild cognitive impairment (n = 33), and mild dementia (n = 27). The scores of a reading test and a neuropsychological battery were adopted as construct criteria. Post-mortem inter-informant reliability was tested in a sub-study with two relatives from each deceased individual. Results All items presented good discriminative power, with corrected item-total correlation varying from 0.35 to 0.74. The summed score of the instrument presented high correlation coefficients with global cognitive function (r = 0.73) and reading skills (r = 0.82). Cronbach's alpha was 0.90, showing optimal internal consistency without redundancy. The scores did not decrease across the progressive levels of cognitive impairment, suggesting that the goal of evaluating the premorbid state was achieved. The intraclass correlation coefficient was 0.96, indicating excellent inter-informant reliability. Conclusion The instrument developed in this study has shown good properties and can be used as a valid estimate of premorbid cognitive abilities in low-educated populations. The applicability of the PCAS, both as an estimate of premorbid intelligence and cognitive
Kim, Su-Young; Suh, Youngsuk; Kim, Jee-Seon; Albanese, Mark A.; Langer, Michelle M.
2014-01-01
Latent variable models with many categorical items and multiple latent constructs result in many dimensions of numerical integration, and the traditional frequentist estimation approach, such as maximum likelihood (ML), tends to fail due to model complexity. In such cases, Bayesian estimation with diffuse priors can be used as a viable alternative to ML estimation. The present study compares the performance of Bayesian estimation to ML estimation in estimating single or multiple ability factors across two types of measurement models in the structural equation modeling framework: a multidimensional item response theory (MIRT) model and a multiple-indicator multiple-cause (MIMIC) model. A Monte Carlo simulation study demonstrates that Bayesian estimation with diffuse priors, under various conditions, produces quite comparable results to ML estimation in the single- and multi-level MIRT and MIMIC models. Additionally, an empirical example utilizing the Multistate Bar Examination is provided to compare the practical utility of the MIRT and MIMIC models. Structural relationships among the ability factors, covariates, and a binary outcome variable are investigated through the single- and multi-level measurement models. The paper concludes with a summary of the relative advantages of Bayesian estimation over ML estimation in MIRT and MIMIC models and suggests strategies for implementing these methods. PMID:24659828
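For a single ability dimension, the ML side of the comparison reduces to maximizing the item response log-likelihood in theta. Below is a minimal Newton-Raphson sketch under the two-parameter logistic (2PL) model with known item parameters, with the usual asymptotic standard error from the test information; this is a generic illustration, not the authors' multilevel MIRT/MIMIC setup.

```python
import math

def mle_theta(responses, a, b, iters=50):
    """ML ability estimate under the 2PL model via Newton-Raphson.

    responses: 0/1 item scores; a, b: known discriminations and difficulties.
    Returns (theta_hat, SE), with SE = 1/sqrt(test information).
    Assumes a mixed response pattern (all-correct/all-wrong has no finite MLE).
    """
    theta = 0.0
    for _ in range(iters):
        g = h = 0.0
        for u, ai, bi in zip(responses, a, b):
            p = 1.0 / (1.0 + math.exp(-ai * (theta - bi)))
            g += ai * (u - p)            # score function (gradient)
            h += ai * ai * p * (1.0 - p) # Fisher information
        theta += g / h                   # Newton-Raphson step
    return theta, 1.0 / math.sqrt(h)
```

With a symmetric item set and a symmetric response pattern the MLE sits at zero, which makes a convenient sanity check.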
ERIC Educational Resources Information Center
Ho, Tsung-Han
2010-01-01
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
Ability Self-Estimates and Self-Efficacy: Meaningfully Distinct?
ERIC Educational Resources Information Center
Bubany, Shawn T.; Hansen, Jo-Ida C.
2010-01-01
Conceptual differences between self-efficacy and ability self-estimate scores, used in vocational psychology and career counseling, were examined with confirmatory factor analysis, discriminant relations, and reliability analysis. Results suggest that empirical differences may be due to measurement error or scale content, rather than due to the…
The Effect of Omitted Responses on Ability Estimation in IRT.
ERIC Educational Resources Information Center
De Ayala, R. J.; Plake, Barbara S.; Impara, James C.; Kozmicky, Michelle
This study investigated the effect on examinees' ability estimate under item response theory (IRT) when they are presented an item, have ample time to answer the item, but decide not to respond to the item. Simulation data were modeled on an empirical data set of 25,546 examinees that was calibrated using the 3-parameter logistic model. The study…
Robust Estimation of Latent Ability in Item Response Models
ERIC Educational Resources Information Center
Schuster, Christof; Yuan, Ke-Hai
2011-01-01
Because of response disturbances such as guessing, cheating, or carelessness, item response models often can only approximate the "true" individual response probabilities. As a consequence, maximum-likelihood estimates of ability will be biased. Typically, the nature and extent to which response disturbances are present is unknown, and, therefore,…
Determinants and Validity of Self-Estimates of Abilities and Self-Concept Measures
ERIC Educational Resources Information Center
Ackerman, Phillip L.; Wolman, Stacey D.
2007-01-01
How accurate are self-estimates of cognitive abilities? An investigation of self-estimates of verbal, math, and spatial abilities is reported with a battery of parallel objective tests of abilities. Self-estimates were obtained prior to and after objective ability testing (without test feedback) in order to examine whether self-estimates change…
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
An effective method for incoherent scattering radar's detecting ability evaluation
NASA Astrophysics Data System (ADS)
Lu, Ziqing; Yao, Ming; Deng, Xiaohua
2016-06-01
Ionospheric incoherent scatter radar (ISR), which is used to detect ionospheric electrons and ions, generally has megawatt-class transmission power and an antenna aperture on the order of a hundred meters. The crucial purpose of this detection technology is to obtain ionospheric parameters by acquiring the autocorrelation function and power spectrum of the target plasma echoes. Because ISR echoes are very weak, owing to the small radar cross section of the target, estimating detection ability is instructive and meaningful for ISR system design. In this paper, we evaluate detection ability through the signal-to-noise ratio (SNR). A soft-target radar equation applicable to ISR is derived, and data from the International Reference Ionosphere model are used to simulate the SNR of echoes; the simulation is then compared with measured SNR from the European Incoherent Scatter Scientific Association and the Advanced Modular Incoherent Scatter Radar. The simulation results show good consistency with the measured SNR. For ISR, this is the first reported comparison between calculated SNR and radar measurements; detection ability can be improved by increasing SNR. This method for evaluating ISR detection ability provides a basis for radar system design.
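A textbook soft-target SNR estimate of the kind the paper uses can be sketched as follows. The beam-filling form P_r = Pt * Ae * ne * sigma_e * (c*tau/2) / (4*pi*R^2) and the numeric constants (notably the effective electron cross section) are standard order-of-magnitude approximations, not the authors' exact derivation or parameter values.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.998e8          # speed of light, m/s
SIGMA_E = 1.0e-28    # effective electron scattering cross section, m^2 (approximate)

def isr_snr(p_t, a_eff, n_e, r, tau, t_sys, bandwidth):
    """Single-pulse SNR from a simplified soft-target radar equation.

    p_t: transmit power (W), a_eff: effective aperture (m^2),
    n_e: electron density (m^-3), r: range (m), tau: pulse length (s),
    t_sys: system noise temperature (K), bandwidth: receiver bandwidth (Hz).
    Illustrative sketch only, assuming a beam-filling scattering volume.
    """
    p_r = p_t * a_eff * n_e * SIGMA_E * (C * tau / 2.0) / (4.0 * math.pi * r ** 2)
    return p_r / (K_B * t_sys * bandwidth)  # noise power = k * T_sys * B
```

Because the received power is linear in electron density, doubling n_e doubles the single-pulse SNR, which is the basic trade the paper exploits when comparing simulated and measured values.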
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Combining event scores to estimate the ability of competitors.
Hopkins, W G; Green, J R
1995-04-01
Simulation was used to investigate the validities of nine measures of ability derived from scores of two or more competitive events. The measures were: raw means and least-squares means of raw scores, z scores, and normal scores; two measures derived from ranked scores; and the "personal-best" raw score. Simulations were performed for different numbers of competitors, events, and event entries, each for a range of validity of performance in a single event. A complete set of simulations was repeated for each of the following conditions: normal distribution of competitors' ability; skewed distribution of ability; event validity related to ability; validity, ability, and spread of scores differing between events; and events differing in difficulty. The raw mean of raw scores was generally the most valid measure. The personal best was comparable to the mean only when the number of entries approached one per competitor. The least-squares mean of raw scores had highest validity when events differed substantially in difficulty; it should therefore be used when events differ in length, or when event scores are affected by environmental conditions, judging bias, or by uneven matching of competitors in match-play sports. PMID:7791592
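Three of the simpler measures compared above (raw mean of raw scores, z scores, and the personal best) can be sketched directly; the least-squares means require fitting a competitor-plus-event linear model and are omitted here. Function names and the dict-of-lists data layout are our illustrative choices, not the paper's code.

```python
from statistics import mean, stdev

def z_scores(event_scores):
    """Standardize one event's scores to zero mean and unit (sample) SD."""
    m, s = mean(event_scores), stdev(event_scores)
    return [(x - m) / s for x in event_scores]

def mean_raw(scores_by_competitor):
    """Raw mean of each competitor's raw scores across events --
    generally the most valid measure in the simulations above."""
    return {c: mean(xs) for c, xs in scores_by_competitor.items()}

def personal_best(scores_by_competitor):
    """Best single score per competitor; comparable to the mean only
    when entries per competitor approach one."""
    return {c: max(xs) for c, xs in scores_by_competitor.items()}
```

In practice one would z-score within each event before averaging when events differ in spread, which is one of the conditions varied in the simulations.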
PDV Uncertainty Estimation & Methods Comparison
Machorro, E.
2011-11-01
Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
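One of the simplest estimators in the family compared here is the consecutive-sample phase difference of a complex (analytic) signal; a minimal sketch, assuming quadrature samples are already available. Real PDV pipelines add filtering, windowed fits, and the uncertainty estimates the abstract discusses.

```python
import cmath

def instantaneous_frequency(z, dt):
    """Instantaneous frequency from consecutive-sample phase differences:
    f[n] = arg(z[n+1] * conj(z[n])) / (2*pi*dt).

    z: complex samples of the analytic signal; dt: sample spacing (s).
    Valid while the per-sample phase step stays within (-pi, pi).
    """
    two_pi_dt = 2.0 * cmath.pi * dt
    return [cmath.phase(b * a.conjugate()) / two_pi_dt
            for a, b in zip(z, z[1:])]
```

On a clean single tone the estimate is exact up to rounding; measurement noise turns each phase difference into a noisy frequency sample, which is why the averaging and error-estimation methods above matter.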
A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates
ERIC Educational Resources Information Center
Kim, Seonghoon
2012-01-01
Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
Career Interests and Self-Estimated Abilities of Young Adults with Disabilities
ERIC Educational Resources Information Center
Turner, Sherri; Unkefer, Lesley Craig; Cichy, Bryan Ervin; Peper, Christine; Juang, Ju-Ping
2011-01-01
The purpose of this study was to ascertain vocational interests and self-estimated work-relevant abilities of young adults with disabilities. Results showed that young adults with both low incidence and high incidence disabilities have a wide range of interests and self-estimated work-relevant abilities that are comparable to those in the general…
Methods for Cloud Cover Estimation
NASA Technical Reports Server (NTRS)
Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.
1984-01-01
Several methods for cloud cover estimation, relevant to assessing the performance of a ground-based network of solar observatories, are described. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing the network's duty cycle (the fraction of time the Sun is visible to the network) are discussed. The duty cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness observed from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.
ERIC Educational Resources Information Center
de la Torre, Jimmy
2009-01-01
For one reason or another, various sources of information, namely, ancillary variables and correlational structure of the latent abilities, which are usually available in most testing situations, are ignored in ability estimation. A general model that incorporates these sources of information is proposed in this article. The model has a general…
ERIC Educational Resources Information Center
Sinharay, Sandip
2015-01-01
The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
Comparison of the WRAT4 reading subtest and the WTAR for estimating premorbid ability level.
Mullen, Christine M; Fouty, H Edward
2014-01-01
The need to estimate premorbid ability level as part of a neuropsychological evaluation is well understood in the profession. The purpose of this study was to evaluate two popular reading tests for estimating premorbid ability. Participants were 102 undergraduate volunteers between the ages of 18 and 64 years (M = 25.89 years, SD = 9.54). Participants completed the Wechsler Test of Adult Reading (WTAR) and both forms of the Reading subtest of the Wide Range Achievement Test-Fourth Edition (WRAT4). The WTAR was scored using the Predicted Full-Scale IQ (FSIQ) and the Demographic Predicted FSIQ methods presented in the manual. Repeated-measures analyses of variance revealed no significant difference between the two forms of the WRAT4 and the WTAR for both the Predicted FSIQ, F(2, 202) = 0.399, p = .671, and the Demographic Predicted FSIQ, F(2, 190) = 0.085, p = .918, scoring approaches. Concurrent validity correlation coefficients between the three measures using the Predicted FSIQ ranged from r = .75 to r = .78; using the Demographic Predicted FSIQ, coefficients ranged from r = .50 to r = .76. Our data suggest that the WTAR offers a slightly more reliable statistical portrait of cognitive functioning, especially with a more educated and originally higher-functioning population. PMID:24826498
Ten Years of GLAPHI Method Developing Scientific Research Abilities
NASA Astrophysics Data System (ADS)
Vega-Carrillo, Hector R.
2006-12-01
During the past ten years we have applied our method, GLAPHI, to teach how to do scientific research. The method has been applied from freshman students up to PhD professionals. It is based on the search and analysis of scientific literature, the scientific question or problem, the formulation of hypothesis and objective, the estimation of the project cost, and the timetable. It also includes statistics for research, author rights, ethics in research, publication of scientific papers, writing scientific reports, and meeting presentations. In this work the successes and failures of the GLAPHI method are discussed. Work partially supported by CONACyT (Mexico) under contract SEP-2004-C01-46893.
A method for estimating proportions
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
A proportion estimation procedure is presented which requires only one set of ground truth data for determining the error matrix. The error matrix is then used to determine an unbiased estimate of the class proportions. The error matrix is shown to be directly related to the probabilities of misclassification, and it becomes more diagonally dominant as the number of passes increases.
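The correction described above amounts to solving E p = q, where E[i][j] = P(classified as i | true class j) is the error matrix and q holds the classifier-derived proportions; the unbiased estimate is p = E^(-1) q. A minimal sketch using generic Gaussian elimination (not the report's implementation):

```python
def unbiased_proportions(error_matrix, observed):
    """Solve E p = q for the true class proportions p.

    error_matrix: E[i][j] = P(classified as i | true class j).
    observed: classifier-derived class proportions q.
    Uses partial-pivot Gaussian elimination on the augmented matrix.
    """
    n = len(error_matrix)
    a = [row[:] + [q] for row, q in zip(error_matrix, observed)]
    for col in range(n):
        # Partial pivoting for numerical stability
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]
```

A more diagonally dominant E (fewer misclassifications) makes this system better conditioned, which matches the observation above about increasing the number of passes.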
Muldoon, Kevin; Towse, John; Simms, Victoria; Perra, Oliver; Menzies, Victoria
2013-02-01
In response to claims that the quality (and in particular linearity) of children's mental representation of number acts as a constraint on number development, we carried out a longitudinal assessment of the relationships between number line estimation, counting, and mathematical abilities. Ninety-nine 5-year-olds were tested on 4 occasions at 3 monthly intervals. Correlations between the 3 types of ability were evident, but while the quality of children's estimations changed over time and performance on the mathematical tasks improved over the same period, changes in one were not associated with changes in the other. In contrast to the earlier claims that the linearity of number representation is potentially a unique contributor to children's mathematical development, the data suggest that this variable is not significantly privileged in its impact over and above simple procedural number skills. We propose that both early arithmetic success and estimating skill are bound closely to developments in counting ability. PMID:22506976
Smoothing Methods for Estimating Test Score Distributions.
ERIC Educational Resources Information Center
Kolen, Michael J.
1991-01-01
Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: kernel method, strong true-score model-based method, and method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…
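Of the methods reviewed, the kernel approach is the easiest to sketch: each observed score count is spread by a Gaussian kernel and the result renormalized over the score range. The bandwidth and the renormalization below are illustrative assumptions, not Kolen's exact procedure.

```python
import math

def kernel_smooth_distribution(counts, bandwidth=1.0):
    """Gaussian-kernel smoothing of a discrete test-score distribution.

    counts: dict mapping integer score -> number of examinees.
    Returns smoothed probabilities over the observed scores, summing to 1.
    """
    scores = sorted(counts)
    total = sum(counts.values())
    dens = []
    for s in scores:
        d = sum(c * math.exp(-0.5 * ((s - t) / bandwidth) ** 2)
                for t, c in counts.items()) / total
        dens.append(d)
    z = sum(dens)  # renormalize over the discrete score points
    return {s: d / z for s, d in zip(scores, dens)}
```

Smoothing preserves the gross shape (symmetry, mode) while ironing out sampling noise, which is what makes such estimates useful for norming and equating.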
Courtright, Stephen H; McCormick, Brian W; Postlethwaite, Bennett E; Reeves, Cody J; Mount, Michael K
2013-07-01
Despite the wide use of physical ability tests for selection and placement decisions in physically demanding occupations, research has suggested that there are substantial male-female differences on the scores of such tests, contributing to adverse impact. In this study, we present updated, revised meta-analytic estimates of sex differences in physical abilities and test 3 moderators of these differences (selection system design, specificity of measurement, and training) in order to provide insight into possible methods of reducing sex differences on physical ability test scores. Findings revealed that males score substantially better on muscular strength and cardiovascular endurance tests but that there are no meaningful sex differences on movement quality tests. These estimates differ in several ways from past estimates. Results showed that sex differences are similar across selection systems that emphasize basic ability tests versus job simulations. Results also showed that sex differences are smaller for narrow dimensions of muscular strength and that there is substantial variance in the sex differences in muscular strength across different body regions. Finally, we found that training led to greater increases in performance for women than for men on both muscular strength and cardiovascular endurance tests. However, training reduced the male-female differences on muscular strength tests only modestly and actually increased male-female differences on cardiovascular endurance. We discuss the implications of these findings for research on physical ability testing and adverse impact, as well as the practical implications of the results. PMID:23731029
Cross-Validation of the Quick Word Test as an Estimator of Adult Mental Ability
ERIC Educational Resources Information Center
Grotelueschen, Arden; McQuarrie, Duncan
1970-01-01
This report provides additional evidence that the Quick Word Test (Level 2, Form AM) is valid for estimating adult mental ability as defined by the Wechsler Adult Intelligence Scale. The validation sample is also described to facilitate use of the conversion table developed in the cross-validation analysis. (Author/LY)
The choice of the ability estimate with asymptotically correct standardized person-fit statistics.
Sinharay, Sandip
2016-05-01
Snijders (2001, Psychometrika, 66, 331) suggested a statistical adjustment to obtain the asymptotically correct standardized versions of a specific class of person-fit statistics. His adjustment has been used to obtain the asymptotically correct standardized versions of several person-fit statistics including the lz statistic (Drasgow et al., 1985, Br. J. Math. Stat. Psychol., 38, 67), the infit and outfit statistics (e.g., Wright & Masters, 1982, Rating scale analysis, Chicago, IL: Mesa Press), and the standardized extended caution indices (Tatsuoka, 1984, Psychometrika, 49, 95). Snijders (2001), van Krimpen-Stoop and Meijer (1999, Appl. Psychol. Meas., 23, 327), Magis et al. (2012, J. Educ. Behav. Stat., 37, 57), Magis et al. (2014, J. Appl. Meas., 15, 82), and Sinharay (2015b, Psychometrika, doi:10.1007/s11336-015-9465-x, 2016b, Corrections of standardized extended caution indices, Unpublished manuscript) have used the maximum likelihood estimate, the weighted likelihood estimate, and the posterior mode of the examinee ability with the adjustment of Snijders (2001). This paper broadens the applicability of the adjustment of Snijders (2001) by showing how other ability estimates such as the expected a posteriori estimate, the biweight estimate (Mislevy & Bock, 1982, Educ. Psychol. Meas., 42, 725), and the Huber estimate (Schuster & Yuan, 2011, J. Educ. Behav. Stat., 36, 720) can be used with the adjustment. A simulation study is performed to examine the Type I error rate and power of two asymptotically correct standardized person-fit statistics with several ability estimates. A real data illustration follows. PMID:27062601
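To make the object of study concrete, the following is a minimal sketch of the *uncorrected* lz person-fit statistic (Drasgow et al., 1985) that the abstract refers to. It standardizes the log-likelihood of a response pattern at given item probabilities; note that Snijders' (2001) adjustment, which corrects the variance term for the fact that ability is estimated rather than known, is not reproduced here.

```python
import math

def lz_statistic(responses, probs):
    """Uncorrected standardized log-likelihood person-fit statistic lz.

    responses: list of 0/1 item scores.
    probs: model-implied probabilities of a correct response, evaluated
    at the examinee's ability estimate (assumed given here).
    Large negative values flag aberrant (misfitting) response patterns.
    """
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    mean = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    var = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - mean) / math.sqrt(var)
```

An aberrant pattern (missing easy items while answering hard ones) yields a strongly negative lz, while a pattern consistent with the probabilities stays near zero.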
[The study of tool use as the way for general estimation of cognitive abilities in animals].
Reznikova, Zh I
2006-01-01
Investigation of tool use is an effective way to determine cognitive abilities of animals. This approach raises hypotheses that delineate the limits of animals' competence in understanding object properties and interrelations, and the influence of individual and social experience on their behaviour. On the basis of a brief review of different models of manipulation with objects and tool manufacturing (detaching, subtracting and reshaping) by various animals (from elephants to ants) in natural conditions, the experimental data concerning tool usage are considered. Tool behaviour of animals is rarely observed, and its distribution among different taxa is rather uneven. Recent studies have revealed that some species (for instance, bonobos and tamarins) which did not manipulate tools in the wild appear to be advanced tool users and even manufacturers in the laboratory. Experimental studies of animal tool use include investigation of their ability to use objects' physical properties, to categorize objects involved in tool activity by functional properties, to take forces affecting objects into account, as well as their capacity for planning their actions. The crucial question is whether animals can abstract general principles of relations between objects regardless of the exact circumstances, or whether they develop specific associations between concrete things and situations. The effectiveness of laboratory methods is estimated in the review based on comparative studies of tool behaviour, such as the "support problem", "stick problem", "tube- and tube-trap problem", and "reserve tube problem". Levels of social learning, the role of imprinting, and species-specific predisposition to formation of specific domains are discussed. Experimental investigation of tool use allows estimation of the individuals' intelligence in populations. A hypothesis suggesting that strong predisposition to formation of specific associations can serve as a driving force and at the same time as
ERIC Educational Resources Information Center
Zopluoglu, Cengiz; Davenport, Ernest C., Jr.
2011-01-01
The purpose of this study was to examine the effects of answer copying on the ability level estimates of cheater examinees in answer copying pairs. The study generated answer copying pairs for each of 1440 conditions, source ability (12) x cheater ability (12) x amount of copying (10). The average difference between the ability level estimates…
Robust parameter estimation method for bilinear model
NASA Astrophysics Data System (ADS)
Ismail, Mohd Isfahani; Ali, Hazlina; Yahaya, Sharipah Soaad S.
2015-12-01
This paper proposes a method of parameter estimation for the bilinear model, specifically the BL(1,0,1,1) model, without and with the presence of an additive outlier (AO). In this study, the parameters of the BL(1,0,1,1) model are estimated using the nonlinear least squares (LS) method and also through robust approaches. The LS method employs the Newton-Raphson (NR) iterative procedure to estimate the parameters of the bilinear model, but LS estimates can be distorted by the occurrence of outliers. As a solution, this study proposes robust approaches to deal with the problem of outliers, specifically AO, in the BL(1,0,1,1) model. In the robust estimation method, we propose to modify the NR procedure with robust scale estimators. We introduce two robust scale estimators, namely the normalized median absolute deviation (MADn) and Tn, in the linear autoregressive model AR(1), which are adequate and suitable for the bilinear BL(1,0,1,1) model. We use the estimated parameter value in the AR(1) model as an initial value in estimating the parameter values of the BL(1,0,1,1) model. The investigation of the performance of the LS and robust estimation methods in estimating the coefficients of the BL(1,0,1,1) model is carried out through a simulation study. The performance of both methods is assessed in terms of bias values. Numerical results show that the robust estimation method performs better than the LS method in estimating the parameters, both without and with the presence of AO.
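The MADn scale estimator mentioned above is standard and simple to state; the following is a minimal sketch (the constant 1.4826 makes it consistent for the standard deviation under normality). How the authors embed it into the Newton-Raphson iteration is not reproduced here.

```python
import statistics

def madn(x):
    """Normalized median absolute deviation, a robust scale estimator.

    Unlike the sample standard deviation, MADn has a 50% breakdown
    point, so a few additive outliers barely move it.
    """
    med = statistics.median(x)
    return 1.4826 * statistics.median(abs(v - med) for v in x)
```

For example, one gross outlier inflates the ordinary standard deviation by an order of magnitude while leaving MADn essentially unchanged.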
Reliability Estimation Methods for Liquid Rocket Engines
NASA Astrophysics Data System (ADS)
Hirata, Kunio; Masuya, Goro; Kamijo, Kenjiro
Reliability estimation using the dispersive, binomial distribution method has traditionally been used to certify the reliability of liquid rocket engines, but its estimates have sometimes disagreed with the failure rates of flight engines. In order to obtain better results, the reliability growth model and the failure distribution method are applied to estimate the reliability of LE-7A engines, which have propelled the first stage of H-2A launch vehicles.
Murray, Aja; McKenzie, Karen; Booth, Tom; Murray, George
2013-11-01
Screening tools can provide an indication of whether a child may have an intellectual disability (ID). Item response theory (IRT) analyses can be used to assess whether the statistical properties of the tools are such that their utility extends beyond their use as a screen for ID. We used non-parametric IRT scaling analyses to investigate whether the Child and Adolescent Intellectual Disability Screening Questionnaire (CAIDS-Q) possessed the statistical properties that would suggest its use could be extended to estimate levels of functional ability and to estimate which (if any) features associated with intellectual impairment are consistently indicative of lower or higher levels of functional ability. The validity of the two proposed applications was assessed by evaluating whether the CAIDS-Q conformed to the properties of the Monotone Homogeneity Model (MHM), characterised by uni-dimensionality, local independence and latent monotonicity and the Double Monotone Model (DMM), characterised by the assumptions of the MHM and, in addition, of non-intersecting item response functions. We analysed these models using CAIDS-Q data from 319 people referred to child clinical services. Of these, 148 had a diagnosis of ID. The CAIDS-Q was found to conform to the properties of the MHM but not the DMM. In practice, this means that the CAIDS-Q total scores can be used to quickly estimate the level of a person's functional ability. However, items of the CAIDS-Q did not show invariant item ordering, precluding the use of individual items in isolation as accurate indices of a person's level of functional ability. PMID:24036121
ERIC Educational Resources Information Center
Attivo, Barbara; Trueblood, Cecil R.
The purpose of this study was to investigate how the nature of metric estimation skill instruction affects prospective elementary and special education teachers' abilities to estimate metric length, area, and volume. Four types of estimation skills were identified by an estimation matrix. Three instructional strategies were selected: (1) a…
ERIC Educational Resources Information Center
Carter, Shani D.
2008-01-01
The paper proposes a theory that trainees have varying ability levels across different factors of cognitive ability, and that these abilities are used in varying levels by different training methods. The paper reviews characteristics of training methods and matches these characteristics to different factors of cognitive ability. The paper proposes…
ERIC Educational Resources Information Center
Jones, Douglas H.; And Others
How accurately ability is estimated when the test model does not fit the data is considered. To address this question, this study investigated the accuracy of the maximum likelihood estimator of ability for the one-, two- and three-parameter logistic (PL) models. The models were fitted to generated item characteristic curves derived from the…
ERIC Educational Resources Information Center
Xu, Xueli; Jia, Yue
2011-01-01
Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…
ERIC Educational Resources Information Center
Jones, Douglas H.
New ability estimators have been proposed by Wainer and Wright (1980) and Mislevy and Bock (1981) that are resistant against guessing and careless behaviors exhibited by some examinees. This paper presents another class of ability estimators that are resistant to departures from the underlying assumptions concerning guessing and carelessness. In…
Imagining the Music: Methods for Assessing Musical Imagery Ability
ERIC Educational Resources Information Center
Clark, Terry; Williamon, Aaron
2012-01-01
Timing profiles of live and imagined performances were compared with the aim of creating a context-specific measure of musicians' imagery ability. Thirty-two advanced musicians completed imagery use and vividness surveys, and then gave two live and two mental performances of a two-minute musical excerpt, tapping along with the beat of the mental…
Developing Writing-Reading Abilities through Semiglobal Methods
ERIC Educational Resources Information Center
Macri, Cecilia; Bocos, Musata
2013-01-01
This research was intended to underline the importance of the semi-global strategies used within thematic projects for developing writing/reading abilities in first-grade pupils. Four different coordinates were chosen to be the main variables of this research: the level of phonological awareness, the degree in which writing-reading…
NASA Astrophysics Data System (ADS)
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2009-12-01
The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. Torque control parameters (KP: proportional gain, KD: derivative gain) and pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least-squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, separated by 10 [s] stationary intervals, with their neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
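The core identification step above fits the ankle-torque model tau = KP*theta + KD*theta_dot to measured data. As a minimal sketch, the batch least-squares version of that fit can be written with 2x2 normal equations (the paper uses a recursive fixed-trace variant, which this simplified batch form does not reproduce):

```python
def estimate_pd_gains(theta, dtheta, torque):
    """Batch least-squares fit of tau = KP*theta + KD*dtheta.

    theta: inclination angles, dtheta: angular velocities,
    torque: measured (or computed) ankle torques. Solves the 2x2
    normal equations directly; a stand-in for recursive least squares.
    """
    s_tt = sum(t * t for t in theta)
    s_dd = sum(d * d for d in dtheta)
    s_td = sum(t * d for t, d in zip(theta, dtheta))
    s_ty = sum(t * y for t, y in zip(theta, torque))
    s_dy = sum(d * y for d, y in zip(dtheta, torque))
    det = s_tt * s_dd - s_td * s_td  # must be nonzero (signals not collinear)
    kp = (s_dd * s_ty - s_td * s_dy) / det
    kd = (s_tt * s_dy - s_td * s_ty) / det
    return kp, kd
```

Given noise-free data generated from known gains, the fit recovers them exactly, which is a useful sanity check before moving to the recursive form.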
Cook, Andrea J.; Elmore, Joann G.; Zhu, Weiwei; Jackson, Sara L.; Carney, Patricia A.; Flowers, Chris; Onega, Tracy; Geller, Berta; Rosenberg, Robert D.; Miglioretti, Diana L.
2013-01-01
Objective To determine if U.S. radiologists accurately estimate their own interpretive performance of screening mammography and how they compare their performance to their peers’. Materials and Methods 174 radiologists from six Breast Cancer Surveillance Consortium (BCSC) registries completed a mailed survey between 2005 and 2006. Radiologists’ estimated and actual recall, false positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography were compared. Radiologists’ ratings of their performance as lower, similar, or higher than their peers were compared to their actual performance. Associations with radiologist characteristics were estimated using weighted generalized linear models. The study was approved by the institutional review boards of the participating sites, informed consent was obtained from radiologists, and procedures were HIPAA compliant. Results While most radiologists accurately estimated their cancer detection and recall rates (74% and 78% of radiologists), fewer accurately estimated their false positive rate and PPV2 (19% and 26%). Radiologists reported having similar (43%) or lower (31%) recall rates and similar (52%) or lower (33%) false positive rates compared to their peers, and similar (72%) or higher (23%) cancer detection rates and similar (72%) or higher (38%) PPV2. Estimation accuracy did not differ by radiologists’ characteristics except radiologists who interpret ≤1,000 mammograms annually were less accurate at estimating their recall rates. Conclusion Radiologists perceive their performance to be better than it actually is and at least as good as their peers. Radiologists have particular difficulty estimating their false positive rates and PPV2. PMID:22915414
ERIC Educational Resources Information Center
Klinkenberg, S.; Straatemeier, M.; van der Maas, H. L. J.
2011-01-01
In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of…
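As a rough illustration of the Elo-style updating the abstract mentions, the following sketch performs one rating update after a single item response. The logistic expected-score curve and the step size k are illustrative assumptions; the Maths Garden model extends plain Elo with an explicit scoring rule that incorporates response time, which is not reproduced here.

```python
import math

def elo_update(ability, difficulty, correct, k=0.3):
    """One Elo-style update of a player ability and an item difficulty.

    correct: 1 if the child answered the item correctly, else 0.
    The expected score follows a logistic (Rasch-like) curve in the
    ability-difficulty gap; ratings move toward reducing surprise.
    """
    expected = 1.0 / (1.0 + math.exp(-(ability - difficulty)))
    ability_new = ability + k * (correct - expected)
    difficulty_new = difficulty - k * (correct - expected)
    return ability_new, difficulty_new
```

A correct answer raises the estimated ability and lowers the item's estimated difficulty by the same amount, so both scales stay calibrated against each other as practice data accumulate.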
New Measurement Methods of Network Robustness and Response Ability via Microarray Data
Tu, Chien-Ta; Chen, Bor-Sen
2013-01-01
“Robustness”, the network ability to maintain systematic performance in the face of intrinsic perturbations, and “response ability”, the network ability to respond to external stimuli or transduce them to downstream regulators, are two important complementary system characteristics that must be considered when discussing biological system performance. However, at present, these features cannot be measured directly for all network components in an experimental procedure. Therefore, we present two novel systematic measurement methods – Network Robustness Measurement (NRM) and Response Ability Measurement (RAM) – to estimate the network robustness and response ability of a gene regulatory network (GRN) or protein-protein interaction network (PPIN) based on the dynamic network model constructed by the corresponding microarray data. We demonstrate the efficiency of NRM and RAM in analyzing GRNs and PPINs, respectively, by considering aging- and cancer-related datasets. When applied to an aging-related GRN, our results indicate that such a network is more robust to intrinsic perturbations in the elderly than in the young, and is therefore less responsive to external stimuli. When applied to a PPIN of fibroblast and HeLa cells, we observe that the network of cancer cells possesses better robustness than that of normal cells. Moreover, the response ability of the PPIN calculated from the cancer cells is lower than that from healthy cells. Accordingly, we propose that generalized NRM and RAM methods represent effective tools for exploring and analyzing different systems-level dynamical properties via microarray data. Making use of such properties can facilitate prediction and application, providing useful information on clinical strategy, drug target selection, and design specifications of synthetic biology from a systems biology perspective. PMID:23383119
D'Aniello, Guido Edoardo; Scarpina, Federica; Albani, Giovanni; Castelnuovo, Gianluca; Mauro, Alessandro
2015-08-01
The cognitive estimation test (CET) measures cognitive estimation abilities: it assesses the ability to apply reasoning strategies to answer questions that usually cannot lead to a clear and exact reply. Since it requires the activation of an intricate ensemble of cognitive functions, there is an ongoing debate in the literature regarding whether the CET represents a measurement of global cognitive abilities or a pure measure of executive functions. In the present study, CET together with a neuropsychological assessment focused on executive functions was administered in thirty patients with Parkinson's disease without signs of dementia. The CET correlated with measures of verbal working memory and semantic knowledge, but not with other dimensions of executive domains, such as verbal phonemic fluency, ability to manage real-world interferences, or visuospatial reasoning. According to our results, cognitive estimation abilities appeared to trigger a defined cognitive path that includes executive functions, namely, working memory and semantic knowledge. PMID:25791888
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W.
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
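Of the three semivariogram models the abstract lists, the spherical model is the most common and easy to state; the following is a minimal sketch of it (the paper's estimation charts themselves are not reproduced).

```python
def spherical_semivariogram(h, sill, rng):
    """Spherical semivariogram model.

    h: lag distance, sill: variance plateau, rng: autocorrelation range.
    Rises smoothly from 0 and reaches the sill exactly at the range,
    beyond which samples are uncorrelated.
    """
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)
```

In an interwell-autocorrelation workflow of the kind described, such a model would be fitted separately in the vertical and areal directions, and the ratio of the resulting variances compared against the charts.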
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
Estimation method for serial dilution experiments.
Ben-David, Avishai; Davidson, Charles E
2014-12-01
Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. PMID:25205541
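To fix ideas, the sketch below computes the expected colony count at each step of a serial dilution and picks the first plate falling in the conventional 30-300 countable window. Note this window is the textbook rule of thumb, not the paper's criterion: the method above instead weights plates by colony size and plate area to model miscounting risk.

```python
def expected_counts(c0, dilution_ratio, n_dilutions, plated_volume_ml=0.1):
    """Expected colony count on each plate of a serial dilution.

    c0: stock concentration in CFU/ml (hypothetical example input).
    Each step divides the concentration by dilution_ratio; a fixed
    aliquot of plated_volume_ml is spread on each plate.
    """
    return [c0 * plated_volume_ml / dilution_ratio ** k
            for k in range(1, n_dilutions + 1)]

def best_countable(counts, low=30, high=300):
    """Index of the first plate in the conventional countable window."""
    for i, c in enumerate(counts):
        if low <= c <= high:
            return i
    return None
```

For a 10^7 CFU/ml stock diluted tenfold six times with 0.1 ml plated, the expected counts run 10^5, 10^4, ..., 1, and the fourth plate (about 100 colonies) is the countable one.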
Efficient Methods of Estimating Switchgrass Biomass Supplies
Technology Transfer Automated Retrieval System (TEKTRAN)
Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...
The Study on Educational Technology Abilities Evaluation Method
NASA Astrophysics Data System (ADS)
Jing, Duan
With traditional evaluation methods, the test often does not really measure what we want to measure, so test results alone cannot serve as a basis for evaluation, and the weight given to them is questionable. The system described here makes full use of educational technology and is grounded in educational and psychological theory. Taking the evaluated subjects as its basis and employing dedicated evaluation tools, it aims to evaluate the educational technology abilities of primary and secondary school teachers and, using a variety of evaluation methods from various angles, establishes an informal evaluation system.
Fused methods for visual saliency estimation
NASA Astrophysics Data System (ADS)
Danko, Amanda S.; Lyu, Siwei
2015-02-01
In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements upon recall and f-measure metrics with comparable precision. In addition, we find that all image searches using our fused method return more correct images and additionally rank them higher than the searches using the original methods alone.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
Implicit solvent methods for free energy estimation
Decherchi, Sergio; Masetti, Matteo; Vyalov, Ivan; Rocchia, Walter
2014-01-01
Solvation is a fundamental contribution in many biological processes and especially in molecular binding. Its estimation can be performed by means of several computational approaches. The aim of this review is to give an overview of existing theories and methods to estimate solvent effects, with a specific focus on the category of implicit solvent models and their use in molecular dynamics. In many of these models, the solvent is considered as a continuum homogeneous medium, while the solute can be represented at atomic detail and at different levels of theory. Despite their degree of approximation, implicit methods are still widely employed due to their trade-off between accuracy and efficiency. Their derivation is rooted in the statistical mechanics and integral equations disciplines, some of the related details being provided here. Finally, methods that combine implicit solvent models and molecular dynamics simulation are briefly described. PMID:25193298
A method for estimating soil moisture availability
NASA Technical Reports Server (NTRS)
Carlson, T. N.
1985-01-01
A method for estimating values of soil moisture based on measurements of infrared surface temperature is discussed. A central element in the method is a boundary layer model. Although it has been shown that soil moistures determined by this method using satellite measurements do correspond in a coarse fashion to the antecedent precipitation, the accuracy and exact physical interpretation (with respect to ground water amounts) are not well known. This area of ignorance, which currently impedes the practical application of the method to problems in hydrology, meteorology and agriculture, is largely due to the absence of corresponding surface measurements. Preliminary field measurements made over France have led to the development of a promising vegetation formulation (Taconet et al., 1985), which has been incorporated in the model. It is necessary, however, to test the vegetation component, and the entire method, over a wide variety of surface conditions and crop canopies.
Comparative yield estimation via shock hydrodynamic methods
Attia, A.V.; Moran, B.; Glenn, L.A.
1991-06-01
Shock time-of-arrival (TOA) data (CORRTEX) from recent underground nuclear explosions in saturated tuff were used to estimate yield via the simulated explosion-scaling method. The sensitivity of the derived yield to uncertainties in the measured shock Hugoniot, release adiabats, and gas porosity is the main focus of this paper. In this method for determining yield, we assume a point-source explosion in an infinite homogeneous material. The rock model is formulated using laboratory experiments on core samples taken prior to the explosion. Results show that increasing gas porosity from 0% to 2% causes a 15% increase in yield per ms/kt^(1/3). 6 refs., 4 figs.
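The explosion-scaling idea rests on cube-root scaling: for a point source, arrival time at radius r for yield W satisfies t(r; W) = W^(1/3) * t1(r / W^(1/3)), where t1 is the scaled TOA curve from a 1-kt reference simulation. A minimal grid-search sketch of a yield fit on that basis follows; the reference curve here is a made-up stand-in, not the paper's hydrodynamic simulation.

```python
def estimate_yield(radii, toas, ref_scaled_toa, candidate_yields):
    """Grid-search yield estimate via cube-root (W^(1/3)) scaling.

    ref_scaled_toa: function giving simulated shock arrival time at a
    scaled radius r/W^(1/3) for a 1-kt source (hypothetical reference).
    Returns the candidate yield minimizing the squared TOA misfit.
    """
    def misfit(w):
        s = w ** (1.0 / 3.0)  # cube-root scale factor
        return sum((t - s * ref_scaled_toa(r / s)) ** 2
                   for r, t in zip(radii, toas))
    return min(candidate_yields, key=misfit)
```

With synthetic TOA data generated from a known yield, the search recovers that yield, which is the basic consistency check before propagating Hugoniot or porosity uncertainties through the fit.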
Dykiert, Dominika; Deary, Ian J
2013-12-01
In order to assess the degree of cognitive decline resulting from a pathological state, such as dementia, or from a normal aging process, it is necessary to know or to have a valid estimate of premorbid (or prior) cognitive ability. The National Adult Reading Test (NART; Nelson & Willison, 1991) and the Wechsler Test of Adult Reading (WTAR; Psychological Corporation, 2001) are 2 tests developed to estimate premorbid or prior ability. Due to the rarity of actual prior ability data, validation studies usually compare NART/WTAR performance with measures of current abilities in pathological and nonpathological groups. In this study, we validate the use of WTAR scores and extend the validation of the use of NART scores as estimates of prior ability, vis-à-vis the actual prior (childhood) cognitive ability. We do this in a large sample of healthy older people, the Lothian Birth Cohort 1936 (Deary, Gow, Pattie, & Starr, 2012; Deary et al., 2007). Both NART and WTAR scores were correlated with cognitive ability tested in childhood (r = .66-.68). Scores on both the NART and the WTAR had high stability over a period of 3 years in old age (r in excess of .90) and high interrater reliability. The NART accounted for more unique variance in childhood intelligence than did the WTAR. PMID:23815111
On methods of estimating cosmological bulk flows
NASA Astrophysics Data System (ADS)
Nusser, Adi
2016-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B under either of these definitions, which coincide only for a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities under the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it would be equivalent to the CMV except for the imposed constraint. For a prior with vanishing correlations, or for very noisy data, CMV reproduces the standard maximum likelihood estimate of B for the entire sample, independent of the radial weighting function. This estimator is therefore likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
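The standard maximum likelihood estimate of B mentioned above can be sketched as follows (a generic textbook form, not the paper's code; variable names are ours):

```python
import numpy as np

def ml_bulk_flow(rhat, u, sigma):
    """Maximum-likelihood bulk flow from radial peculiar velocities.

    rhat  : (N, 3) unit line-of-sight vectors toward each galaxy
    u     : (N,)   observed radial peculiar velocities
    sigma : (N,)   per-galaxy velocity errors
    Solves A B = b with A = sum_n w_n rhat_n rhat_n^T and
    b = sum_n w_n u_n rhat_n, where w_n = 1/sigma_n^2.
    """
    w = 1.0 / sigma**2
    A = np.einsum('n,ni,nj->ij', w, rhat, rhat)  # 3x3 moment matrix
    b = np.einsum('n,n,ni->i', w, u, rhat)
    return np.linalg.solve(A, b)
```

For a pure bulk flow with no noise the estimator recovers B exactly; a radial window function would enter as an extra factor in the weights.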
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1994-01-01
NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that sound decisions can be made. Because of the long lead times required to develop space hardware, cost estimates are frequently required 10 to 15 years before the program delivers hardware. In the conceptual phases of a program, the system design is usually only vaguely defined, and the technology required is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost is discussed. In particular, the relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical database of major research and development projects.
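The weight-cost relationship described above is commonly modeled as a power law; a minimal sketch with invented (non-NASA) data:

```python
import numpy as np

# Hypothetical historical projects: dry weight (kg) and development cost ($M).
# All numbers are illustrative, not real program data.
weight = np.array([120.0, 350.0, 800.0, 1500.0, 2600.0])
cost = np.array([45.0, 110.0, 210.0, 340.0, 520.0])

# Fit cost = a * weight**b by least squares in log-log space.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

def predict_cost(w):
    """Parametric cost estimate for a new system of weight w (illustrative)."""
    return a * w**b
```

An exponent b below 1 reflects the usual economy of scale: cost grows more slowly than weight.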
2013-01-01
Background The use of vasoconstrictors can affect the dynamic indices used to predict fluid responsiveness. We investigated the effects of an increase of vascular tone on dynamic variables of fluid responsiveness in a rabbit model of hemorrhage, and examined the ability of the arterial-pressure surrogate dynamic indices to track stroke volume variation (SVV) during hypovolemia under increased vasomotor tone. Methods Eighteen anesthetized and mechanically ventilated rabbits were studied during normovolemia (BL) and after progressive blood removal (15 mL/kg, BW). Two other sets of data were obtained during PHE infusion with normovolemia (BL + PHE) and during hypovolemia (BW + PHE). We measured central venous and left ventricular (LV) pressures and infradiaphragmatic aortic blood flow (AoF) and pressure. Pulse pressure variation (PPV), systolic pressure variation (SPV), and SVV were estimated manually from the beat-to-beat variation of PP, SP, and SV, respectively. We also calculated PPVapnea as 100 × (PPmax − PPmin)/PP during apnea. Vasomotor tone was estimated by total peripheral resistance (TPR = mean aortic pressure/mean AoF), dynamic arterial elastance (Eadyn = PPV/SVV), and arterial compliance (C = SV/PP). We assessed LV preload by LV end-diastolic pressure (LVEDP). We compared the trending abilities of SVV and the pressure surrogate indices using four-quadrant plots and polar plots. Results Baseline PPV, SPV, PPVapnea, and SVV increased significantly during hemorrhage, with a decrease in AoF (P < 0.05). PHE induced a significant increase in TPR and Eadyn and a decrease in C in bled animals, and a further decrease in AoF with a significant decrease of all dynamic indices. There was a significant correlation between SVV and PPV, PPVapnea, and SPV under normal vasomotor tone (r2 ≥ 0.5). The concordance rate was 91%, 95%, and 76% between SVV and PPV, PPVapnea, and SPV, respectively, according to the polar plot analysis. During PHE infusion
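The manually estimated indices above can be written compactly; the sketch below assumes the same percentage form, 100 × (max − min)/mean over one ventilatory cycle, for all three indices (our illustration, not the study's analysis code):

```python
import numpy as np

def dynamic_indices(pp, sp, sv):
    """Beat-to-beat variation indices and dynamic arterial elastance.

    pp, sp, sv: arrays of beat-to-beat pulse pressure, systolic pressure,
    and stroke volume over one ventilatory cycle (hypothetical data).
    """
    pct_var = lambda x: 100.0 * (np.max(x) - np.min(x)) / np.mean(x)
    ppv, spv, svv = pct_var(pp), pct_var(sp), pct_var(sv)
    ea_dyn = ppv / svv  # Eadyn = PPV/SVV, as defined in the abstract
    return ppv, spv, svv, ea_dyn
```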
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
An Analytical Method of Estimating Turbine Performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1948-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.
Estimation of an Examinee's Ability in the Web-Based Computerized Adaptive Testing Program IRT-CAT
Park, Jung-Ho; Park, In-Yong
2006-01-01
We developed a program to estimate an examinee's ability in order to provide freely available access to a web-based computerized adaptive testing (CAT) program. We used PHP and JavaScript as the programming languages, PostgreSQL as the database management system on an Apache web server, and Linux as the operating system. We constructed a system that allows users to input and search items and to create tests. We performed ability estimation on each test based on the Rasch model and two- and three-parameter logistic models. Our system provides an algorithm for web-based CAT, replacing previous personal-computer-based ones, and makes it possible to estimate an examinee's ability immediately at the end of a test. PMID:19223996
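The Rasch ability estimate referred to above can be sketched as a Newton-Raphson maximum-likelihood update (a generic textbook form, not the IRT-CAT source code; the MLE is undefined for all-correct or all-incorrect response patterns):

```python
import math

def rasch_ability(responses, difficulties, n_iter=50):
    """ML ability estimate under the Rasch model P(correct) = 1/(1+exp(-(theta-b))).

    responses    : list of 0/1 item scores (must contain both 0s and 1s)
    difficulties : list of item difficulty parameters b
    """
    theta = 0.0
    for _ in range(n_iter):
        p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - pi for x, pi in zip(responses, p))  # score function
        info = sum(pi * (1.0 - pi) for pi in p)            # Fisher information
        theta += grad / info                               # Newton-Raphson step
    return theta
```

In a CAT loop, this estimate would be recomputed after each administered item to select the next one.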
NASA Astrophysics Data System (ADS)
Crowell, S.; Kawa, S. R.; Hammerling, D.; Moore, B., III; Rayner, P. J.
2014-12-01
In Hammerling et al., 2014 (H14), the authors demonstrated a geostatistical method for mapping satellite estimates of column-integrated CO2 mixing ratio, denoted XCO2, that incorporates the spatial variability in satellite-measured XCO2 as well as measurement precision. The goal of the study was to determine whether the Active Sensing of CO2 over Nights, Days and Seasons (ASCENDS) mission would be able to detect changes in XCO2 given changes in the underlying fluxes for different levels of instrument precision. Three scenarios were proposed: a flux-neutral shift in fossil fuel emissions from Europe to China (shown in the figure); a permafrost melting event; and interannual variability in the Southern Ocean. The conclusions of H14 were modest but favorable for detectability in each case by ASCENDS, given enough observations and sufficient precision. These signal detection experiments suggest that ASCENDS observations, together with a chemical transport model and a data assimilation methodology, would be sufficient to provide quality estimates of the underlying surface fluxes, so long as the ASCENDS observations are precise enough. In this work, we present results that bridge the gap between the previous signal detection work of H14 and the ability of transport models to recover flux perturbations from ASCENDS observations, utilizing the TM5-4DVAR data assimilation system. In particular, we explore the space of model and observational uncertainties that will yield useful scientific information in each of the flux perturbation scenarios. This work gives a sense of the ability of ASCENDS to address some of the foremost questions in carbon cycle science today. References: Hammerling, D., Kawa, S., Schaefer, K., and Michalak, A. (2014). Detectability of CO2 flux signals by a space-based lidar mission. Submitted.
Method for estimation of protein isoelectric point.
Pihlasalo, Sari; Auranen, Laura; Hänninen, Pekka; Härmä, Harri
2012-10-01
Adsorption of sample protein to Eu(3+) chelate-labeled nanoparticles is the basis of the developed noncompetitive and homogeneous method for the estimation of the protein isoelectric point (pI). The lanthanide ion of the nanoparticle surface-conjugated Eu(3+) chelate is dissociated at a low pH, therefore decreasing the luminescence signal. A nanoparticle-adsorbed sample protein prevents the dissociation of the chelate, leading to a high luminescence signal. The adsorption efficiency of the sample protein is reduced above the isoelectric point due to the decreased electrostatic attraction between the negatively charged protein and the negatively charged particle. Four proteins with isoelectric points ranging from ~5 to 9 were tested to show the performance of the method. These pI values measured with the developed method were close to the theoretical and experimental literature values. The method is sensitive and requires a low analyte concentration of submilligrams per liter, which is nearly 10000 times lower than the concentration required for the traditional isoelectric focusing. Moreover, the method is significantly faster and simpler than the existing methods, as a ready-to-go assay was prepared for the microtiter plate format. This mix-and-measure concept is a highly attractive alternative for routine laboratory work. PMID:22946671
Kundu, Suman; Mihaescu, Raluca; Meijer, Catherina M. C.; Bakker, Rachel; Janssens, A. Cecile J. W.
2014-01-01
Background: There is increasing interest in investigating genetic risk models in empirical studies, but such studies are premature when the expected predictive ability of the risk model is low. We assessed how accurately the predictive ability of genetic risk models can be estimated in simulated data that are created based on the odds ratios (ORs) and frequencies of single-nucleotide polymorphisms (SNPs) obtained from genome-wide association studies (GWASs). Methods: We aimed to replicate published prediction studies that reported the area under the receiver operating characteristic curve (AUC) as a measure of predictive ability. We searched GWAS articles for all SNPs included in these models and extracted ORs and risk allele frequencies to construct genotypes and disease status for a hypothetical population. Using these hypothetical data, we reconstructed the published genetic risk models and compared their AUC values to those reported in the original articles. Results: The accuracy of the AUC values varied with the method used for the construction of the risk models. When logistic regression analysis was used to construct the genetic risk model, AUC values estimated by the simulation method were similar to the published values, with a median absolute difference of 0.02 [range: 0.00, 0.04]. This difference was 0.03 [range: 0.01, 0.06] and 0.05 [range: 0.01, 0.08] for unweighted and weighted risk scores, respectively. Conclusions: The predictive ability of genetic risk models can be estimated using simulated data based on results from GWASs. Simulation methods can be useful to estimate the predictive ability in the absence of empirical data and to decide whether empirical investigation of genetic risk models is warranted. PMID:24982668
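The simulation idea described above can be sketched as follows; all frequencies, odds ratios, and the baseline prevalence are invented for illustration, not taken from any GWAS:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical SNP panel: risk-allele frequencies and per-allele odds ratios.
freqs = np.array([0.2, 0.35, 0.5, 0.1])
ors = np.array([1.5, 1.4, 1.3, 1.6])

n = 5000
genotypes = rng.binomial(2, freqs, size=(n, freqs.size))  # 0/1/2 risk alleles

# Weighted risk score: allele counts times log(OR).
score = genotypes @ np.log(ors)

# Disease status drawn from a logistic model with ~10% baseline prevalence.
p = 1.0 / (1.0 + np.exp(-(np.log(0.1 / 0.9) + score - score.mean())))
disease = rng.binomial(1, p)

# AUC via the Mann-Whitney formulation: probability that a random case
# outscores a random control, counting ties as one half.
cases, controls = score[disease == 1], score[disease == 0]
auc = (cases[:, None] > controls[None, :]).mean() \
      + 0.5 * (cases[:, None] == controls[None, :]).mean()
```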
A Novel Method for Estimating Linkage Maps
Tan, Yuan-De; Fu, Yun-Xin
2006-01-01
The goal of linkage mapping is to find the true order of loci on a chromosome. Since the number of possible orders is large even for a modest number of loci, finding the optimal solution is an NP-hard problem akin to the traveling salesman problem (TSP). Although a number of algorithms are available, many either are low in the accuracy of recovering the true order of loci or require tremendous amounts of computational resources, making them difficult to use for reconstructing a large-scale map. In this article we develop a novel method called unidirectional growth (UG) to help solve this problem. The UG algorithm sequentially constructs the linkage map on the basis of novel results about additive distance. It not only is fast but also has very high accuracy in recovering the true order of loci, according to our simulation studies. Since the UG method requires n − 1 cycles to estimate the ordering of n loci, it is particularly useful for estimating linkage maps consisting of hundreds or even thousands of linked codominant loci on a chromosome. PMID:16783016
ERIC Educational Resources Information Center
Wang, Wen-Chung
2008-01-01
Raju and Oshima (2005) proposed two prophecy formulas based on item response theory in order to predict the reliability of ability estimates for a test after change in its length. The first prophecy formula is equivalent to the classical Spearman-Brown prophecy formula. The second prophecy formula is misleading because of an underlying false…
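For reference, the classical Spearman-Brown prophecy formula, to which the first of the two formulas is equivalent, predicts reliability after the test length is multiplied by a factor k:

```python
def spearman_brown(reliability, k):
    """Predicted reliability of a test lengthened (k > 1) or shortened (k < 1)
    by factor k, given its current reliability (Spearman-Brown prophecy formula)."""
    return k * reliability / (1.0 + (k - 1.0) * reliability)
```

For example, doubling a test with reliability .80 predicts a reliability of about .89; the formula assumes the added items are parallel to the originals.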
ERIC Educational Resources Information Center
Matlock, Ki Lynn
2013-01-01
When test forms that have equal total test difficulty and number of items vary in difficulty and length within sub-content areas, an examinee's estimated score may vary across equivalent forms, depending on how well his or her true ability in each sub-content area aligns with the difficulty of items and number of items within these areas.…
Demographic estimation methods for plants with dormancy
Kery, M.; Gregg, K.B.
2004-01-01
Demographic studies in plants appear simple because, unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life-cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life-states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as OVFVVF000. A zero means that an individual was not seen in a year, and a letter denotes its state for years when it was seen aboveground; V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as OVFOOF000 may then occur, where the meaning of zeroes becomes ambiguous: a zero can mean either a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared among each other. In our talk and in Kery et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state-transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kery et al., submitted) and Cypripedium reginae (Kery & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620 marked
Aptitude treatment effects of laboratory grouping method for students of differing reasoning ability
NASA Astrophysics Data System (ADS)
Lawrenz, Frances; Munch, Theodore W.
This study examines aptitude treatment effects in an inquiry/learning-cycle-based physical science class for elementary education majors. The aptitude was formal reasoning ability, and the students were arranged into three groups: high-, middle-, and low-ability reasoners. The treatment was the method of forming groups for laboratory work. Students in each of three classes were grouped according to reasoning ability. In one class the laboratory groups were homogeneous, i.e., students of similar reasoning ability were grouped together. In the second class the students were grouped heterogeneously, i.e., students of different reasoning ability were grouped together. In the third class, the student-choice pattern, the students chose their own partners. The findings were that there was no aptitude treatment interaction for achievement or for gain in formal reasoning ability; that grouping students of similar cognitive ability together for laboratory work was more effective in terms of science achievement than grouping students of differing cognitive ability together or allowing students to choose their own partners; and that students at different levels of reasoning ability experienced differential gains in that ability over the semester.
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only three control parameters, and a much-discussed question is how to estimate them when a time series of some financial asset is given. Besides the maximum-likelihood technique, there is another method which uses the variance, the kurtosis, and the autocorrelation time to determine them. We propose here to use the standardized sixth moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performance with respect to the final probability density function and the time autocorrelation function, their scaling properties are very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics better fits the FTSE scaling exponent.
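A sketch of the Gaussian GARCH(1, 1) process and the unconditional-variance relation that moment-based estimators exploit (our illustration, not the authors' implementation):

```python
import numpy as np

def simulate_garch11(alpha0, alpha1, beta1, n, seed=0):
    """Simulate x_t = sigma_t * z_t with Gaussian z_t and
    sigma2_t = alpha0 + alpha1 * x_{t-1}**2 + beta1 * sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    sigma2 = alpha0 / (1.0 - alpha1 - beta1)  # start at the unconditional variance
    for t in range(n):
        x[t] = rng.normal() * np.sqrt(sigma2)
        sigma2 = alpha0 + alpha1 * x[t] ** 2 + beta1 * sigma2
    return x

# Moment matching: the sample variance pins down alpha0/(1 - alpha1 - beta1);
# the kurtosis and autocorrelation time (or a higher moment, as proposed here)
# fix the remaining parameters.
a0, a1, b1 = 0.1, 0.1, 0.8
x = simulate_garch11(a0, a1, b1, 100_000)
var_theory = a0 / (1.0 - a1 - b1)  # = 1.0 for these parameters
```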
ERIC Educational Resources Information Center
Ackerman, Phillip L.; Beier, Margaret E.
2007-01-01
Measures of perceptual speed ability have been shown to be an important part of assessment batteries for predicting performance on tasks and jobs that require a high level of speed and accuracy. However, traditional measures of perceptual speed ability sometimes have limited cost-effectiveness because of the requirements for administration and…
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS, because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when estimating the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. NESSUS is therefore an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
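The LHS-versus-MC comparison can be illustrated in one dimension (a toy sketch, not the NESSUS module): LHS places exactly one sample in each of n equal-probability strata, which typically reduces the error of mean estimates for smooth responses.

```python
import numpy as np

def lhs_uniform(n, seed=0):
    """One-dimensional Latin hypercube sample on [0, 1):
    one uniform draw per equal-width stratum, in random order."""
    rng = np.random.default_rng(seed)
    return rng.permutation((np.arange(n) + rng.random(n)) / n)

# Compare absolute errors of MC and LHS estimates of E[g(U)], U ~ Uniform(0,1).
g = lambda u: np.exp(u)      # smooth response; true mean is e - 1
true_mean = np.e - 1.0
n, reps = 200, 200
rng = np.random.default_rng(1)
mc_err = [abs(g(rng.random(n)).mean() - true_mean) for _ in range(reps)]
lhs_err = [abs(g(lhs_uniform(n, s)).mean() - true_mean) for s in range(reps)]
```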
ERIC Educational Resources Information Center
Freund, Philipp Alexander; Kasten, Nadine
2012-01-01
Individuals' perceptions of their own level of cognitive ability are expressed through self-estimates. They play an important role in a person's self-concept because they facilitate an understanding of how one's own abilities relate to those of others. People evaluate their own and other persons' abilities all the time, but self-estimates are also…
ERIC Educational Resources Information Center
Acredolo, Curt; And Others
1989-01-01
Two studies assessed 90 elementary school students' attention to the total number of alternative and target outcomes when making probability estimates. All age groups attended to variations in the denominator and numerator and the interaction between these variables. (RJC)
Bayes method for low rank tensor estimation
NASA Astrophysics Data System (ADS)
Suzuki, Taiji; Kanagawa, Heishiro
2016-03-01
We investigate the statistical convergence rate of a Bayesian low-rank tensor estimator, and construct a Bayesian nonlinear tensor estimator. The problem setting is the regression problem where the regression coefficient forms a tensor structure. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatio-temporal data analysis. The convergence rate of the Bayes tensor estimator is analyzed in terms of both in-sample and out-of-sample predictive accuracies. It is shown that a fast learning rate is achieved without any strong convexity of the observation. Moreover, we extend the tensor estimator to a nonlinear function estimator so that we estimate a function that is a tensor product of several functions.
Development of Methods of Evaluating Abilities to Make Plans in New Group Work
NASA Astrophysics Data System (ADS)
Kiriyama, Satoshi
The ability to evaluate something vague, such as originality, can be regarded as one of the important elements that constitute the ability to make plans. The author has made use of cooperative activities in which every member undertakes a stage of a plan-do-check cycle in order to develop training methods and methods of evaluating this ability. The members of a CHECK team evaluated the activities of a PLAN team and a DO team. The author tried to grasp the abilities of the members of the CHECK team by analyzing the results of the evaluation. In addition, the author had some teachers evaluate a sample in order to study the accuracy of the criteria, and extracted some remaining challenges.
Using optimal estimation method for upper atmospheric Lidar temperature retrieval
NASA Astrophysics Data System (ADS)
Zou, Rongshi; Pan, Weilin; Qiao, Shuai
2016-07-01
Conventional ground-based Rayleigh lidar temperature retrievals use an integration technique with a well-known limitation: temperatures retrieved at the greatest heights must be discarded because the integration is initialized with an assumed seed value at the highest altitude. Here we suggest a method that can incorporate information from various sources to improve the quality of the retrieval. This approach inverts the lidar equation via the optimal estimation method (OEM), based on Bayesian theory together with a Gaussian statistical model. It presents several advantages over the conventional approach: 1) it can incorporate information from multiple heterogeneous sources; 2) it provides diagnostic information about retrieval quality; and 3) it can determine the vertical resolution and the maximum height to which the retrieval is mostly independent of the a priori profile. This paper compares one-hour temperature profiles retrieved using the conventional and optimal estimation methods at Golmud, Qinghai province, China. The OEM results show better agreement with the SABER profile than the conventional results, although in some regions the OEM temperature is much lower than the SABER profile, a very different result from previous studies; further work is needed to explain this discrepancy. The success of applying OEM to temperature retrieval validates its use as a retrieval framework in large synthetic observation systems comprising various active remote sensing instruments, by incorporating all available measurement information into the model and analyzing groups of measurements simultaneously to improve the results.
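The linear-Gaussian OEM update at the heart of such retrievals can be sketched as follows (a generic form; the actual lidar forward model is nonlinear and would be iterated):

```python
import numpy as np

def oem_retrieval(y, K, Se, xa, Sa):
    """Linear optimal-estimation (MAP) retrieval.

    y  : measurement vector        K  : forward-model Jacobian
    Se : measurement covariance    xa : a priori state, Sa its covariance
    Returns the retrieved state, its covariance, and the averaging kernel
    A = G K, whose rows diagnose resolution and a priori dependence.
    """
    Se_inv = np.linalg.inv(Se)
    S_hat = np.linalg.inv(np.linalg.inv(Sa) + K.T @ Se_inv @ K)  # posterior cov
    G = S_hat @ K.T @ Se_inv                                     # gain matrix
    x_hat = xa + G @ (y - K @ xa)
    return x_hat, S_hat, G @ K
```

Where the averaging-kernel rows approach unit area, the retrieval at that height is mostly independent of the a priori profile; this is one way the maximum useful retrieval height can be diagnosed.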
Two Prophecy Formulas for Assessing the Reliability of Item Response Theory-Based Ability Estimates
ERIC Educational Resources Information Center
Raju, Nambury S.; Oshima, T.C.
2005-01-01
Two new prophecy formulas for estimating item response theory (IRT)-based reliability of a shortened or lengthened test are proposed. Some of the relationships between the two formulas, one of which is identical to the well-known Spearman-Brown prophecy formula, are examined and illustrated. The major assumptions underlying these formulas are…
Simultaneous Estimation of Overall and Domain Abilities: A Higher-Order IRT Model Approach
ERIC Educational Resources Information Center
de la Torre, Jimmy; Song, Hao
2009-01-01
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
ERIC Educational Resources Information Center
Fink, A.; Neubauer, A. C.
2005-01-01
In experimental time estimation research, it has consistently been found that the more a person is engaged in some kind of demanding cognitive activity within a given period of time, the more experienced duration of this time interval decreases. However, the role of individual differences has been largely ignored in this field of research. In a…
ERIC Educational Resources Information Center
Lawrenz, Frances
1985-01-01
Determined: (1) if elementary education majors (N=91) from different levels of reasoning ability learned more science concepts under different grouping methods in an inquiry/learning cycle-based physical science class; and (2) if these students became able to reason more effectively under the different grouping methods. (JN)
Comparisons of Four Methods for Estimating a Dynamic Factor Model
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.
2008-01-01
Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
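The first method can be illustrated with a scalar-factor state-space sketch (our simplification of a DAFS-type model to one factor; not the authors' code):

```python
import numpy as np

def kalman_filter_dafs(y, lam, B, Q, R, f0=0.0, P0=1.0):
    """Kalman filter for a one-factor model:
        f_t = B f_{t-1} + w_t,   w_t ~ N(0, Q)   (factor autoregression)
        y_t = lam f_t + e_t,     e_t ~ N(0, R)   (observed indicators)
    y: (T, m) data; lam: (m,) loadings. Returns filtered factor scores."""
    f, P = f0, P0
    scores = []
    for yt in y:
        f, P = B * f, B * P * B + Q                  # predict
        S = P * np.outer(lam, lam) + R               # innovation covariance
        K = P * lam @ np.linalg.inv(S)               # Kalman gain (1 x m)
        f = f + K @ (yt - lam * f)                   # update
        P = (1.0 - K @ lam) * P
        scores.append(f)
    return np.array(scores)
```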
The MIRD method of estimating absorbed dose
Weber, D.A.
1991-01-01
The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution and clearance data of radiopharmaceuticals with the physical properties of radionuclides to obtain dose estimates. This tutorial reviews the MIRD schema, derives the equations used to calculate absorbed dose, and shows how the schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
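The core MIRD computation is a sum over source organs of cumulated activity times an S value; a minimal sketch with placeholder numbers (not real S values):

```python
# Dose to a target organ in the MIRD schema:
#   D(target) = sum over sources of A_tilde(source) * S(target <- source),
# where A_tilde is the cumulated activity in the source organ and S converts
# cumulated activity to absorbed dose. All numbers below are illustrative.
cumulated_activity = {"liver": 500.0, "kidneys": 120.0}    # e.g., uCi*h
s_value_to_target = {"liver": 3.2e-5, "kidneys": 9.0e-6}   # e.g., rad per uCi*h

dose_to_target = sum(cumulated_activity[src] * s_value_to_target[src]
                     for src in cumulated_activity)
```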
How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?
NASA Astrophysics Data System (ADS)
Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.
2002-12-01
The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at 8 km intervals. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance: with increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km windows gave the best compromise between spatial resolution and variance. Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in
Effect of methods of evaluation on sealing ability of mineral trioxide aggregate apical plug
Nikhil, Vineeta; Jha, Padmanabh; Suri, Navleen Kaur
2016-01-01
Aim: The purpose of the study was to evaluate and compare the sealing ability of mineral trioxide aggregate (MTA) with three different methods. Materials and Methods: Forty single-canal teeth were decoronated, and root canals were enlarged to simulate an immature apex. The samples were randomly divided into Group MD (MTA-angelus mixed with distilled water) and Group MC (MTA-angelus mixed with 2% chlorhexidine), and the apical seal was measured and compared using the glucose penetration, fluid filtration, and dye penetration methods. Results: The three methods of evaluation gave different results. The glucose penetration method showed that Group MD sealed better than Group MC, but the difference was statistically insignificant (P > 0.05). The fluid filtration method showed that Group MC was superior to Group MD, but the difference was statistically insignificant (P > 0.05). The dye penetration method showed that Group MC sealed significantly better than Group MD. Conclusion: No correlation was found among the results obtained with the three methods of evaluation. Addition of chlorhexidine enhanced the sealing ability of MTA according to the fluid filtration and dye leakage tests, whereas according to the glucose penetration test it did not. This study showed that relying on only one method of evaluating apical sealing can be misleading. PMID:27217635
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
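"Taylor's Rule" is a power law of the form rate = a·T^b relating operating rate to available ore tonnage T, so re-estimating it reduces to an ordinary least-squares fit in log-log space. A minimal sketch of that fit; the constants, noise level, and synthetic deposits below are illustrative assumptions, not values from the CES work:

```python
import numpy as np

def fit_power_law(tonnage, rate):
    """Fit rate = a * tonnage**b by least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(tonnage), np.log(rate), 1)
    return np.exp(intercept), slope

# Synthetic deposits obeying rate = 0.1 * T**0.75 with lognormal noise
rng = np.random.default_rng(0)
T = rng.uniform(1e6, 1e9, 200)                       # ore tonnages
R = 0.1 * T**0.75 * np.exp(rng.normal(0, 0.1, 200))  # noisy operating rates
a, b = fit_power_law(T, R)                           # recovers a ~ 0.1, b ~ 0.75
```

The log-log transform turns the multiplicative noise into additive noise, which is the setting in which ordinary least squares is appropriate.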
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
Olsen, J. Pat; Fellows, Robert P.; Rivera-Mindt, Monica; Morgello, Susan; Byrd, Desiree A.
2015-01-01
The Wide Range Achievement Test, 3rd edition, Reading-Recognition subtest (WRAT-3 RR) is an established measure of premorbid ability. However, its long-term reliability is not well documented, particularly in diverse populations with CNS-relevant disease. Objective: We examined the test-retest reliability of the WRAT-3 RR over time in an HIV+ sample of predominantly racial/ethnic minority adults. Method: Participants (N = 88) completed a comprehensive neuropsychological battery, including the WRAT-3 RR, on at least two separate study visits. Intraclass correlation coefficients (ICCs) were computed using scores from baseline and follow-up assessments to determine the test-retest reliability of the WRAT-3 RR across racial/ethnic groups and changes in medical (immunological) and clinical (neurocognitive) factors. Additionally, Fisher’s Z tests were used to determine the significance of the differences between ICCs. Results: The average test-retest interval was 58.7 months (SD = 36.4). The overall WRAT-3 RR test-retest reliability was high (r = .97, p < .001), and remained robust across all demographic, medical, and clinical variables (all r’s > .92). Intraclass correlation coefficients did not differ significantly between the subgroups tested (all Fisher’s Z p’s > .05). Conclusions: Overall, this study supports the appropriateness of word-reading tests, such as the WRAT-3 RR, for use as stable premorbid IQ estimates among ethnically diverse groups. Moreover, this study supports the reliability of this measure in the context of change in health and neurocognitive status, and over lengthy inter-test intervals. These findings offer a strong rationale for reading as a “hold” test, even in the presence of a chronic, variable disease such as HIV. PMID:26689235
Nutrient Estimation Using Subsurface Sensing Methods
Technology Transfer Automated Retrieval System (TEKTRAN)
This report investigates the use of precision management techniques for measuring soil conductivity on feedlot surfaces to estimate nutrient value for crop production. An electromagnetic induction soil conductivity meter was used to collect apparent soil electrical conductivity (ECa) from feedlot p...
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
ERIC Educational Resources Information Center
Maarif, Samsul
2016-01-01
The aim of this study was to identify the influence of the discovery learning method on the mathematical analogical ability of junior high school students. This study used a 2x2 factorial design with two-way ANOVA. The population of this research included the entire student body of SMPN 13 Jakarta (State Junior High School 13 of Jakarta)…
ERIC Educational Resources Information Center
Clark, Sarah K.; Read, Sylvia
2012-01-01
This quantitative study examined the effectiveness of pairing preservice teachers with young readers during a 9-week reading methods course to participate together in reading-related activities and partner journaling. It was hypothesized that these preservice partnerships would strengthen preservice teacher perceptions about their ability to…
ERIC Educational Resources Information Center
Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta
2012-01-01
This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…
Affect Abilities Training--A Competency Based Method for Counseling Persons with Mental Retardation.
ERIC Educational Resources Information Center
Corcoran, James R.
1982-01-01
Affect Abilities Training (AAT) illustrates the kinds of concrete methods which can be used to further the affective development of persons with mental retardation. The objective of AAT is to develop those emotional behaviors upon which the individual (and society) place value while decreasing those responses which are counterproductive to…
An Empirical Comparison of Tree-Based Methods for Propensity Score Estimation
Watkins, Stephanie; Jonsson-Funk, Michele; Brookhart, M Alan; Rosenberg, Steven A; O'Shea, T Michael; Daniels, Julie
2013-01-01
Objective To illustrate the use of ensemble tree-based methods (random forest classification [RFC] and bagging) for propensity score estimation and to compare these methods with logistic regression, in the context of evaluating the effect of physical and occupational therapy on preschool motor ability among very low birth weight (VLBW) children. Data Source We used secondary data from the Early Childhood Longitudinal Study Birth Cohort (ECLS-B) between 2001 and 2006. Study Design We estimated the predicted probability of treatment using tree-based methods and logistic regression (LR). We then modeled the exposure-outcome relation using weighted LR models while considering covariate balance and precision for each propensity score estimation method. Principal Findings Among approximately 500 VLBW children, therapy receipt was associated with moderately improved preschool motor ability. Overall, ensemble methods produced the best covariate balance (Mean Squared Difference: 0.03–0.07) and the most precise effect estimates compared to LR (Mean Squared Difference: 0.11). The overall magnitude of the effect estimates was similar between RFC and LR estimation methods. Conclusion Propensity score estimation using RFC and bagging produced better covariate balance with increased precision compared to LR. Ensemble methods are a useful alternative to logistic regression to control confounding in observational studies. PMID:23701015
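The covariate-balance diagnostic used to compare propensity score estimation methods can be illustrated by computing a standardized mean difference before and after inverse-probability-of-treatment weighting. The sketch below substitutes a numpy-only logistic propensity model for the tree-based estimators (an RFC or bagging model would simply replace `fit_propensity`); all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                            # a single confounder
t = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))   # treatment depends on x

def fit_propensity(x, t, iters=25):
    """Logistic-regression propensity scores fit by Newton's method."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (t - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return 1 / (1 + np.exp(-X @ beta))

ps = fit_propensity(x, t)
w = t / ps + (1 - t) / (1 - ps)                   # inverse-probability weights

def smd(x, t, w):
    """Absolute standardized mean difference of x between treatment groups."""
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    pooled_sd = np.sqrt((x[t == 1].var() + x[t == 0].var()) / 2)
    return abs(m1 - m0) / pooled_sd

smd_raw = smd(x, t, np.ones(n))    # imbalance before weighting
smd_iptw = smd(x, t, w)            # balance after weighting (near zero)
```

A well-estimated propensity score drives the weighted standardized mean difference toward zero, which is the sense in which one estimation method "balances" better than another.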
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1980-01-01
The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and a maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated, and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multi-image color displays; (2) spectral-spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
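The differencing idea at the core of such methods can be sketched generically: a central-difference formula estimates the sensitivity derivative of an objective with respect to a parameter. The quadratic objective below is a hypothetical stand-in, not a problem from the RQP test set:

```python
def central_difference(f, p, h=1e-5):
    """Central-difference estimate of the sensitivity derivative df/dp."""
    return (f(p + h) - f(p - h)) / (2 * h)

# Hypothetical objective f(p) = p**2, whose true sensitivity at p = 3 is 6
sens = central_difference(lambda p: p ** 2, 3.0)
```

The central difference is second-order accurate in h, which is why it is preferred over a one-sided difference when the function can be evaluated on both sides of the nominal parameter value.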
The augmented Lagrangian method for parameter estimation in elliptic systems
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Kunisch, Karl
1990-01-01
In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.
Morphological method for estimation of simian virus 40 infectious titer.
Landau, S M; Nosach, L N; Pavlova, G V
1982-01-01
The cytomorphologic method previously reported for titration of adenoviruses has been employed for estimating the infectious titer of simian virus 40 (SV 40). Infected cells forming intranuclear inclusions were determined. The method examined possesses a number of advantages over virus titration by plaque assay and cytopathic effect. The virus titer estimated by the method of inclusion counting and expressed as IFU/ml (Inclusion Forming Units/ml) corresponds to that estimated by plaque count and expressed as PFU/ml. PMID:6289780
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In works by other authors, the accuracy of trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h
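The accuracy measure described, Spearman's rank-order correlation between the known and reconstructed source distributions, can be sketched directly. The fields below are synthetic placeholders for the virtual sources, not data from the study:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank-order correlation coefficient (assumes no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra * ra).sum() * (rb * rb).sum()))

# Synthetic "known" source field versus a noisy reconstruction of it
rng = np.random.default_rng(2)
known = rng.random(100)
reconstructed = known + rng.normal(0, 0.2, 100)
rho = spearman(known, reconstructed)
```

Because it operates on ranks, the coefficient rewards a reconstruction that gets the relative ordering of source strengths right even when the absolute magnitudes are off, which suits a method-comparison study like this one.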
Advancing Methods for Estimating Cropland Area
NASA Astrophysics Data System (ADS)
King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.
2014-12-01
Measurement and monitoring of complex and dynamic agricultural land systems is essential given increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service's Cropland Data Layer (CDL) soy map shows strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, showing a nearly 1 to 1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype, which has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.
Examining Method Effect of Synonym and Antonym Test in Verbal Abilities Measure
Widhiarso, Wahyu; Haryanta
2015-01-01
Many researchers have assumed that different methods can be substituted to measure the same attributes in assessment. Various models have been developed to accommodate the amount of variance attributable to the methods, but application of these models in empirical research is rare. The present study applied one of those models to examine whether method effects were present in synonym and antonym tests. Study participants were 3,469 applicants to graduate school. The instrument used was the Graduate Academic Potential Test (PAPS), which includes synonym and antonym questions to measure verbal abilities. Our analysis showed that a measurement model using the correlated trait–correlated methods minus one, CT-C(M–1), approach, which separates trait and method effects into distinct latent constructs, yielded slightly better values for multiple goodness-of-fit indices than a one-factor model. However, for both the synonym and antonym items, the proportion of variance accounted for by the method is smaller than the trait variance. The correlation between factor scores of both methods is high (r = 0.994). These findings confirm that synonym and antonym tests represent the same attribute, so the two tests cannot be treated as two unique methods for measuring verbal ability. PMID:27247667
Comparison of three methods for estimating complete life tables
NASA Astrophysics Data System (ADS)
Ibrahim, Rose Irnawaty
2013-04-01
A question of interest in the demographic and actuarial fields is the estimation of the complete set of qx values when the data are given in age groups. When complete life tables are not available, estimating them from abridged life tables is necessary. Three methods, namely King's Osculatory Interpolation, Six-point Lagrangian Interpolation, and the Heligman-Pollard Model, are compared using data from abridged life tables for the Malaysian population. Each of the methods considered was applied to the abridged data sets to estimate the complete sets of qx values. The estimated complete sets of qx values were then used to reproduce the abridged ones by each of the three methods, and the results were compared with the actual values published in the abridged life tables. Among the three methods, the Six-point Lagrangian Interpolation method produces the best estimates of complete life tables from five-year abridged life tables.
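Six-point Lagrangian interpolation evaluates the degree-5 polynomial passing through six pivotal values. A sketch with hypothetical abridged-table ages and qx values (illustrative numbers, not the Malaysian data):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange polynomial through points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical pivotal ages and abridged qx values (illustrative only)
ages = [0, 5, 10, 15, 20, 25]
qx = [0.0100, 0.0020, 0.0010, 0.0015, 0.0020, 0.0025]
q13 = lagrange_interp(ages, qx, 13)   # single-age qx at age 13
```

By construction the interpolant reproduces the pivotal values exactly, so the abridged table is recovered when the estimated single-age values are re-aggregated at the pivotal ages.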
Seismic Methods of Identifying Explosions and Estimating Their Yield
NASA Astrophysics Data System (ADS)
Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.
2014-12-01
Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and in accurately estimating their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites including the DPRK and the former Nevada Test Site (now the Nevada National Security Site (NNSS)). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are also exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand the source characteristics. Our goal is to improve our explosion models
Chen, Bor-Sen; Chen, Po-Wei
2009-12-01
Inherently, biochemical regulatory networks suffer from process delays, internal parametrical perturbations as well as external disturbances. Robustness is the ability of intracellular biochemical regulatory networks to maintain their functions despite these perturbations. In this study, system and signal processing theories are employed for measurement of the robust stability and filtering ability of linear and nonlinear time-delay biochemical regulatory networks. First, based on Lyapunov stability theory, the robust stability of a biochemical network is measured for the tolerance of additional process delays and additive internal parameter fluctuations. Then the filtering ability of attenuating additive external disturbances is estimated for time-delay biochemical regulatory networks. In order to overcome the difficulty of solving the Hamilton-Jacobi inequality (HJI), the global linearization technique is employed to simplify the measurement procedure to a simple linear matrix inequality (LMI) method. Finally, an example is given in silico to illustrate how to measure the robust stability and filtering ability of a nonlinear time-delay perturbative biochemical network. This robust stability and filtering ability measurement for biochemical networks has potential application to synthetic biology, gene therapy and drug design. PMID:19788895
System and method for correcting attitude estimation
NASA Technical Reports Server (NTRS)
Josselson, Robert H. (Inventor)
2010-01-01
A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
Bohling, Justin H; Adams, Jennifer R; Waits, Lisette P
2013-01-01
Bayesian clustering methods have emerged as a popular tool for assessing hybridization using genetic markers. Simulation studies have shown these methods perform well under certain conditions; however, these methods have not been evaluated using empirical data sets with individuals of known ancestry. We evaluated the performance of two clustering programs, baps and structure, with genetic data from a reintroduced red wolf (Canis rufus) population in North Carolina, USA. Red wolves hybridize with coyotes (C. latrans), and a single hybridization event resulted in introgression of coyote genes into the red wolf population. A detailed pedigree has been reconstructed for the wild red wolf population that includes individuals of 50-100% red wolf ancestry, providing an ideal case study for evaluating the ability of these methods to estimate admixture. Using 17 microsatellite loci, we tested the programs using different training set compositions and varying numbers of loci. structure was more likely than baps to detect an admixed genotype and correctly estimate an individual's true ancestry composition. However, structure was more likely to misclassify a pure individual as a hybrid. Both programs were outperformed by a maximum-likelihood-based test designed specifically for this system, which never misclassified a hybrid (50-75% red wolf) as a red wolf or vice versa. Training set composition and the number of loci both had an impact on accuracy but their relative importance varied depending on the program. Our findings demonstrate the importance of evaluating methods used for detecting admixture in the context of endangered species management. PMID:23163531
Spatial Statistics Preserving Interpolation Methods for Estimation of Missing Precipitation Data
NASA Astrophysics Data System (ADS)
El Sharif, H.; Teegavarapu, R. S.
2011-12-01
Spatial interpolation methods used for estimation of missing precipitation data at a site are seldom checked for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study will evaluate the efficacy of traditional deterministic and stochastic interpolation methods aimed at estimation of missing data in preserving site and regional statistics. New optimal spatial interpolation methods intended to preserve these statistics are also proposed and evaluated in this study. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of the existing and newly proposed methods. Several error and performance measures will be used to evaluate the methods and the trade-offs between accuracy of estimation and preservation of site and regional statistics.
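A representative deterministic interpolator for this task is inverse distance weighting, in which a missing value is estimated as a distance-weighted average of neighboring gauges. A sketch with hypothetical gauge coordinates and rainfall values (not the Kentucky data):

```python
import numpy as np

def idw_estimate(coords, values, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from neighboring gauges."""
    d = np.linalg.norm(coords - target, axis=1)
    if np.any(d == 0):                  # target coincides with a gauge
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Four hypothetical gauges at the corners of a unit square, rainfall in mm
gauges = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
rain = np.array([10.0, 12.0, 11.0, 13.0])
est = idw_estimate(gauges, rain, np.array([0.5, 0.5]))   # equidistant: the mean
```

Because the estimate is a convex combination of observed values, IDW never extrapolates beyond the observed range, which is one reason such methods can fail to preserve site variance even when point estimates look reasonable.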
Evaluation of Two Methods to Estimate and Monitor Bird Populations
Taylor, Sandra L.; Pollard, Katherine S.
2008-01-01
Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and short-comings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775
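The DS ratio estimator can be sketched in a few lines: rapid counts are taken on every unit, intensive (assumed exact) counts on a random subsample, and the intensive-to-rapid ratio from the subsample corrects the rapid total. All parameters below (detection probability, unit counts) are illustrative assumptions, not values from the simulations above:

```python
import numpy as np

rng = np.random.default_rng(3)
n_units = 200
true_counts = rng.poisson(20, n_units)        # birds actually present per unit
p_detect = 0.6                                # assumed rapid-count detection rate
rapid = rng.binomial(true_counts, p_detect)   # rapid counts on every unit

# Intensive (assumed exact) counts on a simple random subsample of units
sub = rng.choice(n_units, size=50, replace=False)
ratio = true_counts[sub].sum() / rapid[sub].sum()
ds_estimate = ratio * rapid.sum()             # double-sampling population estimate
```

Increasing the subsample size tightens the ratio estimate and hence the variance of the population estimate, mirroring the finding above that DS variance decreases with the proportion intensively surveyed.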
The method of assessment of the grinding wheel cutting ability in the plunge grinding
NASA Astrophysics Data System (ADS)
Nadolny, Krzysztof
2012-09-01
This article presents a method of comparative assessment of the grinding wheel cutting ability in plunge grinding kinematics. A new method has been developed to facilitate multicriterial assessment of the working conditions of the abrasive grains and the bond bridges, as well as the wear mechanisms of the grinding wheel active surface (GWAS), which occur during the grinding process, while limiting the range of workshop tests. The work describes the methodology of assessing the grinding wheel cutting ability in a short grinding test (lasting, for example, 3 seconds) with a specially shaped grinding wheel in plunge grinding. The grinding wheel macrogeometry modification applied in the developed method consists in forming a cone or a few zones of various diameters on its surface during the dressing cut. An exemplary application of two variants of the method in internal cylindrical plunge grinding of 100Cr6 steel is presented. Grinding wheels with microcrystalline corundum grains and a ceramic bond were assessed. Analysis of the registered machining results showed greater efficacy of the variant using a grinding wheel with zones of various diameters. The method allows for comparative tests of different grinding wheels, with various grinding parameters and different machined materials.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Gabre, P; Martinsson, T; Gahnberg, L
1999-08-01
The aim of the present study was to evaluate whether estimation of lactobacilli was possible with simplified saliva sampling methods. Dentocult LB (Orion Diagnostica AB, Trosa, Sweden) was used to estimate the number of lactobacilli in saliva sampled by 3 different methods from 96 individuals: (i) collecting and pouring stimulated saliva over a Dentocult dip-slide; (ii) direct licking of the Dentocult LB dip-slide; (iii) contaminating a wooden spatula with saliva and pressing it against the Dentocult dip-slide. The first method was in accordance with the manufacturer's instructions and selected as the 'gold standard'; the other 2 methods were compared with this result. The 2 simplified methods for estimating levels of lactobacilli in saliva showed good reliability and specificity. Sensitivity, defined as the ability to detect individuals with a high number of lactobacilli in saliva, was sufficient for the licking method (85%), but significantly reduced for the wooden spatula method (52%). PMID:10540926
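The sensitivity and specificity figures quoted in this abstract follow from the standard definitions; a minimal sketch with hypothetical counts (not the study's data):

```python
def sensitivity(tp, fn):
    """True-positive rate: ability to detect individuals who truly have
    a high lactobacilli count (tp = true positives, fn = false negatives)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: ability to correctly clear individuals who
    do not have a high count (tn = true negatives, fp = false positives)."""
    return tn / (tn + fp)

# Hypothetical illustration: of 20 high-count individuals, a simplified
# method flags 17 -> sensitivity 0.85, matching the licking method's 85%.
sens = sensitivity(17, 3)
spec = specificity(40, 10)
```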
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
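The patent abstract does not specify how the unknown parameter is computed from the two inputs; one plausible sketch treats it as a regression over the reference-motor data (all parameter names and values below are hypothetical):

```python
import numpy as np

# Reference motors: rows = motors, columns = known parameters
# (e.g., rated power, rated current). Target column: the parameter
# that is unknown for the motor under test. All values hypothetical.
known = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0], [4.0, 7.0]])
target = np.array([3.0, 5.0, 8.0, 11.0])  # e.g., a thermal constant

# Fit a linear map (with intercept) from known parameters to the target.
A = np.hstack([known, np.ones((known.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

# Estimate the unknown parameter for the motor under test ("first input")
# from its known parameters plus the intercept term.
new_motor = np.array([2.5, 4.0, 1.0])
estimate = new_motor @ coef
```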
Carbon footprint: current methods of estimation.
Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker
2011-07-01
Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing global warming and its associated consequences. Following the rule that only the measurable is manageable, measurement of the greenhouse gas intensity of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving, and the carbon footprint is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among studies. There are disagreements in the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues. PMID:20848311
Estimate octane numbers using an enhanced method
Twu, C.H.; Coon, J.E.
1997-03-01
An improved model, based on the Twu-Coon method, is not only internally consistent but also retains the same level of accuracy as the previous model in predicting octanes of gasoline blends. The enhanced model applies the same binary interaction parameters to components in each gasoline cut and their blends. Thus, the enhanced model can blend gasoline cuts in any order, in any combination, or from any splitting of gasoline cuts and still yield the identical value of octane number for blending the same number of gasoline cuts. Setting binary interaction parameters to zero for identical gasoline cuts during the blending process is not required. The new model changes the old model's methodology so that the same binary interaction parameters can be applied between components inside a gasoline cut as are applied to the same components between gasoline cuts. The enhanced model is more consistent in methodology than the original model, has equal accuracy for predicting octane numbers of gasoline blends, and has the same number of binary interaction parameters. The paper discusses background, the enhancement of the Twu-Coon interaction model, and three examples: a blend of 2 identical gasoline cuts, a blend of 3 gasoline cuts, and a blend of the same 3 gasoline cuts in a different order.
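The Twu-Coon equations themselves are not given in this abstract; a generic quadratic blending rule with symmetric binary interaction parameters (all values hypothetical) illustrates the order-independence property the abstract emphasizes:

```python
import itertools

def blend_octane(fractions, octanes, k):
    """Quadratic blending rule: ON = sum_i x_i*ON_i + sum_{i<j} x_i*x_j*k[i][j].
    With a symmetric k (zero diagonal), the result does not depend on the
    order in which the gasoline cuts are listed."""
    on = sum(x * o for x, o in zip(fractions, octanes))
    for i, j in itertools.combinations(range(len(fractions)), 2):
        on += fractions[i] * fractions[j] * k[i][j]
    return on

# Three hypothetical cuts blended in two different orders (k reindexed
# to match the permuted cut order):
k = [[0.0, 1.5, -0.8],
     [1.5, 0.0, 0.6],
     [-0.8, 0.6, 0.0]]
on_a = blend_octane([0.5, 0.3, 0.2], [90.0, 95.0, 85.0], k)
k2 = [[0.0, -0.8, 1.5],
      [-0.8, 0.0, 0.6],
      [1.5, 0.6, 0.0]]
on_b = blend_octane([0.5, 0.2, 0.3], [90.0, 85.0, 95.0], k2)  # cuts 2 and 3 swapped
```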
A relative humidity processing method for the sampling of aerosol particles with low growth-ability
NASA Astrophysics Data System (ADS)
Martinsson, Bengt G.; Hansson, Hans-Christen; Asking, Lars; Cederfelt, Sven-Inge
1992-11-01
A method for the fractionation of aerosol particles with respect to size and ability to grow with an increased relative humidity has been developed. The system consists of cascade impactors, diffusion driers, a humidifier, and a temperature stabilizer. Diffusion driers were designed and their vapor penetration was modeled to below 20 percent. A humidifier that can be operated with an output relative humidity above 95 percent was developed. Flow rates up to 5 l/min can be used, and the relative humidity can be controlled within approximately 1 percent. The ability of the system to fractionate aerosol particles with respect to growth with relative humidity was investigated. The equivalent aerodynamic diameter growth factor for sodium chloride was determined to be 2 at a relative humidity of 98 percent, in good agreement with theory.
Evaluating combinational illumination estimation methods on real-world images.
Bing Li; Weihua Xiong; Weiming Hu; Funt, Brian
2014-03-01
Illumination estimation is an important component of color constancy and automatic white balancing. A number of methods of combining illumination estimates obtained from multiple subordinate illumination estimation methods now appear in the literature. These combinational methods aim to provide better illumination estimates by fusing the information embedded in the subordinate solutions. The existing combinational methods are surveyed and analyzed here with the goals of determining: 1) the effectiveness of fusing illumination estimates from multiple subordinate methods; 2) the best method of combination; 3) the underlying factors that affect the performance of a combinational method; and 4) the effectiveness of combination for illumination estimation in multiple-illuminant scenes. The various combinational methods are categorized in terms of whether or not they require supervised training and whether or not they rely on high-level scene content cues (e.g., indoor versus outdoor). Extensive tests and enhanced analyses using three data sets of real-world images are conducted. For consistency in testing, the images were labeled according to their high-level features (3D stages, indoor/outdoor) and this label data is made available on-line. The tests reveal that the trained combinational methods (direct combination by support vector regression in particular) clearly outperform both the non-combinational methods and those combinational methods based on scene content cues. PMID:23974624
A TRMM Rainfall Estimation Method Applicable to Land Areas
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R.; Weinman, J.; Dalu, G.
1999-01-01
Methods developed to estimate rain rate on a footprint scale over land with the satellite-borne multispectral dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer have met with limited success. Variability of surface emissivity on land and beam filling are commonly cited as the weaknesses of these methods. On the contrary, we contend a more significant reason for this lack of success is that the information content of spectral and polarization measurements of the SSM/I is limited because of significant redundancy. As a result, the complex nature and vertical distribution of frozen and melting ice particles of different densities, sizes, and shapes cannot be resolved satisfactorily. Extinction in the microwave region due to these complex particles can mask the extinction due to rain drops. To illustrate the weakness of theoretical retrieval models, consider as an example the brightness temperature measured by the radiometer in the 85 GHz channel (T85). Models indicate that T85 should be inversely related to the rain rate because of scattering. However, rain rates derived from 15-minute rain gauges on land indicate that this is not true in a majority of footprints. This is also supported by the ship-borne radar observations of rain in the Tropical Oceans and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE) region over the ocean. Based on these observations, we infer that theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. We therefore do not follow that path of rain retrieval on a footprint scale. Instead, we depend on the limited ability of the microwave radiometer to detect the presence of rain. This capability is useful to determine the rain area in a mesoscale region. We find in a given rain event that this rain area is closely related to the mesoscale-average rain rate.
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are treated as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors or the classical method. The estimated posterior distributions for parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
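As a concrete baseline, the classical NLS fit mentioned in the abstract can be sketched for a common Weibull-type height-diameter formulation; the paper's exact parameterization and data are not given here, so the model form and values below are illustrative. A Bayesian fit would instead place priors on a, b, c and report posterior credible bands:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_hd(d, a, b, c):
    # Weibull-type height-diameter curve; 1.3 m is breast height.
    return 1.3 + a * (1.0 - np.exp(-b * d ** c))

# Synthetic diameters (cm) and heights (m) from known "true" parameters.
rng = np.random.default_rng(0)
d = np.linspace(5, 50, 40)
h = weibull_hd(d, 25.0, 0.05, 1.2) + rng.normal(0.0, 0.3, d.size)

# Classical NLS estimate of (a, b, c).
params, _ = curve_fit(weibull_hd, d, h, p0=[20.0, 0.1, 1.0])
```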
Evaluation of Methods to Estimate Understory Fruit Biomass
Lashley, Marcus A.; Thompson, Jeffrey R.; Chitwood, M. Colter; DePerno, Christopher S.; Moorman, Christopher E.
2014-01-01
Fleshy fruit is consumed by many wildlife species and is a critical component of forest ecosystems. Because fruit production may change quickly during forest succession, frequent monitoring of fruit biomass may be needed to better understand shifts in wildlife habitat quality. Yet, designing a fruit sampling protocol that is executable on a frequent basis may be difficult, and knowledge of accuracy within monitoring protocols is lacking. We evaluated the accuracy and efficiency of 3 methods to estimate understory fruit biomass (Fruit Count, Stem Density, and Plant Coverage). The Fruit Count method requires visual counts of fruit to estimate fruit biomass. The Stem Density method uses counts of all stems of fruit producing species to estimate fruit biomass. The Plant Coverage method uses land coverage of fruit producing species to estimate fruit biomass. Using linear regression models under a censored-normal distribution, we determined the Fruit Count and Stem Density methods could accurately estimate fruit biomass; however, when comparing AIC values between models, the Fruit Count method was the superior method for estimating fruit biomass. After determining that Fruit Count was the superior method to accurately estimate fruit biomass, we conducted additional analyses to determine the sampling intensity (i.e., percentage of area) necessary to accurately estimate fruit biomass. The Fruit Count method accurately estimated fruit biomass at a 0.8% sampling intensity. In some cases, sampling 0.8% of an area may not be feasible. In these cases, we suggest sampling understory fruit production with the Fruit Count method at the greatest feasible sampling intensity, which could be valuable to assess annual fluctuations in fruit production. PMID:24819253
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles; Beland, Sebastien
2012-01-01
This paper focuses on two likelihood-based indices of person fit, the index "l[subscript z]" and the Snijders's modified index "l[subscript z]*". The first one is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample ability estimates. The…
Huang Xianghui; Chang Jiang
2008-06-03
Nanocrystalline bredigite (Ca{sub 7}MgSi{sub 4}O{sub 16}) powders were synthesized by a simple solution combustion method. Phase-pure bredigite powders with particle sizes ranging from 234 to 463 nm could be obtained at a relatively low temperature of 650 deg. C. The apatite-forming ability of the bredigite powders was examined by soaking them in a simulated body fluid. The compositional and morphological changes of the powders before and after soaking were analyzed by X-ray diffraction and scanning electron microscopy, and the results showed that hydroxyapatite was formed after soaking for 4 days.
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods worsens with inaccurate source number estimation. Many excellent algorithms have been proposed for source number estimation in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to multi-dimensional form, and the data covariance matrix is constructed. The estimation algorithms used in array signal processing can then be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number from the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, reducing the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
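The pipeline described (delay embedding, covariance construction, eigen-analysis) can be sketched as follows; for brevity, a crude eigenvalue-threshold count stands in for the AIC/MDL/GDE criteria named in the abstract, and the signal values are illustrative:

```python
import numpy as np

def delay_embed(x, m):
    """Turn single-channel data into pseudo multi-channel snapshots
    (the delay process described in the abstract): shape (m, snapshots)."""
    return np.lib.stride_tricks.sliding_window_view(x, m).T

def source_number(x, m=20, factor=10.0):
    X = delay_embed(x, m)
    R = X @ X.T / X.shape[1]          # data covariance matrix
    lam = np.linalg.eigvalsh(R)
    # Crude stand-in for AIC/MDL/GDE: count eigenvalues well above the
    # noise floor, taking the median eigenvalue as the floor estimate.
    return int(np.sum(lam > factor * np.median(lam)))

# Two real sinusoids in white noise span a rank-4 signal subspace,
# so the expected source number here is 4.
rng = np.random.default_rng(1)
t = np.arange(4000)
x = np.sin(0.4 * t) + np.sin(1.3 * t) + 0.05 * rng.normal(size=t.size)
```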
An automated method of tuning an attitude estimator
NASA Technical Reports Server (NTRS)
Mason, Paul A. C.; Mook, D. Joseph
1995-01-01
Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
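A scalar sketch of the underlying filter shows where the process noise covariance (the quantity a tuner like the PNCE selects) enters; the model and values are illustrative, not the paper's three-axis magnetometer filter:

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: x_k = x_{k-1} + w_k, z_k = x_k + v_k.
    q is the process noise variance (the tuned parameter); r is the
    measurement noise variance."""
    x, p, out = x0, p0, []
    for zk in z:
        p = p + q                      # predict: propagate error covariance
        k = p / (p + r)                # Kalman gain
        x = x + k * (zk - x)           # update with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# Slowly drifting angle observed through heavy measurement noise.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.05, 500))
z = truth + rng.normal(0.0, 0.5, 500)
est = kalman_1d(z, q=0.05 ** 2, r=0.5 ** 2)   # correctly tuned q and r
```

With q and r matched to the simulation, the filtered estimate has a much lower error variance than the raw measurements; a mistuned q degrades it, which is what an automated tuner guards against.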
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence
ERIC Educational Resources Information Center
Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.
2008-01-01
This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…
A Novel Monopulse Angle Estimation Method for Wideband LFM Radars
Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao
2016-01-01
Traditional monopulse angle estimation is mainly based on phase comparison and amplitude comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has been little studied in previous works. As noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in the accumulation of energy from the high resolution range profile (HRRP) of monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which can achieve good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate performance similar to the traditional amplitude comparison method, indicating that the proposed method can be adopted for angle estimation. When adopting the proposed method, future radars may only need wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle does not become ambiguous under an arbitrary angle, which can significantly extend the estimated angle range in wideband radars. PMID:27271629
Adaptive frequency estimation by MUSIC (Multiple Signal Classification) method
NASA Astrophysics Data System (ADS)
Karhunen, Juha; Nieminen, Esko; Joutsensalo, Jyrki
During the last years, the eigenvector-based method called MUSIC has become very popular for estimating the frequencies of sinusoids in additive white noise. Adaptive realizations of the MUSIC method are studied using simulated data. Several of the adaptive realizations seem to give results in practice that are as good as those of the nonadaptive standard realization. The only exceptions are instantaneous gradient type algorithms, which need considerably more samples to achieve comparable performance. A new method is proposed for constructing initial estimates of the signal subspace. The method often dramatically improves the performance of instantaneous gradient type algorithms. The new signal subspace estimate can also be used to define a frequency estimator directly or to simplify eigenvector computation.
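A minimal MUSIC frequency estimator for the sinusoids-in-white-noise setting, in the spirit of the nonadaptive standard realization discussed above; the embedding size, search grid, and peak picking are simplistic illustrative choices:

```python
import numpy as np

def music_freqs(x, m, n_sines, grid=None):
    """Estimate frequencies (rad/sample) of real sinusoids in white noise
    via the MUSIC pseudospectrum. Each real sinusoid occupies two
    dimensions of the signal subspace."""
    X = np.lib.stride_tricks.sliding_window_view(x, m).T   # (m, snapshots)
    R = X @ X.T / X.shape[1]                               # sample covariance
    w, v = np.linalg.eigh(R)                               # ascending order
    En = v[:, : m - 2 * n_sines]                           # noise subspace
    if grid is None:
        grid = np.linspace(0.01, np.pi - 0.01, 2000)
    a = np.exp(1j * np.outer(np.arange(m), grid))          # steering vectors
    p = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2 # pseudospectrum
    peaks = []
    for i in np.argsort(p)[::-1]:                          # greedy peak pick
        if all(abs(grid[i] - f) > 0.1 for f in peaks):
            peaks.append(grid[i])
        if len(peaks) == n_sines:
            break
    return sorted(peaks)

# Two sinusoids at 0.5 and 1.5 rad/sample in white noise.
rng = np.random.default_rng(0)
t = np.arange(4000)
x = np.sin(0.5 * t) + 0.7 * np.sin(1.5 * t) + 0.1 * rng.normal(size=t.size)
```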
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Evapotranspiration: Mass balance measurements compared with flux estimation methods
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...
Recent developments in the methods of estimating shooting distance.
Zeichner, Arie; Glattstein, Baruch
2002-03-01
A review of developments during the past 10 years in the methods of estimating shooting distance is provided. This review discusses the examination of clothing targets, cadavers, and exhibits that cannot be processed in the laboratory. The methods include visual/microscopic examinations, color tests, and instrumental analysis of the gunshot residue deposits around the bullet entrance holes. The review does not cover shooting distance estimation from shotguns that fired pellet loads. PMID:12805985
Using the Mercy Method for Weight Estimation in Indian Children
Batmanabane, Gitanjali; Jena, Pradeep Kumar; Dikshit, Roshan
2015-01-01
This study was designed to compare the performance of a new weight estimation strategy (Mercy Method) with 12 existing weight estimation methods (APLS, Best Guess, Broselow, Leffler, Luscombe-Owens, Nelson, Shann, Theron, Traub-Johnson, Traub-Kichen) in children from India. Otherwise healthy children, 2 months to 16 years, were enrolled and weight, height, humeral length (HL), and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights and the slope, intercept, and Pearson correlation coefficient estimated. Agreement between estimated weight and actual weight was determined using Bland–Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), and root mean square error (RMSE). Three hundred seventy-five children (7.5 ± 4.3 years, 22.1 ± 12.3 kg, 116.2 ± 26.3 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = .967 vs .517-.844). The MM also demonstrated the lowest ME, MPE, and RMSE. Finally, the MM estimated weight within 20% of actual for nearly all children (96%) as opposed to the other methods for which these values ranged from 14% to 63%. The MM performed extremely well in Indian children with performance characteristics comparable to those observed for US children in whom the method was developed. It appears that the MM can be used in Indian children without modification, extending the utility of this weight estimation strategy beyond Western populations. PMID:27335932
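The performance metrics used in the comparison above (ME, MPE, RMSE, and the within-20% criterion) are straightforward to compute; a small sketch with made-up weights, not study data:

```python
import numpy as np

def weight_metrics(actual, predicted):
    """ME, MPE, RMSE, and the fraction of estimates within 20% of actual."""
    err = predicted - actual
    me = err.mean()                                   # mean error (kg)
    mpe = (100.0 * err / actual).mean()               # mean percentage error
    rmse = np.sqrt((err ** 2).mean())                 # root mean square error
    within20 = np.mean(np.abs(err / actual) <= 0.20)  # within-20% fraction
    return me, mpe, rmse, within20

# Hypothetical actual vs estimated weights (kg):
actual = np.array([10.0, 20.0, 40.0])
predicted = np.array([11.0, 19.0, 44.0])
me, mpe, rmse, within20 = weight_metrics(actual, predicted)
```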
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
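The brute-force baseline such methods compete with can be sketched on a hypothetical unconstrained parametric program (problem and values invented for illustration): re-solve the optimization at perturbed parameter values and difference the optima.

```python
import numpy as np
from scipy.optimize import minimize

def design_sensitivity(solve, p, h=1e-2):
    """Central-difference estimate of dx*/dp: re-solve at p +/- h and
    difference the optimal design variables. A crude, expensive baseline
    for the RQP-based estimator described in the abstract."""
    return (solve(p + h) - solve(p - h)) / (2.0 * h)

# Hypothetical problem: minimize (x0 - p)^2 + (x1 - 2p)^2 over x.
# The optimum is x* = (p, 2p), so the true sensitivity dx*/dp is (1, 2).
def solve(p):
    res = minimize(lambda x: (x[0] - p) ** 2 + (x[1] - 2.0 * p) ** 2,
                   x0=np.zeros(2))
    return res.x

s = design_sensitivity(solve, p=1.0)
```

Each central difference costs two full re-optimizations per parameter, which is exactly the expense the RQP-based approach tries to avoid.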
A posteriori pointwise error estimates for the boundary element method
Paulino, G.H.; Gray, L.J.; Zarikian, V.
1995-01-01
This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.
Two-dimensional location and direction estimating method.
Haga, Teruhiro; Tsukamoto, Sosuke; Hoshino, Hiroshi
2008-01-01
In this paper, a method for estimating both the position and the rotation angle of an object on a measurement stage is proposed. The system utilizes radio communication technology and the directivity of an antenna. As a prototype system, a measurement stage (a circle 240 mm in diameter) with 36 antennas placed at 10-degree intervals was developed. Two transmitter antennas are set at a right angle on the stage as the target object, and the position and rotation angle are estimated by measuring the radio communication efficiency of each of the 36 antennas. The experimental results revealed that even when the estimated location is not very accurate (about a 30 mm error), the rotation angle is accurately estimated (about a 2.33 degree error on average). The result suggests that the proposed method will be useful for estimating the location and direction of an object. PMID:19162938
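One simple way to turn per-antenna communication efficiency into a direction estimate is a weighted circular mean over the antenna ring; this is a hedged illustration of the idea, as the paper's actual estimator is not specified in the abstract, and the directivity pattern below is hypothetical:

```python
import numpy as np

def estimate_direction(angles_deg, rssi):
    """Estimate source direction from per-antenna received strength on a
    circular array via a weighted circular (resultant-vector) mean."""
    th = np.deg2rad(np.asarray(angles_deg, float))
    w = np.asarray(rssi, float)
    w = w - w.min()                       # use relative strength as weights
    c = np.sum(w * np.exp(1j * th))       # weighted resultant vector
    return np.rad2deg(np.angle(c)) % 360.0

angles = np.arange(0, 360, 10)            # 36 antennas, 10 degrees apart
# Hypothetical directivity pattern peaked near 123 degrees, back lobe zeroed:
delta = np.deg2rad(angles - 123.0)
rssi = np.cos(delta) ** 2
rssi[np.cos(delta) < 0] = 0.0
est = estimate_direction(angles, rssi)
```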
A Channelization-Based DOA Estimation Method for Wideband Signals
Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-01-01
In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
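A minimal sketch of the incoherent signal-subspace (ISM) idea described above, not the authors' implementation: split the wideband output into narrowband sub-channels, run a narrowband subspace estimator per channel (here, single-source MUSIC), and average the per-channel DOAs. The array geometry, sub-channel frequencies, and noise level are illustrative assumptions, and the channelization itself is simulated directly rather than performed by a digital receiver.

```python
import numpy as np

def music_doa(X, freq, d, c=343.0):
    """Single-source narrowband MUSIC on a uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshots in one sub-channel."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :-1]                               # noise subspace (1 source)
    k = 2 * np.pi * freq / c
    grid = np.linspace(-90, 90, 361)
    a = np.exp(1j * k * d * np.outer(np.arange(m), np.sin(np.radians(grid))))
    p = 1.0 / np.abs(np.sum(np.conj(a) * (En @ En.conj().T @ a), axis=0))
    return grid[int(np.argmax(p))]

def channelized_ism(bands, freqs, d):
    """Incoherent signal-subspace method: estimate a DOA per sub-channel
    independently, then take the arithmetic mean."""
    return float(np.mean([music_doa(X, f, d) for X, f in zip(bands, freqs)]))

# Simulated wideband source at 20 degrees, split into three sub-channels
rng = np.random.default_rng(0)
m, n, d, true_doa = 8, 200, 0.4, 20.0
freqs = [300.0, 350.0, 400.0]                    # sub-channel centres, Hz
bands = []
for f in freqs:
    k = 2 * np.pi * f / 343.0
    a = np.exp(1j * k * d * np.arange(m) * np.sin(np.radians(true_doa)))
    s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    noise = 0.05 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
    bands.append(np.outer(a, s) + noise)
est = channelized_ism(bands, freqs, d)
print(round(est, 1))
```

Averaging across sub-channels is what makes the method "incoherent": each band contributes an independent narrowband estimate, so no coherent focusing matrices are needed.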
Comparison of several methods for estimating low speed stability derivatives
NASA Technical Reports Server (NTRS)
Fletcher, H. S.
1971-01-01
Methods presented in five different publications have been used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces; the other was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no single method appeared consistently better for estimating all derivatives.
Correa, John B; Apolzan, John W; Shepard, Desti N; Heil, Daniel P; Rood, Jennifer C; Martin, Corby K
2016-07-01
Activity monitors such as the Actical accelerometer, the Sensewear armband, and the Intelligent Device for Energy Expenditure and Activity (IDEEA) are commonly validated against gold standards (e.g., doubly labeled water, or DLW) to determine whether they accurately measure total daily energy expenditure (TEE) or activity energy expenditure (AEE). However, little research has assessed whether these parameters or others (e.g., posture allocation) predict body weight change over time. The aims of this study were to (i) test whether estimated energy expenditure or posture allocation from the devices was associated with weight change during and following a low-calorie diet (LCD) and (ii) compare free-living TEE and AEE predictions from the devices against DLW before weight change. Eighty-seven participants from 2 clinical trials wore 2 of the 3 devices simultaneously for 1 week of a 2-week DLW period. Participants then completed an 8-week LCD and were weighed at the start and end of the LCD and 6 and 12 months after the LCD. More time spent walking at baseline, measured by the IDEEA, significantly predicted greater weight loss during the 8-week LCD. Measures of posture allocation demonstrated medium effect sizes in their relationships with weight change. Bland-Altman analyses indicated that the Sensewear and the IDEEA accurately estimated TEE, and the IDEEA accurately measured AEE. The results suggest that the ability of energy expenditure and posture allocation to predict weight change is limited, and the accuracy of TEE and AEE measurements varies across activity monitoring devices, with multi-sensor monitors demonstrating stronger validity. PMID:27270210
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum, which is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
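The first stage (harmonic grouping plus peak picking) can be sketched as follows. This is my own toy illustration, not the authors' code: an ordinary FFT magnitude spectrum stands in for the RTFI, the second refinement stage is omitted, and the f0 search range is deliberately restricted to avoid sub-octave candidates.

```python
import numpy as np

def pitch_energy_spectrum(mag, freqs, f0_grid, n_harm=5):
    """Harmonic grouping: the salience of each candidate f0 is the summed
    spectral energy at its first n_harm harmonics (nearest-bin lookup)."""
    sal = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        for h in range(1, n_harm + 1):
            if h * f0 <= freqs[-1]:
                sal[i] += mag[np.argmin(np.abs(freqs - h * f0))] ** 2
    return sal

def pick_pitches(sal, f0_grid, n_pitches=2):
    """Simple peak picking: strongest local maxima of the pitch spectrum."""
    peaks = [i for i in range(1, len(sal) - 1)
             if sal[i] >= sal[i - 1] and sal[i] >= sal[i + 1]]
    peaks.sort(key=lambda i: -sal[i])
    return sorted(float(f0_grid[i]) for i in peaks[:n_pitches])

# Two simultaneous notes (220 Hz and 330 Hz), three harmonics each
sr, n = 8000, 8192
t = np.arange(n) / sr
x = sum(np.sin(2 * np.pi * h * f * t) for f in (220.0, 330.0) for h in (1, 2, 3))
mag = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1.0 / sr)
f0_grid = np.arange(180.0, 500.0, 1.0)           # restricted search range
pitches = pick_pitches(pitch_energy_spectrum(mag, freqs, f0_grid), f0_grid)
print(pitches)
```

In the paper, the second stage would then prune candidates whose harmonic patterns are spectrally irregular; the toy example above skips that step.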
A robust method for rotation estimation using spherical harmonics representation.
Althloothi, Salah; Mahoor, Mohammad H; Voyles, Richard M
2013-06-01
This paper presents a robust method for 3D object rotation estimation using spherical harmonics representation and the unit quaternion vector. The proposed method provides a closed-form solution for rotation estimation without recurrence relations or searching for point correspondences between two objects. The rotation estimation problem is cast as a minimization problem that finds the optimum rotation angles between two objects of interest in the frequency domain. The optimum rotation angles are obtained by calculating the unit quaternion vector from a symmetric matrix, which is constructed from the two sets of spherical harmonics coefficients using an eigendecomposition technique. Our experimental results on hundreds of 3D objects show that the proposed method is very accurate in rotation estimation, is robust to noisy data and missing surface points, and can handle intra-class variability between 3D objects. PMID:23475364
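The underlying eigen-technique (the optimal unit quaternion as the top eigenvector of a symmetric 4x4 matrix) is the same one used in Horn's classic closed-form absolute-orientation method. As a sketch, here it is applied to corresponding point sets rather than to spherical harmonics coefficients, which is an assumption of this example, not the paper's setting.

```python
import numpy as np

def quaternion_rotation(P, Q):
    """Closed-form rotation estimate: the optimal unit quaternion is the
    top eigenvector of a symmetric 4x4 matrix built from the 3x3
    cross-covariance of the point sets P and Q (rows are points)."""
    S = P.T @ Q                                   # cross-covariance
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    _, V = np.linalg.eigh(N)
    q0, q1, q2, q3 = V[:, -1]                     # eigenvector, largest eigenvalue
    return np.array([                              # quaternion -> rotation matrix
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), 1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), 1 - 2*(q1**2 + q2**2)]])

# Recover a known 30-degree rotation about the z axis
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
P = np.random.default_rng(1).standard_normal((50, 3))
R_est = quaternion_rotation(P, P @ R_true.T)
print(np.allclose(R_est, R_true, atol=1e-8))
```

Because the solution is an eigendecomposition rather than an iterative search, there is no initialization or convergence issue, which is what "closed-form" buys in the abstract above.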
A Fast Estimation Method of Railway Passengers' Flow
NASA Astrophysics Data System (ADS)
Nagasaki, Yusaku; Asuka, Masashi; Komaya, Kiyotoshi
To evaluate a train schedule from the viewpoint of passengers' convenience, it is important to know each passenger's choice of trains and transfer stations en route to his/her destination. Because such passenger behavior is difficult to measure directly, estimation methods for railway passengers' flow have been proposed to carry out this kind of evaluation. However, a train schedule planning system equipped with those methods is impractical because the estimation takes too much time. In this article, the authors propose a fast passengers' flow estimation method that exploits features of the passengers' flow graph, using a preparatory search based on each train's arrival time at each station. The authors also present the results of applying the method to a railway in an urban area.
Different Donor Cell Culture Methods Can Influence the Developmental Ability of Cloned Sheep Embryos
Chen, Shan; Li, WenDa
2015-01-01
It was proposed that arresting nuclear donor cells in G0/G1 phase facilitates the development of embryos derived from somatic cell nuclear transfer (SCNT). Full confluency or serum starvation is commonly used to arrest in vitro cultured somatic cells in G0/G1 phase. However, it is controversial whether these two methods have the same efficiency in arresting somatic cells in G0/G1 phase. Moreover, it is unclear whether the cloned embryos have comparable developmental ability after somatic cells are subjected to one of these methods and then used as nuclear donors in SCNT. In the present study, in vitro cultured sheep skin fibroblasts were divided into four groups: (1) cultured to 70–80% confluency (control group), (2) cultured to full confluency, (3) starved in low-serum medium for 4 d, or (4) cultured to full confluency and then further starved for 4 d. Flow cytometry was used to assay the percentage of fibroblasts in G0/G1 phase, and cell counting was used to assay the viability of the fibroblasts. Then, real-time reverse transcription PCR was used to determine the expression levels of several cell cycle-related genes. Subsequently, the four groups of fibroblasts were separately used as nuclear donors in SCNT, and the developmental ability and quality of the cloned embryos were compared. The results showed that the percentage of fibroblasts in G0/G1 phase, the viability of fibroblasts, and the expression levels of cell cycle-related genes differed among the four groups. Moreover, the quality of the cloned embryos was comparable after these four groups of fibroblasts were separately used as nuclear donors in SCNT. However, cloned embryos derived from fibroblasts cultured to full confluency combined with serum starvation had the highest developmental ability. The results of the present study indicate that there are synergistic effects of full confluency and serum starvation on arresting fibroblasts in G0/G1 phase.
Evaluation of the Mercy weight estimation method in Ouelessebougou, Mali
2014-01-01
Background This study evaluated the performance of a new weight estimation strategy (Mercy Method) against four existing weight-estimation methods (APLS, ARC, Broselow, and Nelson) in children from Ouelessebougou, Mali. Methods Otherwise healthy children, 2 mos to 16 yrs, were enrolled, and weight, height, humeral length (HL) and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights. Agreement between estimated and actual weight was determined using Bland-Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), root mean square error (RMSE), and the percentage of weights predicted within 10, 20 and 30% of actual weight. Results 473 children (8.1 ± 4.8 yr, 25.1 ± 14.5 kg, 120.9 ± 29.5 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = 0.97 vs. 0.80-0.94). The MM also demonstrated the lowest ME (0.06 vs. 0.92-4.1 kg), MPE (1.6 vs. 7.8-19.8%) and RMSE (2.6 vs. 3.0-6.7). Finally, the MM estimated weight within 20% of actual for nearly all children (97%), whereas these values ranged from 50-69% for the other methods. Conclusions The MM performed extremely well in Malian children, with performance characteristics comparable to those observed for U.S. and Indian children, and could be used in sub-Saharan African children without modification, extending the utility of this weight estimation strategy. PMID:24650051
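The agreement statistics reported above are straightforward to compute from paired actual/estimated weights. A minimal sketch (my own illustration on made-up numbers, not the study's data):

```python
import numpy as np

def weight_metrics(actual, estimated):
    """ME, MPE, RMSE, and percent-within-X% agreement statistics for
    paired actual vs. estimated weights (kg)."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    err = estimated - actual
    pe = 100 * err / actual                        # percentage error
    return {
        "ME": err.mean(),                          # mean error, kg
        "MPE": pe.mean(),                          # mean percentage error
        "RMSE": np.sqrt((err ** 2).mean()),
        "P10": 100 * np.mean(np.abs(pe) <= 10),    # % within 10% of actual
        "P20": 100 * np.mean(np.abs(pe) <= 20),
        "P30": 100 * np.mean(np.abs(pe) <= 30),
    }

# Four hypothetical children (actual vs. estimated weight, kg)
m = weight_metrics([20, 25, 30, 40], [21, 24, 33, 44])
print(round(m["RMSE"], 2), m["P20"])
```

Percent-within-20% is often the headline number clinically, since dosing errors under ~20% are usually considered tolerable.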
Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids.
Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz
2016-02-01
Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and properly choosing a geometry optimization method suited to the specific structure of the tested compounds. Herein, we examine the influence of the ionic liquids' (ILs) geometry optimization methods on the predictive ability of QSPR models by comparing three models. The models were developed from the same experimental density data collected for 66 ionic liquids, but employed molecular descriptors calculated from molecular geometries optimized at three different levels of theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model in which the descriptors were calculated using the ab initio HF/6-311+G* method showed the best predictive ability ([Formula: see text] = 0.87). However, the PM7-based model has comparable quality parameters ([Formula: see text] = 0.84). The obtained results indicate that semi-empirical methods (faster and less expensive in CPU time) can be successfully employed for geometry optimization in QSPR studies of ionic liquids. PMID:26830600
Demographic estimation methods for plants with unobservable life-states
Kery, M.; Gregg, K.B.; Schaub, M.
2005-01-01
Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10 year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time of death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous
A new method for parameter estimation in nonlinear dynamical equations
NASA Astrophysics Data System (ADS)
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaos model—the Lorenz equation (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether some or all parameters of the Lorenz equation are unknown. Moreover, the new method has a good convergence rate. Noises are inevitable in observational data, so the influence of observational noise on the performance of the presented method has been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB. This indicates that the presented method is also somewhat robust to noise.
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.
Assessing the sensitivity of methods for estimating principal causal effects.
Stuart, Elizabeth A; Jo, Booil
2015-12-01
The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE) - the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition - is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this article, we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood ('joint') method that assumes the 'exclusion restriction' (ER), and a propensity score-based method that relies on 'principal ignorability.' We detail the assumptions underlying each approach, and assess each method's sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the ER-based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership. PMID:21971481
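Under the exclusion restriction, the simplest CACE estimator is the classic moment (instrumental-variables/Wald) version: the intention-to-treat effect divided by the compliance rate. This sketch illustrates that moment estimator, not the joint maximum-likelihood or propensity-score methods compared in the article; the simulation parameters are assumptions.

```python
import numpy as np

def cace_iv(z, d, y):
    """Moment-based CACE under the exclusion restriction (one-sided
    noncompliance): ITT effect divided by the compliance rate.
    z: randomized assignment (0/1), d: treatment received, y: outcome."""
    z, d, y = (np.asarray(a, float) for a in (z, d, y))
    itt = y[z == 1].mean() - y[z == 0].mean()
    compliance = d[z == 1].mean() - d[z == 0].mean()
    return itt / compliance

# Simulated trial: 60% compliers, true complier effect = 2.0
rng = np.random.default_rng(2)
n = 200_000
z = rng.integers(0, 2, n)
complier = rng.random(n) < 0.6
d = z & complier                        # controls cannot access treatment
y = rng.normal(0, 1, n) + 2.0 * d
est = cace_iv(z, d, y)
print(round(est, 1))
```

The division by the compliance rate is exactly why strong predictors of compliance help: when compliance is low, the denominator is small and the estimator's variance inflates.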
A Simple Method to Estimate Harvest Index in Grain Crops
Technology Transfer Automated Retrieval System (TEKTRAN)
Several methods have been proposed to simulate yield in crop simulation models. In this work we present a simple method to estimate harvest index (HI) of grain crops based on fractional post-anthesis growth (fG = fraction of growth that occurred post-anthesis). We propose that there is a linear or c...
A Study of Methods for Estimating Distributions of Test Scores.
ERIC Educational Resources Information Center
Cope, Ronald T.; Kolen, Michael J.
This study compared five density estimation techniques applied to samples from a population of 272,244 examinees' ACT English Usage and Mathematics Usage raw scores. Unsmoothed frequencies, kernel method, negative hypergeometric, four-parameter beta compound binomial, and Cureton-Tukey methods were applied to 500 replications of random samples of…
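One of the smoothing families such comparisons include is the kernel method. A minimal Gaussian-kernel sketch on simulated raw scores (my own illustration, not the authors' implementation, and binomial scores are only a stand-in for real test data):

```python
import numpy as np

def kernel_density(scores, grid, bandwidth):
    """Gaussian-kernel density estimate of a score distribution on `grid`."""
    scores = np.asarray(scores, float)
    u = (grid[:, None] - scores[None, :]) / bandwidth
    return (np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)).mean(axis=1) / bandwidth

rng = np.random.default_rng(3)
scores = rng.binomial(40, 0.6, size=5000)        # simulated raw scores, 0..40
grid = np.linspace(0.0, 40.0, 401)
dens = kernel_density(scores, grid, bandwidth=1.5)
area = dens.sum() * (grid[1] - grid[0])          # Riemann check: ~1
peak = grid[int(np.argmax(dens))]                # near the mean score of 24
print(round(area, 2), peak)
```

The bandwidth plays the same role the smoothing parameters play in the other four techniques: too small reproduces the unsmoothed frequencies, too large washes out genuine features of the score distribution.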
Evaluation of alternative methods for estimating reference evapotranspiration
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration is an important component in water-balance and irrigation scheduling models. While the FAO-56 Penman-Monteith method has become the de facto standard for estimating reference evapotranspiration (ETo), it is a complex method requiring several weather parameters. Required weather ...
Precision of two methods for estimating age from burbot otoliths
Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.
2011-01-01
Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (mean age = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths, the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.
Time domain attenuation estimation method from ultrasonic backscattered signals
Ghoshal, Goutam; Oelze, Michael L.
2012-01-01
Ultrasonic attenuation is important not only as a parameter for characterizing tissue but also for compensating other parameters that are used to classify tissues. Several techniques have been explored for estimating ultrasonic attenuation from backscattered signals. In the present study, a technique is developed to estimate the local ultrasonic attenuation coefficient by analyzing the time domain backscattered signal. The proposed method incorporates an objective function that combines the diffraction pattern of the source/receiver with the attenuation slope in an integral equation. The technique was assessed through simulations and validated through experiments with a tissue-mimicking phantom and fresh rabbit liver samples. The attenuation values estimated using the proposed technique were compared with the attenuation estimated using insertion loss measurements. For a data block size of 15 pulse lengths axially and 15 beamwidths laterally, the mean attenuation estimates from the tissue-mimicking phantoms were within 10% of the estimates using insertion loss measurements. With a data block size of 20 pulse lengths axially and 20 beamwidths laterally, the error in the attenuation values estimated from the liver samples was within 10% of the attenuation values estimated from the insertion loss measurements. PMID:22779499
A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields
NASA Astrophysics Data System (ADS)
Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.
2014-12-01
Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law that links reflectivity to rainfall intensity have to date precluded the widespread use of radars for quantitative rainfall estimates in hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity. More importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the Mean Square Error (MSE) of the estimated rainfall by 16% compared to a traditionally fitted Z-R relation. Rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula-based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates than the SIR method, particularly in areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method. The weight for each time period at each location is calculated based on the expected estimation error of each method.
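The merging step can be sketched as a generic minimum-variance weighted combination, with the weight at each pixel set by the two methods' expected error variances. This is an illustrative sketch of that idea only; the abstract's actual weighting scheme, and the error-variance fields below, are not from the paper.

```python
import numpy as np

def merge_fields(radar, gauge, var_radar, var_gauge):
    """Combine two rainfall fields with pixel-wise weights inversely
    proportional to each method's expected error variance."""
    w = var_gauge / (var_radar + var_gauge)      # weight on the radar field
    return w * radar + (1 - w) * gauge

# Toy 2x2 domain: radar reliable in the top row, gauges in the bottom row
radar = np.array([[4.0, 6.0], [2.0, 0.0]])       # mm/h
gauge = np.array([[5.0, 5.0], [3.0, 1.0]])       # mm/h, interpolated
var_radar = np.array([[1.0, 1.0], [4.0, 4.0]])   # assumed error variances
var_gauge = np.array([[4.0, 4.0], [1.0, 1.0]])
merged = merge_fields(radar, gauge, var_radar, var_gauge)
print(merged)
```

Making the variances vary in time as well as space gives exactly the temporally and spatially varying weights the abstract describes: wherever one method is expected to err more, its weight shrinks.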
Estimating Population Size Using the Network Scale Up Method
Maltiel, Rachael; Raftery, Adrian E.; McCormick, Tyler H.; Baraff, Aaron J.
2015-01-01
We develop methods for estimating the size of hard-to-reach populations from data collected using network-based questions on standard surveys. Such data arise by asking respondents how many people they know in a specific group (e.g. people named Michael, intravenous drug users). The Network Scale up Method (NSUM) is a tool for producing population size estimates using these indirect measures of respondents’ networks. Killworth et al. (1998a,b) proposed maximum likelihood estimators of population size for a fixed effects model in which respondents’ degrees or personal network sizes are treated as fixed. We extend this by treating personal network sizes as random effects, yielding principled statements of uncertainty. This allows us to generalize the model to account for variation in people’s propensity to know people in particular subgroups (barrier effects), such as their tendency to know people like themselves, as well as their lack of awareness of or reluctance to acknowledge their contacts’ group memberships (transmission bias). NSUM estimates also suffer from recall bias, in which respondents tend to underestimate the number of members of larger groups that they know, and conversely for smaller groups. We propose a data-driven adjustment method to deal with this. Our methods perform well in simulation studies, generating improved estimates and calibrated uncertainty intervals, as well as in back estimates of real sample data. We apply them to data from a study of HIV/AIDS prevalence in Curitiba, Brazil. Our results show that when transmission bias is present, external information about its likely extent can greatly improve the estimates. The methods are implemented in the NSUM R package. PMID:26949438
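The fixed-effects ratio estimator of Killworth et al. that this work extends can be sketched in a few lines: estimate each respondent's degree from their counts in groups of known size, then scale up the hidden-group counts. This is the basic moment version only, not the random-effects Bayesian model the abstract proposes, and the population numbers below are made up.

```python
import numpy as np

def nsum_estimate(y, known_sizes, known_counts, N):
    """Killworth-style network scale-up: degrees d_i are estimated from
    counts in groups of known size, then the hidden-group size is
    N * sum(y_i) / sum(d_i). y: counts of hidden-group contacts;
    known_counts: (n_resp, n_groups) counts in the known groups."""
    d = known_counts.sum(axis=1) * N / np.sum(known_sizes)   # degree estimates
    return N * np.sum(y) / np.sum(d)

# Simulated survey: 1500 respondents in a population of one million
rng = np.random.default_rng(4)
N, hidden = 1_000_000, 5_000
sizes = np.array([10_000, 20_000, 50_000])                  # known group sizes
degrees = rng.integers(100, 500, size=1500)                 # true network sizes
known_counts = rng.binomial(degrees[:, None], sizes / N)    # contacts per group
y = rng.binomial(degrees, hidden / N)                       # hidden contacts
est = nsum_estimate(y, sizes, known_counts, N)
print(round(est))
```

The barrier, transmission, and recall biases discussed above all act on `y` or on the degree estimates, which is why the extended model replaces these simple ratios with random effects and adjustment terms.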
Benchmarking Method for Estimation of Biogas Upgrading Schemes
NASA Astrophysics Data System (ADS)
Blumberga, D.; Kuplais, Ģ.; Veidenbergs, I.; Dāce, E.
2009-01-01
The paper describes a new benchmarking method proposed for evaluating different biogas upgrading schemes. The method has been developed to compare the indicators of alternative biogas purification and upgrading solutions and their threshold values. The chosen indicators cover both economic and ecological aspects of these solutions, e.g. the prime cost of biogas purification and storage, and the cost efficiency of greenhouse gas emission reduction. The proposed benchmarking method has been tested at "Daibe", a municipal solid waste landfill.
New method for the estimation of platelet ascorbic acid
Lloyd, J. V.; Davis, P. S.; Lander, Harry
1969-01-01
Present techniques for the estimation of platelet ascorbic acid allow interference by other substances in the sample. A new and more specific method of analysis is presented. The proposed method owes its increased specificity to resolution of the extract by thin-layer chromatography. By this means ascorbic acid is separated from other reducing substances present. The separated ascorbic acid is eluted from the thin layer and estimated by a new and very sensitive procedure: ascorbic acid is made to react with ferric chloride, and the ferrous ions so formed are estimated spectrophotometrically by the coloured derivative which they form with tripyridyl-s-triazine. Results obtained with normal blood platelets were consistently lower than simultaneous determinations by the dinitrophenylhydrazine (DNPH) method. PMID:5798633
Fault detection in electromagnetic suspension systems with state estimation methods
Sinha, P.K.; Zhou, F.B.; Kutiyal, R.S.
1993-11-01
High-speed maglev vehicles need a high level of safety, which depends on the reliability of the whole vehicle system. There are many ways of attaining high reliability. The conventional approach uses redundant hardware with majority-vote logic circuits; hardware redundancy costs more, weighs more, and occupies more space than analytical redundancy. Analytically redundant systems use parameter identification and state estimation methods based on system models to detect and isolate faults in instruments (sensors), actuators, and components. In this paper the authors use a Luenberger observer to estimate three state variables of the electromagnetic suspension system: position (airgap), vehicle velocity, and vertical acceleration. These estimates are compared with the corresponding sensor outputs for fault detection. The paper considers fault detection and isolation (FDI) for the accelerometer, the sensor which determines ride quality.
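The observer-based fault-detection idea can be illustrated with a toy discrete-time Luenberger observer: predict the state from the model, correct with the measured output, and compare estimate against sensor. The matrices and gain below are illustrative values chosen for stability, not a real electromagnetic suspension model; in a fault-detection scheme, a residual that stays above a threshold would flag a possible sensor fault:

```python
import numpy as np

# Toy discrete-time Luenberger observer sketch (illustrative matrices,
# not a real maglev suspension model).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # simple position/velocity dynamics
B = np.array([[0.005], [0.1]])           # input matrix
C = np.array([[1.0, 0.0]])               # we measure position (airgap)
L = np.array([[0.5], [0.8]])             # observer gain (A - L*C is stable)

def observer_step(x_hat, u, y):
    """One observer update: model prediction plus correction by the
    output residual y - C*x_hat."""
    y_hat = C @ x_hat
    return A @ x_hat + B * u + L @ (y - y_hat)

def residual(x_hat, y):
    """Discrepancy between the sensor output and the observer's estimate;
    large persistent values suggest a sensor fault."""
    return float(abs(y - (C @ x_hat))[0, 0])
```

Starting the observer from a wrong initial estimate, the residual decays as the error dynamics (A - L·C) contract, so a healthy sensor yields a residual near zero after a short transient.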
A novel tracer method for estimating sewer exfiltration
NASA Astrophysics Data System (ADS)
Rieckermann, J.; Borsuk, M.; Reichert, P.; Gujer, W.
2005-05-01
A novel method is presented to estimate exfiltration from sewer systems using artificial tracers. The method relies upon use of an upstream indicator signal and a downstream reference signal to eliminate the dependence of exfiltration estimates on the accuracy of discharge measurement. An experimental design, a data analysis procedure, and an uncertainty assessment process are described and illustrated by a case study. In a 2-km reach of unknown condition, exfiltration was estimated at 9.9 ± 2.7%. Uncertainty in this estimate was primarily due to the use of sodium chloride (NaCl) as the tracer substance: NaCl is measured via conductivity, and it is present at non-negligible background levels in wastewater, confounding accurate identification of tracer peaks. As estimates of exfiltration should have as low a measurement error as possible, future development of the method will concentrate on improved experimental design and tracer selection. Although the method is not intended to replace traditional CCTV inspections, it can provide additional information to urban water managers for rational rehabilitation planning.
Models and estimation methods for clinical HIV-1 data
NASA Astrophysics Data System (ADS)
Verotta, Davide
2005-12-01
Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, and income, which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome, and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, and (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum a posteriori approximations (MAP) and (iv) Markov chain Monte Carlo (MCMC). Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple-drug regimens.
Comparison of methods of estimating body fat in normal subjects and cancer patients
Cohn, S.H.; Ellis, K.J.; Vartsky, D.; Sawitsky, A.; Gartenhaus, W.; Yasumura, S.; Vaswani, A.N.
1981-12-01
Total body fat can be indirectly estimated by the following noninvasive techniques: determination of lean body mass by measurement of body potassium or body water, and determination of density by underwater weighing or by skinfold measurements. The measurement of total body nitrogen by neutron activation provides another technique for estimating lean body mass and hence body fat. The nitrogen measurement can also be combined with the measurement of total body potassium in a two compartment model of the lean body mass from which another estimate of body fat can be derived. All of the above techniques are subject to various errors and are based on a number of assumptions, some of which are incompletely validated. These techniques were applied to a population of normal subjects and to a group of cancer patients. The advantages and disadvantages of each method are discussed in terms of their ability to estimate total body fat.
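Two of the indirect estimates mentioned (density from underwater weighing, lean body mass from total body potassium) reduce to simple formulas. A hedged sketch using Siri's density equation and a commonly assumed potassium content of lean tissue (roughly 68.1 mmol K per kg of lean mass, an approximation that varies by sex and population, illustrating the assumption-laden nature of these methods noted in the abstract):

```python
# Two standard indirect body-fat estimates (sketch; constants are
# conventional approximations, not values from the cited study).

def fat_percent_siri(density_g_per_ml):
    """Siri's equation: percent body fat from whole-body density
    (e.g., from underwater weighing)."""
    return 495.0 / density_g_per_ml - 450.0

def fat_kg_from_potassium(total_k_mmol, body_weight_kg, k_per_kg_lean=68.1):
    """Body fat (kg) as body weight minus lean mass, where lean mass is
    inferred from total body potassium (assumed ~68.1 mmol K/kg lean)."""
    lean_kg = total_k_mmol / k_per_kg_lean
    return body_weight_kg - lean_kg
```

For example, a density of 1.1 g/ml corresponds to roughly 0% fat under Siri's equation, while 3,405 mmol of total body potassium in a 70 kg subject implies about 50 kg of lean mass and 20 kg of fat.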
Estimation Method of Body Temperature from Upper Arm Temperature
NASA Astrophysics Data System (ADS)
Suzuki, Arata; Ryu, Kazuteru; Kanai, Nobuyuki
This paper proposes a method for estimating body temperature from the relation between upper-arm temperature and atmospheric temperature. Conventional methods measure at the armpit or orally, because temperature taken at the body surface is influenced by the atmospheric temperature. However, there is a correlation between body surface temperature and atmospheric temperature, and by using this correlation the body temperature can be estimated from the body surface temperature. The proposed method makes it possible to measure body temperature with a temperature sensor embedded in a blood pressure monitor cuff, so blood pressure and body temperature can be measured simultaneously. The effectiveness of the proposed method is verified through an actual body temperature experiment. The proposed method might contribute to reducing medical staff workloads in home medical care.
Electromechanical Mode Online Estimation using Regularized Robust RLS Methods
Zhou, Ning; Trudnowski, Daniel; Pierre, John W; Mittelstadt, William
2008-11-01
This paper proposes a regularized robust recursive least-squares (R3LS) method for on-line estimation of power-system electromechanical modes based on synchronized phasor measurement unit (PMU) data. The proposed method utilizes an autoregressive moving average exogenous (ARMAX) model to account for typical measurement data, which include low-level pseudo-random probing, ambient, and ringdown data. A robust objective function is utilized to reduce the negative influence of non-typical data, which include outliers and missing data. A dynamic regularization method is introduced to help include a priori knowledge about the system and reduce the influence of under-determined problems. Based on a 17-machine simulation model, it is shown through the Monte Carlo method that the proposed R3LS method can estimate and track electromechanical modes by effectively using combined typical and non-typical measurement data.
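A plain recursive least-squares update conveys the recursive core of this approach. The sketch below fits only a simple AR model with a forgetting factor; it omits the paper's robust objective, dynamic regularization, and ARMAX exogenous/moving-average terms:

```python
import numpy as np

# Plain recursive least-squares (RLS) sketch: fits AR coefficients to a
# measured signal one sample at a time. Simplified relative of the R3LS
# method described above (no robust loss, no regularization, no ARMAX
# structure). lam is the usual forgetting factor.
def rls_fit(y, order=2, lam=0.99):
    y = np.asarray(y, dtype=float)
    theta = np.zeros(order)                   # AR coefficient estimates
    P = np.eye(order) * 1000.0                # large initial covariance
    for k in range(order, len(y)):
        phi = y[k - order:k][::-1]            # regressor: newest output first
        e = y[k] - phi @ theta                # one-step prediction error
        g = P @ phi / (lam + phi @ P @ phi)   # gain vector
        theta = theta + g * e                 # coefficient update
        P = (P - np.outer(g, phi @ P)) / lam  # covariance update
    return theta
```

On a noise-driven AR(2) signal with coefficients (1.5, -0.7), the recursion converges to estimates close to the true values; with lam = 1.0 it coincides with batch least squares.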
A review of action estimation methods for galactic dynamics
NASA Astrophysics Data System (ADS)
Sanders, Jason L.; Binney, James
2016-04-01
We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent, or iterative, methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.
Methods of evaluating the spermatogenic ability of male raccoons (Procyon lotor).
Uno, Taiki; Kato, Takuya; Seki, Yoshikazu; Kawakami, Eiichi; Hayama, Shin-ichi
2014-01-01
Feral raccoons (Procyon lotor) have been growing in number in Japan, and they are becoming a problematic invasive species. Consequently, they are commonly captured and killed in pest control programs. For effective population control of feral raccoons, it is necessary to understand their reproductive physiology and ecology. Although the reproductive traits of female raccoons are well known, those of the males are not well understood because specialized knowledge and facilities are required to study them. In this study, we first used a simple evaluation method to assess spermatogenesis and the presence of spermatozoa in the tail of the epididymis of feral male raccoons by histologically examining the testis and epididymis. We then evaluated the possibility of using seven variables (body weight, body length, body mass index, testicular weight, epididymal weight, testicular size, and gonadosomatic index (GSI)) to estimate spermatogenesis and the presence of spermatozoa in the tail of the epididymis. GSI and body weight were chosen as criteria for spermatogenesis, and GSI was chosen as the criterion for the presence of spermatozoa in the tail of the epididymis. Because GSI is calculated from body weight and testicular weight, this model can be used to estimate the reproductive state of male raccoons regardless of season and age when just these two parameters are known. In this study, GSI was demonstrated to be an index of reproductive state in male raccoons. To our knowledge, this is the first report of such a use for GSI in a member of the Carnivora. PMID:25168086
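The index itself is a one-line calculation. A sketch with GSI expressed as a percentage of body weight, the conventional definition; the paper's exact scaling and fitted thresholds are not reproduced here:

```python
# Gonadosomatic index (GSI) sketch: gonad (testicular) weight relative to
# body weight, expressed as a percentage. The conventional formula, not
# the paper's fitted decision criterion.
def gonadosomatic_index(testis_weight_g, body_weight_g):
    return 100.0 * testis_weight_g / body_weight_g
```

For example, 8 g of testicular weight in an 8 kg animal gives a GSI of 0.1%.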
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1), considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to the "multiple shooting" method previously proposed, but is simpler and has smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is also discussed. This appears to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically). PMID:15089376
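As a baseline for the ML approaches discussed, the simplest estimator of the logistic-map parameter has a closed form. A sketch of the naive least-squares fit (not the paper's segmentation-fitting ML method), which minimizes the one-step prediction error and is exact on noiseless data but biased when observational noise is present:

```python
# Naive least-squares estimate of r in the logistic map
# x[k+1] = r * x[k] * (1 - x[k]), minimizing
# sum_k (x[k+1] - r * x[k] * (1 - x[k]))**2 in closed form.
# Baseline only -- biased under observational noise, unlike the
# segmentation-fitting ML method described in the abstract.
def logistic_ls_estimate(x):
    g = [xi * (1.0 - xi) for xi in x[:-1]]
    num = sum(xn * gi for xn, gi in zip(x[1:], g))
    den = sum(gi * gi for gi in g)
    return num / den
```

On a noiseless orbit the estimate recovers r exactly (up to floating-point rounding); adding measurement noise to x shifts the estimate, which is the bias the paper's methods are designed to avoid.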
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
NASA Astrophysics Data System (ADS)
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (one-way coupled to PIHM) and a fixed seasonal LAI method. Simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. Hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant-physiology basis. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Thompson, J K; Spana, R E
1991-08-01
The relationship between visuospatial ability and size estimation accuracy was assessed in 69 normal college females. In general, correlations indicated small associations between visuospatial deficits and size overestimation and little relationship between visuospatial ability and level of bulimic disturbance. Implications for research on the size overestimation of body image are addressed. PMID:1945715
Global parameter estimation methods for stochastic biochemical systems
2010-01-01
Background: The importance of stochasticity in cellular processes having low number of molecules has resulted in the development of stochastic models such as chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results: Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions: The parameter estimation methodologies
Monte Carlo Error Estimation Applied to Nondestructive Assay Methods
Estep, R.; et al.
2000-06-01
Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
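The replicate-randomization idea can be sketched generically. In this sketch Gaussian noise with variance equal to the count approximates Poisson counting statistics for large counts, and `analysis` stands in for any complex algorithm (neural network, tomographic reconstruction); the spread of the replicate results estimates the propagated error:

```python
import random
import statistics

# Sketch of Monte Carlo replicate randomization for error propagation:
# perturb measured counts with counting-statistics noise to build N
# synthetic replicates, run each through the analysis, and take the
# spread of the results as the propagated uncertainty. 'analysis' is a
# stand-in for a complex algorithm (neural net, tomographic recon).
def monte_carlo_error(counts, analysis, n_replicates=100, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(n_replicates):
        # Gaussian with sigma = sqrt(count) ~ Poisson for large counts
        replicate = [rng.gauss(c, c ** 0.5) for c in counts]
        results.append(analysis(replicate))
    return statistics.mean(results), statistics.stdev(results)
```

For a trivial `analysis` such as summing four channels of ~10,000 counts each, the replicate standard deviation comes out near the analytic value of sqrt(40,000) = 200, illustrating the convergence behavior the abstract describes.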
Estimation of uncertainty for contour method residual stress measurements
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
A new colorimetric method for the estimation of glycosylated hemoglobin.
Nayak, S S; Pattabiraman, T N
1981-02-01
A new colorimetric method, based on the phenol-sulphuric acid reaction of carbohydrates, is described for the determination of glycosylated hemoglobin. Hemolyzates were treated with 1 mol/l oxalic acid in 2 mol/l HCl for 4 h at 100 °C, the protein was precipitated with trichloroacetic acid, and the free sugars and hydroxymethylfurfural in the protein-free supernatant were treated with phenol and sulphuric acid to form the color. The new method is compared to the thiobarbituric acid method and the ion-exchange chromatographic method for the estimation of glycosylated hemoglobin in normal subjects and diabetic patients. The increase in glycosylated hemoglobin in diabetic patients as estimated by the phenol-sulphuric acid method was more significant (P < 0.001) than the increase observed by the thiobarbituric acid method (P < 0.01). The correlation between the phenol-sulphuric acid method and the column method was better (r = 0.91) than that between the thiobarbituric acid method and the column method (r = 0.84). No significant correlation between fasting or postprandial blood sugar levels and glycosylated hemoglobin level as determined by the two colorimetric methods was observed in diabetic patients. PMID:7226519
Inertial sensor-based methods in walking speed estimation: a systematic review.
Yang, Shuozhi; Li, Qingguo
2012-01-01
Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have gradually been introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to the improving performance and decreasing cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic search engines/databases: PubMed, ISI Web of Knowledge, SportDiscus, and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm. PMID:22778632
Correction of Misclassifications Using a Proximity-Based Estimation Method
NASA Astrophysics Data System (ADS)
Niemistö, Antti; Shmulevich, Ilya; Lukin, Vladimir V.; Dolia, Alexander N.; Yli-Harja, Olli
2004-12-01
An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
Detecting diversity: emerging methods to estimate species diversity.
Iknayan, Kelly J; Tingley, Morgan W; Furnas, Brett J; Beissinger, Steven R
2014-02-01
Estimates of species richness and diversity are central to community and macroecology and are frequently used in conservation planning. Commonly used diversity metrics account for undetected species primarily by controlling for sampling effort. Yet the probability of detecting an individual can vary among species, observers, survey methods, and sites. We review emerging methods to estimate alpha, beta, gamma, and metacommunity diversity through hierarchical multispecies occupancy models (MSOMs) and multispecies abundance models (MSAMs) that explicitly incorporate observation error in the detection process for species or individuals. We examine advantages, limitations, and assumptions of these detection-based hierarchical models for estimating species diversity. Accounting for imperfect detection using these approaches has influenced conclusions of comparative community studies and creates new opportunities for testing theory. PMID:24315534
Inverse method for estimating shear stress in machining
NASA Astrophysics Data System (ADS)
Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.
2016-01-01
An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.
NASA Astrophysics Data System (ADS)
Forbes, B. T.
2015-12-01
Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.
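For context, the FAO simplification of the Blaney-Criddle formula shows the meteorological core of the method named above; the Modified Blaney-Criddle procedure used in Arizona adds crop coefficients and monthly corrections not shown in this sketch:

```python
# FAO simplified Blaney-Criddle reference ET sketch. The modified form
# used operationally adds crop and climate corrections not shown here.
def blaney_criddle_eto(mean_temp_c, day_pct):
    """Reference ET (mm/day). day_pct is the mean daily percentage of
    annual daytime hours for the month (e.g., ~0.27 at mid-latitudes)."""
    return day_pct * (0.46 * mean_temp_c + 8.13)
```

For a month with a mean temperature of 25 °C and day_pct of 0.27, this gives roughly 5.3 mm/day of reference ET, which would then be scaled by crop-specific coefficients to obtain crop water requirements.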
A method to determine the ability of drugs to diffuse through the blood-brain barrier.
Seelig, A; Gottschlich, R; Devant, R M
1994-01-01
A method has been devised for predicting the ability of drugs to cross the blood-brain barrier. The criteria depend on the amphiphilic properties of a drug as reflected in its surface activity. The assessment was made with various drugs that either penetrate or do not penetrate the blood-brain barrier. The surface activity of these drugs was quantified by their Gibbs adsorption isotherms in terms of three parameters: (i) the onset of surface activity, (ii) the critical micelle concentration, and (iii) the surface area requirement of the drug at the air/water interface. A calibration diagram is proposed in which the critical micelle concentration is plotted against the concentration required for the onset of surface activity. Three different regions are easily distinguished in this diagram: a region of very hydrophobic drugs which fail to enter the central nervous system because they remain adsorbed to the membrane, a central area of less hydrophobic drugs which can cross the blood-brain barrier, and a region of relatively hydrophilic drugs which do not cross the blood-brain barrier unless applied at high concentrations. This diagram can be used to predict reliably the central nervous system permeability of an unknown compound from a simple measurement of its Gibbs adsorption isotherm. PMID:8278409
Williams, Justin H. G.; Nicolson, Andrew T. A.; Clephan, Katie J.; de Grauw, Haro; Perrett, David I.
2013-01-01
Social communication relies on intentional control of emotional expression. Its variability across cultures suggests important roles for imitation in developing control over enactment of subtly different facial expressions and therefore skills in emotional communication. Both empathy and the imitation of an emotionally communicative expression may rely on a capacity to share both the experience of an emotion and the intention or motor plan associated with its expression. Therefore, we predicted that facial imitation ability would correlate with empathic traits. We built arrays of visual stimuli by systematically blending three basic emotional expressions in controlled proportions. Raters then assessed accuracy of imitation by reconstructing the same arrays using photographs of participants’ attempts at imitations of the stimuli. Accuracy was measured as the mean proximity of the participant photographs to the target stimuli in the array. Levels of performance were high, and rating was highly reliable. More empathic participants, as measured by the empathy quotient (EQ), were better facial imitators and, in particular, performed better on the more complex, blended stimuli. This preliminary study offers a simple method for the measurement of facial imitation accuracy and supports the hypothesis that empathic functioning may utilise motor control mechanisms which are also used for emotional expression. PMID:23626756
Optimal Input Signal Design for Data-Centric Estimation Methods.
Deshpande, Sunil; Rivera, Daniel E
2013-01-01
Data-centric estimation methods such as Model-on-Demand and Direct Weight Optimization form attractive techniques for estimating unknown functions from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point with the process repeated at each new operating point. This paper examines the design of optimal input signals formulated to produce informative data to be used by local modeling procedures. The proposed method specifically addresses the distribution of the regressor vectors. The design is examined for a linear time-invariant system under amplitude constraints on the input. The resulting optimization problem is solved using semidefinite relaxation methods. Numerical examples show the benefits in comparison to a classical PRBS input design. PMID:24317042
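A generic sketch of the local-modeling idea behind Model-on-Demand-style estimators described above (not the paper's algorithm; the tricube weighting and one-dimensional regressor are assumptions of this sketch): at each operating point, fit a weighted local linear model to the nearest regressors in the database and evaluate it at the query point.

```python
def local_estimate(db, xq, k=5):
    """Estimate f(xq) from a database of (x, y) pairs by fitting a
    tricube-weighted local linear model to the k regressors nearest
    the query point, then evaluating the fit at xq.  A fresh local
    fit is made at every new operating point."""
    nearest = sorted(db, key=lambda p: abs(p[0] - xq))[:k]
    h = max(abs(x - xq) for x, _ in nearest) or 1.0  # local bandwidth
    w = [(1.0 - min(abs(x - xq) / h, 1.0) ** 3) ** 3 for x, _ in nearest]
    # Weighted least squares for y = a + b*(x - xq); a is the estimate at xq.
    sw = sum(w)
    sx = sum(wi * (x - xq) for wi, (x, _) in zip(w, nearest))
    sy = sum(wi * y for wi, (_, y) in zip(w, nearest))
    sxx = sum(wi * (x - xq) ** 2 for wi, (x, _) in zip(w, nearest))
    sxy = sum(wi * (x - xq) * y for wi, (x, y) in zip(w, nearest))
    det = sw * sxx - sx * sx
    if det == 0.0:  # degenerate neighbourhood: fall back to weighted mean
        return sy / sw
    return (sxx * sy - sx * sxy) / det
```

The input-design question the paper studies is precisely how to populate `db` so that every likely query point has informative neighbours.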
NASA Astrophysics Data System (ADS)
Leavesley, G.; Hay, L.; Viger, R.; de Jong, C.
The use of distributed-parameter models in mountainous terrain requires the ability to define the spatial and temporal distributions of input meteorological variables and the physical basin characteristics that affect the processes being simulated. Application of these models to complex problems, such as assessing the impacts of land-use and climate change, limits one's ability to calibrate model parameters and necessitates the use of parameter-estimation methods that rely on measurable climate and basin characteristics. The increasing availability of high-resolution spatial and temporal data sets now enables the development and evaluation of a variety of parameter-estimation methods over a wide range of climatic and physiographic regions. For example, parameters related to basin characteristics can be estimated from digital soils, vegetation, and topographic databases. Parameters related to the temporal and spatial distribution of meteorological variables, such as precipitation and temperature, can be estimated from multiple linear regression relations using latitude, longitude, and elevation of measurement stations and basin subareas. This approach also supports the use of statistical and dynamical downscaling of atmospheric model output for use in distributed hydrological model applications. A set of tools to objectively apply and evaluate distributed meteorological and hydrological parameter-estimation methods, and process models, is being developed using the U.S. Geological Survey's Modular Modeling System (MMS). Tools include methods to analyze model parameters and evaluate the extent to which uncertainty in model parameters affects uncertainty in simulation results. Methodologies that integrate remotely sensed information with the distributed-model results are being incorporated in the tool set to facilitate the assessment of the spatial and temporal accuracy of model results. An application of selected models and parameter-estimation methods is
A study of methods to estimate debris flow velocity
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. ?? 2008 Springer-Verlag.
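The superelevation back-calculation the abstract discusses is commonly written with the forced-vortex relation; a minimal sketch, where the function name, SI units, and the default correction factor k = 1 are assumptions for illustration only.

```python
import math

def superelevation_velocity(radius_m, superelevation_m, width_m, g=9.81, k=1.0):
    """Back-calculate mean debris-flow velocity from a superelevation
    event with the common forced-vortex relation
        v = sqrt(k * g * Rc * dh / w),
    where Rc is the bend radius of curvature (the subjective input
    the abstract discusses), dh the cross-channel flow-surface
    elevation difference, w the flow width, and k an empirical
    correction factor (defaulted to 1 here purely for illustration)."""
    return math.sqrt(k * g * radius_m * superelevation_m / width_m)
```

Because v scales with the square root of Rc, the scale-dependent radius estimates reported in the abstract translate directly into velocity uncertainty.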
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.
Stress intensity estimates by a computer assisted photoelastic method
NASA Technical Reports Server (NTRS)
Smith, C. W.
1977-01-01
Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.
Nonparametric methods for drought severity estimation at ungauged sites
NASA Astrophysics Data System (ADS)
Sadri, S.; Burn, D. H.
2012-12-01
The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
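The jackknife evaluation described above can be sketched generically; the helper name and the stand-in predictor are assumptions, since the abstract does not specify implementation details of the three regression methods.

```python
def jackknife_errors(sites, fit_predict):
    """Leave-one-out evaluation as described in the abstract: for
    each site, fit on the remaining sites and predict the held-out
    severity quantile; returns the list of (observed - predicted)
    errors.  `fit_predict(train, x)` is a stand-in for any of the
    three methods compared (MLR, RBF network, LS-SVR)."""
    errors = []
    for i, (x_i, y_i) in enumerate(sites):
        train = sites[:i] + sites[i + 1:]
        errors.append(y_i - fit_predict(train, x_i))
    return errors
```

Comparing the error lists produced by each method-duration pair is what identifies LS-SVR as the best quantile estimator in the study.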
Three Different Methods of Estimating LAI in a Small Watershed
NASA Astrophysics Data System (ADS)
Speckman, H. N.; Ewers, B. E.; Beverly, D.
2015-12-01
Leaf area index (LAI) is a critical input of models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI, most of which are labor intensive, and all often fail to converge on similar estimates. Recent large-scale bark beetle induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates due to the lack of differentiation between live and dead branches in dense canopy. Our study aims to quantify LAI using three different LAI methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle-infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI using in-situ hemispherical canopy photographs that were then analyzed with Hemisfer software. The second technique used the Kaufmann 1982 allometrics with forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark beetle driven mortality. The final technique used airborne light detection and ranging (LIDAR) first DMS returns to estimate canopy heights and crown area; LIDAR final returns provided topographical information and were ground-truthed during forest inventories. Once the data were collected, a fractal analysis was conducted comparing the three methods. Species composition was driven by slope position and elevation. Ultimately the three different techniques provided very different estimations of LAI, but each had its advantage: estimates from hemisphere photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR were able to quickly and
A New Method for Deriving Global Estimates of Maternal Mortality.
Wilmoth, John R; Mizoguchi, Nobuko; Oestergaard, Mikkel Z; Say, Lale; Mathers, Colin D; Zureick-Brown, Sarah; Inoue, Mie; Chou, Doris
2012-07-13
Maternal mortality is widely regarded as a key indicator of population health and of social and economic development. Its levels and trends are monitored closely by the United Nations and others, inspired in part by the UN's Millennium Development Goals (MDGs), which call for a three-fourths reduction in the maternal mortality ratio between 1990 and 2015. Unfortunately, the empirical basis for such monitoring remains quite weak, requiring the use of statistical models to obtain estimates for most countries. In this paper we describe a new method for estimating global levels and trends in maternal mortality. For countries lacking adequate data for direct calculation of estimates, we employed a parametric model that separates maternal deaths related to HIV/AIDS from all others. For maternal deaths unrelated to HIV/AIDS, the model consists of a hierarchical linear regression with three predictors and variable intercepts for both countries and regions. The uncertainty of estimates was assessed by simulating the estimation process, accounting for variability both in the data and in other model inputs. The method was used to obtain the most recent set of UN estimates, published in September 2010. Here, we provide a concise description and explanation of the approach, including a new analysis of the components of variability reflected in the uncertainty intervals. Final estimates provide evidence of a more rapid decline in the global maternal mortality ratio than suggested by previous work, including another study published in April 2010. We compare findings from the two recent studies and discuss topics for further research to help resolve differences. PMID:24416714
Methods for Measuring and Estimating Methane Emission from Ruminants
Storm, Ida M. L. D.; Hellwing, Anne Louise F.; Nielsen, Nicolaj I.; Madsen, Jørgen
2012-01-01
Simple Summary Knowledge about the methods used in quantification of greenhouse gasses is currently needed due to international commitments to reduce the emissions. In the agricultural sector one important task is to reduce enteric methane emissions from ruminants. Different methods for quantifying these emissions are presently being used and others are under development, all with different conditions for application. For scientists and others working on the topic, it is very important to understand the advantages and disadvantages of the different methods in use. This paper gives a brief introduction to existing methods but also a description of newer methods and model-based techniques. Abstract This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods (chambers/respiration chambers, the SF6 tracer technique, and the in vitro gas production technique) and the newer CO2 methods are described. Model estimations, which are used to calculate national budgets and single cow enteric emissions from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge about the disadvantages and advantages is used in the planning of experiments. PMID:26486915
ERIC Educational Resources Information Center
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.
2010-01-01
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
ERIC Educational Resources Information Center
Batley, Rose-Marie; Boss, Marvin W.
The purpose of this study was to assess the effects of correlated dimensions and differential ability on one dimension on parameter estimation when using a two-dimensional item response theory model. Multidimensional analysis of simulated two-dimensional item response data fitting the M2PL model of M. D. Reckase (1985, 1986) was conducted using…
Statistical estimation of mineral age by K-Ar method
Vistelius, A.B.; Drubetzkoy, E.R.; Faas, A.V.
1989-11-01
Statistical estimation of age from ⁴⁰Ar/⁴⁰K ratios may be considered a result of the convolution of uniform and normal distributions with different weights for different minerals. Data from the Gul'shad Massif (near Balkhash, Kazakhstan, USSR) indicate that ⁴⁰Ar/⁴⁰K ratios reflecting the intensity of geochemical processes can be resolved using convolutions. Loss of ⁴⁰Ar in biotites is shown, whereas hornblende retained its original content of ⁴⁰Ar throughout the geological history of the massif. Results demonstrate that different estimation methods must be used for different minerals and different rocks when radiometric ages are employed for dating.
A New Method to Estimate Halo Mass of Galaxy Groups
NASA Astrophysics Data System (ADS)
Lu, Yi; Yang, Xiaohu; Shen, Shiyin
2015-08-01
Reliable halo mass estimation for a given galaxy system plays an important role both in cosmology and in galaxy formation studies. Here we set out to improve the halo mass estimation for galaxy systems in which only a limited number of the brightest member galaxies have been observed. Using four mock galaxy samples constructed from semi-analytical formation models, the subhalo abundance matching method, and conditional luminosity functions, respectively, we find that the luminosity gap between the brightest and the subsequent brightest member galaxies in a halo (group) can be used to significantly reduce the scatter in the halo mass estimation based on the luminosity of the brightest galaxy alone. Tests show that these corrections can reduce the scatter in the halo mass estimates by ~50% to ~70% in massive halos, depending on which member galaxies are considered. Compared to the traditional ranking method, we find that this method works better for groups with fewer than five members, or in observations with a very bright magnitude cut.
Phenology of Net Ecosystem Exchange: A Simple Estimation Method
NASA Astrophysics Data System (ADS)
Losleben, M. V.
2007-12-01
Carbon sequestration is important to global carbon budget and ecosystem function and dynamics research. Direct measurement of Net Ecosystem Exchange (NEE), a measure of the carbon sequestration of an ecosystem, is instrument-, labor-, and cost-intensive, so there is value in establishing a simple, robust estimation method. Six ecosystem types across the United States, ranging from deciduous and coniferous forests to desert shrubland and grasslands, are compared. Initial results comparing instrumentally measured NEE with this proxy method are promising, showing excellent temporal matches between the two methods for the onset and termination of carbon sequestration in a sub-alpine forest over the study period, 1997-2006. Moreover, the similarity of climatic signatures in all six ecosystems of this study suggests this proxy estimation method may be widely applicable across diverse environmental zones. The estimation method is simply the interpretation of annual accumulated daily precipitation plotted against annual accumulated growing degree-days above a 0 °C base. Applicability at sub-seasonal time scales will also be discussed in this presentation.
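The two accumulated series the proxy method plots against each other are simple to compute; a minimal sketch, with the function name and units assumed for illustration.

```python
def accumulated_curves(daily_precip_mm, daily_mean_temp_c, base_c=0.0):
    """Build the two accumulated series the abstract describes:
    cumulative daily precipitation and cumulative growing
    degree-days above a base temperature (0 degC, per the abstract).
    Interpreting the shape of one plotted against the other is the
    proxy method itself."""
    cum_precip, cum_gdd = [], []
    p_total = g_total = 0.0
    for p, t in zip(daily_precip_mm, daily_mean_temp_c):
        p_total += p
        g_total += max(t - base_c, 0.0)  # days below base accrue nothing
        cum_precip.append(p_total)
        cum_gdd.append(g_total)
    return cum_precip, cum_gdd
```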
An aerial survey method to estimate sea otter abundance
Bodkin, J.L.; Udevitz, M.S.
1999-01-01
Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
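The correction logic described above can be sketched in its simplest form; the function name and area-expansion step are assumptions, and the variance machinery of the actual method (the reported SE) is omitted.

```python
def adjusted_abundance(strip_count, detection_p, strip_area_km2, total_area_km2):
    """Minimal sketch of a sightability-corrected strip-transect
    estimate: divide the strip count by the survey- and
    observer-specific detection probability (estimated from the
    intensive-search units), then expand to the whole study area."""
    corrected_count = strip_count / detection_p
    return corrected_count * (total_area_km2 / strip_area_km2)
```

With the abstract's detection probabilities of 0.52-0.72, raw strip counts would understate abundance by roughly 30-50% if left uncorrected.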
A new analytical method for groundwater recharge and discharge estimation
NASA Astrophysics Data System (ADS)
Liang, Xiuyu; Zhang, You-Kuan
2012-07-01
Summary: A new analytical method was proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h², where h is the water table elevation, with a time-dependent source term. The solution derived was validated with numerical simulation and was shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well during a period of 100 days, we show that the method proposed in this study can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as the lateral drainage. The total R was reasonably estimated with a water-table fluctuation (WTF) method if water table measurements away from a fixed-head boundary were used, but the total ET was overestimated and the total net recharge underestimated because the WTF method does not account for lateral drainage and aquifer storage.
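The classic water-table fluctuation method that the paper uses as a benchmark can be sketched in a few lines; the function name and the rise-only convention are assumptions of this sketch.

```python
def wtf_recharge(water_levels_m, specific_yield):
    """Classic water-table fluctuation (WTF) estimate the paper
    compares against: recharge over each interval is the specific
    yield times the water-table rise; declines contribute zero in
    this minimal sketch.  The paper's analytical method additionally
    accounts for the lateral drainage and aquifer storage that the
    WTF method ignores."""
    return [specific_yield * max(h1 - h0, 0.0)
            for h0, h1 in zip(water_levels_m, water_levels_m[1:])]
```

Ignoring the decline intervals is exactly why, per the abstract, the WTF approach misattributes lateral drainage to ET.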
New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes
Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.
2014-01-01
Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL), and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into an either sequential or simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different than regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062
A semi-automatic multi-view depth estimation method
NASA Astrophysics Data System (ADS)
Wildeboer, Meindert Onno; Fukushima, Norishige; Yendo, Tomohiro; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Tanimoto, Masayuki
2010-07-01
In this paper, we propose a semi-automatic depth estimation algorithm whereby the user defines object depth boundaries and disparity initialization. Automatic depth estimation methods generally have difficulty to obtain good depth results around object edges and in areas with low texture. The goal of our method is to improve the depth in these areas and reduce view synthesis artifacts in Depth Image Based Rendering. Good view synthesis quality is very important in applications such as 3DTV and Free-viewpoint Television (FTV). In our proposed method, initial disparity values for smooth areas can be input through a so-called manual disparity map, and depth boundaries are defined by a manually created edge map which can be supplied for one or multiple frames. For evaluation we used MPEG multi-view videos and we demonstrate our algorithm can significantly improve the depth maps and reduce view synthesis artifacts.
Noninvasive method of estimating human newborn regional cerebral blood flow
Younkin, D.P.; Reivich, M.; Jaggi, J.; Obrist, W.; Delivoria-Papadopoulos, M.
1982-12-01
A noninvasive method of estimating regional cerebral blood flow (rCBF) in premature and full-term babies has been developed. Based on a modification of the ¹³³Xe inhalation rCBF technique, this method uses eight extracranial NaI scintillation detectors and an i.v. bolus injection of ¹³³Xe (approximately 0.5 mCi/kg). Arterial xenon concentration was estimated with an external chest detector. Cerebral blood flow was measured in 15 healthy, neurologically normal premature infants. Using Obrist's method of two-compartment analysis, normal values were calculated for flow in both compartments, relative weight and fractional flow in the first compartment (gray matter), initial slope of gray matter blood flow, mean cerebral blood flow, and initial slope index of mean cerebral blood flow. The application of this technique to newborns, its relative advantages, and its potential uses are discussed.
Method to Estimate the Dissolved Air Content in Hydraulic Fluid
NASA Technical Reports Server (NTRS)
Hauser, Daniel M.
2011-01-01
In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen that is in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated
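The Henry's-law estimation chain described above can be sketched as follows; the function name, Bunsen-type coefficient form, and all numeric values are illustrative assumptions, not the paper's measured coefficients.

```python
def dissolved_air_fraction(p_o2_kpa, p_n2_kpa, sol_o2, sol_n2, p_ref_kpa=101.325):
    """Sketch of the abstract's estimation chain with placeholder
    Bunsen-type solubility coefficients (volume of gas dissolved per
    volume of fluid at the reference pressure): by Henry's law the
    dissolved volume of each gas scales with its partial pressure,
    and their sum approximates the dissolved air content.  Real
    coefficient values must come from a standard gas-in-lubricant
    solubility estimate, as the abstract describes."""
    v_o2 = sol_o2 * p_o2_kpa / p_ref_kpa   # dissolved O2, volume fraction
    v_n2 = sol_n2 * p_n2_kpa / p_ref_kpa   # dissolved N2, volume fraction
    return v_o2 + v_n2
```

In the paper's scheme the N2 term is not measured directly but inferred from the O2 measurement via the solubility ratio, which this sketch makes explicit as a separate coefficient.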
NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION
Brown, Robert A.; Soummer, Remi
2010-05-20
We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade
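One of the mission attributes the abstract reports, the probability of zero discoveries, can be sketched in its simplest independent-outcomes form; treating observations as independent and using a single per-observation completeness value are assumptions of this sketch, not the paper's full formalism.

```python
def prob_zero_discoveries(completenesses, eta):
    """If observation i has completeness c_i (probability of
    detecting a planet when one is present) and eta is the planet
    occurrence rate, then under independence
        P(zero discoveries) = prod_i (1 - eta * c_i).
    The paper derives such attributes from a full simulation of an
    optimized observing program rather than this closed form."""
    p_zero = 1.0
    for c in completenesses:
        p_zero *= 1.0 - eta * c
    return p_zero
```

This shows qualitatively why the reported probability of zero discoveries is tiny: 70 visits with moderate completeness and η = 0.3 drive the product toward zero.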
Estimation of quality factors by energy ratio method
NASA Astrophysics Data System (ADS)
Wang, Zong-Jun; Cao, Si-Yuan; Zhang, Hao-Ran; Qu, Ying-Ming; Yuan, Dian; Yang, Jin-Hao; Shao, Guan-Ming
2015-03-01
The quality factor Q, which reflects the energy attenuation of seismic waves in subsurface media, is a diagnostic tool for hydrocarbon detection and reservoir characterization. In this paper, we propose a new Q extraction method based on the energy ratio before and after the wavelet attenuation, named the energy-ratio method (ERM). The proposed method uses multipoint signal data in the time domain to estimate the wavelet energy without invoking the source wavelet spectrum, which is necessary in conventional Q extraction methods, and is applicable to any source wavelet spectrum; however, it requires high-precision seismic data. Forward zero-offset VSP modeling suggests that the ERM can be used for reliable Q inversion after nonintrinsic attenuation (geometric dispersion, reflection, and transmission loss) compensation. The application to real zero-offset VSP data shows that the Q values extracted by the ERM and spectral ratio methods are identical, which proves the reliability of the new method.
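The conventional spectral-ratio estimate that the paper's energy-ratio method is validated against can be sketched as follows; the ERM itself is not specified in the abstract, and the function name and least-squares fit here are assumptions of this sketch.

```python
import math

def spectral_ratio_q(freqs_hz, amp_before, amp_after, travel_time_s):
    """Conventional spectral-ratio Q estimate: for intrinsic
    attenuation the amplitude spectra obey
        ln(A2(f) / A1(f)) = -pi * f * dt / Q + const,
    so a least-squares fit of the log spectral ratio versus
    frequency yields Q from the slope (slope = -pi * dt / Q)."""
    y = [math.log(a2 / a1) for a1, a2 in zip(amp_before, amp_after)]
    n = len(freqs_hz)
    f_mean = sum(freqs_hz) / n
    y_mean = sum(y) / n
    slope = (sum((f - f_mean) * (yi - y_mean) for f, yi in zip(freqs_hz, y))
             / sum((f - f_mean) ** 2 for f in freqs_hz))
    return -math.pi * travel_time_s / slope
```

As the abstract notes for the ERM, such estimates are only meaningful after nonintrinsic attenuation (geometric spreading, reflection and transmission loss) has been compensated.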
Experimental evaluation of chromatic dispersion estimation method using polynomial fitting
NASA Astrophysics Data System (ADS)
Jiang, Xin; Wang, Junyi; Pan, Zhongqi
2014-11-01
We experimentally validate a non-data-aided, modulation-format independent chromatic dispersion (CD) estimation method based on polynomial fitting algorithm in single-carrier coherent optical system with a 40 Gb/s polarization-division-multiplexed quadrature-phase-shift-keying (PDM-QPSK) system. The non-data-aided CD estimation for arbitrary modulation formats is achieved by measuring the differential phase between frequency f±fs/2 (fs is the symbol rate) in digital coherent receivers. The estimation range for a 40 Gb/s PDM-QPSK signal can be up to 20,000 ps/nm with a measurement accuracy of ±200 ps/nm. The maximum CD measurement is 25,000 ps/nm with a measurement error of 2%.
A probabilistic method for estimating system susceptibility to HPM
Mensing, R.W.
1989-05-18
Interruption of the operation of electronic systems by HPM is a stochastic process. Thus, a realistic estimate of system susceptibility to HPM is best expressed in terms of the probability that the HPM has an effect on the system (probability of effect). Estimating the susceptibility of complex electronic systems by extensive testing alone is not practical, so alternative approaches must be considered. One approach is to combine information from extensive low-level testing and computer modeling with limited high-level field test data. This paper describes a method for estimating system susceptibility based on a pretest analysis of low-level test and computer-model data combined with a post-test analysis after high-level testing. 4 figs.
The Lyapunov dimension and its estimation via the Leonov method
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.
2016-06-01
Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. Its advantage is that it allows one to estimate the Lyapunov dimension of an invariant set without localizing the set in phase space and, in many cases, to obtain an exact Lyapunov dimension formula. In this work the invariance of the Lyapunov dimension under diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. The connection between the Leonov method and the key related works is presented in a simple but rigorous way: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
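The Kaplan-Yorke concept referenced in this abstract defines the Lyapunov dimension from an ordered Lyapunov spectrum. A minimal sketch follows; the Lorenz-63 exponent values used in the example are approximate literature numbers, included only for illustration.

```python
def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke (Lyapunov) dimension from a spectrum sorted in
    decreasing order, lyap[0] >= lyap[1] >= ...: find the largest j with
    lam_1 + ... + lam_j >= 0, then add the fractional interpolation term."""
    s = 0.0
    for j, lam in enumerate(lyap):
        if s + lam < 0:
            return j + s / abs(lam)
        s += lam
    return float(len(lyap))

# Approximate Lorenz-63 spectrum: (0.906, 0, -14.572).
print(round(kaplan_yorke_dimension([0.906, 0.0, -14.572]), 3))  # → 2.062
```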
Point estimation of simultaneous methods for solving polynomial equations
NASA Astrophysics Data System (ADS)
Petkovic, Miodrag S.; Petkovic, Ljiljana D.; Rancic, Lidija Z.
2007-08-01
The construction of computationally verifiable initial conditions which provide both the guaranteed and fast convergence of the numerical root-finding algorithm is one of the most important problems in solving nonlinear equations. Smale's "point estimation theory" from 1981 was a great advance in this topic; it treats convergence conditions and the domain of convergence in solving an equation f(z)=0 using only the information of f at the initial point z0. The study of a general problem of the construction of initial conditions of practical interest providing guaranteed convergence is very difficult, even in the case of algebraic polynomials. In the light of Smale's point estimation theory, an efficient approach based on some results concerning localization of polynomial zeros and convergent sequences is applied in this paper to iterative methods for the simultaneous determination of simple zeros of polynomials. We state new, improved initial conditions which provide the guaranteed convergence of frequently used simultaneous methods for solving algebraic equations: Ehrlich-Aberth's method, Ehrlich-Aberth's method with Newton's correction, Borsch-Supan's method with Weierstrass' correction and Halley-like (or Wang-Zheng) method. The introduced concept offers not only a clear insight into the convergence analysis of sequences generated by the considered methods, but also explicitly gives their order of convergence. The stated initial conditions are of significant practical importance since they are computationally verifiable; they depend only on the coefficients of a given polynomial, its degree n and initial approximations to polynomial zeros.
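The Ehrlich-Aberth iteration named in this abstract can be sketched as follows. This is a plain fixed-iteration implementation of the classical method, not the paper's initial-condition analysis; the starting points in the example were chosen by hand near the known roots.

```python
def ehrlich_aberth(coeffs, z0, iters=50):
    """Ehrlich-Aberth simultaneous iteration for all zeros of a
    polynomial with coefficients `coeffs` (highest degree first),
    starting from distinct initial approximations `z0`."""
    def horner(x):
        # evaluate p(x) and p'(x) in one pass
        v, d = 0j, 0j
        for c in coeffs:
            d = d * x + v
            v = v * x + c
        return v, d

    z = [complex(w) for w in z0]
    for _ in range(iters):
        nxt = []
        for i, zi in enumerate(z):
            v, d = horner(zi)
            if v == 0:
                nxt.append(zi)  # already a root
                continue
            # Newton correction d/v, repelled from the other iterates
            s = sum(1 / (zi - zj) for j, zj in enumerate(z) if j != i)
            nxt.append(zi - 1 / (d / v - s))
        z = nxt
    return z

# p(z) = z^3 - 1: the three cube roots of unity, all of magnitude 1.
roots = ehrlich_aberth([1, 0, 0, -1], [1.3 + 0.1j, -0.8 + 0.9j, -0.2 - 1.1j])
print(sorted(round(abs(r), 6) for r in roots))  # magnitudes converge to 1.0
```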
A new simple method to estimate fracture pressure gradient
Rocha, L.A.; Bourgoyne, A.T.
1994-12-31
Designing safer and more economical wells calls for correctly estimating the fracture pressure gradient; a poor prediction may lead to serious accidents such as lost circulation followed by a kick. Although these accidents can occur in any phase of the well, drilling shallow formations poses additional dangers from shallow gas kicks, which have the potential of becoming a shallow gas blowout and sometimes lead to the formation of craters. Often, one of the main problems when estimating the fracture pressure gradient is the lack of data: drilling engineers generally face situations where only leak-off test data (frequently of questionable quality) are available. This is typically the case when drilling shallow formations, where very little information is collected. This paper presents a new method to estimate the fracture pressure gradient. The proposed method has the advantages of (a) requiring only leak-off test data and (b) being independent of the pore pressure. The method is based on a new concept called pseudo-overburden pressure, defined as the overburden pressure a formation would exhibit if it were plastic. The method was applied in several areas of the world, such as the US Gulf Coast (Mississippi Canyon and Green Canyon), with very good results.
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low-temperature threshold using a nonparametric statistical method, kernel density estimation, applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), which must forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low-temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demand when such temperatures occur. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low-temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low-temperature threshold.
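The kernel-density idea can be sketched as a KDE fit followed by reading off the 1/N quantile of the fitted distribution. The Silverman bandwidth rule and the grid search below are generic choices made for this sketch, not necessarily those of the thesis, and the synthetic temperature sample is invented.

```python
import math, random

def one_in_n_threshold(temps, n, grid=2000):
    """Sketch of a kernel-density one-in-N low-temperature threshold:
    fit a Gaussian KDE to the data and return the 1/n quantile of the
    fitted distribution (Silverman rule-of-thumb bandwidth)."""
    m = len(temps)
    mean = sum(temps) / m
    sd = (sum((t - mean) ** 2 for t in temps) / (m - 1)) ** 0.5
    h = 1.06 * sd * m ** -0.2  # Silverman's rule of thumb
    lo, hi = min(temps) - 4 * h, max(temps) + 4 * h

    # The KDE CDF is the mean of normal CDFs centered at the data points.
    def cdf(x):
        return sum(0.5 * (1 + math.erf((x - t) / (h * math.sqrt(2))))
                   for t in temps) / m

    # Scan a grid for the first point where the CDF reaches 1/n.
    for i in range(grid + 1):
        x = lo + (hi - lo) * i / grid
        if cdf(x) >= 1.0 / n:
            return x
    return hi

random.seed(1)
sample = [random.gauss(-5, 8) for _ in range(1000)]
# One-in-20 threshold, i.e. roughly the 5th percentile of the fit.
print(round(one_in_n_threshold(sample, 20), 2))
```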
NASA Astrophysics Data System (ADS)
Safari, A.; Sohrabi, H.
2016-06-01
The role of forests as a carbon reservoir has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since direct measurement of forest aboveground carbon stocks is destructive, costly, and time-consuming, aerial and satellite remote sensing techniques have attracted much attention in this field. Although aerial data have proved highly accurate for predicting aboveground carbon stocks, they carry challenges of high acquisition cost, small area coverage, and limited availability; these challenges are most critical for non-commercial forests in low-income countries. The Landsat program provides repeated acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of texture metrics derived from multispectral Landsat 8 Operational Land Imager (OLI) imagery for quantifying aboveground carbon stocks of coppice oak forests in the Zagros Mountains, Iran. We used four window sizes (3×3, 5×5, 7×7, and 9×9) and four offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured, and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from the derived metrics. Results showed that, in general, larger windows for deriving texture metrics yielded models with better fit. In addition, the usefulness of the spectral bands for deriving texture metrics in the regression models was ranked b4 > b3 > b2 > b5, and the best offset was [1,-1]. Among the metrics, mean and entropy entered most of the regression models. Overall, different models based on derived texture metrics
ERIC Educational Resources Information Center
Thissen, David; Wainer, Howard
Simulation studies of the performance of (potentially) robust statistical estimation produce large quantities of numbers in the form of performance indices of the various estimators under various conditions. This report presents a multivariate graphical display used to aid in the digestion of the plentiful results in a current study of Item…
ERIC Educational Resources Information Center
Blais, Jean-Guy; Raiche, Gilles
This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics allow the judgment of the meaning to be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…
Richardson, John G.
2009-11-17
An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods require the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to define boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem: the boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics. PMID:21939599
NASA Astrophysics Data System (ADS)
Bambang Avip Priatna, M.; Lukman, Sumiaty, Encum
2016-02-01
This paper aims to determine the properties of a Correspondence Analysis (CA) estimator for latent variable models. The method used is the high-dimensional AIC (HAIC) method with simulated Bernoulli-distributed data. The stages are: (1) determine the CA matrix; (2) build the CA estimator model for the latent variables using HAIC; (3) simulate the Bernoulli-distributed data with 1,000,748 repetitions. The simulation results show that the CA estimator models work well.
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the Lorenz-96 example.
Rotate-and-Stare: A new method for PSF estimation.
NASA Astrophysics Data System (ADS)
Teuber, J.; Ostensen, R.; Stabell, R.; Florentin-Nielsen, R.
1994-12-01
We present a new and simple method for the determination of a digital Point Spread Function (PSF), utilizing the approximate circular symmetry in stellar images of normal quality. Using an optimal estimation of total intensity and object centering, the application of this type of PSF is found to be comparable to analytical or semi-analytical modelling, e.g., that employed in the DAOPHOT package (Stetson 1987). Further improvements are suggested.
A Sensitivity Analysis of a Thin Film Conductivity Estimation Method
McMasters, Robert L; Dinwiddie, Ralph Barton
2010-01-01
An analysis method was developed for determining the thermal conductivity of a thin film on a substrate of known thermal properties using the flash diffusivity method. To determine the film conductivity with this method, the volumetric heat capacity of the film must be known from a separate experiment, and the thermal properties of the substrate (conductivity and volumetric heat capacity) must also be known. The ideal configuration is a low-conductivity film adhered to a higher-conductivity substrate; as the film becomes thinner relative to the substrate, or as its conductivity approaches that of the substrate, estimating the film conductivity becomes more difficult. The present research examines the effect of inaccuracies in the known parameters on the estimation of the parameter of interest, the thermal conductivity of the film. Perturbations are introduced into the other, assumed-known parameters to find their effect on the estimated film conductivity. A baseline case is established with the following parameters:
- Substrate thermal conductivity: 1.0 W/m-K
- Substrate volumetric heat capacity: 10^6 J/m^3-K
- Substrate thickness: 0.8 mm
- Film thickness: 0.2 mm
- Film volumetric heat capacity: 10^6 J/m^3-K
- Film thermal conductivity: 0.01 W/m-K
- Convection coefficient: 20 W/m^2-K
- Magnitude of heat absorbed during the flash: 1000 J/m^2
Each of these parameters, except the film thermal conductivity (the parameter of interest), is varied in succession from its baseline value and placed into a synthetic experimental data file. Each data file is then analyzed by the program to determine the effect on the estimated film conductivity, thus quantifying the vulnerability of the method to measurement errors.
Estimation of race admixture--a new method.
Chakraborty, R
1975-05-01
The contribution of a parental population in the gene pool of a hybrid population which arose by hybridization with one or more other populations is estimated here at the population level from the probability of gene identity. The dynamics of accumulation of such admixture is studied incorporating the fluctuations due to finite size of the hybrid population. The method is illustrated with data on admixture in Cherokee Indians. PMID:1146991
Improving stochastic estimates with inference methods: calculating matrix diagonals.
Selig, Marco; Oppermann, Niels; Ensslin, Torsten A
2012-02-01
Estimating the diagonal entries of a matrix that is not directly accessible, but available only as a linear operator in the form of a computer routine, is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or reduce the computational cost of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes in cases where some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. PMID:22463179
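Plain matrix probing of a diagonal, before any Wiener-filter refinement of the kind this abstract describes, can be sketched with random Rademacher probe vectors (a generic Hutchinson-style estimator, shown here only to illustrate the baseline the paper improves on; the test matrix is invented).

```python
import random

def probe_diagonal(apply_a, n, probes=2000, seed=0):
    """Hutchinson-style diagonal probing: diag(A) is approximated by the
    elementwise mean over random Rademacher vectors v of v * (A v),
    using only matrix-vector products supplied by apply_a."""
    rng = random.Random(seed)
    est = [0.0] * n
    for _ in range(probes):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        av = apply_a(v)
        for i in range(n):
            est[i] += v[i] * av[i]
    return [e / probes for e in est]

# A small test matrix accessed only through its mat-vec routine.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]]
matvec = lambda v: [sum(a * x for a, x in zip(row, v)) for row in A]
print([round(d, 1) for d in probe_diagonal(matvec, 3)])  # ≈ [4.0, 3.0, 2.0]
```

The variance of each entry scales with the squared off-diagonal mass of its row divided by the number of probes, which is why smoothing priors (as in the paper) help when probes are expensive.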
Geometric estimation method for x-ray digital intraoral tomosynthesis
NASA Astrophysics Data System (ADS)
Li, Liang; Yang, Yao; Chen, Zhiqiang
2016-06-01
It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.
Minimally important difference estimates and methods: a protocol
Johnston, Bradley C; Ebrahim, Shanil; Carrasco-Labra, Alonso; Furukawa, Toshi A; Patrick, Donald L; Crawford, Mark W; Hemmelgarn, Brenda R; Schunemann, Holger J; Guyatt, Gordon H; Nesrallah, Gihad
2015-01-01
Introduction: Patient-reported outcomes (PROs) are often the outcomes of greatest importance to patients. The minimally important difference (MID) provides a measure of the smallest change in the PRO that patients perceive as important. An anchor-based approach is the most appropriate method for MID determination. No study or database currently exists that provides all anchor-based MIDs associated with PRO instruments; nor are there any accepted standards for appraising the credibility of MID estimates. Our objectives are to complete a systematic survey of the literature to collect and characterise published anchor-based MIDs associated with PRO instruments used in evaluating the effects of interventions on chronic medical and psychiatric conditions and to assess their credibility.
Methods and analysis: We will search MEDLINE, EMBASE and PsycINFO (1989 to present) to identify studies addressing methods to estimate anchor-based MIDs of target PRO instruments or reporting empirical ascertainment of anchor-based MIDs. Teams of two reviewers will screen titles and abstracts, review full texts of citations, and extract relevant data. On the basis of findings from studies addressing methods to estimate anchor-based MIDs, we will summarise the available methods and develop an instrument addressing the credibility of empirically ascertained MIDs. We will evaluate the credibility of all studies reporting on the empirical ascertainment of anchor-based MIDs using the credibility instrument, and assess the instrument's inter-rater reliability. We will separately present reports for adult and paediatric populations.
Ethics and dissemination: No research ethics approval was required as we will be using aggregate data from published studies. Our work will summarise anchor-based methods available to establish MIDs, provide an instrument to assess the credibility of available MIDs, determine the reliability of that instrument, and provide a comprehensive compendium of published anchor
SCoPE: an efficient method of Cosmological Parameter Estimation
Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in
2014-07-01
The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast, independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching that helps an individual chain run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One current research interest in cosmology is quantifying the nature of dark energy; we analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess how well the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results of our MCMC analysis help us understand the workability of SCoPE and, at the same time, provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
The composite method: An improved method for stream-water solute load estimation
Aulenbach, Brent T.; Hooper, R.P.
2006-01-01
The composite method is an alternative method for estimating stream-water solute loads, combining aspects of two commonly used methods: the regression-model method (used by the composite method to predict variations in concentrations between collected samples) and a period-weighted approach (used by the composite method to apply the residual concentrations from the regression model over time). The extensive dataset collected at the outlet of the Panola Mountain Research Watershed (PMRW) near Atlanta, Georgia, USA, was used in data analyses for illustrative purposes. A bootstrap (subsampling) experiment, using the composite method and the PMRW dataset along with various fixed-interval and large-storm sampling schemes, obtained load estimates for the 8-year study period with a bias magnitude of less than 1%, even for estimates that included the fewest samples. Precision was always better than 2% on a study-period and annual basis, and better than 2% for quarterly and monthly intervals for estimates with more thorough sampling. The bias and precision of composite-method load estimates vary depending on the variability in the regression-model residuals, how the residuals systematically deviate from the regression model over time, the sampling design, and the time interval of the load estimate. The regression-model method did not estimate loads precisely over shorter time intervals, from annual to monthly, because the model could not explain short-term patterns in the observed concentrations. Load estimates using the period-weighted approach are typically biased as a result of the sampling distribution and are accurate only with extensive sampling. The formulation of the composite method facilitates exploration of patterns (trends) contained in the unmodelled portion of the load. Published in 2006 by John Wiley & Sons, Ltd.
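A toy sketch of the composite idea: a regression model supplies concentrations between samples, and the model residuals at sampled times are interpolated over time and added back, so the series passes through every measurement. The linear concentration-discharge regression, the function name, and the tiny dataset below are all invented for illustration; the actual method uses site-specific regression models.

```python
def composite_concentrations(times, discharge, samples, a, b):
    """times: all timesteps; discharge: same length; samples: dict of
    time -> measured concentration; a, b: regression c = a + b*q.
    Returns regression predictions corrected by residuals interpolated
    piecewise-linearly between sampled times (the composite idea)."""
    pred = [a + b * q for q in discharge]
    stimes = sorted(samples)
    resid = {t: samples[t] - pred[times.index(t)] for t in stimes}
    out = []
    for t, p in zip(times, pred):
        before = max((s for s in stimes if s <= t), default=None)
        after = min((s for s in stimes if s >= t), default=None)
        if before is None:
            r = resid[after]          # extrapolate first residual back
        elif after is None:
            r = resid[before]         # carry last residual forward
        elif before == after:
            r = resid[before]         # exactly at a sampled time
        else:
            w = (t - before) / (after - before)
            r = (1 - w) * resid[before] + w * resid[after]
        out.append(p + r)
    return out

times = [0, 1, 2, 3, 4]
q = [10.0, 12.0, 20.0, 15.0, 11.0]
obs = {0: 2.1, 2: 3.4, 4: 2.0}
print([round(c, 2) for c in composite_concentrations(times, q, obs, 1.0, 0.1)])
# → [2.1, 2.45, 3.4, 2.65, 2.0]  (sampled times reproduced exactly)
```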
NASA Astrophysics Data System (ADS)
Kawasaki, Makoto; Kohno, Ryuji
Wireless communication devices for medical implants, such as cardiac pacemakers and capsule endoscopes, have been studied and developed to improve healthcare systems. It is especially important to know the range and position of each device, because this contributes to optimizing the transmission power. We adopt a time-based approach to position estimation using ultra-wideband signals. However, the propagation velocity inside the human body differs for each tissue and each frequency; moreover, the human body is formed of various tissues with complex structures. For this reason, the propagation velocity differs from point to point inside the body, and the received signal is distorted by the channel through it. In this paper, we apply an adaptive template synthesis method in a multipath channel to calculate the propagation time accurately from the output of the correlator between the transmitter and the receiver. Furthermore, we propose a position estimation method based on estimating the propagation velocity inside the human body. Computer simulation shows that the proposed method can perform accurate positioning with devices the size of a medicine capsule.
Effects of Using Invention Learning Approach on Inventive Abilities: A Mixed Method Study
ERIC Educational Resources Information Center
Wongkraso, Paisan; Sitti, Somsong; Piyakun, Araya
2015-01-01
This study aims to enhance inventive abilities for secondary students by using the Invention Learning Approach. Its activities focus on creating new inventions based on the students' interests by using constructional tools. The participants were twenty secondary students who took an elective science course that provided instructional units…
Method to estimate center of rigidity using vibration recordings
Safak, Erdal; Celebi, Mehmet
1990-01-01
A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method; a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
Cunningham, Marc; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana
2015-01-01
-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Conclusions: Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. PMID:26374805
Evaluation of estimation methods for organic carbon normalized sorption coefficients
Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.
1997-01-01
A critically evaluated set of 94 soil/water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors: the octanol/water partition coefficient (Kow), molecular connectivity (mXt), and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. An expression for the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that the correlation is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two commonly used class-specific correlations to determine which method(s) are most suitable.
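The reported regression is straightforward to apply; a minimal sketch (the function name is ours, and the stated validity range is taken from the abstract):

```python
import math

def koc_from_kow(kow):
    """Koc estimate from the reported regression
    log Koc = 0.903 log Kow + 0.094,
    valid for log Kow roughly between 1.7 and 7.0."""
    return 10 ** (0.903 * math.log10(kow) + 0.094)

# A chemical with log Kow = 3.0 gives log Koc = 0.903*3 + 0.094 = 2.803.
print(round(math.log10(koc_from_kow(10 ** 3.0)), 3))  # → 2.803
```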
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2013-08-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remains uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to use the first multi-annual set of CHBr3 measurements from this region, and an inversion method, to reduce this uncertainty. We find that local measurements of a short-lived gas like CHBr3 can only be used to constrain emissions from a relatively small, sub-regional domain. We then obtain detailed estimates of both the distribution and magnitude of CHBr3 emissions within this area. Our estimates appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 GgCHBr3 y-1. This estimate is consistent with other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
Modeling an exhumed basin: A method for estimating eroded overburden
Poelchau, H.S.
1993-09-01
The Alberta Deep basin in western Canada has undergone a large amount of erosion after deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin. Erosion generally is estimated with shale compaction trends. For instance, the commonly used Magara technique attempts to establish a sonic log gradient for shales and uses the intercept with uncompacted shale values as a first indication of overcompaction and amount of erosion. Since such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic values of shales are calibrated with compaction gradients to give an equation for amount of total restored overburden for the same formation in several wells. This equation then can be used to estimate and map total restored overburden for all wells in which this formation has been logged. The example from the Alberta Deep basin shows that trend and magnitudes of erosion or overburden agree with independent estimates using vitrinite maturity values.
Methods for cost estimation in software project management
NASA Astrophysics Data System (ADS)
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed at which the processes used in software development have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but another group of scientists holds that it can be solved using well-known mathematical methods (e.g., multiple linear regression) together with newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in the search for a model estimating overall project costs and describes the existing software development process models. The last part proposes a basic mathematical model based on genetic programming, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked to the current reality of software development, taking the software product life cycle and the current challenges and innovations in the field as its basis. Based on the authors' experience and an analysis of the existing models and product life cycle, it is concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
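The abstract does not reproduce the genetic-programming model itself, but the COCOMO 81 starting point it refines is well documented. A sketch of the basic-mode effort equation, using the standard published coefficients (the GP refinement is not shown):

```python
# Basic COCOMO 81 effort equation: effort (person-months) = a * KLOC**b.
# The (a, b) pairs below are the standard published basic-mode values;
# the genetic-programming refinement described in the paper is not
# reproduced here.
COCOMO81_BASIC = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimated effort in person-months for a project of `kloc`
    thousand lines of code, in the given development mode."""
    a, b = COCOMO81_BASIC[mode]
    return a * kloc ** b

effort = cocomo_effort(32, "organic")  # person-months for a 32 KLOC project
```

Fitness functions in GP-based cost models typically score candidate formulas by how closely their predictions track effort values in datasets such as PROMISE.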
Causes and methods to estimate cryptic sources of fishing mortality.
Gilman, E; Suuronen, P; Hall, M; Kennelly, S
2013-10-01
Cryptic, not readily detectable, components of fishing mortality are not routinely accounted for in fisheries management because of a lack of adequate data, and for some components, a lack of accurate estimation methods. Cryptic fishing mortalities can cause adverse ecological effects, are a source of wastage, reduce the sustainability of fishery resources and, when unaccounted for, can cause errors in stock assessments and population models. Sources of cryptic fishing mortality are (1) pre-catch losses, where catch dies from the fishing operation but is not brought onboard when the gear is retrieved, (2) ghost-fishing mortality by fishing gear that was abandoned, lost or discarded, (3) post-release mortality of catch that is retrieved and then released alive but later dies as a result of stress and injury sustained from the fishing interaction, (4) collateral mortalities indirectly caused by various ecological effects of fishing and (5) losses due to synergistic effects of multiple interacting sources of stress and injury from fishing operations, or from cumulative stress and injury caused by repeated sub-lethal interactions with fishing operations. To fill a gap in international guidance on best practices, causes and methods for estimating each component of cryptic fishing mortality are described, and considerations for their effective application are identified. Research priorities to fill gaps in understanding the causes and estimating cryptic mortality are highlighted. PMID:24090548
A Projection and Density Estimation Method for Knowledge Discovery
Stanski, Adam; Hellwich, Olaf
2012-01-01
A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. The second is an image segmentation method that achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
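The framework's key idea, performing every estimation in 1d-space, can be illustrated with a plain one-dimensional Gaussian kernel density estimate. This is a generic sketch of the building block, not the authors' decomposition framework; the bandwidth value is an arbitrary choice:

```python
import math

def kde_1d(samples, x, bandwidth=0.5):
    """Gaussian kernel density estimate at point x from 1-d samples.
    One-dimensional estimation like this is the kind of building block
    a 1d-decomposition framework relies on (sketch only)."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

density = kde_1d([0.0, 0.2, 1.1, 0.9], 0.5)
```

Because each estimate is one-dimensional, the per-point cost grows with the sample size rather than exponentially with the data's dimensionality.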
A method to estimate groundwater depletion from confining layers
Konikow, L.F.; Neuzil, C.E.
2007-01-01
Although depletion of storage in low-permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.
Odor emission rate estimation of indoor industrial sources using a modified inverse modeling method.
Li, Xiang; Wang, Tingting; Sattayatewa, Chakkrid; Venkatesan, Dhesikan; Noll, Kenneth E; Pagilla, Krishna R; Moschandreas, Demetrios J
2011-08-01
Odor emission rates are commonly measured in the laboratory or occasionally estimated with inverse modeling techniques. A modified inverse modeling approach is used to estimate source emission rates inside a postdigestion centrifuge building of a water reclamation plant. Conventionally, inverse modeling methods divide an indoor environment into zones on the basis of structural design and estimate source emission rates using models that assume homogeneous distribution of agent concentrations within a zone and experimentally determined link functions to simulate airflows among zones. The modified approach segregates zones as a function of agent distribution rather than building design and identifies near and far fields. Near-field agent concentrations do not satisfy the assumption of homogeneous odor concentrations; far-field concentrations satisfy this assumption and are the only ones used to estimate emission rates. The predictive ability of the modified inverse modeling approach was validated with measured emission rate values; the difference between corresponding estimated and measured odor emission rates is not statistically significant. Similarly, the difference between measured and estimated hydrogen sulfide emission rates is also not statistically significant. The modified inverse modeling approach is easy to perform because it uses odor and odorant field measurements instead of complex chamber emission rate measurements. PMID:21874959
A new simple method to estimate fracture pressure gradient
Rocha, L.A.; Bourgoyne, A.T.
1996-09-01
Designing safer and more economical wells calls for correctly estimating the fracture pressure gradient. A poor prediction of the fracture pressure gradient may lead to serious accidents, such as lost circulation followed by a kick. Although these kinds of accidents can occur in any phase of the well, drilling shallow formations poses additional dangers from shallow gas kicks, because they have the potential to become a shallow gas blowout, sometimes leading to the formation of craters. This paper presents a new method to estimate the fracture pressure gradient. The proposed method has the advantage of (1) using only the knowledge of leakoff test data and (2) being independent of the pore pressure. The method is based on a new concept called pseudo-overburden pressure, defined as the overburden pressure a formation would exhibit if it were plastic. The method was applied in several areas of the world, such as the US Gulf Coast (Mississippi Canyon and Green Canyon), with very good results.
Intensity estimation method of LED array for visible light communication
NASA Astrophysics Data System (ADS)
Ito, Takanori; Yendo, Tomohiro; Arai, Shintaro; Yamazato, Takaya; Okada, Hiraku; Fujii, Toshiaki
2013-03-01
This paper focuses on a road-to-vehicle visible light communication (VLC) system using an LED traffic light as the transmitter and a camera as the receiver. The traffic light is composed of a hundred LEDs on a two-dimensional plane. In this system, data are sent as two-dimensional brightness patterns by controlling each LED of the traffic light individually, and they are received as images by the camera. A problem is that neighboring LEDs in the received image merge, either because few pixels cover the transmitter when the receiver is distant, or because of blurring when the camera is defocused. As a result, the bit error rate (BER) increases owing to errors in recognizing the intensity of the LEDs. To solve this problem, we propose a method that estimates the intensity of the LEDs by solving the inverse problem of the communication channel characteristic from the transmitter to the receiver. The proposed method is evaluated by BER characteristics obtained by computer simulation and experiments. The results show that the proposed method estimates with better accuracy than the conventional methods, especially when the received image is strongly blurred and the number of pixels is small.
Estimating Return on Investment in Translational Research: Methods and Protocols
Trochim, William; Dilts, David M.; Kirk, Rosalind
2014-01-01
Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Science Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces ROI estimates at multiple levels: investigator, program, and institution. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706
Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.
Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake
2015-01-01
Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756
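The richness and diversity metrics compared across regions are standard and easy to compute from a vector of OTU counts. A generic sketch (not the authors' pipeline):

```python
import math

def richness_and_shannon(counts):
    """Species richness and Shannon diversity (H') from OTU counts.
    Standard metrics of the kind compared across 16S regions in such
    studies; zero-count taxa contribute to neither metric."""
    total = sum(counts)
    richness = sum(1 for c in counts if c > 0)
    shannon = -sum((c / total) * math.log(c / total) for c in counts if c > 0)
    return richness, shannon

# Three equally abundant taxa plus one absent taxon
r, h = richness_and_shannon([10, 10, 10, 0])
```

With equal abundances, H' reaches its maximum of ln(richness), which is a useful sanity check when validating a pipeline.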
ERIC Educational Resources Information Center
Rose, Andrew M.; And Others
The relationships between the characteristics of human tasks and the abilities required for task performance are investigated. The goal of the program is to generate principles which can be used to identify ability requirements from knowledge of the characteristics of a task and of variations in the conditions of task performance. Such knowledge…
ERIC Educational Resources Information Center
Fingerman, Paul W.; And Others
This report describes the third study in a program of research dealing with the relationships between the characteristics of human tasks and the abilities required for task performance. The goal of the program is to generate principles which can be used to identify ability requirements from knowledge of the characteristics of a task and of…
Estimating microwave emissivity of sea foam by Rayleigh method
NASA Astrophysics Data System (ADS)
Liu, Shu-Bo; Wei, En-Bo; Jia, Yan-Xia
2013-01-01
To estimate the microwave emissivity of sea foam consisting of dense seawater-coated air bubbles, the effective medium approximation is applied by regarding the foam layer as an isotropic dielectric medium. The Rayleigh method is developed to calculate the effective permittivity of the sea foam layer at different microwave frequencies, air volume fractions, and seawater coating thicknesses. With a periodic lattice model of coated bubbles and multilayered structures of effective foam media, microwave emissivities of sea foam layers with different effective permittivities obtained by the Rayleigh method are calculated. Good agreement is obtained by comparing model results with experimental data at 1.4, 10.8, and 36.5 GHz. Furthermore, sea foam microwave emissivities calculated by well-known effective permittivity formulas are investigated, such as the Silberstein, refractive model, and Maxwell-Garnett formulas. Their results are compared with those of our model. It is shown that the Rayleigh method gives more reasonable results.
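Of the mixing formulas compared, Maxwell-Garnett has a simple closed form. A sketch for complex permittivities (the host and inclusion values in the example are illustrative, not taken from the paper):

```python
def maxwell_garnett(eps_m: complex, eps_i: complex, f: float) -> complex:
    """Maxwell-Garnett effective permittivity of spherical inclusions
    (permittivity eps_i, volume fraction f) in a host medium (eps_m).
    One of the classical mixing formulas the paper compares against its
    Rayleigh-method results; shown here as a generic sketch."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Air bubbles (eps_i ~ 1) at 50% volume fraction in a lossy host medium
# (the host permittivity here is an illustrative complex value)
eps_eff = maxwell_garnett(eps_m=70 - 40j, eps_i=1 + 0j, f=0.5)
```

The formula reduces to the host permittivity at f = 0 and to the inclusion permittivity at f = 1, which is a convenient consistency check.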
Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2
Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.
1994-07-01
that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC) on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The report describes the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.
Semiempirical method for estimating the noise of a propeller
NASA Astrophysics Data System (ADS)
Samokhin, V. F.
2012-09-01
A semiempirical method for estimating the noise of a propeller on the basis of the Lighthill analogy is proposed. The main relations of the computational model for the acoustic-radiation power have been obtained from a dimensional analysis of the general solution of the inhomogeneous wave equation for the pulsed acoustic radiation from a propeller. A comparison of the calculation and experimental data on the acoustic-radiation power and the one-third octave spectra of the sound pressure of four- and eight-blade AV-72 and SV-24 propellers is presented.
ERIC Educational Resources Information Center
Wyra, Mirella; Lawson, Michael J.; Hungi, Njora
2007-01-01
The mnemonic keyword method is an effective technique for vocabulary acquisition. This study examines the effects on recall of word-meaning pairs of (a) training in use of the keyword procedure at the time of retrieval; and (b) the influence of the self-rated ability to image. The performance of students trained in bidirectional retrieval using…
ERIC Educational Resources Information Center
Derado, Josip; Garner, Mary L.; Tran, Thu-Hang
2016-01-01
Students' abilities and interests vary dramatically in the college mathematics classroom. How do we teach all of these students effectively? In this paper, we present the Point Reward System (PRS), a new method of assessment that addresses this problem. We designed the PRS with three main goals in mind: to increase the retention rates; to keep all…
Ayatollahi, Hossein; Sadeghian, Mohammad Hadi; Keramati, Mohammad Reza; Ayatollahi, Ali; Shajiei, Arezoo; Sheikhi, Maryam; Bakhshi, Samane
2016-01-01
Background: Nowadays, the definitive diagnosis of numerous diseases is based on genetic and molecular findings, so the preparation of suitable starting material for these evaluations is essential. Deoxyribonucleic acid (DNA) is the primary material for molecular pathology and genetic analysis, and purer DNA gives better results; likewise, a higher concentration of recovered DNA yields better results and greater amplifying ability in subsequent steps. We aimed to compare five DNA extraction methods with respect to DNA quality, including purity, concentration, and amplifying ability. Materials and Methods: Lymphoid tissue DNA was extracted from formalin-fixed, paraffin-embedded (FFPE) tissue by five different methods: phenol-chloroform as the reference method, a DNA isolation kit (QIAamp DNA FFPE Tissue Kit, Qiagen, Germany), proteinase K extraction, xylol extraction, and heat alkaline plus mineral oil extraction as the authors' innovative method. Finally, polymerase chain reaction (PCR) and real-time PCR were used to compare the methods with respect to DNA purity and concentration. Results: Among the five methods, the highest mean DNA purity was obtained with the heat alkaline method, and the highest mean DNA concentration with heat alkaline plus mineral oil. Furthermore, the best quantitative PCR result was obtained with the proteinase K method, which had the lowest average cycle threshold among the extraction methods. Conclusion: We conclude that our innovative DNA extraction method (heat alkaline plus mineral oil) achieved high DNA purity and concentration.
A method for sex estimation using the proximal femur.
Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia
2016-09-01
The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis. PMID:27373600
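The logistic-regression form of such a model is straightforward; only the fitted coefficients are specific to the training collection. The sketch below uses ILLUSTRATIVE placeholder coefficients, not the published model from this study, purely to show the shape of the classifier:

```python
import math

def predict_sex(fnw_mm: float, fnal_mm: float) -> str:
    """Logistic-regression sketch for sex estimation from femoral neck
    width (FNW) and femoral neck axis length (FNAL), both in mm.
    The coefficients below are ILLUSTRATIVE placeholders, not the
    published values; they only demonstrate the form of the model."""
    b0, b_fnw, b_fnal = -40.0, 0.6, 0.2   # hypothetical coefficients
    z = b0 + b_fnw * fnw_mm + b_fnal * fnal_mm
    p_male = 1.0 / (1.0 + math.exp(-z))
    return "male" if p_male >= 0.5 else "female"

estimate = predict_sex(fnw_mm=33.0, fnal_mm=95.0)
```

In practice the coefficients are fitted on an identified collection (here, the Luís Lopes Collection) and then validated on an independent test sample, as the authors did with the Coimbra collection.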
Analytical method to estimate resin cement diffusion into dentin
NASA Astrophysics Data System (ADS)
de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa
2016-05-01
This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. Evolution of peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C–O–C, 1113 cm-1) present in the cements, and the mineral content (P–O, 961 cm-1) in dentin were sigmoid shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
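The Boltzmann function fitted to the 1113 cm-1 Raman peak intensities has the standard sigmoid form; a sketch (the parameter values in the example are illustrative):

```python
import math

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann sigmoid y = a2 + (a1 - a2) / (1 + exp((x - x0) / dx)):
    a1 and a2 are the two plateau values, x0 the inflection point, and
    dx the width parameter that scales the transition zone."""
    return a2 + (a1 - a2) / (1.0 + math.exp((x - x0) / dx))

# At the inflection point x0 the intensity is exactly halfway between
# the plateaus; dx governs the extent of the cement-dentin diffusion zone.
midpoint = boltzmann(5.0, a1=1.0, a2=0.0, x0=5.0, dx=0.6)
```

Fitting this function to peak intensities sampled across the interface (e.g., with a nonlinear least-squares routine) yields x0 and dx, from which a diffusion-zone thickness can be read off.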
Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation
Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Baca, Arnold; Markovic, Goran
2016-01-01
The study aimed to evaluate the test–retest reliability of a newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m s-1; SQ: 2.92±1.0 vs. 2.93±1.0 m s-1; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70–0.88), and the coefficients of variation were low (CV = 5.3–5.4%). Finally, all three 356-SST variables identify, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men’s soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations. PMID:26812247
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
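For the single-input/single-output case the coherence reduces to the familiar ordinary coherence; a sketch at one frequency bin (the spectral values in the example are illustrative):

```python
def ordinary_coherence(gxx: float, gyy: float, gxy: complex) -> float:
    """Ordinary coherence gamma^2 = |Gxy|^2 / (Gxx * Gyy) at a single
    frequency, from the auto-spectral densities Gxx, Gyy and the
    cross-spectral density Gxy. The paper's result is that the partial
    and multiple coherences of a full MIMO problem follow from a
    Cholesky (or SVD) factorization of the full cross-spectral matrix;
    this sketch covers only the ordinary, 2x2 case."""
    return (abs(gxy) ** 2) / (gxx * gyy)

# Illustrative spectral values at one frequency bin
g2 = ordinary_coherence(gxx=2.0, gyy=8.0, gxy=3 + 1j)
```

By the Cauchy-Schwarz inequality the result always lies in [0, 1], with 1 indicating a perfectly linear relationship at that frequency.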
Pharmacokinetic parameter estimations by minimum relative entropy method.
Amisaki, T; Eguchi, S
1995-10-01
For estimating pharmacokinetic parameters, we introduce the minimum relative entropy (MRE) method and compare its performance with least squares methods. There are several variants of least squares, such as ordinary least squares (OLS), weighted least squares, and iteratively reweighted least squares. In addition to these traditional methods, even extended least squares (ELS), a relatively new approach to nonlinear regression analysis, can be regarded as a variant of least squares. These methods differ from each other in their manner of handling weights. It has been recognized that least squares methods with an inadequate weighting scheme may cause misleading results (the "choice of weights" problem). Although least squares with uniform weights, i.e., OLS, is rarely used in pharmacokinetic analysis, it offers the principle of least squares. The objective function of OLS can be regarded as a distance between observed and theoretical pharmacokinetic values on the Euclidean space RN, where N is the number of observations. Thus OLS produces its estimates by minimizing the Euclidean distance. On the other hand, MRE works by minimizing the relative entropy, which expresses the discrepancy between two probability densities. Because pharmacokinetic functions are not density functions in general, we use a particular form of the relative entropy whose domain is extended to the space of all positive functions. MRE never assumes any distribution of errors involved in observations. Thus, it can be a possible solution to the choice of weights problem. Moreover, since the mathematical form of the relative entropy, i.e., an expectation of the log-ratio of two probability density functions, is different from that of a usual Euclidean distance, the behavior of MRE may be different from those of least squares methods. To clarify the behavior of MRE, we have compared the performance of MRE with those of ELS and OLS by carrying out an intensive simulation study, where four pharmaco
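The contrast between the two objective functions can be made concrete. A sketch of the OLS distance and a relative entropy extended to positive functions, of the kind the abstract describes (the observation and model values are illustrative; the exact extended-entropy form used by the authors may differ):

```python
import math

def ols_objective(y, f):
    """Ordinary least squares: squared Euclidean distance between
    observed values y and model predictions f."""
    return sum((yi - fi) ** 2 for yi, fi in zip(y, f))

def mre_objective(y, f):
    """Generalized relative entropy for positive (not necessarily
    normalized) functions: D(y, f) = sum(y*log(y/f) - y + f).
    Non-negative, and zero exactly when y == f pointwise."""
    return sum(yi * math.log(yi / fi) - yi + fi for yi, fi in zip(y, f))

# Illustrative concentrations and model predictions
obs = [2.0, 1.0, 0.5]
fit = [1.8, 1.1, 0.45]
scores = (ols_objective(obs, fit), mre_objective(obs, fit))
```

Minimizing either objective over model parameters yields estimates, but the entropy criterion penalizes relative rather than absolute discrepancies, which is why its behavior can differ from least squares without any explicit weighting scheme.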
Whitbeck, M.; Grace, J.B.
2006-01-01
The estimation of aboveground biomass is important in the management of natural resources. Direct measurements by clipping, drying, and weighing of herbaceous vegetation are time-consuming and costly. Therefore, non-destructive methods for efficiently and accurately estimating biomass are of interest. We compared two non-destructive methods, visual obstruction and light penetration, for estimating aboveground biomass in marshes of the upper Texas, USA coast. Visual obstruction was estimated using the Robel pole method, which primarily measures the density and height of the canopy. Light penetration through the canopy was measured using a Decagon light wand, with readings taken above the vegetation and at the ground surface. Clip plots were also taken to provide direct estimates of total aboveground biomass. Regression relationships between estimated and clipped biomass were significant using both methods. However, the light penetration method was much more strongly correlated with clipped biomass under these conditions (R2 value 0.65 compared to 0.35 for the visual obstruction approach). The primary difference between the two methods in this situation was the ability of the light-penetration method to account for variations in plant litter. These results indicate that light-penetration measurements may be better for estimating biomass in marshes when plant litter is an important component. We advise that, in all cases, investigators should calibrate their methods against clip plots to evaluate applicability to their situation. © 2006, The Society of Wetland Scientists.
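The calibration against clip plots that the authors recommend is an ordinary least-squares line; a sketch with hypothetical paired readings:

```python
def calibrate(x, y):
    """Least-squares line y = a + b*x relating a non-destructive
    reading (e.g., fraction of light intercepted) to clipped biomass,
    as recommended for calibrating either method against clip plots.
    Returns (intercept a, slope b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical paired light-interception readings and clip-plot biomass (g/m^2)
a, b = calibrate([0.2, 0.4, 0.6, 0.8], [110, 205, 290, 410])
```

Once fitted, the line converts future non-destructive readings into biomass estimates without further clipping; the fit's R2 indicates whether the method is applicable to the site.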
A variable circular-plot method for estimating bird numbers
Reynolds, R.T.; Scott, J.M.; Nussbaum, R.A.
1980-01-01
A bird census method is presented that is designed for tall, structurally complex vegetation types, and rugged terrain. With this method the observer counts all birds seen or heard around a station, and estimates the horizontal distance from the station to each bird. Count periods at stations vary according to the avian community and structural complexity of the vegetation. The density of each species is determined by inspecting a histogram of the number of individuals per unit area in concentric bands of predetermined widths about the stations, choosing the band (with outside radius x) where the density begins to decline, and summing the number of individuals counted within the circle of radius x and dividing by the area (πx²). Although all observations beyond radius x are rejected with this procedure, coefficients of maximum distance.
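The band-selection rule can be sketched in Python. The band width and the detection distances below are invented for illustration; a real survey would use species-specific count periods and field data:

```python
import math

def circular_plot_density(distances, band_width=10.0):
    """Sketch of the variable circular-plot rule described above.

    Bin detection distances into concentric bands, find the first band where
    density per unit area declines, and divide the cumulative count within
    that radius x by the circle area pi*x^2. The band width is an assumption.
    """
    n_bands = int(max(distances) // band_width) + 1
    counts = [0] * n_bands
    for d in distances:
        counts[int(d // band_width)] += 1
    densities = []
    for i, c in enumerate(counts):
        r0, r1 = i * band_width, (i + 1) * band_width
        # birds per unit area within each annulus
        densities.append(c / (math.pi * (r1 ** 2 - r0 ** 2)))
    x_index = next((i for i in range(1, n_bands)
                    if densities[i] < densities[i - 1]), n_bands)
    x = x_index * band_width
    return sum(counts[:x_index]) / (math.pi * x ** 2)

# hypothetical detection distances (m): detectability falls off beyond ~30 m
detections = [4, 7, 11, 12, 13, 14, 16, 17, 19, 20, 20.5, 21,
              22, 23, 24, 24.5, 25, 26, 27, 28, 29, 32, 35, 38]
print(circular_plot_density(detections))
```

With these data, per-area density rises through the first three bands and drops in the 30-40 m band, so x = 30 m and the 21 detections inside that radius set the density estimate.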
Accuracy of age estimation of radiographic methods using developing teeth.
Maber, M; Liversidge, H M; Hector, M P
2006-05-15
Developing teeth are used to assess maturity and estimate age in a number of disciplines; however, the accuracy of different methods has not been systematically investigated. The aim of this study was to determine the accuracy of several methods. Tooth formation was assessed from radiographs of healthy children attending a dental teaching hospital. The sample was 946 children (491 boys, 455 girls, aged 3-16.99 years) with similar numbers of children of Bangladeshi and British Caucasian ethnic origin. Panoramic radiographs were examined and seven mandibular teeth staged according to Demirjian's dental maturity scale [A. Demirjian, Dental development, CD-ROM, Silver Platter Education, University of Montreal, Montreal, 1993-1994; A. Demirjian, H. Goldstein, J.M. Tanner, A new system of dental age assessment, Hum. Biol. 45 (1973) 211-227; A. Demirjian, H. Goldstein, New systems for dental maturity based on seven and four teeth, Ann. Hum. Biol. 3 (1976) 411-421], Nolla [C.M. Nolla, The development of the permanent teeth, J. Dent. Child. 27 (1960) 254-266] and Haavikko [K. Haavikko, The formation and the alveolar and clinical eruption of the permanent teeth. An orthopantomographic study. Proc. Finn. Dent. Soc. 66 (1970) 103-170]. Dental age was calculated for each method, including an adaptation of Demirjian's method with updated scoring [G. Willems, A. Van Olmen, B. Spiessens, C. Carels, Dental age estimation in Belgian children: Demirjian's technique revisited, J. Forensic Sci. 46 (2001) 893-895]. The mean difference (±S.D., in years) between dental and real age was calculated for each method and, in the case of Haavikko's method, for each tooth type, and tested using a t-test. The mean difference was also calculated for the age group 3-13.99 years for Haavikko (mean and individual teeth). Results show that the most accurate method was that of Willems [G. Willems, A. Van Olmen, B. Spiessens, C. Carels, Dental age estimation in Belgian children: Demirjian's technique revisited, J. Forensic Sci
Application of Common Mid-Point Method to Estimate Asphalt
NASA Astrophysics Data System (ADS)
Zhao, Shan; Al-Qadi, Imad
2015-04-01
3-D radar is a multi-array stepped-frequency ground-penetrating radar (GPR) that can measure at a very close sampling interval in both in-line and cross-line directions. Constructing asphalt layers in accordance with specified thicknesses is crucial for pavement structural capacity and pavement performance. The common mid-point (CMP) method is a multi-offset measurement method that can improve the accuracy of asphalt layer thickness estimation. In this study, the viability of using 3-D radar to predict asphalt concrete pavement thickness with an extended CMP method was investigated. GPR signals were collected on asphalt pavements with various thicknesses. The time-domain resolution of the 3-D radar was improved by applying a zero-padding technique in the frequency domain. The performance of the 3-D radar was then compared to that of an air-coupled horn antenna. The study concluded that 3-D radar can accurately predict asphalt layer thickness using the CMP method when the layer thickness is greater than 0.13 m. The limited time-domain resolution of 3-D radar can be addressed by zero-padding in the frequency domain. Keywords: asphalt pavement thickness, 3-D radar, stepped-frequency, common mid-point method, zero padding.
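The frequency-domain zero-padding step can be illustrated with NumPy. The record length, padding factor, and the echo at a fractional delay of 10.5 samples are invented for the demonstration; they stand in for a thin-layer reflection in a stepped-frequency record:

```python
import numpy as np

# Hypothetical stepped-frequency record: a single reflection at a
# fractional delay of 10.5 samples (e.g. a thin asphalt-layer echo).
N = 64
k = np.arange(N)
spectrum = np.exp(-2j * np.pi * k * 10.5 / N)

# Direct inverse FFT: the time axis has only N points, so the peak can
# only land on an integer sample (10 or 11 here).
coarse = np.abs(np.fft.ifft(spectrum))

# Zero-padding the spectrum 8x before the inverse FFT interpolates the
# time-domain trace: same bandwidth, but a denser time axis, so the
# peak now lands on the true fractional delay.
pad = 8
fine = np.abs(np.fft.ifft(spectrum, n=N * pad))

print(int(np.argmax(coarse)), np.argmax(fine) / pad)
```

Zero-padding adds no new information (the bandwidth is unchanged), but the interpolated trace lets the echo pick resolve the 10.5-sample delay that the coarse grid rounds off.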
Study on color difference estimation method of medicine biochemical analysis
NASA Astrophysics Data System (ADS)
Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun
2006-01-01
Biochemical analysis is an important clinical inspection and diagnosis method in hospitals, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for each detection target and each degree of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, so that further analysis and diagnosis of the urine can be made. Color is a three-dimensional physical variable with a psychological component, while reflectance is one-dimensional; therefore, estimating the color difference in a urine test can achieve better precision and convenience than the conventional one-dimensional reflectance test, and can support a more accurate diagnosis. A digital camera can easily capture an image of the urine test paper and is therefore a convenient tool for urine biochemical analysis. In the experiment, color images of urine test paper were taken with a popular color digital camera and saved on a computer running simple color space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Test samples are graded according to intelligent detection of quantitative color. Because the images taken at each time point are saved on the computer, the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses that involve color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
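A minimal version of the RGB -> XYZ -> L*a*b* pipeline plus a CIE76 color difference can be written directly. The sRGB/D65 conversion matrix and white point are standard textbook values, and the sample colors are illustrative inputs, not the paper's calibration:

```python
import math

def srgb_to_xyz(r, g, b):
    # sRGB (0..1) -> linear RGB -> XYZ (D65), standard sRGB matrix
    def lin(u):
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

def xyz_to_lab(x, y, z, white=(0.95047, 1.0, 1.08883)):
    # CIE L*a*b* with the D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(rgb1, rgb2):
    # CIE76 colour difference between two sRGB colours
    lab1 = xyz_to_lab(*srgb_to_xyz(*rgb1))
    lab2 = xyz_to_lab(*srgb_to_xyz(*rgb2))
    return math.dist(lab1, lab2)

# identical pad colours differ by 0; distinct pads give a positive distance
print(delta_e((0.8, 0.7, 0.2), (0.8, 0.7, 0.2)),
      delta_e((0.8, 0.7, 0.2), (0.6, 0.5, 0.2)) > 1.0)
```

Grading a test pad then reduces to finding the standard threshold color with the smallest ΔE to the photographed pad.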
Comparison of carbon and biomass estimation methods for European forests
NASA Astrophysics Data System (ADS)
Neumann, Mathias; Mues, Volker; Harkonen, Sanna; Mura, Matteo; Bouriaud, Olivier; Lang, Mait; Achten, Wouter; Thivolle-Cazat, Alain; Bronisz, Karol; Merganicova, Katarina; Decuyper, Mathieu; Alberdi, Iciar; Astrup, Rasmus; Schadauer, Klemens; Hasenauer, Hubert
2015-04-01
National and international reporting systems as well as research, enterprises and political stakeholders require information on carbon stocks of forests. Terrestrial assessment systems like forest inventory data in combination with carbon calculation methods are often used for this purpose. To assess the effect of the calculation method used, a comparative analysis was done using the carbon calculation methods from 13 European countries and the research plots from ICP Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests). These methods are applied for five European tree species (Fagus sylvatica L., Quercus robur L., Betula pendula Roth, Picea abies (L.) Karst. and Pinus sylvestris L.) using a standardized theoretical tree dataset to avoid biases due to data collection and sample design. The carbon calculation methods use allometric biomass and volume functions, carbon and biomass expansion factors or a combination thereof. The results of the analysis show a high variation in the results for total tree carbon as well as for carbon in the single tree compartments. The same pattern is found when comparing the respective volume estimates. This is consistent for all five tree species and the variation remains when the results are grouped according to the European forest regions. Possible explanations are differences in the sample material used for the biomass models, the model variables or differences in the definition of tree compartments. The analysed carbon calculation methods have a strong effect on the results both for single trees and forest stands. To avoid misinterpretation the calculation method has to be chosen carefully along with quality checks and the calculation method needs consideration especially in comparative studies to avoid biased and misleading conclusions.
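One common calculation route among those compared (volume to biomass via expansion factors, then biomass to carbon) reduces to a product of factors; all parameter values below are placeholders, not taken from any of the 13 national methods:

```python
def stem_carbon_kg(volume_m3, wood_density=430.0, bef=1.3, carbon_fraction=0.5):
    """Stem volume -> dry biomass via basic wood density (kg/m^3) and a
    biomass expansion factor (BEF), then biomass -> carbon via a carbon
    fraction. All default values are illustrative placeholders."""
    return volume_m3 * wood_density * bef * carbon_fraction

# the same 1 m^3 tree under two hypothetical national parameter sets:
# different densities and BEFs alone already shift the carbon estimate
print(stem_carbon_kg(1.0), stem_carbon_kg(1.0, wood_density=410.0, bef=1.45))
```

Even this toy comparison shows how the choice of factors propagates directly into the carbon estimate, which is the variation the study quantifies.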
Method for Estimating the Presence of Clostridium perfringens in Food
Harmon, S. M.; Kautter, D. A.
1970-01-01
The methods currently used for the enumeration of Clostridium perfringens in food are often inadequate because of the rapid loss of viability of this organism when the sample is frozen or refrigerated. A method for estimating the presence of C. perfringens in food which utilizes the hemolytic and lecithinase activities of alpha toxin was developed. The hemolytic activity was measured in hemolysin indicator plates. Lecithinase activity of the extract was determined by the lecithovitellin test. Of 34 strains of C. perfringens associated with foodborne disease outbreaks, 32 produced sufficient alpha toxin in roast beef with gravy and in chicken broth to permit a reliable estimate of growth in these foods. Alpha toxin was extracted from food with 0.4 M saline buffered (at pH 8.0) with 0.05 M N-2-hydroxyethylpiperazine-N′-2-ethanesulfonic acid and concentrated by dialysis against 30% polyethylene glycol. A detectable quantity of alpha toxin was produced by approximately 10^6 C. perfringens cells per g of substrate, and the amount increased in proportion to the cell population. Results obtained with food samples responsible for gastroenteritis in humans indicate that a correlation can be made between the amount of alpha toxin present and previous growth of C. perfringens in food regardless of whether the organisms are viable when the examination is performed. PMID:4321712
A Review of the Method of Moho Fold Estimation
NASA Astrophysics Data System (ADS)
Shin, Y.; Lim, M.; Park, Y.; Rim, H.
2010-12-01
We review the method of Moho fold estimation and its validation introduced in the recently published papers by Shin et al. (2009, 2007) and by Jin et al. (1994). The Tibetan Plateau, the study area of those papers, is strongly affected by compression between the Eurasian and Indian Plates and consequently has particular deformation structures related to this collisional tectonic environment, including possible buckling of the very deep Moho. The recent method suggested by Shin et al. (2009) enables one to reveal the three-dimensional structure of the Moho fold and to validate its direction, amplitude, and wavelength by comparison with other geophysical (e.g. an elastic plate model under horizontal loading) or geodetic (e.g. current crustal movement from GPS) evidence. We also review several particular features of the Moho fold beneath Tibet. Finally, from the viewpoint of Moho fold estimation, we present a comparison of recent global gravity models: both satellite-based models (GGM03S, EIGEN-5S, ITG-GRACE2010S, GOCO01S, and GO_CONS_GCF_2DIR) and combination models including terrestrial gravimetry (GGM03C, EIGEN-5C, EGM2008, EIGEN-GL04C, and EIGEN51C). References: [1] Jin, Y. et al., 1994, Nature, 371, 669-674. [2] Shin, Y. H. et al., 2009, Geophysical Research Letters, 36, L01302, doi:10.1029/2008GL036068. [3] Shin, Y. H. et al., 2007, Geophysical Journal International, 170, 971-985.
Method of Estimating Continuous Cooling Transformation Curves of Glasses
NASA Technical Reports Server (NTRS)
Zhu, Dongmei; Zhou, Wancheng; Ray, Chandra S.; Day, Delbert E.
2006-01-01
A method is proposed for estimating the critical cooling rate and continuous cooling transformation (CCT) curve from isothermal TTT data of glasses. The critical cooling rates and CCT curves for a group of lithium disilicate glasses containing different amounts of Pt as a nucleating agent, estimated through this method, are compared with the experimentally measured values. By analysis of the experimental and calculated data for the lithium disilicate glasses, a simple relationship was found between the crystallized amount in the glasses during continuous cooling, X, and the undercooling, ΔT: X = A·R^(-4)·exp(B·ΔT), where ΔT is the temperature difference between the theoretical melting point of the glass composition and the temperature in question, R is the cooling rate, and A and B are constants. The relation between the amounts of crystallization during continuous cooling and during an isothermal hold can be expressed as X_cT/X_iT = (4/B)^4·ΔT^(-4), where X_cT stands for the crystallized amount in a glass during continuous cooling for a time t when the temperature reaches T, and X_iT is the crystallized amount during an isothermal hold at temperature T for a time t.
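The two reported relations can be checked against each other numerically. A and B are material constants determined from the TTT data, so the values below are placeholders, not fitted constants:

```python
import math

def x_continuous(R, dT, A=1.0, B=0.05):
    # X = A * R^(-4) * exp(B*dT): crystallized amount during continuous
    # cooling at rate R and undercooling dT (A, B are placeholders)
    return A * R ** -4 * math.exp(B * dT)

def continuous_to_isothermal_ratio(dT, B=0.05):
    # X_cT / X_iT = (4/B)^4 * dT^(-4), the second relation above
    return (4 / B) ** 4 * dT ** -4

# the ratio passes through 1 exactly when dT = 4/B, i.e. continuous
# cooling and an isothermal hold then crystallize the same amount
print(continuous_to_isothermal_ratio(4 / 0.05))
```

The crossover at ΔT = 4/B is a quick sanity check that the two expressions are mutually consistent.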
The Mayfield method of estimating nesting success: A model, estimators and simulation results
Hensler, G.L.; Nichols, J.D.
1981-01-01
Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
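The estimators can be sketched in a few lines. The variance shown is the standard binomial-type asymptotic form for the Mayfield estimator, and the counts below are hypothetical, not from the Monte Carlo simulations:

```python
import math

def mayfield(losses, exposure_days, nest_period=25, z=1.96):
    """Mayfield daily survival rate as an MLE, with an asymptotic CI sketch.

    s_hat = 1 - losses/exposure, where exposure is total nest-days observed.
    var(s_hat) ~ s_hat*(1 - s_hat)/exposure (asymptotic, binomial-type form).
    Nesting success over the whole period = s_hat ** nest_period.
    """
    s = 1 - losses / exposure_days
    se = math.sqrt(s * (1 - s) / exposure_days)
    ci = (s - z * se, s + z * se)
    return s, se, ci, s ** nest_period

# hypothetical field data: 20 nest losses over 500 nest-days of exposure
s, se, ci, success = mayfield(losses=20, exposure_days=500)
print(round(s, 3), round(success, 3))
```

Raising the daily rate to the power of the nest period is what separates the Mayfield estimate from the naive "fraction of found nests that succeeded", which ignores exposure and is the inferior traditional estimator mentioned above.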
Method for estimating road salt contamination of Norwegian lakes
NASA Astrophysics Data System (ADS)
Kitterød, Nils-Otto; Wike Kronvall, Kjersti; Turtumøygaard, Stein; Haaland, Ståle
2013-04-01
Consumption of road salt in Norway, used to improve winter road conditions, has tripled during the last two decades, and there is a need to quantify limits for the optimal use of road salt to avoid further environmental harm. The purpose of this study was to implement a methodology to estimate the chloride concentration in any given water body in Norway. This goal is feasible if the complexity of solute transport in the landscape is simplified. The idea was to keep computations as simple as possible in order to increase the spatial resolution of the input functions. The first simplification we made was to treat all roads exposed to regular salt application as steady-state sources of sodium chloride. This is valid if new road salt is applied before previous contamination is removed through precipitation. The main reasons for this assumption are the significant retention capacity of vegetation, organic matter, and soil. The second simplification was that the groundwater table is close to the surface. This assumption is valid for the major part of Norway, which means that topography is sufficient to delineate the catchment area at any location in the landscape. Given these two assumptions, we applied spatial functions of mass load (mass of NaCl per unit time) and conditional estimates of normal water balance (volume of water per unit time) to calculate the steady-state chloride concentration along the lake perimeter. The spatial resolution of mass load and estimated concentration along the lake perimeter was 25 m x 25 m, while the water balance had 1 km x 1 km resolution. The method was validated for a limited number of Norwegian lakes, and estimation results were compared to observations. Initial results indicate significant overlap between measurements and estimates, but only for lakes where road salt is the major contributor to chloride contamination. For lakes in catchments with high subsurface transmissivity, the groundwater table is not necessarily following the
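The core steady-state calculation is a mass-load/water-balance ratio. A minimal sketch, with entirely hypothetical numbers (the study's actual inputs are gridded spatial functions):

```python
def steady_state_chloride(load_kg_per_yr, runoff_mm_per_yr, catchment_m2):
    """Steady-state chloride concentration (mg/L) at a point on a lake perimeter.

    Sketch of the ratio described above: annual road-salt chloride load
    divided by the annual water flux from the upslope catchment.
    """
    water_m3 = runoff_mm_per_yr / 1000.0 * catchment_m2   # m^3 of water per year
    return load_kg_per_yr * 1e6 / (water_m3 * 1000.0)     # mg of Cl per litre

# 500 kg Cl/yr draining a 2 km^2 catchment with 600 mm/yr runoff
print(steady_state_chloride(500, 600, 2e6))
```

The simplifications above enter exactly here: a steady-state load makes the numerator constant, and a shallow groundwater table lets topography define the catchment area in the denominator.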
ERIC Educational Resources Information Center
Zwick, Rebecca; And Others
A previous simulation study of methods for assessing differential item functioning (DIF) in computer-adaptive tests (CATs) showed that modified versions of the Mantel-Haenszel and standardization methods work well with CAT data. In that study, data were generated using the three-parameter logistic (3PL) model, and this same model was assumed in obtaining item…
Estimation of Anthocyanin Content of Berries by NIR Method
Zsivanovits, G.; Ludneva, D.; Iliev, A.
2010-01-21
Anthocyanin contents of fruits were estimated with a VIS spectrophotometer and compared with spectra measured with a NIR spectrophotometer (600-1100 nm, in 10 nm steps). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol using NIR is easier, faster, and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, built around a monochromator and peripherals from Bentham Instruments Ltd (GB), and a photometric camera created at the Canning Research Institute were used. An important feature of this camera is that it allows simultaneous measurement of transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed with CAMO Unscrambler 9.1 software using the PCA, PLS, and PCR methods. Based on the analyzed spectra, qualitatively and quantitatively sensitive calibrations were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.
A comparison of spectral estimation methods for the analysis of sibilant fricatives
Reidy, Patrick F.
2015-01-01
It has been argued that, to ensure accurate spectral feature estimates for sibilants, the spectral estimation method should include a low-variance spectral estimator; however, no empirical evaluation of estimation methods in terms of feature estimates has been given. The spectra of /s/ and /ʃ/ were estimated with different methods that varied the pre-emphasis filter and estimator. These methods were evaluated in terms of effects on two features (centroid and degree of sibilance) and on the detection of four linguistic contrasts within these features. Estimation method affected the spectral features but none of the tested linguistic contrasts. PMID:25920873
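A spectral centroid computed from a simple windowed periodogram illustrates the kind of feature being compared. The sampling rate, the test tone, and the single Hann-tapered estimator here are stand-ins for the paper's sibilant spectra and its set of estimators:

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Spectral centroid (Hz) from a Hann-windowed periodogram.

    A sketch only: the paper compares several estimators (and pre-emphasis
    filters); here one windowed periodogram stands in for the estimator.
    """
    w = np.hanning(len(signal))
    psd = np.abs(np.fft.rfft(signal * w)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    # PSD-weighted mean frequency
    return float(np.sum(freqs * psd) / np.sum(psd))

fs = 16000
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * 5000 * t)   # a 5 kHz tone: centroid should sit near 5 kHz
print(spectral_centroid(tone, fs))
```

Because the centroid is a ratio of PSD-weighted sums, any bias or variance in the underlying spectral estimate propagates directly into the feature, which is the paper's central concern.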
Estimating recharge at Yucca Mountain, Nevada, USA: comparison of methods
NASA Astrophysics Data System (ADS)
Flint, Alan L.; Flint, Lorraine E.; Kwicklis, Edward M.; Fabryka-Martin, June T.; Bodvarsson, Gudmundur S.
2002-02-01
Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
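One of the simpler techniques in that list, the chloride mass balance, reduces to a one-line calculation. The precipitation and chloride concentrations below are illustrative values, not site data:

```python
def chloride_mass_balance(precip_mm_yr, cl_precip_mg_l, cl_porewater_mg_l):
    """Chloride mass balance recharge estimate (one of the methods listed).

    R = P * Cl_precip / Cl_porewater: chloride deposited by precipitation
    is concentrated by evapotranspiration, so high subsurface chloride
    implies low recharge. Inputs here are illustrative only.
    """
    return precip_mm_yr * cl_precip_mg_l / cl_porewater_mg_l

# e.g. 170 mm/yr precipitation, 0.35 mg/L in rain, 12 mg/L in pore water
print(chloride_mass_balance(170, 0.35, 12))  # recharge in mm/yr
```

With these illustrative inputs the estimate lands near 5 mm/yr, the same order as the spatially averaged net-infiltration values quoted above.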
Computational methods estimating uncertainties for profile reconstruction in scatterometry
NASA Astrophysics Data System (ADS)
Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.
2008-04-01
The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths, and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of the diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation, minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters depend essentially on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimates gained from the approximate covariance matrix of the profile parameters close to the optimal solution, and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
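The Gauss-Newton iteration itself is generic. A minimal sketch, with a toy two-parameter forward map standing in for the finite-element Helmholtz solver (which is far beyond a few lines):

```python
import numpy as np

def gauss_newton(forward, jacobian, x0, y_obs, iters=20):
    """Gauss-Newton iteration for a nonlinear operator equation forward(x) ~ y_obs.

    Each step solves the linearized least-squares problem J dx = y_obs - forward(x)
    and updates x. The forward map below is a toy stand-in, not a diffraction model.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y_obs - forward(x)
        dx, *_ = np.linalg.lstsq(jacobian(x), r, rcond=None)
        x = x + dx
    return x

# toy "efficiencies": two nonlinear functions of parameters (a, b)
def forward(p):
    return np.array([p[0] * np.exp(p[1]), p[0] ** 2 + p[1]])

def jacobian(p):
    return np.array([[np.exp(p[1]), p[0] * np.exp(p[1])],
                     [2 * p[0], 1.0]])

truth = np.array([1.5, 0.3])
est = gauss_newton(forward, jacobian, x0=[1.0, 0.0], y_obs=forward(truth))
print(np.round(est, 6))
```

Near the optimum, J^T J (suitably scaled) gives the approximate covariance matrix of the parameters mentioned above, which is what gets compared to the Monte Carlo uncertainty estimate.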
Effect of packing density on strain estimation by Fry method
NASA Astrophysics Data System (ADS)
Srivastava, Deepak; Ojha, Arun
2015-04-01
Fry method is a graphical technique that uses the relative movement of material points, typically grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to packing density, (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations, (iii) estimation of strain in each distorted point set by the Fry method, and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain centre distributions show stronger clustering and a greater degree of anisotropy with decreasing packing density. As compared to the degree of sorting alone, a
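The two quantities at the heart of the study can be sketched directly: the Fry plot is the cloud of all pairwise separation vectors between grain centres (its central vacancy outlines the strain ellipse), and the packing density is an area ratio. The grain centres and radii below are invented inputs:

```python
import itertools
import math

def fry_points(centres):
    """All ordered pairwise separation vectors between grain centres.

    In a strained sample, the central vacancy of this point cloud outlines
    the finite-strain ellipse; here we only build the cloud itself.
    """
    return [(xa - xb, ya - yb)
            for (xa, ya), (xb, yb) in itertools.permutations(centres, 2)]

def packing_density(grain_radii, sample_area):
    # ratio of the area occupied by all grains to the total sample area
    return sum(math.pi * r ** 2 for r in grain_radii) / sample_area

# four hypothetical grain centres in a 2 x 2 sample, all with radius 0.2
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(len(fry_points(pts)), packing_density([0.2] * 4, 4.0))
```

For n grains the Fry plot has n(n-1) points, which is why the method needs reasonably dense, anticlustered centre distributions to show a clean central vacancy.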
Estimating Earth's modal Q with epicentral stacking method
NASA Astrophysics Data System (ADS)
Chen, X.; Park, J. J.
2014-12-01
The attenuation rates of Earth's normal modes are the most important constraints on the anelastic state of Earth's deep interior. Yet current measurements of Earth's attenuation rates suffer from three sources of bias: the mode coupling effect, the beating effect, and background noise, which together lead to significant uncertainties in the attenuation rates. In this research, we present a new technique to estimate the attenuation rates of Earth's normal modes - the epicentral stacking method. Rather than using the conventional geographical coordinate system, we instead deal with Earth's normal modes in the epicentral coordinate system, in which only 5 singlets rather than 2l+1 are excited. By stacking records from the same events at a series of time lags, we are able to recover the time-varying amplitudes of the 5 excited singlets, and thus measure their attenuation rates. The advantage of our method is that it enhances the SNR through stacking and minimizes the background noise effect, yet it avoids the beating-effect problem commonly associated with the conventional multiplet stacking method by singling out the singlets. The attenuation rates measured with our epicentral stacking method appear to be reliable in that: a) the measured attenuation rates are generally consistent among the 10 large events we used, except for a few events with unexplained larger attenuation rates; and b) the log of singlet amplitude versus time lag falls very close to a straight line, suggesting an accurate estimation of the attenuation rate. The Q measurements from our method are consistently lower than previous modal Q measurements, but closer to the PREM model. For example, for mode 0S25, whose Coriolis force coupling is negligible, our measured Q is between 190 and 210 depending on the event, while the PREM modal Q of 0S25 is 205 and previous modal Q measurements are as high as 242. The difference between our results and previous measurements might be due to the lower
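The straight-line fit mentioned in b) is the core of the measurement: a decaying singlet amplitude follows A(t) = A0·exp(-π·f·t/Q), so the slope of ln A versus time lag gives -π·f/Q. A synthetic sketch, with the frequency and Q invented (chosen near the 0S25 values discussed):

```python
import math

def fit_slope(x, y):
    # ordinary least-squares slope of y against x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def modal_q(times_s, amplitudes, freq_hz):
    """Q from singlet amplitude decay: A(t) = A0 * exp(-pi*f*t/Q),
    so the slope of ln A versus time lag is -pi*f/Q."""
    slope = fit_slope(times_s, [math.log(a) for a in amplitudes])
    return -math.pi * freq_hz / slope

# synthetic noiseless singlet: f = 3.3 mHz, true Q = 200, lags of 0..4 days
f, q_true = 3.3e-3, 200.0
t = [i * 86400.0 for i in range(5)]
a = [math.exp(-math.pi * f * ti / q_true) for ti in t]
print(modal_q(t, a, f))
```

With real data the stacked singlet amplitudes replace the synthetic ones, and departures of ln A from a straight line flag the residual noise or coupling effects discussed above.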
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2014-01-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remain uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region with an inversion process to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr^-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than those of most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we developed an extended stochastic technique for seismic hazard assessment. The technique builds on the stochastic method of Boore (2003) "Simulation of ground motion using the stochastic method. Pure Appl. Geophys. 160:635-676". The essential aim of the extended stochastic technique is to simulate ground motion in order to help minimize the consequences of future earthquakes. The first step of the technique is to define the seismic sources that most strongly affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources, followed by estimation of the ground motion using an empirical attenuation relationship. Finally, site amplification is incorporated when calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this technique at Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta cities to predict the ground motion. It was also applied at Cairo, Zagazig and Damietta cities to estimate the maximum peak ground acceleration under actual soil conditions. In addition, 0.5, 1, 5, 10 and 20 % damping median response spectra are estimated using the extended stochastic simulation technique. The highest calculated acceleration values at bedrock conditions are found at Suez city, with a value of 44 cm s-2. These acceleration values decrease towards the north of the study area, reaching 14.1 cm s-2 at Damietta city. These results agree well with previous seismic hazard studies of northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
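The stepwise recipe (source magnitude → empirical attenuation → site amplification → PGA) can be sketched generically. The functional form and the coefficients a, b, c below are placeholders, not the attenuation relationship actually calibrated for northern Egypt in the study.

```python
import numpy as np

def pga_bedrock(mag, r_km, a=-1.5, b=0.5, c=-1.0):
    """Generic empirical attenuation form ln(PGA) = a + b*M + c*ln(R).

    Coefficients a, b, c are illustrative placeholders only.
    """
    return np.exp(a + b * mag + c * np.log(r_km))  # toy units

def pga_surface(mag, r_km, site_amp):
    """Apply a frequency-independent site amplification factor."""
    return site_amp * pga_bedrock(mag, r_km)

# Larger magnitude or shorter distance raises bedrock PGA;
# soft-soil amplification scales it further.
print(pga_surface(6.0, 30.0, site_amp=1.8) > pga_bedrock(6.0, 30.0))  # True
```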
Can the gradient method improve our ability to predict soil respiration?
NASA Astrophysics Data System (ADS)
Phillips, Claire; Nickerson, Nicholas; Risk, Dave
2015-04-01
Soil surface flux measurements integrate respiration across steep vertical gradients of soil texture, moisture, temperature, and carbon substrates. Although there are benefits to integrating complex soil processes into a single surface measure, e.g. for constructing soil carbon budgets, one serious drawback of studying only surface respiration is the difficulty of generating predictive relationships from environmental drivers. For example, the relationship between depth-integrated soil respiration and temperature measured at a single discrete depth (apparent temperature sensitivity) can bear little resemblance to the temperature sensitivity of soil respiration within soil layers (actual temperature sensitivity). Here we present several examples of how the inferred environmental sensitivity of soil respiration can be improved from observations of CO2 flux profiles rather than surface fluxes alone. We present a theoretical approach for estimating the temperature sensitivity of soil respiration in situ, called the weighted heat flux approach, which avoids much of the hysteresis produced by typical respiration-temperature comparisons. The weighted heat flux approach gives more accurate estimates of within-soil temperature sensitivity, and is arguably the most theoretically robust analytical temperature model available. We also show how soil drying influences the effectiveness of the weighted heat flux approach, as well as the relative activity of discrete soil layers and specific soil organisms, such as mycorrhizal fungi. The additional information provided by within-soil flux profiles can improve the fidelity of both probabilistic and mechanistic soil respiration models.
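The flux-profile (gradient) idea behind such within-soil observations can be sketched with Fick's first law: the diffusive flux across each layer interface follows the CO2 concentration gradient, F = Ds·dC/dz (taken positive upward here), and the CO2 production of a layer is the divergence of those fluxes. The depths, concentrations, and diffusivities below are illustrative numbers, not data from the study.

```python
import numpy as np

# Depth profile: z positive downward, CO2 rising with depth, so the
# diffusive flux is upward (toward the surface) at every interface.
z = np.array([0.0, 0.05, 0.15, 0.30])   # depths (m), 0 = surface
c = np.array([0.6, 2.0, 5.0, 8.0])      # CO2 (mol m-3), illustrative
ds = np.array([3e-6, 2e-6, 1.5e-6])     # effective diffusivity (m2 s-1)

# Upward flux at each layer interface (Fick's first law)
flux = ds * (c[1:] - c[:-1]) / (z[1:] - z[:-1])

# Production in each interior layer = flux out the top - flux in the bottom
production = flux[:-1] - flux[1:]
print(flux[0])   # flux arriving at the soil surface
```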
Zigler, Corwin Matthew; Dominici, Francesca
2014-01-01
Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528
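The model-averaging step in 2) can be written in a few lines: each candidate propensity-score model yields an effect estimate, and the reported effect is the support-weighted average. The effect estimates and posterior weights below are invented placeholders, not outputs of the authors' Bayesian procedure.

```python
import numpy as np

# Model-averaged causal effect: effect estimate under each PS model,
# weighted by that model's data-driven posterior support. Illustrative
# numbers only.
effects = np.array([1.8, 2.1, 2.0])   # effect under PS models 1..3
support = np.array([0.2, 0.5, 0.3])   # posterior model weights (sum to 1)

avg_effect = np.sum(support * effects)
print(avg_effect)   # 1.8*0.2 + 2.1*0.5 + 2.0*0.3 = 2.01
```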
Estimating rotavirus vaccine effectiveness in Japan using a screening method.
Araki, Kaoru; Hara, Megumi; Sakanishi, Yuta; Shimanoe, Chisato; Nishida, Yuichiro; Matsuo, Muneaki; Tanaka, Keitaro
2016-05-01
Rotavirus gastroenteritis is a highly contagious, acute viral disease that imposes a significant health burden worldwide. In Japan, rotavirus vaccines have been commercially available since 2011 for voluntary vaccination, but vaccine coverage and effectiveness have not been evaluated. In the absence of a vaccination registry in Japan, vaccination coverage in the general population was estimated from the number of vaccines supplied by the manufacturer, the number of children who received financial support for vaccination, and the size of the target population. Patients with rotavirus gastroenteritis were identified by reviewing the medical records of all children who consulted 6 major hospitals in Saga Prefecture with gastroenteritis symptoms. Vaccination status among these patients was investigated by reviewing their medical records or interviewing their guardians by telephone. Vaccine effectiveness was determined using a screening method. Vaccination coverage increased with time, and was two times higher in municipalities where the vaccination fee was subsidized. In the 2012/13 season, vaccination coverage in Saga Prefecture was 14.9%, whereas the proportion of patients vaccinated was 5.1% among those with clinically diagnosed rotavirus gastroenteritis and 1.9% among those hospitalized for rotavirus gastroenteritis. Thus, vaccine effectiveness was estimated as 69.5% and 88.8%, respectively. This is the first study to evaluate rotavirus vaccination coverage and effectiveness in Japan since vaccination began. PMID:26680277
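The screening-method calculation can be reproduced from the figures in the abstract. The standard formula (often attributed to Farrington) compares the proportion of cases vaccinated (PCV) with the population vaccination coverage (PPV); the small differences from the reported 69.5% and 88.8% come from rounding of the published inputs.

```python
def ve_screening(pcv, ppv):
    """Screening-method vaccine effectiveness.

    VE = 1 - [PCV/(1-PCV)] * [(1-PPV)/PPV]
    """
    return 1.0 - (pcv / (1.0 - pcv)) * ((1.0 - ppv) / ppv)

ppv = 0.149                      # coverage in Saga Prefecture, 2012/13
print(ve_screening(0.051, ppv))  # clinically diagnosed cases: ~0.69
print(ve_screening(0.019, ppv))  # hospitalized cases: ~0.89
```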
Appendix A: other methods for estimating trends of Arctic birds
Bart, Jonathan; Brown, Stephen; Morrison, R.I. Guy; Smith, Paul A.
2012-01-01
The Arctic PRISM was designed to determine shorebird population size and trend. During an extensive peer review of PRISM, some reviewers suggested that measuring demographic rates or monitoring shorebirds on migration would be more appropriate than estimating population size on the breeding grounds. However, each method has its own limitations. For demographic monitoring, an unbiased estimate of first-year survivorship would be extremely difficult to obtain for shorebirds in the Arctic, because the needed sample size would be unobtainable (in Canada at least) and the level of effort required (both financial and human) would far exceed that of the current Arctic PRISM methodology. For migration monitoring, issues such as shifts in use from monitored to non-monitored sites, residency times, and detection rates introduce biases that have not yet been resolved. While we believe demographic and migration monitoring are very valuable and are already components of the PRISM approach (e.g., Tier 2 sites focus on the collection of demographic data), we do not believe that either is likely to achieve the PRISM accuracy target of 80% power to detect a 50% decline.
Effect of radon measurement methods on dose estimation.
Kávási, Norbert; Kobayashi, Yosuke; Kovács, Tibor; Somlai, János; Jobbágy, Viktor; Nagy, Katalin; Deák, Eszter; Berhés, István; Bender, Tamás; Ishikawa, Tetsuo; Tokonami, Shinji; Vaupotic, Janja; Yoshinaga, Shinji; Yonehara, Hidenori
2011-05-01
Different radon measurement methods were applied in the old and new buildings of the Turkish bath of Eger, Hungary, in order to elaborate a radon measurement protocol. In addition, measurements were made of radon and thoron short-lived decay products, of the gamma dose from external sources, and of radon in water. The most accurate results for dose estimation were provided by personal radon meters. The estimated annual effective doses from radon and its short-lived decay products in the old and new buildings, using measured equilibrium factors of 0.2 and 0.1, were 0.83 and 0.17 mSv, respectively. The effective dose from thoron short-lived decay products was only 5% of these values. The corresponding external gamma radiation effective doses were 0.19 and 0.12 mSv y(-1). The effective dose from the consumption of tap water containing radon was 0.05 mSv y(-1), while for spring water it was 0.14 mSv y(-1). PMID:21450699
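For context on how such effective doses are computed, the standard product-form estimate can be sketched: dose = concentration × equilibrium factor × exposure time × dose conversion factor. The UNSCEAR-style conversion factor of ~9 nSv per (Bq h m-3) and the example inputs below are illustrative, not the paper's exact measured parameters.

```python
def radon_effective_dose_msv(c_rn_bq_m3, eq_factor, hours, dcf=9e-6):
    """Annual effective dose from radon progeny (mSv).

    dose = C_Rn * F * T * DCF, with DCF ~ 9 nSv per (Bq h m-3) of
    equilibrium-equivalent concentration (UNSCEAR-style value).
    Inputs below are illustrative, not the measured bath data.
    """
    return c_rn_bq_m3 * eq_factor * hours * dcf

# e.g. 2000 h/y spent at 230 Bq/m3 with equilibrium factor F = 0.2:
print(round(radon_effective_dose_msv(230, 0.2, 2000), 2))  # ~0.83 mSv
```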
Issues and advances in research methods on video games and cognitive abilities
Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta
2015-01-01
The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game trainings, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and lack of genre differentiation have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of this data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into what cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process. PMID:26483717
[Methods for the estimation of the renal function].
Fontseré Baldellou, Néstor; Bonal I Bastons, Jordi; Romero González, Ramón
2007-10-13
Chronic kidney disease is one of the pathologies with the greatest incidence and prevalence in today's health systems. The ambulatory application of different methods that allow suitable detection, monitoring and stratification of renal function is of crucial importance. Because of the imprecision of serum creatinine alone, a set of predictive equations for estimating the glomerular filtration rate have been developed. Nevertheless, it is essential for the physician to know their limitations: in situations of normal renal function and hyperfiltration, in certain associated pathologies, and in extreme situations of nutritional status and age. In these cases, the application of isotopic techniques for the calculation of renal function is more advisable. PMID:17980123
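As one concrete example of the creatinine-based predictive equations the abstract refers to, here is a sketch of the abbreviated MDRD study equation. The coefficients are the commonly published ones, but treat this as an illustration of the equation's form, not clinical code; it carries exactly the limitations the abstract lists.

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
    """Abbreviated MDRD study equation (mL/min/1.73 m2).

    eGFR = 175 * Scr^-1.154 * age^-0.203 * 0.742 (if female)
           * 1.212 (if Black).
    Illustrative only; not valid in hyperfiltration or at extremes
    of age and nutritional status, as the text notes.
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(egfr_mdrd(1.0, 50)))  # ~79 for a 50-year-old non-Black man
```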
The sisterhood method of estimating maternal mortality: the Matlab experience.
Shahidullah, M
1995-01-01
This study reports the results of a validation test of the sisterhood method of measuring the level of maternal mortality, using data from a Demographic Surveillance System (DSS) operating since 1966 in Matlab, Bangladesh. The records of maternal deaths that occurred during 1976-90 in the Matlab DSS area were used. One of each deceased woman's surviving brothers or sisters, aged 15 or older and born to the same mother, was asked if the deceased sister had died of maternity-related causes. Of the 384 maternal deaths for which siblings were interviewed, 305 were correctly reported, 16 were underreported, and the remaining 63 were misreported as nonmaternal deaths. Information on maternity-related deaths obtained in the sisterhood survey was compared with the information recorded in the DSS. The results suggest that in places similar to Matlab, the sisterhood method can be used to provide an indication of the level of maternal mortality if no other data exist, though it will produce a negative bias in maternal mortality estimates. PMID:7618193
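The validation arithmetic in the abstract can be made explicit; the reporting sensitivity below is simply the fraction of DSS-recorded maternal deaths that siblings reported correctly, which quantifies the negative bias the authors describe.

```python
# Case accounting from the Matlab validation: of 384 maternal deaths
# with a sibling interview, 305 were correctly reported as maternal,
# 16 were underreported, and 63 were misreported as nonmaternal.
correct, under, mis = 305, 16, 63
total = correct + under + mis           # 384 deaths in all
sensitivity = correct / total           # fraction correctly identified
print(total, round(sensitivity, 3))     # 384 0.794
```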
Application of throughfall methods to estimate dry deposition of mercury
Lindberg, S.E.; Owens, J.G.; Stratton, W.
1992-12-31
Several dry deposition methods for mercury (Hg) are being developed and tested in our laboratory. These include big-leaf and multilayer resistance models, micrometeorological methods such as Bowen-ratio gradient approaches, laboratory controlled-plant chambers, and throughfall. We have previously described our initial results using the modeling and gradient methods. Throughfall may be used to estimate Hg dry deposition if some simplifying assumptions are met. We describe here the application and initial results of throughfall studies at the Walker Branch Watershed forest, and discuss the influence of certain assumptions on interpretation of the data. Throughfall appears useful in that it can place a lower bound on dry deposition under field conditions. Our preliminary throughfall data indicate net dry deposition rates to a pine canopy that increase significantly from winter to summer, as previously predicted by our resistance model. Atmospheric data suggest that rainfall washoff of fine-aerosol dry deposition at this site is not sufficient to account for all of the Hg in net throughfall. Potential additional sources include dry-deposited gas-phase compounds, soil-derived coarse aerosols, and oxidation reactions at the leaf surface.
An automatic iris occlusion estimation method based on high-dimensional density estimation.
Li, Yung-Hui; Savvides, Marios
2013-04-01
Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglass frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system decreases dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation. PMID:22868651
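The probabilistic-mask idea can be illustrated in miniature: model a filter response on valid iris texture and on occluded pixels as two class-conditional Gaussians, then label each pixel by the higher likelihood. This 1-D stand-in assumes supervised training masks and replaces the paper's FJ-GMMs over Gabor filter bank features with plain per-class Gaussians; it is not the authors' method.

```python
import numpy as np

# Synthetic 1-D "filter responses": clean iris texture vs. occlusions.
rng = np.random.default_rng(1)
valid = rng.normal(0.0, 1.0, 500)      # responses on clean iris pixels
occluded = rng.normal(3.0, 1.0, 500)   # responses on eyelash/eyelid pixels

def gauss_logpdf(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

# Fit each class by its sample mean/std (supervised training masks)
params = [(v.mean(), v.std()) for v in (valid, occluded)]

def predict(x):
    """Label each value 0 (valid) or 1 (occluded) by class likelihood."""
    scores = [gauss_logpdf(x, mu, sd) for mu, sd in params]
    return np.argmax(scores, axis=0)

test = np.array([-0.2, 3.1])
print(predict(test))                   # [0 1]
```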
New Method of Estimating Binary's Mass Ratios by Using Superhumps
NASA Astrophysics Data System (ADS)
Kato, Taichi; Osaki, Yoji
2013-12-01
We propose a new dynamical method of estimating binary mass ratios by using the period of superhumps in SU UMa-type dwarf novae during the growing stage (the stage A superhumps). This method is based on the working hypothesis that the period of superhumps in the growing stage is determined by the dynamical precession rate at the 3:1 resonance radius, as suggested by our new interpretation of the superhump period evolution during a superoutburst (2013, PASJ, 65, 95). By comparing objects having known mass ratios, we show that our method can provide mass ratios of an accuracy comparable to those obtained by eclipse observations in quiescence. One of the advantages of this method is that it requires neither an eclipse nor any experimental calibration. It is particularly suitable for exploring the low mass-ratio end of the evolution of cataclysmic variables, where the secondary is not detectable by conventional methods. Our analysis suggests that previous determinations of the mass ratio from superhump periods during a superoutburst were systematically underestimated for low mass-ratio systems, and we provide a new calibration. It reveals that most WZ Sge-type dwarf novae have secondaries either close to the border of the lower main sequence or brown dwarfs, and that most of these objects have not yet reached the evolutionary stage of period bouncers. Our results are not in contradiction with the assumption that the observed minimum period (~77 min) of ordinary hydrogen-rich cataclysmic variables is indeed the minimum period. We highlight the importance of early observation of stage A superhumps, and propose an effective future strategy of observation.
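The hypothesis above can be sketched numerically with the lowest-order expression for the dynamical precession rate at the 3:1 resonance, ν_pr/ν_orb ≈ (3/4)·q/√(1+q)·r^{3/2} with r_{3:1} = 3^{-2/3}(1+q)^{-1/3}. Note this truncation is an illustrative approximation: the paper works with the full Laplace-coefficient expansion, so the numbers below should not be read as the published calibration.

```python
import numpy as np

def eps_star(q):
    """Lowest-order dynamical precession rate at the 3:1 resonance.

    nu_pr/nu_orb ~ (3/4) * q/sqrt(1+q) * r**1.5, with
    r_3:1 = 3**(-2/3) * (1+q)**(-1/3) in units of the binary
    separation. Illustrative truncation, not the paper's full formula.
    """
    r = 3.0 ** (-2.0 / 3.0) * (1.0 + q) ** (-1.0 / 3.0)
    return 0.75 * q / np.sqrt(1.0 + q) * r ** 1.5

def q_from_eps_star(eps, lo=1e-3, hi=0.5):
    """Invert eps_star(q) by bisection (it is monotonic in q)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if eps_star(mid) < eps else (lo, mid)
    return 0.5 * (lo + hi)

# Round-trip check: the stage A precession rate produced by q = 0.10
# inverts back to ~0.10.
print(round(q_from_eps_star(eps_star(0.10)), 3))  # 0.1
```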
NASA Astrophysics Data System (ADS)
Saçkes, Mesut; Trundle, Kathy Cabe
2014-06-01
This study investigated the predictive ability of an intentional learning model in the change of preservice early childhood teachers' conceptual understanding of lunar phases. Fifty-two preservice early childhood teachers who were enrolled in an early childhood science methods course participated in the study. Results indicated that the use of metacognitive strategies facilitated preservice early childhood teachers' use of deep-level cognitive strategies, which in turn promoted conceptual change. Also, preservice early childhood teachers with high motivational beliefs were more likely to use cognitive and metacognitive strategies. Thus, they were more likely to engage in conceptual change. The results provided evidence that the hypothesized model of intentional learning has a high predictive ability in explaining the change in preservice early childhood teachers' conceptual understandings from the pre to post-interviews. Implications for designing a science methods course for preservice early childhood teachers are provided.
A new rapid method for rockfall energies and distances estimation
NASA Astrophysics Data System (ADS)
Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric
2016-04-01
Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope; they are generally based on a probabilistic rockfall modelling approach that accounts for the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78,400 rockfall scenarios, performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and its distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock-wall geometry was adopted. The analysis of the results allowed us to find empirical laws that relate impact energies
EVALUATION OF RIVER LOAD ESTIMATION METHODS FOR TOTAL PHOSPHORUS
Accurate estimates of pollutant loadings to the Great Lakes are required for trend detection, model development, and planning. On many major rivers, infrequent sampling of most pollutants makes these estimates difficult. However, most large rivers have complete daily flow records...
Knowledge, Skills, and Abilities for Entry-Level Business Analytics Positions: A Multi-Method Study
ERIC Educational Resources Information Center
Cegielski, Casey G.; Jones-Farmer, L. Allison
2016-01-01
It is impossible to deny the significant impact from the emergence of big data and business analytics on the fields of Information Technology, Quantitative Methods, and the Decision Sciences. Both industry and academia seek to hire talent in these areas with the hope of developing organizational competencies. This article describes a multi-method…
ERIC Educational Resources Information Center
Furnham, Adrian; Christopher, Andrew; Garwood, Jeanette; Martin, Neil G.
2008-01-01
More than 400 students from four universities in America and Britain completed measures of learning style preference, general knowledge (as a proxy for intelligence), and preference for examination method. Learning style was consistently associated with preferences: surface learners preferred multiple choice and group work options, and viewed…
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Perry, Justin; Shamir, Haya
2010-01-01
This study examines the effects on early reading skills of three different methods of presenting material with computer-assisted instruction (CAI): (1) learner-controlled picture menu, which allows the student to choose activities, (2) linear sequencer, which progresses the students through lessons at a pre-specified pace, and (3) mastery-based…
ERIC Educational Resources Information Center
Tucker-Drob, Elliot M.; Salthouse, Timothy A.
2009-01-01
Although factor analysis is the most commonly-used method for examining the structure of cognitive variable interrelations, multidimensional scaling (MDS) can provide visual representations highlighting the continuous nature of interrelations among variables. Using data (N = 8,813; ages 17-97 years) aggregated across 38 separate studies, MDS was…
Practical Methods for Estimating Software Systems Fault Content and Location
NASA Technical Reports Server (NTRS)
Nikora, A.; Schneidewind, N.; Munson, J.
1999-01-01
Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.
Methods for estimating dispersal probabilities and related parameters using marked animals
Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.
2001-01-01
Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.
Hwang, Beomsoo; Jeon, Doyoung
2015-01-01
In exoskeletal robots, quantifying the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, whose measurement contains the dynamic effects of the human body (the inertial, Coriolis, and gravitational torques) as well as the torque produced by active muscular effort. It is therefore important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
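The subtraction underlying the method can be shown on a single joint, where the Coriolis term vanishes: the muscular torque is the sensor reading minus the limb's passive inertial and gravitational torques. The parameter values and the helper `muscular_torque` below are invented for illustration, not the EXOwheel identification.

```python
import numpy as np

# 1-DOF sketch: tau_muscle = tau_sensor - (I*q_ddot + m*g*l*sin(q)).
# Inertia I, segment mass m, and COM distance l stand in for the
# user-specific parameters the paper identifies per subject.
I, m, l, g = 0.35, 4.0, 0.25, 9.81     # kg m2, kg, m, m s-2 (made up)

def muscular_torque(tau_sensor, q, q_ddot):
    return tau_sensor - (I * q_ddot + m * g * l * np.sin(q))

# Relaxed muscle: the sensor sees only passive dynamics -> ~0 Nm left.
q, q_ddot = 0.5, 1.2
tau_passive = I * q_ddot + m * g * l * np.sin(q)
print(round(muscular_torque(tau_passive, q, q_ddot), 6))  # 0.0
```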
Method for estimating the cooperativity length in polymers
NASA Astrophysics Data System (ADS)
Pieruccini, Marco; Alessandrini, Andrea
2015-05-01
The problem of estimating the size of the cooperatively rearranging regions (CRRs) in supercooled polymeric melts from an analysis of the α-process in ordinary relaxation experiments is addressed. The mechanism whereby a CRR changes its configuration is viewed as consisting of two distinct steps: a reduced number of monomers reaches initially an activated state, allowing for some local rearrangement; then, the subsequent regression of the energy fluctuation may take place through the configurational degrees of freedom, thus allowing for further rearrangements on larger length scales. The latter are indeed those to which the well-known scheme of Donth refers. Local readjustments are described in the framework of a canonical formalism on a stationary ensemble of small-scale regions, distributed over all possible energy thresholds for rearrangement. Large-scale configurational changes, instead, are described as spontaneous processes. Two main regimes are envisaged, depending on whether the role played by the configurational degrees of freedom in the regression of the energy fluctuation is significant or not. It is argued that the latter case is related to the occurrence of an Arrhenian dependence of the central relaxation rate. Consistency with Donth's scheme is demonstrated, and data from the literature confirm the agreement of the two methods of analysis when configurational degrees of freedom are relevant for the fluctuation regression. Poly(n-butyl methacrylate) is chosen in order to show how CRR size and temperature fluctuations at rearrangement can be estimated from stress relaxation experiments carried out by means of an atomic force microscopy setup. Cases in which the configurational pathway for regression is significantly hindered are considered. Relaxation in poly(dimethyl siloxane) confined in nanopores is taken as an example to suggest how a more complete view of the effects of configurational constraints would be possible if direct measurements of
A non-destructive dental method for age estimation.
Kvaal, S; Solheim, T
1994-06-01
Dental radiographs have rarely been used in dental age estimation methods for adults, and the aim of this investigation was to derive formulae for age calculation based on measurements of teeth and their radiographs. Age-related changes were studied in 452 extracted, unsectioned incisors, canines and premolars. The length of the apical translucent zone and the extent of periodontal retraction were measured on the teeth, while the pulp length and width as well as the root length and width were measured on the radiographs, and the ratios between the root and pulp measurements were calculated. For all types of teeth, significant negative Pearson's correlation coefficients were found between age and the ratios between the pulp and root width. In this study, the correlation between age and the length of the apical translucent zone was weaker than expected. Periodontal retraction was significantly correlated with age in maxillary premolars alone. Multiple regression analyses retained the ratios between the pulp and root measurements on the radiographs for all tooth types; the length of the apical translucency for five types; and periodontal retraction for only three types of teeth. The correlation coefficients between chronological age and the age calculated using the formulae from this multiple regression study ranged from r = 0.48 to r = 0.90, with the strongest coefficients for premolars. These formulae may be recommended for use in odontological age estimation in forensic and archaeological cases where teeth are loose or can be extracted and where it is important that the teeth are not sectioned. PMID:9227083
NASA Astrophysics Data System (ADS)
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. A super-convergent functional and an error estimate for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
Hardware architecture design of a fast global motion estimation method
NASA Astrophysics Data System (ADS)
Liang, Chaobing; Sang, Hongshi; Shen, Xubang
2015-12-01
VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps to keep the off-chip memory bandwidth requirement at a minimum. Although patch size varies across pyramid levels, all patches are processed in terms of 3-by-3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design uses 24,080 bits of on-chip memory, and for a 352x288 sequence at 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications such as video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimal memory bandwidth requirement is appreciated.
Novel Method for Analyzing Locomotor Ability after Spinal Cord Injury in Rats: Technical Note
Shinozaki, Munehisa; Yasuda, Akimasa; Nori, Satoshi; Saito, Nobuhito; Toyama, Yoshiaki; Okano, Hideyuki; Nakamura, Masaya
2013-01-01
In the research for the treatment of spinal cord injury (SCI), the evaluation of motor function in model rats must be as objective, noninvasive, and ethical as possible. The maximum speed and acceleration of a mouse measured using a SCANET system were previously reported to vary significantly according to severity of SCI. In the present study, the motor performance of SCI model rats was examined with SCANET and assessed for Basso-Beattie-Bresnahan (BBB) score to determine the usefulness of the SCANET system in evaluating functional recovery after SCI. Maximum speed and acceleration within the measurement period correlated significantly with BBB scores. Furthermore, among several phased kinematic factors used in BBB scores, the capability of “plantar stepping” was associated with a drastic increase in maximum speed and acceleration after SCI. Therefore, evaluation of maximum speed and acceleration using a SCANET system is a useful method for rat models of SCI and can complement open field scoring scales. PMID:24097095
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
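In the simplest special case of the setting above, complete spatial randomness (a homogeneous Poisson pattern), the squared nearest-neighbor distance from a random point is exponentially distributed with rate πλ, which yields a closed-form density estimate. The sketch below illustrates only this Poisson case; the function name and check values are ours, not the paper's order-statistic estimator.

```python
import math

def density_estimate(sq_distances):
    """Estimate plant density (plants per unit area) from squared
    nearest-neighbor distances measured at n random sample points.
    Assumes a completely random (Poisson) pattern, under which
    R^2 ~ Exponential(rate = pi * lambda)."""
    n = len(sq_distances)
    return n / (math.pi * sum(sq_distances))

# With density lambda, E[R^2] = 1/(pi*lambda); feeding the estimator
# that mean value should recover lambda exactly.
true_lambda = 0.5
mean_r2 = 1.0 / (math.pi * true_lambda)
est = density_estimate([mean_r2] * 100)
```

For regular or aggregated populations this Poisson assumption fails, which is exactly why the paper develops a nonparametric estimator instead.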
A method to confer Protein L binding ability to any antibody fragment
Lakhrif, Zineb; Pugnière, Martine; Henriquet, Corinne; di Tommaso, Anne; Dimier-Poisson, Isabelle; Billiald, Philippe; Juste, Matthieu O.; Aubrey, Nicolas
2016-01-01
Recombinant antibody single-chain variable fragments (scFv) are difficult to purify homogeneously from a protein complex mixture. The most effective, specific and fastest method of purification is affinity chromatography on a Protein L (PpL) matrix. This protein is a multi-domain bacterial surface protein that is able to interact with conformational patterns on kappa light chains. It mainly recognizes amino acid residues located in the VL FR1 and some residues in the variable and constant (CL) domains. Not all kappa chains are recognized, however, and the lack of CL can reduce the interaction. Starting from an scFv composed of IGKV10-94 according to IMGT®, it is possible, with several mutations, to transfer the motif from IGKV12-46, which is naturally recognized by PpL, and, with the single mutation T8P, to confer PpL recognition with a higher affinity. A second mutation, S24R, greatly improves the affinity, in particular by modifying the dissociation rate (kd). The equilibrium dissociation constant (KD) was measured at 7.2 × 10⁻¹¹ M by surface plasmon resonance. It was possible to confer PpL recognition to all kappa chains. This protein interaction can be modulated according to the characteristics of the scFv (e.g., stability) and their use with conjugated PpL. This work could be extrapolated to recombinant monoclonal antibodies, and offers an alternative to protein A purification and detection. PMID:26683650
Application of age estimation methods based on teeth eruption: how easy is Olze method to use?
De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C
2014-09-01
With increasing immigration, the development of new methods for accurately estimating the age of subjects who lack valid identity documents has become an urgent issue. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method to daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43 and 57 % for the right lower third molar (M48) and between 23 and 49 % for the left lower third molar (M38). A chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply when used by personnel without adequate training, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training. PMID:24781787
Altamentova, S M; Shaklai, N; Arav, R; Miller, Y I
1998-03-23
The hazard of toxemia, a condition resulting from the spread of toxins by the bloodstream, is regulated by plasma proteins capable of binding with free toxins. As toxin binding results in a reduction of available binding sites, measuring the proteins' binding capacity can be used to estimate toxemia severity. Suggested by this approach, a novel fluorescence method was developed to determine lipoprotein and albumin binding capacities in whole plasma. The method entails two steps: specific binding of N(n-carboxy)phenylimide-4-dimethyl-aminonaphthalic acid with albumin followed by addition of 12-(9-anthroyloxy)stearic acid which, under these conditions, binds mostly with lipoprotein. Reduced fluorescence intensity of the probes in plasma of patients compared to that of healthy donors reflected saturation of binding sites by toxins, thereby estimating toxemia severity. Poor correlation was found between the lipoprotein and albumin binding abilities, suggesting their independent diagnostic values. The simplicity and rapidity of this method are advantageous for its clinical application. PMID:9565329
Chromatographic method for quick estimation of DNA interaction potency of environmental pollutants.
Feng, Yong-Lai; Lian, Hong-Zhen; Liao, Xiang-Jun; Zhu, Ji-Ping
2009-10-01
The DNA interaction potency of a chemical has been defined in the present study as the degree of a chemical's ability to interact with DNA. An estimation method for such a potency has been established, based on the peak reduction of an oligonucleotide probe resulting from its interaction with chemicals, as measured by high-performance liquid chromatography. A DNA interaction potency equivalency (PEQ) also has been proposed to evaluate the relative interaction potency of test chemicals against benzo[a]pyrene-7,8-dihydrodiol-9,10-epoxide (BPDE). Five known direct DNA interaction chemicals were employed to demonstrate the method. Two known inactive chemicals were used as negative controls. Both the potency and PEQ(50) values (PEQ of the test chemical at 50% probe peak reduction) of these five chemicals were determined as BPDE > phenyl glycidyl ether (PGE) > tetrachlorohydroquinone (Cl4HQ) > methyl methanesulfonate (MMS) > styrene-7,8-oxide (SO). Among the reactive chemicals, MMS was found to break the oligonucleotide into smaller fragments, whereas BPDE, PGE, and SO form covalent adducts with the oligonucleotide. In the latter case, the formation of multi-chemical-oligonucleotide adducts also was observed by mass spectrometry. The method was employed to estimate the DNA interaction potency equivalency of diesel vehicle exhaust gas to demonstrate the applicability of this approach in evaluating the interaction potency of environmental pollutants in both gas and liquid phases. PMID:19432508
A method for quantitatively estimating diffuse and discrete hydrothermal discharge
NASA Astrophysics Data System (ADS)
Baker, Edward T.; Massoth, Gary J.; Walker, Sharon L.; Embley, Robert W.
1993-07-01
Submarine hydrothermal fluids discharge as undiluted, high-temperature jets and as diffuse, highly diluted, low-temperature percolation. Estimates of the relative contribution of each discharge type, which are important for the accurate determination of local and global hydrothermal budgets, are difficult to obtain directly. In this paper we describe a new method of using measurements of hydrothermal tracers such as Fe/Mn, Fe/heat, and Mn/heat in high-temperature fluids, low-temperature fluids, and the neutrally buoyant plume to deduce the relative contribution of each discharge type. We sampled vent fluids from the north Cleft vent field on the Juan de Fuca Ridge in 1988, 1989 and 1991, and plume samples every year from 1986 to 1991. The tracers were, on average, 3 to 90 times greater in high-temperature than in low-temperature fluids, with plume values intermediate. A mixing model calculates that high-temperature fluids contribute only ~3% of the fluid mass flux but > 90% of the hydrothermal Fe and > 60% of the hydrothermal Mn to the overlying plume. Three years of extensive camera-CTD sled tows through the vent field show that diffuse venting is restricted to a narrow fissure zone extending for 18 km along the axial strike. Linear plume theory applied to the temperature plumes detected when the sled crossed this zone yields a maximum likelihood estimate for the diffuse heat flux of 8.9 × 10⁴ W/m, for a total flux of 534 MW, considering that diffuse venting is active along only one-third of the fissure system. For mean low- and high-temperature discharge of 25°C and 319°C, respectively, the discrete heat flux must be 266 MW to satisfy the mass flux partitioning. If the north Cleft vent field is globally representative, the assumption that high-temperature discharge dominates the mass flux in axial vent fields leads to an overestimation of the flux of many non-conservative hydrothermal species by about an order of magnitude.
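The mass-flux partitioning described above can be illustrated with a two-endmember mixing balance: a plume tracer value is a mass-weighted average of the high- and low-temperature endmember values. A minimal sketch, with purely hypothetical tracer numbers (not the paper's data):

```python
def high_temp_fraction(plume, high_t, low_t):
    """Solve the two-endmember mass balance
        plume = f * high_t + (1 - f) * low_t
    for f, the mass fraction contributed by high-temperature fluid."""
    return (plume - low_t) / (high_t - low_t)

# Hypothetical example: the high-T endmember tracer value is 30x the
# low-T value, and the plume sits near the low end, so only a few
# percent of the fluid mass is high-temperature discharge.
f = high_temp_fraction(plume=1.9, high_t=30.0, low_t=1.0)
```

Because the high-temperature endmember is so enriched in Fe and Mn, even a small mass fraction f can dominate the plume's metal budget, which is the paper's central point.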
NASA Astrophysics Data System (ADS)
El Sharif, H.; Teegavarapu, R. S.
2012-12-01
Spatial interpolation methods used for estimation of missing precipitation data at a site seldom check for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study evaluates the efficacy of a fuzzy-logic methodology for infilling missing historical daily precipitation data in preserving site and regional statistics. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of this newly proposed method in comparison to traditional data infilling techniques. Several error and performance measures will be used to evaluate the methods and trade-offs in accuracy of estimation and preservation of site and regional statistics.
Stability over Time of Different Methods of Estimating School Performance
ERIC Educational Resources Information Center
Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu
2014-01-01
This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…
ERIC Educational Resources Information Center
Lafferty, Mark T.
2010-01-01
The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…
ERIC Educational Resources Information Center
Wang, Lijuan; McArdle, John J.
2008-01-01
The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…
Analytic Method to Estimate Particle Acceleration in Flux Ropes
NASA Technical Reports Server (NTRS)
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
Variational methods to estimate terrestrial ecosystem model parameters
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merits of various inverse modelling strategies (MCMC, EnKF, 4DVAR) for estimating model parameters and initial carbon stocks for DALEC and for quantifying the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
Optimal filtering methods to structural damage estimation under ground excitation.
Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan
2013-01-01
This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe
2015-01-01
To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections for physical factors such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. Twenty-two healthy volunteers were injected with 17-19 mCi of (99m)Tc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) conventional correction (referred to as the Gates' method), (2) the Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using the conventional method, Buijs method, BgdA subtraction, and BgdB subtraction methods was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D.) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D. during exercise. The mean estimated myocardial uptake of (99m)Tc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scans. PMID:26955568
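The conjugate view method used above combines opposed anterior and posterior counts as a geometric mean with an attenuation correction. A minimal textbook sketch (simple background subtraction only, no scatter or source self-attenuation correction; all names and numbers are hypothetical, not the study's protocol):

```python
import math

def conjugate_view_activity(ant_counts, post_counts, ant_bkg, post_bkg,
                            mu, thickness, sensitivity):
    """Conjugate-view (geometric mean) activity estimate.
    mu: linear attenuation coefficient (1/cm); thickness: body
    thickness along the view axis (cm); sensitivity: camera counts
    per unit activity. Background is subtracted from each view
    before taking the geometric mean."""
    ia = ant_counts - ant_bkg    # net anterior counts
    ip = post_counts - post_bkg  # net posterior counts
    return math.sqrt(ia * ip) * math.exp(mu * thickness / 2.0) / sensitivity

# Hypothetical example (illustrative numbers, not patient data):
a = conjugate_view_activity(12000, 8000, 2000, 2000,
                            mu=0.12, thickness=20.0, sensitivity=100.0)
```

The four correction schemes compared in the study differ essentially in how the `ant_bkg`/`post_bkg` terms are defined, which is why the estimated uptake depends on the method chosen.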
Dynamic State Estimation Utilizing High Performance Computing Methods
Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw
2009-03-18
The state estimation tools which are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper presents an overview of the Kalman filtering process and then focuses on the implementation of the prediction component on multiple processors.
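The prediction component referred to above is the standard Kalman propagation of the state estimate and its covariance through the linear system model. A generic textbook sketch (not the paper's parallel implementation; the constant-velocity example is ours):

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """One Kalman filter prediction step: propagate state estimate x
    and covariance P through linear model F with process noise Q."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# Constant-velocity example: position 0, velocity 1, unit time step.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
x = np.array([0.0, 1.0])
P = np.eye(2)
Q = 0.01 * np.eye(2)
x_pred, P_pred = kalman_predict(x, P, F, Q)
```

The matrix products in this step are what make it attractive for parallelization on multiple processors: they are dense linear algebra with no data-dependent branching.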
Roerig, Simone; van Wesel, Floryt; Evers, Sandra J. T. M.; Krabbendam, Lydia
2015-01-01
In social neuroscience, empathy is often approached as an individual ability, whereas researchers in anthropology focus on empathy as a dialectic process between agents. In this perspective paper, we argue that to further elucidate the mechanisms underlying the development of empathy, social neuroscience research should draw on insights and methods from anthropology. First, we discuss neuropsychological studies that investigate empathy in inter-relational contexts. Second, we highlight differences between the social neuroscience and anthropological conceptualizations of empathy. Third, we introduce a new study design based on a mixed method approach, and present initial results from one classroom that was part of a larger study and included 28 children (m = 13, f = 15). Participants (aged 9–11) were administered behavioral tasks and a social network questionnaire; in addition an observational study was also conducted over a period of 3 months. Initial results showed how children's expressions of their empathic abilities were influenced by situational cues in classroom processes. This effect was further explained by children's positions within classroom networks. Our results emphasize the value of interdisciplinary research in the study of empathy. PMID:26283901
Li, Man; Xue, Xiao-Song; Guo, Jinping; Wang, Ya; Cheng, Jin-Pei
2016-04-15
This work established an energetic guide for estimating the trifluoromethyl cation-donating abilities (TC(+)DA) of electrophilic trifluoromethylating reagents through computing X-CF3 bond (X = O, S, Se, Te, and I) heterolytic dissociation enthalpies. TC(+)DA values for a wide range of popular reagents were derived on the basis of density functional calculations (M06-2X). A good correspondence was identified between the computed TC(+)DA values and the experimentally observed relative trifluoromethylating capabilities of the reagents. Substituent effects on the TC(+)DAs of the most widely used reagents, including the Umemoto reagent, the Yagupolskii-Umemoto reagent, and the Togni reagents, follow good linear free-energy relationships, which allows their trifluoromethylating capabilities to be rationally tuned by substituents and thus extends their synthetic utility. All the information disclosed in this work should contribute to future rational exploration of electrophilic trifluoromethylation chemistry. PMID:26999452
An Investigation of Methods for Improving Estimation of Test Score Distributions.
ERIC Educational Resources Information Center
Hanson, Bradley A.
Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…
Iterative methods for distributed parameter estimation in parabolic PDE
Vogel, C.R.; Wade, J.G.
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
PHREATOPHYTE WATER USE ESTIMATED BY EDDY-CORRELATION METHODS.
Weaver, H.L.; Weeks, E.P.; Campbell, G.S.; Stannard, D.I.; Tanner, B.D.
1986-01-01
Water use was estimated for three phreatophyte communities: a saltcedar community and an alkali-Sacaton grass community in New Mexico, and a greasewood-rabbitbrush-saltgrass community in Colorado. These water-use estimates were calculated from eddy-correlation measurements using three different analyses, since the direct eddy-correlation measurements did not satisfy a surface energy balance. The analysis that seems to be most accurate indicated that the saltcedar community used from 58 to 87 cm (23 to 34 in.) of water each year. The other two communities used about two-thirds of this quantity.
COMPARISON OF METHODS FOR ESTIMATING GROUND-WATER PUMPAGE FOR IRRIGATION.
Frenzel, Steven A.
1985-01-01
Ground-water pumpage for irrigation was measured at 32 sites on the eastern Snake River Plain in southern Idaho during 1983. Pumpage at these sites also was estimated by three commonly used methods, and pumpage estimates were compared to measured values to determine the accuracy of each estimate. Statistical comparisons of estimated and metered pumpage using an F-test showed that only estimates made using the instantaneous discharge method were not significantly different (α = 0.01) from metered values. Pumpage estimates made using the power consumption method reflect variability in pumping efficiency among sites. Pumpage estimates made using the crop-consumptive use method reflect variability in water-management practices. Pumpage estimates made using the instantaneous discharge method reflect variability in discharges at each site during the irrigation season.
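The instantaneous discharge method above amounts to multiplying a measured discharge rate by the total pumping time. A minimal sketch with hypothetical numbers, converting cubic feet per second and hours to acre-feet (the function and values are illustrative, not the study's data):

```python
def pumpage_acre_feet(discharge_cfs, hours_pumped):
    """Seasonal pumpage estimate from an instantaneous discharge
    measurement: rate (cubic feet per second) times pumping time
    (hours), converted to acre-feet (1 acre-ft = 43,560 ft^3)."""
    cubic_feet = discharge_cfs * hours_pumped * 3600.0
    return cubic_feet / 43560.0

# Hypothetical well: 2 cfs measured discharge, 1000 hours of pumping.
seasonal = pumpage_acre_feet(2.0, 1000.0)  # about 165 acre-ft
```

The method's accuracy hinges on the discharge staying close to the measured instantaneous value throughout the season, which matches the study's note that it reflects within-season discharge variability.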
A Practical Method of Policy Analysis by Estimating Effect Size
ERIC Educational Resources Information Center
Phelps, James L.
2011-01-01
The previous articles on class size and other productivity research paint a complex and confusing picture of the relationship between policy variables and student achievement. Missing is a conceptual scheme capable of combining the seemingly unrelated research and dissimilar estimates of effect size into a unified structure for policy analysis and…
Assessment of in silico methods to estimate aquatic species sensitivity
Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...
Estimation method for national methane emission from solid waste landfills
NASA Astrophysics Data System (ADS)
Kumar, Sunil; Gaikwad, S. A.; Shekdar, A. V.; Kshirsagar, P. S.; Singh, R. N.
In keeping with global efforts to inventory methane emissions, municipal solid waste (MSW) landfills are recognised as one of the major sources of anthropogenic methane. In India, most solid waste is disposed of by landfilling in low-lying areas located in and around urban centres, resulting in the generation of large quantities of biogas containing a sizeable proportion of methane. After a critical review of the literature on methodologies for estimating methane emissions, the default methodology of the IPCC guidelines (1996) was used for estimation. However, because the default methodology assumes that all potential methane is emitted in the year of waste deposition, a triangular model for landfill biogas generation has been proposed and the results compared. The proposed triangular-model methodology for methane emissions from landfills is more realistic and can well be used for estimation on a global basis. Methane emissions from MSW landfills for the years 1980-1999 have been estimated, which could be used in computing national inventories of methane emission.
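The triangular model can be sketched as spreading one deposition year's methane potential over subsequent years along a triangular generation profile (linear rise to a peak, then linear decline), instead of emitting it all in the deposition year as the IPCC default assumes. The rise and fall durations below are illustrative parameters, not the paper's values:

```python
def triangular_profile(total_ch4, rise_years, fall_years):
    """Distribute one deposition year's methane potential over time
    with a triangular generation profile. The peak rate is chosen so
    the profile integrates to total_ch4 (area of a triangle)."""
    peak = 2.0 * total_ch4 / (rise_years + fall_years)
    profile = []
    for t in range(rise_years):               # linear ramp up to the peak
        profile.append(peak * (t + 1) / rise_years)
    for t in range(fall_years):               # linear decline back to zero
        profile.append(peak * (fall_years - 1 - t) / fall_years)
    return profile

# Hypothetical: 100 Gg methane potential, 4-year rise, 16-year decline.
profile = triangular_profile(100.0, 4, 16)
```

Summing such profiles across deposition years gives the annual national emission, so a year's inventory reflects waste landfilled over the preceding decades rather than that year alone.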
REVIEW AND DEVELOPMENT OF ESTIMATION METHODS FOR WILDLAND FIRE EMISSIONS
The product will be a collection of information/data materials and/or operational data systems that provide organized data and estimates to identify the occurrence of aggregated or individual fires, the material burned, and air pollutant emissions. An interim background document ...
Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations
ERIC Educational Resources Information Center
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.
2016-01-01
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Methods to explain genomic estimates of breeding value
Technology Transfer Automated Retrieval System (TEKTRAN)
Genetic markers allow animal breeders to locate, estimate, and trace inheritance of many unknown genes that affect quantitative traits. Traditional models use pedigree data to compute expected proportions of genes identical by descent (assumed the same for all traits). Newer genomic models use thous...
2010-01-01
The ancient Greek medical theory based on the balance or imbalance of humors disappeared in the western world, but survives elsewhere. Is this survival related to a certain degree of health care efficiency? We explored this hypothesis through a study of classical Greco-Arab medicine in Mauritania. Modern general practitioners evaluated the safety and effectiveness of classical Arabic medicine in a Mauritanian traditional clinic, with a prognosis/follow-up method allowing the following comparisons: (i) actual patient progress (clinical outcome) compared with what the traditional 'tabib' had anticipated (= prognostic ability) and (ii) patient progress compared with what could be hoped for if the patient were treated by a modern physician in the same neighborhood. The practice appeared fairly safe and, on average, clinical outcome was similar to what could be expected with modern medicine. In some cases, patient progress was better than expected. The ability to correctly predict an individual's clinical outcome did not appear to differ between modern and Greco-Arab theoretical frameworks. Weekly joint meetings (of modern and traditional practitioners) were spontaneously organized with a modern health centre in the neighborhood. Practitioners of a different medical system can predict patient progress. For the patient, avoiding false expectations of health care and ensuring appropriate referral may be the most important. Prognosis and outcome studies such as the one presented here may help to develop institutions where patients find support in making their choices, not only among several treatment options, but also among several medical systems. PMID:18955326
A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design
ERIC Educational Resources Information Center
Wang, Tianyou; Brennan, Robert L.
2009-01-01
Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…
Etalon-photometric method for estimation of tissues density at x-ray images
NASA Astrophysics Data System (ADS)
Buldakov, Nicolay S.; Buldakova, Tatyana I.; Suyatinov, Sergey I.
2016-04-01
The etalon-photometric method for quantitative estimation of the physical density of pathological entities is considered. The method consists in using an etalon (reference standard) during the registration and estimation of the photometric characteristics of objects. An algorithm for estimating physical density in X-ray images is presented.
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objective of the current study was to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be
Furnham, Adrian; Arteche, Adriane; Chamorro-Premuzic, Tomas; Keser, Askin; Swami, Viren
2009-12-01
This study is part of a programmatic research effort into the determinants of self-assessed abilities. It examined cross-cultural differences in beliefs about intelligence and self- and other-estimated intelligence in two countries at extreme ends of the European continent. In all, 172 British and 272 Turkish students completed a three-part questionnaire where they estimated their parents', partners' and own multiple intelligences (Gardner (10) and Sternberg (3)). They also completed a measure of the 'big five' personality scales and rated six questions about intelligence. The British sample had more experience with IQ tests than the Turks. The majority of participants in both groups did not believe in sex differences in intelligence but did think there were race differences. They also believed that intelligence was primarily inherited. Participants rated their social and emotional intelligence highly (around one standard deviation above the norm). Results suggested that there were more cultural than sex differences in all the ratings, with various interactions mainly due to the British sample differentiating more between the sexes than the Turks. Males rated their overall, verbal, logical, spatial, creative and practical intelligence higher than females. Turks rated their musical, body-kinesthetic, interpersonal and intrapersonal intelligence as well as existential, naturalistic, emotional, creative, and practical intelligence higher than the British. There was evidence of participants rating their fathers' intelligence on most factors higher than their mothers'. Factor analysis of the ten Gardner intelligences yielded two clear factors: cognitive and social intelligence. The first factor was impacted by sex but not culture; it was the other way round for the second factor. Regressions showed that five factors predicted overall estimates: sex (male), age (older), test experience (has done tests), extraversion (strong) and openness (strong). Results are discussed in
Sousa, Fátima Aparecida Emm Faleiros; da Silva, Talita de Cássia Raminelli; Siqueira, Hilze Benigno de Oliveira Moura; Saltareli, Simone; Gomez, Rodrigo Ramon Falconi; Hortense, Priscilla
2016-01-01
Abstract Objective: to describe acute and chronic pain from the perspective of the life cycle. Methods: participants: 861 people in pain. The Multidimensional Pain Evaluation Scale (MPES) was used. Results: in the category estimation method, the highest descriptor of chronic pain for children/adolescents was "Annoying" and for adults "Uncomfortable"; the highest descriptor of acute pain for children/adolescents was "Complicated" and for adults "Unbearable". In the magnitude estimation method, the highest descriptor of chronic pain was "Desperate" and of acute pain "Terrible". Conclusions: the MPES is a reliable scale that can be applied during different stages of development. PMID:27556875
Flood frequency estimation by hydrological continuous simulation and classical methods
NASA Astrophysics Data System (ADS)
Brocca, L.; Camici, S.; Melone, F.; Moramarco, T.; Tarpanelli, A.
2009-04-01
In recent years, the effects of flood damage have motivated the development of new, complex methodologies for simulating the hydrologic/hydraulic behaviour of river systems, which are fundamental for territorial planning as well as for floodplain management and risk analysis. The delineation of flood-prone areas can be carried out through various procedures that are usually based on the estimation of the peak discharge for an assigned probability of exceedance. In the case of ungauged or scarcely gauged catchments this is not straightforward, as the limited availability of historical peak flow data introduces considerable uncertainty into the flood frequency analysis. A possible solution to overcome this problem is the application of hydrological simulation studies to generate long synthetic discharge time series. For this purpose, new methodologies based on the stochastic generation of rainfall and temperature data have recently been proposed. The inferred information can be used as input for a continuous hydrological model to generate a synthetic time series of peak river flow and, hence, the flood frequency distribution at a given site. In this study, stochastic rainfall data have been generated via the Neyman-Scott Rectangular Pulses (NSRP) model, which has a flexible structure in which the model parameters broadly relate to underlying physical features observed in rainfall fields, and which is capable of preserving the statistical properties of a rainfall time series over a range of time scales. The peak river flow time series have been generated through a continuous hydrological model aimed at flood prediction and developed for the purpose (hereinafter named MISDc) (Brocca, L., Melone, F., Moramarco, T., Singh, V.P., 2008. A continuous rainfall-runoff model as tool for the critical hydrological scenario assessment in natural channels. In: M. Taniguchi, W.C. Burnett, Y. Fukushima, M. Haigh, Y. Umezawa (Eds), From headwater to the ocean
Comparison of Two Parametric Methods to Estimate Pesticide Mass Loads in California's Central Valley
Saleh, D.K.; Lorenz, D.L.; Domagalski, J.L.
2011-01-01
Mass loadings were calculated for four pesticides in two watersheds with different land uses in the Central Valley, California, by using two parametric models: (1) the Seasonal Wave model (SeaWave), in which a pulse signal is used to describe the annual cycle of pesticide occurrence in a stream, and (2) the Sine Wave model, in which first-order Fourier series sine and cosine terms are used to simulate seasonal mass loading patterns. The models were applied to data collected during water years 1997 through 2005. The pesticides modeled were carbaryl, diazinon, metolachlor, and molinate. Results from the two models show that the ability to capture seasonal variations in pesticide concentrations was affected by pesticide use patterns and the methods by which pesticides are transported to streams. Estimated seasonal loads compared well with results from previous studies for both models. Loads estimated by the two models did not differ significantly from each other, with the exceptions of carbaryl and molinate during the precipitation season, where loads were affected by application patterns and rainfall. However, in watersheds with variable and intermittent pesticide applications, the SeaWave model is more suitable for use on the basis of its robust capability of describing seasonal variation of pesticide concentrations. © 2010 American Water Resources Association. This article is a US Government work and is in the public domain in the USA.
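The Sine Wave component described above, a first-order Fourier series in time, can be illustrated with a small least-squares fit. This is a generic sketch of the seasonal-regression idea, not the authors' implementation (which works on log-transformed concentrations with additional flow terms); the synthetic record and coefficients below are invented for illustration:

```python
import numpy as np

def fit_sine_wave(t, y):
    """Least-squares fit of y(t) = b0 + b1*sin(2*pi*t) + b2*cos(2*pi*t),
    with t in fractional years; returns (b0, b1, b2)."""
    X = np.column_stack([np.ones_like(t),
                         np.sin(2.0 * np.pi * t),
                         np.cos(2.0 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic three-year seasonal record (coefficients assumed for illustration).
t = np.linspace(0.0, 3.0, 120)
y = 2.0 + 1.5 * np.sin(2.0 * np.pi * t) + 0.5 * np.cos(2.0 * np.pi * t)
b0, b1, b2 = fit_sine_wave(t, y)
```

The fitted sine/cosine pair encodes both the amplitude and the phase of the annual cycle, which is why a first-order series suffices for a single smooth seasonal peak but not for the sharp, pulse-like application seasons that motivate the SeaWave alternative.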
Altini, Marco; Penders, Julien; Vullers, Ruud; Amft, Oliver
2015-01-01
Several methods to estimate energy expenditure (EE) using body-worn sensors exist; however, quantifications of the differences in estimation error are missing. In this paper, we compare three prevalent EE estimation methods and five body locations to provide a basis for selecting among methods, sensor number, and positioning. We considered 1) counts-based estimation methods, 2) activity-specific estimation methods using METs lookup, and 3) activity-specific estimation methods using accelerometer features. The latter two estimation methods utilize subsequent activity classification and EE estimation steps. Furthermore, we analyzed the number of accelerometer sensors and their on-body positioning to derive optimal EE estimation results during various daily activities. To evaluate our approach, we implemented a study with 15 participants who wore five accelerometer sensors while performing a wide range of sedentary, household, lifestyle, and gym activities at different intensities. Indirect calorimetry was used in parallel to obtain EE reference data. Results show that activity-specific estimation methods using accelerometer features can outperform counts-based methods by 88% and activity-specific methods using METs lookup for active clusters by 23%. No differences were found between activity-specific methods using METs lookup and using accelerometer features for sedentary clusters. For activity-specific estimation methods using accelerometer features, differences in EE estimation error between the best combinations of each number of sensors (1 to 5), analyzed with repeated measures ANOVA, were not significant. Thus, we conclude that choosing the best performing single sensor does not reduce EE estimation accuracy compared to a five-sensor system and can reliably be used. However, EE estimation errors can increase up to 80% if a nonoptimal sensor location is chosen. PMID:24691168
Quantitative estimation of poikilocytosis by the coherent optical method
NASA Astrophysics Data System (ADS)
Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.
2000-05-01
An investigation of the need for, and the reliability required of, the determination of poikilocytosis in hematology has shown that existing techniques suffer from serious shortcomings. To determine the deviation of erythrocyte shape from the normal (rounded) one in blood smears, it is expedient to use an integrative estimate. An algorithm based on the correlation between erythrocyte morphological parameters and properties of the spatial-frequency spectrum of the blood smear is suggested. During analytical and experimental research, an integrative form parameter (IFP), which characterizes the increase of the relative concentration of cells with changed form above 5% and the predominating type of poikilocytes, was proposed. An algorithm for statistically reliable estimation of the IFP on standard stained blood smears has been developed. To provide quantitative characterization of the morphological features of cells, a form vector has been proposed, and its validity for poikilocyte differentiation was shown.
A life history method for estimating convective rainfall
NASA Technical Reports Server (NTRS)
Martin, D. W.
1981-01-01
The remote sensing of rain amounts, which is of great interest for a variety of operational applications including hydrology, hydroelectric power, and agriculture, is discussed. The microwave radiometer represents the most obvious technique; however, poor spatial and temporal resolution, together with the problems associated with estimating the effective rain layer height, make visible and IR techniques more promising at present. Based on the bivariate frequency distribution of brightness versus temperature, brightness enhancement or the infrared technique alone may be inadequate to deduce details of convective activity. It is implied that better estimates of rainfall will come from visible and IR observations combined than from either used alone. The technique identifies clouds with a high probability of rain as those which have large optical, and presumably physical, thickness as measured by the visible albedo in comparison with their height, determined by the intensity of the IR emission.
On optical mass estimation methods for galaxy groups
NASA Astrophysics Data System (ADS)
Pearson, R. J.; Ponman, T. J.; Norberg, P.; Robotham, A. S. G.; Farr, W. M.
2015-05-01
We examine the performance of a variety of different estimators for the mass of galaxy groups, based on their galaxy distribution alone. We draw galaxies from the Sloan Digital Sky Survey for a set of groups and clusters for which hydrostatic mass estimates based on high-quality Chandra X-ray data are available. These are used to calibrate the galaxy-based mass proxies, and to test their performance. Richness, luminosity, galaxy overdensity, rms radius and dynamical mass proxies are all explored. These different mass indicators all have their merits, and we argue that using them in combination can provide protection against being misled by the effects of dynamical disturbance or variations in star formation efficiency. Using them in this way leads us to infer the presence of significant non-statistical scatter in the X-ray based mass estimates we employ. We apply a similar analysis to a set of mock groups derived from applying a semi-analytic galaxy formation code to the Millennium dark matter simulation. The relations between halo mass and the mass proxies differ significantly in some cases from those seen in the observational groups, and we discuss possible reasons for this.
A Five-Parameter Wind Field Estimation Method Based on Spherical Upwind Lidar Measurements
NASA Astrophysics Data System (ADS)
Kapp, S.; Kühn, M.
2014-12-01
Turbine-mounted scanning lidar systems of the focused continuous-wave type are considered for sensing approaching wind fields. The quality of the wind information depends on the lidar technology itself, but also substantially on the scanning technique and the reconstruction algorithm. In this paper a five-parameter wind field model comprising mean wind speed, vertical and horizontal linear shear, and homogeneous direction angles is introduced. A corresponding parameter estimation method is developed based on the assumption of upwind lidar measurements scanned over spherical segments. As a main advantage of this method, all parameters relevant to wind turbine control can be provided. Moreover, the ability to distinguish between shear and skew potentially increases the quality of the resulting feedforward pitch angles when compared with three-parameter methods. It is shown that a minimum of three measurements, each in turn from two independent directions, is necessary for the application of the algorithm, whereas simpler measurements, each taken from only one direction, are not sufficient.
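A five-parameter fit of this general flavour can be sketched as a linear least-squares problem. This is a hedged illustration, not the authors' formulation: it assumes the streamwise speed varies linearly in the lateral (y) and vertical (z) coordinates, treats the cross-stream components `v0`, `w0` as uniform (direction angles would be recovered from them afterwards), and models each lidar return as the projection of the local wind vector onto the beam direction. The cone geometry and "true" field values are invented for the synthetic check:

```python
import math
import numpy as np

def fit_wind_field(points, normals, v_los):
    """points: (N, 3) beam focus positions; normals: (N, 3) unit beam
    directions; v_los: (N,) measured line-of-sight speeds.
    Model: v_los = (u0 + a*y + b*z)*nx + v0*ny + w0*nz, linear in the
    five unknowns, so ordinary least squares applies."""
    y, z = points[:, 1], points[:, 2]
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    X = np.column_stack([nx, y * nx, z * nx, ny, nz])
    coef, *_ = np.linalg.lstsq(X, v_los, rcond=None)
    return coef

# Synthetic check: 8 beam azimuths on a 15-degree cone, focused at two
# distances (two "spherical segments"), with a known wind field.
az = np.linspace(0.0, 2.0 * math.pi, 8, endpoint=False)
half = math.radians(15.0)
dirs = np.column_stack([np.full(8, math.cos(half)),
                        math.sin(half) * np.cos(az),
                        math.sin(half) * np.sin(az)])
pts = np.vstack([60.0 * dirs, 100.0 * dirs])      # two focus distances, m
nrm = np.vstack([dirs, dirs])
u0, a, b, v0, w0 = 10.0, 0.02, 0.08, 1.0, 0.3     # assumed "true" field
v_los = (u0 + a * pts[:, 1] + b * pts[:, 2]) * nrm[:, 0] \
        + v0 * nrm[:, 1] + w0 * nrm[:, 2]
est = fit_wind_field(pts, nrm, v_los)
```

Notably, with a single focus distance the shear columns are proportional to the direction columns and the system is rank deficient; two distances make it identifiable, echoing the abstract's point that measurements from one direction alone are not sufficient.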
NASA Astrophysics Data System (ADS)
Grainger, S. J.; Su, J. L.; Greiner, C. A.; Saybolt, M. D.; Wilensky, R. L.; Raichlen, J. S.; Madden, S. P.; Muller, J. E.
2016-03-01
The ability to determine plaque cap thickness during catheterization is thought to be of clinical importance for plaque vulnerability assessment. While methods to compositionally assess cap integrity are in development, a method utilizing currently available tools to measure cap thickness is highly desirable. NIRS-IVUS is a commercially available dual imaging method in current clinical use that may provide cap thickness information to the skilled reader; however, this is as yet unproven. Ten autopsy hearts (n=15 arterial segments) were scanned with the multimodality NIRS-IVUS catheter (TVC Imaging System, Infraredx, Inc.) to identify lipid core plaques (LCPs). Skilled readers made predictions of cap thickness over regions of chemogram LCP, using NIRS-IVUS. Artery segments were perfusion fixed and cut into 2 mm serial blocks. Thin sections stained with Movat's pentachrome were analyzed for cap thickness at LCP regions. Block level predictions were compared to histology, as classified by a blinded pathologist. Within 15 arterial segments, 117 chemogram blocks were found by NIRS to contain LCP. Utilizing NIRS-IVUS, chemogram blocks were divided into 4 categories: thin capped fibroatheromas (TCFA), thick capped fibroatheromas (ThCFA), pathological intimal thickening (PIT)/lipid pool (no defined cap), and calcified/unable to determine cap thickness. Sensitivities/specificities for thin cap fibroatheromas, thick cap fibroatheromas, and PIT/lipid pools were 0.54/0.99, 0.68/0.88, and 0.80/0.97, respectively. The overall accuracy rate was 70.1% (including 22 blocks unable to predict, p = 0.075). In the absence of calcium, NIRS-IVUS imaging provided predictions of cap thickness over LCP with moderate accuracy. The ability of this multimodality imaging method to identify vulnerable coronary plaques requires further assessment in both larger autopsy studies, and clinical studies in patients undergoing NIRS-IVUS imaging.
Wu, Huey-Min; Lin, Chin-Kai; Yang, Yu-Mao; Kuo, Bor-Chen
2014-11-12
Visual perception is the fundamental skill required for a child to recognize words, and to read and write. No visual perception assessment tool based on Chinese characters had been developed for preschool children in Taiwan. The purposes were to develop a computerized visual perception assessment tool for Chinese character structures and to explore the psychometric characteristics of the assessment tool. This study adopted purposive sampling. The study evaluated 551 kindergarten-age children (293 boys, 258 girls) ranging from 46 to 81 months of age. The test instrument used in this study consisted of three subtests and 58 items, including tests of basic strokes, single-component characters, and compound characters. Based on the results of model fit analysis, higher-order item response theory was used to estimate performance in visual perception, basic strokes, single-component characters, and compound characters simultaneously. Analyses of variance were used to detect significant differences between age groups and gender groups. The difficulty of identifying items in the visual perception test ranged from -2 to 1. The visual perception ability of 4- to 6-year-old children ranged from -1.66 to 2.19. Gender did not have significant effects on performance. However, there were significant differences among the different age groups. The performance of 6-year-olds was better than that of 5-year-olds, which was better than that of 4-year-olds. This study obtained detailed diagnostic scores by using a higher-order item response theory model to understand the visual perception of basic strokes, single-component characters, and compound characters. Further statistical analysis showed that, for basic strokes and compound characters, girls performed better than did boys; there also were differences within each age group. For single-component characters, there was no difference in performance between boys and girls. However, again the performance of 6-year-olds was better than
Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2006-01-01
Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
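The core of the equation-error approach is that the state equation is linear in the unknown stability and control derivatives, so differentiating the measured response and applying ordinary least squares recovers them. The toy pitch-axis model below is a hedged sketch of that idea, not the report's F-16 setup; the derivative values, multi-sine input, and simplified dynamics are assumed for illustration:

```python
import numpy as np

def equation_error_fit(alpha, q, de, dt):
    """Estimate (M_alpha, M_q, M_de) in qdot = M_alpha*alpha + M_q*q + M_de*de
    by differentiating the measured pitch rate and solving least squares."""
    qdot = np.gradient(q, dt)               # central-difference derivative
    X = np.column_stack([alpha, q, de])
    theta, *_ = np.linalg.lstsq(X, qdot, rcond=None)
    return theta

# Toy short-period pitch dynamics with assumed "true" derivatives.
M_alpha, M_q, M_de = -5.0, -2.0, -10.0
dt, n = 0.001, 20000
t = np.arange(n) * dt
de = 0.10 * np.sin(0.8 * t) + 0.05 * np.sin(2.0 * t)   # multi-sine elevator input
alpha = np.zeros(n)
q = np.zeros(n)
for k in range(n - 1):                      # forward-Euler integration
    alpha[k + 1] = alpha[k] + dt * q[k]
    q[k + 1] = q[k] + dt * (M_alpha * alpha[k] + M_q * q[k] + M_de * de[k])
theta = equation_error_fit(alpha, q, de, dt)
```

The multi-frequency input matters: a single sinusoid would make the three regressors nearly collinear in steady state, which is exactly the data-collinearity issue the report examines. With noisy flight data, the numerical differentiation step would need the smoothing the report discusses.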
NASA Astrophysics Data System (ADS)
Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki
Patients receiving medical care from a doctor often have to wait a long time at many hospitals. According to patient questionnaires, long waiting time is the factor contributing most to patients' dissatisfaction with hospital service. The present paper describes a method for estimating the waiting time of each patient without an electronic medical chart system. The method applies a portable RFID system for data acquisition and robust estimation of the probability distribution of consultation and examination times by the doctor, enabling highly accurate waiting time estimation. We carried out data acquisition at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used for data acquisition in various fields such as marketing services, entertainment, or human behavior measurement.
Estimating the prevalence of anaemia: a comparison of three methods.
Sari, M.; de Pee, S.; Martini, E.; Herman, S.; Sugiatmi; Bloem, M. W.; Yip, R.
2001-01-01
OBJECTIVE: To determine the most effective method for analysing haemoglobin concentrations in large surveys in remote areas, and to compare two methods (indirect cyanmethaemoglobin and HemoCue) with the conventional method (direct cyanmethaemoglobin). METHODS: Samples of venous and capillary blood from 121 mothers in Indonesia were compared using all three methods. FINDINGS: When the indirect cyanmethaemoglobin method was used the prevalence of anaemia was 31-38%. When the direct cyanmethaemoglobin or HemoCue method was used the prevalence was 14-18%. Indirect measurement of cyanmethaemoglobin had the highest coefficient of variation and the largest standard deviation of the difference between the first and second assessment of the same blood sample (10-12 g/l indirect measurement vs 4 g/l direct measurement). In comparison with direct cyanmethaemoglobin measurement of venous blood, HemoCue had the highest sensitivity (82.4%) and specificity (94.2%) when used for venous blood. CONCLUSIONS: Where field conditions and local resources allow it, haemoglobin concentration should be assessed with the direct cyanmethaemoglobin method, the gold standard. However, the HemoCue method can be used for surveys involving different laboratories or which are conducted in relatively remote areas. In very hot and humid climates, HemoCue microcuvettes should be discarded if not used within a few days of opening the container containing the cuvettes. PMID:11436471
Golmakani, Nahid; Khaleghinezhad, Khosheh; Dadgar, Selmeh; Hashempor, Majid; Baharian, Nosrat
2015-01-01
Introduction: In developing countries, hemorrhage accounts for 30% of maternal deaths. Postpartum hemorrhage has been defined as blood loss of around 500 ml or more after completion of the third stage of labor. Most cases of postpartum hemorrhage occur during the first hour after birth. The most common reason for bleeding in the early hours after childbirth is uterine atony. Blood loss during delivery is usually estimated visually by the midwife, a method with a high error rate. However, studies have shown that the use of a standard can improve the estimation. The aim of the research was to compare the estimation of postpartum hemorrhage using the weighing method and the National Guideline for postpartum hemorrhage estimation. Materials and Methods: This descriptive study was conducted on 112 females in the Omolbanin Maternity Department of Mashhad, for a six-month period, from November 2012 to May 2013. Convenience (accessible) sampling was used. The data collection tools were case selection, observation and interview forms. For postpartum hemorrhage estimation, after the third stage of labor was complete, the quantity of bleeding was estimated in the first and second hours after delivery by the midwife in charge, using the National Guideline for vaginal delivery provided by the Maternal Health Office. In addition, after visual estimation using the National Guideline, the sheets under the parturient in the first and second hours after delivery were exchanged and weighed. The data were analyzed using descriptive statistics and the t-test. Results: According to the results, a significant difference was found between the estimated blood loss based on the weighing method and that using the National Guideline (weighing method 62.68 ± 16.858 cc vs. National Guideline 45.31 ± 13.484 cc in the first hour after delivery; P = 0.000) and (weighing method 41.26 ± 10.518 vs. National Guideline 30.24 ± 8.439 in the second hour after delivery; P = 0.000). Conclusions
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
2016-01-01
Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now no study had evaluated how efficiently the sampling methods commonly used in biodiversity surveys estimate the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns. PMID:27441554
Computation of nonparametric convex hazard estimators via profile methods
Jankowski, Hanna K.; Wellner, Jon A.
2010-01-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females. PMID:20300560
A method to estimate optical distortion using planetary images
NASA Astrophysics Data System (ADS)
Kouyama, Toru; Yamazaki, Atsushi; Yamada, Manabu; Imamura, Takeshi
2013-09-01
We developed a method to calibrate optical distortion parameters for axisymmetric optical systems using images of a spherical target taken at a variety of distances. The method utilizes the fact that the influence of distortion on the apparent radius in the image changes with the disk size of the projected body. Because several planets can be used as the spherical target, this method enables us to obtain distortion parameters in space, and by using a large number of planetary images, the desired accuracy of the parameters can be achieved statistically. The applicability of the method was tested by applying it to simulated planetary images and real Venus images taken by the Venus Monitoring Camera onboard ESA's Venus Express; optical distortion was successfully retrieved with a pixel position error of less than 1 pixel. Venus is the planet most suitable for the proposed method because the haze layer covering the planet presents a smooth, nearly spherical surface.
Estimates of minimum patch size depend on the method of estimation and the condition of the habitat.
McCoy, Earl D; Mushinsky, Henry R
2007-06-01
Minimum patch size for a viable population can be estimated in several ways. The density-area method estimates minimum patch size as the smallest area in which no new individuals are encountered as one extends the arbitrary boundaries of a study area outward. The density-area method eliminates the assumption of no variation in density with size of habitat area that accompanies other methods, but it is untested in situations in which habitat loss has confined populations to small areas. We used a variant of the density-area method to study the minimum patch size for the gopher tortoise (Gopherus polyphemus) in Florida, USA, where this keystone species is being confined to ever smaller habitat fragments. The variant was based on the premise that individuals within populations are likely to occur at unusually high densities when confined to small areas, and it estimated minimum patch size as the smallest area beyond which density plateaus. The data for our study came from detailed surveys of 38 populations of the tortoise. For all 38 populations, the areas occupied were determined empirically, and for 19 of them, duplicate surveys were undertaken about a decade apart. We found that a consistent inverse density-area relationship was present over smaller areas. The minimum patch size estimated from the density-area relationship was at least 100 ha, which is substantially larger than previous estimates. The relative abundance of juveniles was inversely related to population density for sites with relatively poor habitat quality, indicating that the estimated minimum patch size could represent an extinction threshold. We concluded that a negative density-area relationship may be an inevitable consequence of excessive habitat loss. We also concluded that any detrimental effects of an inverse density-area relationship may be exacerbated by the deterioration in habitat quality that often accompanies habitat loss. Finally, we concluded that the value of any estimate of
Simple and robust baseline estimation method for multichannel SAR-GMTI systems
NASA Astrophysics Data System (ADS)
Chen, Zhao-Yan; Wang, Tong; Ma, Nan
2016-07-01
In this paper, the authors propose an approach of estimating the effective baseline for the ground moving target indication (GMTI) mode of synthetic aperture radar (SAR), which is different from any previous work. The authors show that the new method leads to a simpler and more robust baseline estimate. This method employs a baseline search operation in which the degree of coherence (DOC) serves as a metric to judge whether the optimum baseline estimate has been obtained. The rationale behind this method is that the more accurate the baseline estimate, the higher the coherence of the two channels after co-registering with the estimated baseline value. The merits of the proposed method are twofold: it is simple to design and robust to Doppler centroid estimation error. The effectiveness of the method is demonstrated with real SAR data.
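The search metric described above can be sketched as follows: the degree of coherence between two co-registered channels is the normalized magnitude of their complex cross-correlation. This is a minimal illustration on synthetic signals, not the authors' implementation:

```python
import numpy as np

def degree_of_coherence(s1, s2):
    """Sample coherence magnitude between two complex channel signals."""
    num = np.abs(np.vdot(s2, s1))  # |sum s1 * conj(s2)|
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
s1 = rng.normal(size=256) + 1j * rng.normal(size=256)
# A perfectly co-registered copy (constant phase rotation) gives DOC = 1
print(degree_of_coherence(s1, s1 * np.exp(1j * 0.3)))
# A decorrelated (mis-registered or noisy) channel gives a lower DOC
noisy = s1 + 2.0 * (rng.normal(size=256) + 1j * rng.normal(size=256))
print(degree_of_coherence(s1, noisy))
```

In a baseline search, the candidate baseline maximizing this DOC would be selected.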
NASA Astrophysics Data System (ADS)
Mao, Yao; Deng, Chao; Gan, Xun; Tian, Jing
2015-10-01
The development of space optical communication requires arcsecond or even higher precision in the tracking performance of an ATP (Acquisition, Tracking and Pointing) system under base disturbance. An ATP system supported by a stabilized reference beam, provided by a high-precision, high-bandwidth inertial stabilization platform, can effectively restrain the influence of base angular disturbance on the line of sight. To obtain better disturbance rejection, this paper analyzes the influence of the transfer characteristics and physical parameters of the stabilization platform on disturbance stabilization performance. The results show that the stabilization characteristics of the inertial stabilization platform equal the product of the rejection characteristics of the control loop and the disturbance transfer characteristics of the platform, and that improving the isolation characteristics of the platform or extending the control bandwidth can both yield better rejection. Because the control bandwidth of the LOS stabilization platform is limited by factors such as the mechanical characteristics of the platform and the bandwidth and noise of the sensors, high-frequency disturbance cannot be effectively rejected by the control loop; its rejection therefore depends mainly on the isolation characteristics of the platform itself. This paper puts forward three methods of improving the isolation characteristics of the platform itself: 1) changing the mechanical structure, such as reducing the elastic coefficient or increasing the moment of inertia of the platform; 2) changing the electrical structure of the platform, such as increasing resistance or adding a current loop; 3) adding a passive vibration isolator between the inertial stabilization platform and the base. The experimental results show that adding a current loop or adding a passive vibration isolator can effectively reject high frequency
Numerical method for estimating the size of chaotic regions of phase space
Henyey, F.S.; Pomphrey, N.
1987-10-01
A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs. (LSP)
Advanced Method to Estimate Fuel Slosh Simulation Parameters
NASA Technical Reports Server (NTRS)
Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl
2005-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight, and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
Semi-quantitative method to estimate levels of Campylobacter
Technology Transfer Automated Retrieval System (TEKTRAN)
Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor and material intensive requiring either serial dilutions or MPN ...
A history-based method to estimate animal preference.
Maia, Caroline Marques; Volpato, Gilson Luiz
2016-01-01
Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213
Method of estimating pulse response using an impedance spectrum
Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G
2014-10-21
Electrochemical Impedance Spectrum data are used to predict pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave whose fundamental frequency is greater than or equal to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series is selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain the Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time-domain estimate of the voltage using Fourier series analysis.
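The reassembly step can be sketched as follows. The impedance function below is a hypothetical series-resistor-plus-RC stand-in for a measured spectrum, and the current excitation is a 50% duty-cycle square wave; only the multiply-coefficients-by-Z(f)-and-resum idea is taken from the method:

```python
import numpy as np

def Z(f):
    """Hypothetical impedance model (ohms) standing in for measured data."""
    R0, R1, C1 = 0.05, 0.03, 10.0
    return R0 + R1 / (1 + 2j * np.pi * f * R1 * C1)

T = 10.0       # pulse period (s); fundamental 1/T assumed inside the measured band
I0 = 2.0       # pulse amplitude (A), 50% duty-cycle square wave
n_harm = 50    # keep the highest harmonic below the highest measured frequency
t = np.linspace(0, T, 1000)

# Fourier series of the current pulse; each coefficient is multiplied by the
# impedance at the corresponding frequency, then the voltage is reassembled.
v = np.full_like(t, I0 / 2 * Z(0).real)  # DC term
for k in range(1, n_harm + 1):
    f_k = k / T
    # Complex Fourier coefficient of a 50% duty square wave
    I_k = I0 * (1 - np.exp(-1j * np.pi * k)) / (2j * np.pi * k)
    V_k = I_k * Z(f_k)
    v += 2 * np.real(V_k * np.exp(2j * np.pi * f_k * t))

print(v.max(), v.min())  # time-domain voltage response estimate
```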
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
Kaller, Christoph P; Debelak, Rudolf; Köstering, Lena; Egle, Johanna; Rahm, Benjamin; Wild, Philipp S; Blettner, Maria; Beutel, Manfred E; Unterrainer, Josef M
2016-03-01
Planning ahead the consequences of future actions is a prototypical executive function. In clinical and experimental neuropsychology, disc-transfer tasks like the Tower of London (TOL) are commonly used for the assessment of planning ability. Previous psychometric evaluations have, however, yielded a poor reliability of measuring planning performance with the TOL. Based on theory-grounded task analyses and a systematic problem selection, the computerized TOL-Freiburg version (TOL-F) was developed to improve the task's psychometric properties for diagnostic applications. Here, we report reliability estimates for the TOL-F from two large samples collected in Mainz, Germany (n = 3,770; 40-80 years) and in Vienna, Austria (n = 830; 16-84 years). Results show that planning accuracy on the TOL-F possesses an adequate internal consistency and split-half reliability (>0.7) that are stable across the adult life span while the TOL-F covers a broad range of graded difficulty even in healthy adults, making it suitable for both research and clinical application. PMID:26715472
Performance of different detrending methods in turbulent flux estimation
NASA Astrophysics Data System (ADS)
Donateo, Antonio; Cava, Daniela; Contini, Daniele
2015-04-01
The eddy covariance is the most direct, efficient and reliable method to measure the turbulent flux of a scalar (Baldocchi, 2003). Required conditions for high-quality eddy covariance measurements are, amongst others, stationarity of the measured data and fully developed turbulence. The simplest method for obtaining the fluctuating components for covariance calculation according to Reynolds averaging rules under ideal stationary conditions is the so-called mean removal method. However, steady-state conditions rarely exist in the atmosphere because of the diurnal cycle, changes in meteorological conditions, or sensor drift. All these phenomena produce trends or low-frequency changes superimposed on the turbulent signal. Different methods for trend removal have been proposed in the literature; however, general agreement on how to separate low-frequency perturbations from turbulence has not yet been reached. The most commonly applied methods are linear detrending (Gash and Culf, 1996) and the high-pass filter, namely the moving average (Moncrieff et al., 2004). Moreover, Vickers and Mahrt (2003) proposed a multiresolution decomposition method to select an appropriate time scale for mean removal as a function of atmospheric stability conditions. The present work investigates the performance of these different detrending methods in removing the low-frequency contribution to the turbulent flux calculation, including also a spectral filter based on a Fourier decomposition of the time series. The different methods have been applied to the calculation of the turbulent fluxes of different scalars (temperature, ultrafine particle number concentration, carbon dioxide and water vapour concentration). A comparison of the detrending methods will also be performed for different measurement sites, namely an urban site, a suburban area, and a remote area in Antarctica. Moreover, the performance of the moving average in detrending time series has been analyzed as a function of the
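The three most common detrending options named above (mean removal, linear detrending, moving-average high-pass) can be sketched on a synthetic series; the trend, window length and sampling rate below are illustrative choices, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1800                       # e.g. 30 min of 1 Hz data
t = np.arange(n, dtype=float)
w = rng.normal(size=n)         # stand-in for the turbulent signal
x = w + 0.002 * t              # superimpose a linear low-frequency trend

# 1) mean removal: x' = x - mean(x) (keeps the trend in the "fluctuations")
fluct_mean = x - x.mean()

# 2) linear detrending: subtract a least-squares straight line
a, b = np.polyfit(t, x, 1)
fluct_lin = x - (a * t + b)

# 3) high-pass via moving average: subtract a running mean (~5 min window)
win = 300
running = np.convolve(x, np.ones(win) / win, mode="same")
fluct_ma = x - running

# The trend inflates the variance retained by simple mean removal
print(fluct_mean.var(), fluct_lin.var(), fluct_ma.var())
```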
Barker, Brandon E; Sadagopan, Narayanan; Wang, Yiping; Smallbone, Kieran; Myers, Christopher R; Xi, Hongwei; Locasale, Jason W; Gu, Zhenglong
2015-12-01
A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is to increase predictive capability and improve our understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model, and does not provide an objective function for the estimation of fluxes. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. In order to use such a method, accurate measures of enzyme complex abundance are needed, so we first present an algorithm that addresses quantification of complex abundance. Our extensions to prior techniques include the capability to work with large models and significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation, the ability to handle large enzyme complex rules that may incorporate multiple isoforms, and either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS, and can be downloaded from: https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available. FALCON requires use of the COBRA Toolbox, also implemented in MATLAB. PMID:26381164
High-orbit satellite magnitude estimation using photometric measurement method
NASA Astrophysics Data System (ADS)
Zhang, Shixue
2015-12-01
Accurate estimation of high-orbit satellite magnitudes is important in space target surveillance. This paper proposes a satellite photometric measurement method based on image processing. We calculate the satellite magnitude by comparing the output of the camera's CCD for a known fixed star and for the satellite. The luminance of an object in the acquired image is computed using a background-removal method. From observation parameters such as azimuth, elevation, height and the attitude of the telescope, the star map can be drawn on the image, so the true magnitude of a given fixed star in the image is known. We then derive a new method to calculate the magnitude of a satellite from the magnitude of the fixed star in the image. To guarantee the algorithm's stability, we evaluate the measurement precision of the method and analyze the restrictive conditions in actual application. We have carried out extensive experiments with our system using a large telescope for satellite surveillance and verified the correctness of the algorithm. The experimental results show that the precision of the proposed algorithm in satellite magnitude measurement is 0.24 mv, and the method can be generalized to related fields.
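The core comparison step is standard differential photometry via the Pogson relation; a minimal sketch, with made-up CCD count values (the paper's background removal and star-map matching are not reproduced):

```python
import math

def satellite_magnitude(m_star, flux_star, flux_sat):
    """Pogson relation: compare background-subtracted CCD counts of the
    satellite with those of a reference star of known magnitude."""
    return m_star - 2.5 * math.log10(flux_sat / flux_star)

# A satellite producing 1/10 the counts of a 6.0 mv reference star
print(satellite_magnitude(6.0, 50000.0, 5000.0))  # -> 8.5
```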
Comparative evaluation of two quantitative precipitation estimation methods in Korea
NASA Astrophysics Data System (ADS)
Ko, H.; Nam, K.; Jung, H.
2013-12-01
The spatial distribution and intensity of rainfall are necessary inputs for hydrological models, particularly grid-based distributed models. Weather radar has much higher spatial resolution (1 km × 1 km) than rain gauges (~13 km), although radar measures rainfall indirectly while rain gauges observe it directly. Radar also provides areal, gridded rainfall information, whereas rain gauges provide point data. Therefore, radar rainfall data can be useful as input to hydrological models. In this study, we compared two QPE schemes for producing radar rainfall for hydrological use: 1) spatial adjustment and 2) real-time Z-R relationship adjustment (hereafter RAR; Radar-AWS Rain rate). We computed and analyzed statistics such as the ME (mean error), RMSE (root mean square error), and correlation using a cross-validation method (here, the leave-one-out method).
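A leave-one-out evaluation of a gauge-based radar adjustment can be sketched as below. The mean-field bias factor is one simple stand-in for the adjustment schemes named in the abstract, and the gauge/radar values are synthetic:

```python
import numpy as np

def loo_mean_field_bias(radar, gauge):
    """Leave-one-out cross-validation: the bias factor is fitted with gauge i
    withheld, then the adjusted radar value at site i is compared against the
    withheld observation. Returns ME, RMSE and correlation."""
    n = len(gauge)
    err = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        bias = gauge[keep].sum() / radar[keep].sum()  # fitted without site i
        err[i] = bias * radar[i] - gauge[i]
    me = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    r = np.corrcoef(radar, gauge)[0, 1]
    return me, rmse, r

rng = np.random.default_rng(2)
g = rng.gamma(2.0, 5.0, size=25)               # synthetic gauge rainfall (mm)
r_ = 0.8 * g + rng.normal(0, 1.0, size=25)     # radar underestimating by ~20%
print(loo_mean_field_bias(r_, g))
```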
Estimation of mechanical properties of nanomaterials using artificial intelligence methods
NASA Astrophysics Data System (ADS)
Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.
2014-09-01
Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we have introduced the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful method for predicting the compressive strength of carbon nanotubes.
A Study on Channel Estimation Methods for Time-Domain Spreading MC-CDMA Systems
NASA Astrophysics Data System (ADS)
Nagate, Atsushi; Fujii, Teruya
As a candidate for the transmission technology of next-generation mobile communication systems, time-domain spreading MC-CDMA systems have begun to attract much attention. In these systems, data and pilot symbols are spread in the time domain and code-multiplexed. To combat fading, channel estimation must be conducted using the code-multiplexed pilot symbols. Next-generation systems are expected to use frequency bands higher than those of current systems, which raises the maximum Doppler frequency, so a more powerful channel estimation method is needed. Considering this, we propose a highly accurate channel estimation method that combines a two-dimensional channel estimation method with an impulse response-based channel estimation method. We evaluate the proposed method by computer simulations.
Estimation of missing rainfall data using spatial interpolation and imputation methods
NASA Astrophysics Data System (ADS)
Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Azman, Muhammad Az-zuhri
2015-02-01
This study aims to estimate missing rainfall data by dividing the analysis into three different percentages, namely 5%, 10% and 20%, in order to represent various cases of missing data. In practice, spatial interpolation methods are chosen in the first place to estimate missing data. These methods include the normal ratio (NR), arithmetic average (AA), coefficient of correlation (CC) and inverse distance (ID) weighting methods. The methods consider the distance between the target and the neighbouring stations as well as the correlations between them. An alternative for handling missing data is imputation, the process of replacing missing data with substituted values. A once-common approach is the single-imputation method, which allows parameter estimation. However, single imputation ignores the estimation of variability, which leads to underestimation of standard errors and confidence intervals. To overcome this underestimation problem, the multiple imputation method is used, where each missing value is estimated with a distribution of imputations that reflects the uncertainty about the missing data. In this study, a comparison of spatial interpolation methods and the multiple imputation method for estimating missing rainfall data is presented. The performance of the estimation methods is assessed using the similarity index (S-index), mean absolute error (MAE) and coefficient of correlation (R).
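Of the interpolation methods listed, inverse distance weighting is the simplest to sketch: the missing value at the target station is a weighted mean of neighbouring observations, with weights proportional to 1/distance^p. The coordinates and rainfall amounts below are invented for illustration:

```python
import numpy as np

def idw_estimate(target_xy, station_xy, station_rain, p=2):
    """Inverse distance weighting estimate of a missing observation."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    w = 1.0 / d ** p
    return np.sum(w * station_rain) / np.sum(w)

stations = np.array([[0.0, 1.0], [2.0, 0.0], [0.0, -3.0]])  # neighbours (km)
rain = np.array([12.0, 10.0, 4.0])                          # observations (mm)
print(idw_estimate(np.array([0.0, 0.0]), stations, rain))
```

The estimate is bounded by the neighbouring observations and pulled toward the nearest station, which is why IDW tends to smooth extremes.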
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity and show that alternative methods lead to valid estimates. PMID:11845921
Estimation of IRT Graded Response Models: Limited versus Full Information Methods
ERIC Educational Resources Information Center
Forero, Carlos G.; Maydeu-Olivares, Alberto
2009-01-01
The performance of parameter estimates and standard errors in estimating F. Samejima's graded response model was examined across 324 conditions. Full information maximum likelihood (FIML) was compared with a 3-stage estimator for categorical item factor analysis (CIFA) when the unweighted least squares method was used in CIFA's third stage. CIFA…
A method for estimating both the solubility parameters and molar volumes of liquids
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1974-01-01
Development of an indirect method of estimating the solubility parameter of high-molecular-weight polymers. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
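A group-additive estimate of this kind computes δ = sqrt(ΣE_i / ΣV_i), summing cohesive-energy and molar-volume contributions over the functional groups in the structural formula. The three group values below are rough illustrative stand-ins, not Fedors' published constants:

```python
import math

# Illustrative group contributions: (cohesive energy J/mol, molar volume cm^3/mol).
# These numbers are approximate stand-ins for a published table.
GROUPS = {
    "CH3": (4710.0, 33.5),
    "CH2": (4940.0, 16.1),
    "OH":  (29800.0, 10.0),
}

def solubility_parameter(composition):
    """delta = sqrt(sum(E_i) / sum(V_i)); returns (delta in (J/cm^3)^0.5, V)."""
    e = sum(n * GROUPS[g][0] for g, n in composition.items())
    v = sum(n * GROUPS[g][1] for g, n in composition.items())
    return math.sqrt(e / v), v

# 1-butanol ~ CH3-(CH2)3-OH
delta, vol = solubility_parameter({"CH3": 1, "CH2": 3, "OH": 1})
print(delta, vol)
```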
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, Jan G. P. W.; Camps-Valls, Gustau; Moreno, José
2015-10-01
Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC), collected at the agricultural site of Barrax (Spain), was used to evaluate different retrieval methods on their ability to estimate leaf area index (LAI). With regard to parametric methods, all possible band combinations for several two-band and three-band index formulations and a linear regression fitting function have been evaluated. From a set of over ten thousand indices evaluated, the best performing one was an optimized three-band combination according to (ρ560 − ρ1610 − ρ2190) / (ρ560 + ρ1610 + ρ2190), with a 10-fold cross-validation R² (R²CV) of 0.82 (RMSECV: 0.62). This family of methods excels for its fast processing speed, e.g., 0.05 s to calibrate and validate the regression function, and 3.8 s to map a simulated S2 image. With regard to non-parametric methods, 11 machine learning regression algorithms (MLRAs) have been evaluated. This methodological family has the advantage of making use of the full optical spectrum as well as flexible, nonlinear fitting. Particularly kernel-based MLRAs lead to excellent results, with variational heteroscedastic (VH) Gaussian processes regression (GPR) as the best performing method, with an R²CV of 0.90 (RMSECV: 0.44). Additionally, the model is trained and validated relatively fast (1.70 s) and the processed image (taking 73.88 s) includes associated uncertainty estimates. More challenging is the inversion of a PROSAIL-based radiative transfer model (RTM). After the generation of a look-up table (LUT), a multitude of cost functions and regularization options were evaluated. The best performing cost function is Pearson's χ-square. It led to an R² of 0.74 (RMSE: 0.80) against the validation dataset. While its validation went fast
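The reported three-band formulation is straightforward to reproduce; the reflectance values and LAI training points below are hypothetical, and the one-line fit stands in for the paper's linear regression step:

```python
import numpy as np

def three_band_index(r560, r1610, r2190):
    """(rho560 - rho1610 - rho2190) / (rho560 + rho1610 + rho2190)."""
    return (r560 - r1610 - r2190) / (r560 + r1610 + r2190)

# Hypothetical per-pixel reflectances paired with LAI training values
pixels = [(0.08, 0.18, 0.09), (0.06, 0.25, 0.15), (0.10, 0.12, 0.05)]
idx = np.array([three_band_index(*p) for p in pixels])
lai = np.array([3.1, 1.2, 4.4])

a, b = np.polyfit(idx, lai, 1)  # linear regression fitting function
print(a * three_band_index(0.08, 0.18, 0.09) + b)  # LAI estimate for a pixel
```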
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
Estimating School Efficiency: A Comparison of Methods Using Simulated Data.
ERIC Educational Resources Information Center
Bifulco, Robert; Bretschneider, Stuart
2001-01-01
Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…
A Simple Estimation Method for Aggregate Government Outsourcing
ERIC Educational Resources Information Center
Minicucci, Stephen; Donahue, John D.
2004-01-01
The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…
Effects of Vertical Scaling Methods on Linear Growth Estimation
ERIC Educational Resources Information Center
Lei, Pui-Wa; Zhao, Yu
2012-01-01
Vertical scaling is necessary to facilitate comparison of scores from test forms of different difficulty levels. It is widely used to enable the tracking of student growth in academic performance over time. Most previous studies on vertical scaling methods assume relatively long tests and large samples. Little is known about their performance when…
Estimation of the size of the female sex worker population in Rwanda using three different methods.
Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin
2015-10-01
HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture-recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture-recapture method was 3205 (95% confidence interval: 2998-3412). The female sex worker population size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker population size was estimated at 2253 (95% confidence interval: 1916-2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture-recapture, enumeration, and multiplier methods. The capture-recapture and enumeration methods provided similar estimates of the female sex worker population in Rwanda. Combining such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. PMID:25336306
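As an illustration of the capture-recapture idea used above, here is a minimal sketch of the standard two-sample Lincoln-Petersen estimator in Chapman's bias-corrected form; the counts are invented for illustration and are not from the Rwanda study.

```python
def chapman_estimate(marked, captured, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen population estimate.

    marked     -- individuals marked in the first sample (M)
    captured   -- individuals captured in the second sample (C)
    recaptured -- marked individuals seen again in the second sample (R)
    """
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# Hypothetical two-visit survey: 300 marked, 320 captured, 30 recaptured.
n_hat = chapman_estimate(300, 320, 30)
```

The Chapman correction (+1 terms) keeps the estimate finite and less biased when recaptures are few, which matters for small venue-based samples.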
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
Comparing the estimation methods of stable distributions with respect to robustness properties
NASA Astrophysics Data System (ADS)
Celik, Nuri; Erden, Samet; Sarikaya, M. Zeki
2016-04-01
In statistical applications, some data sets exhibit features such as high skewness, high kurtosis, and heavy tails that are incompatible with the normality assumption, especially in finance and engineering. For this reason, modeling such data sets with α-stable distributions is a reasonable approach. Stable distributions have four parameters, and several estimation methods have been proposed in the literature for estimating these unknown model parameters. In this study, we briefly describe these estimation methods and compare the estimators with respect to their robustness properties in a comprehensive simulation study, since the robustness of an estimator is an important criterion for appropriate modeling.
Sensor fusion method for off-road vehicle position estimation
NASA Astrophysics Data System (ADS)
Guo, Linsong; Zhang, Qin; Han, Shufeng
2002-07-01
A FOG-aided GPS fusion system was developed for positioning an off-road vehicle; it consists of a six-axis inertial measurement unit (IMU) and a Garmin global positioning system (GPS) receiver. An observation-based Kalman filter was designed to integrate the readings from both sensors so that the noise in the GPS signal was smoothed out, the redundant information was fused, and a high update rate of output signals was obtained. The drift error of the FOG was also compensated. With this system, a low-cost GPS receiver can replace an expensive, higher-accuracy one. Measurement and fusion results showed that the vehicle positioning error estimated using this fusion system was greatly reduced relative to a GPS-only system. At a vehicle speed of about 1.34 m/s, the mean bias in the East axis of the fusion system was 0.48 m compared to the GPS mean bias of 1.28 m, and the mean bias in the North axis was reduced from 1.48 m to 0.32 m. The update frequency of the fusion system was increased from the 1 Hz of the GPS to 9 Hz. A prototype system was installed on a sprayer for vehicle positioning measurement.
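The predict/update cycle of such a sensor-fusion filter can be sketched with a one-dimensional Kalman filter: dead-reckon with the inertial increment, then blend in the absolute fix when one arrives. This is a generic textbook filter, not the paper's observation-based design, and the noise variances are assumed values.

```python
def kalman_step(x, p, u, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x, p : prior state estimate and its variance
    u    : odometry increment (e.g., integrated IMU velocity * dt)
    z    : absolute measurement (e.g., GPS position); None if unavailable
    q, r : process and measurement noise variances (assumed)
    """
    # Predict: dead-reckon with the inertial increment; uncertainty grows.
    x, p = x + u, p + q
    # Update: fuse the absolute fix when one is available.
    if z is not None:
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p  # uncertainty shrinks
    return x, p
```

Running the predict step at the IMU rate and the update step only when a GPS fix arrives is what raises the output rate above the 1 Hz of the GPS alone.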
Development of methods to estimate beryllium exposure. Final report
Rice, C.H.
1988-06-30
The project was designed to access data, provide preliminary exposure rankings, and delineate the process for detailing retrospective exposure assessments for beryllium among workers at processing facilities. A literature review was conducted, and walk-through surveys were conducted at two facilities still in operation. More than 8000 environmental records were entered into a computer file. Descriptive statistics were then generated and the process of rank ordering exposures across facilities was begun. In efforts to formulate crude indices of exposure, job titles of persons in the NIOSH mortality study were reviewed and categorized for any beryllium exposure, chemical form of beryllium exposure, and exposure to acid mists. Daily Weighted Average exposure estimates were reviewed by job title, across all facilities. The mean exposure at each facility was calculated. The strategy developed for retrospective exposure assessment is described. Tasks included determination of the usefulness of the Pennsylvania Workers' Compensation files; cataloging the numbers of samples available from company sources; investigating data holdings at Oak Ridge National Laboratory; and obtaining records from the Department of Energy Library.
Hilliges, M; Johansson, O
1999-01-01
The proper assessment of neuron numbers in the nervous system during physiological and pathological conditions, as well as following various treatments, has always been an important part of neuroscience. The present paper evaluates three methods for numerical estimates of nerves in epithelium: I) unbiased nerve fiber profile and nerve fiber fragment estimation methods, II) the traditional method of counting whole nerve fibers, and III) the nerve fiber estimation method. In addition, an unbiased nerve length estimation method was evaluated. Of these four methods, the nerve length per volume method was theoretically optimal, but more time-consuming than the others. The numbers obtained with the nerve fiber profile, nerve fragment, and nerve fiber estimation methods depend on the thickness of the epithelium and the sections, as well as on certain shape factors of the counted fiber. However, for these methods the actual counting can readily be performed in the microscope and is consequently quick and relatively inexpensive. The statistical analysis showed a very good correlation (R > 0.96) between the three numerical methods, meaning that basically any of them could be used. However, based on theoretical and practical considerations and the correlation statistics, it may be concluded that the nerve fiber profile or fragment estimation methods should be employed if differences in epithelial and section thickness and the nerve fibers' shape factors can be controlled. Such drawbacks are not inherent in the nerve length estimation method and, thus, it can be applied generally. PMID:10197065
Satellite attitude dynamics and estimation with the implicit midpoint method
NASA Astrophysics Data System (ADS)
Hellström, Christian; Mikkola, Seppo
2009-07-01
We describe the application of the implicit midpoint integrator to the problem of attitude dynamics for low-altitude satellites without the use of quaternions. Initially, we consider the satellite to rotate without external torques applied to it. We compare the numerical solution with the exact solution in terms of Jacobi's elliptic functions. Then, we include the gravity-gradient torque, where the implicit midpoint integrator proves to be a fast, simple and accurate method. Higher-order versions of the implicit midpoint scheme are compared to Gauss-Legendre Runge-Kutta methods in terms of accuracy and processing time. Finally, we investigate the performance of a parameter-adaptive Kalman filter based on the implicit midpoint integrator for the determination of the principal moments of inertia through observations.
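A minimal sketch of the implicit midpoint scheme itself, applied to a harmonic oscillator rather than attitude dynamics; the fixed-point solve, step size, and iteration count are illustrative choices. The scheme's appeal for rotational dynamics is visible even here: it preserves quadratic invariants (here, the oscillator's energy).

```python
import numpy as np

def implicit_midpoint_step(f, y, h, iters=50):
    """One implicit-midpoint step for y' = f(y): solve
    y_new = y + h * f((y + y_new) / 2) by fixed-point iteration."""
    y_new = y + h * f(y)                      # explicit Euler initial guess
    for _ in range(iters):
        y_new = y + h * f(0.5 * (y + y_new))  # contraction for small h
    return y_new

# Harmonic oscillator y = (q, p): dq/dt = p, dp/dt = -q.
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
for _ in range(1000):
    y = implicit_midpoint_step(f, y, 0.01)
# The quadratic invariant q**2 + p**2 stays essentially at its initial value 1.
```

For stiff or torque-driven problems a Newton solve would replace the fixed-point loop, but the structure of the step is the same.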
A TRMM Rainfall Estimation Method Applicable to Land Areas
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Oki, R.; Weinman, J. A.
1998-01-01
Utilizing multi-spectral, dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer measurements, we have developed in this study a method to retrieve average rain rate, R_fR, in a mesoscale grid box of 2deg x 3deg over land. The key parameter of this method is the fractional rain area, f_R, in that grid box, which is determined with the help of a threshold on the 85 GHz scattering depression deduced from the SSM/I data. In order to demonstrate the usefulness of this method, nine months of R_fR are retrieved from SSM/I data over three grid boxes in the Northeastern United States. These retrievals are then compared with the corresponding ground-truth average rain rate, R_g, deduced from 15-minute rain gauges. Based on nine months of rain rate retrievals over three grid boxes, we find that R_fR can explain about 64% of the variance contained in R_g. A similar evaluation of the grid-box-average rain rates R_GSCAT and R_SRL, given by the NASA/GSCAT and NOAA/SRL rain retrieval algorithms, is performed. This evaluation reveals that R_GSCAT and R_SRL can explain only about 42% of the variance contained in R_g. In our method, a threshold on the 85 GHz scattering depression is used primarily to determine the fractional rain area in a mesoscale grid box. Quantitative information pertaining to the 85 GHz scattering depression in the grid box is disregarded. In the NASA/GSCAT and NOAA/SRL methods, on the other hand, this quantitative information is included. Based on the performance of all three methods, we infer that the magnitude of the scattering depression is a poor indicator of rain rate. Furthermore, from maps based on the observations made by SSM/I on land and ocean we find that there is a significant redundancy in the information content of the SSM/I multi-spectral observations. This leads us to infer that observations of SSM/I at 19 and 37 GHz add only marginal information to that
Spectrophotometric method for the estimation of 6-aminopenicillanic acid.
Shaikh, K; Talati, P G; Gang, D M
1973-02-01
A simple, rapid, and sensitive method is described whereby 6-aminopenicillanic acid can be spectrophotometrically determined in the presence of penicillins and their degradation products without prior separation. d-(+)-Glucosamine is used as reagent. The effect of such parameters as pH, temperature, and time of heating on the formation of the chromophore is described. The recommended range is from 25 to 250 mug of 6-aminopenicillanic acid. PMID:4364173
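Spectrophotometric determinations of this kind typically rest on a linear calibration curve relating concentration to absorbance within the stated working range. The sketch below fits such a curve by ordinary least squares and inverts it for an unknown; the absorbance readings are invented, not taken from the paper.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical standards spanning the recommended 25-250 microgram range
# of 6-aminopenicillanic acid, with made-up absorbance readings.
conc = [25, 50, 100, 150, 200, 250]
absb = [0.10, 0.21, 0.40, 0.61, 0.80, 1.01]

m, b = fit_line(conc, absb)
unknown = (0.50 - b) / m   # micrograms in a sample reading 0.50
```

Inverting the fitted line is only valid inside the calibrated range; readings outside it would require dilution or a fresh standard series.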
Estimation of partial pressure during graphite conditioning by matrix method
NASA Astrophysics Data System (ADS)
Chaudhuri, P.; Prakash, A.; Reddy, D. C.
2008-05-01
Plasma Facing Components (PFC) of the SST-1 tokamak are designed to be UHV-compatible, as they are kept in the main vacuum vessel. Graphite is the most widely used plasma-facing material in present-day tokamaks; its high thermal shock resistance and the low atomic number of carbon are its most important properties for this application. However, graphite is porous and absorbs gases, which may be released during plasma operation. Graphite tiles are therefore baked at a high temperature of about 1000 °C in high vacuum (10^-5 Torr) for several hours before being installed in the tokamak, to remove the impurities (mainly water vapour and metal impurities) that may have been deposited during machining of the tiles. The measurements of the released gases (such as H2, H2O, CO, CO2, hydrocarbons, etc.) from the graphite tiles during baking are accomplished with the help of a Quadrupole Mass Analyzer (QMA). Since the output of this measurement is a mass spectrum and not the partial pressures of the residual gases, one needs a procedure to convert the spectrum into partial pressures. The conventional method of analysis is tedious and time consuming. We propose a new approach based on constructing a set of linear equations and solving them using matrix operations. This is a simple method compared to the conventional one and also eliminates its limitations. A Fortran program has been developed which identifies the gases likely to be present in the vacuum system and calculates their partial pressures from the data of the residual gas analyzers. The application of this method of calculating partial pressures from mass spectrum data is discussed in detail in this paper.
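The matrix formulation can be sketched as a least-squares solve of spectrum = A · p, where each column of A holds one candidate gas's cracking pattern over the monitored mass channels and p is the vector of partial pressures. The cracking patterns and peak heights below are invented for illustration (the paper's implementation is in Fortran; this sketch uses NumPy).

```python
import numpy as np

# Columns: assumed cracking patterns of H2, H2O, CO2 at mass channels
# m/z = 2, 18, 28, 44. Values are illustrative, not library data.
A = np.array([
    [1.00, 0.01, 0.00],   # m/z  2
    [0.00, 1.00, 0.00],   # m/z 18
    [0.00, 0.00, 0.11],   # m/z 28 (CO2 fragmenting to CO+)
    [0.00, 0.00, 1.00],   # m/z 44
])
spectrum = np.array([0.5, 2.0, 0.033, 0.30])  # measured peak heights

# Solve the (generally overdetermined) system in the least-squares sense.
partial_pressures, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
```

With more mass channels than gases the system is overdetermined, and the least-squares residual gives a quick check on whether the assumed gas list actually explains the spectrum.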
Data-Driven Method to Estimate Nonlinear Chemical Equivalence
Mayo, Michael; Collier, Zachary A.; Winton, Corey; Chappell, Mark A
2015-01-01
There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of “equivalency factors,” which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or “biphasic,” responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are “parallel,” which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach. PMID:26158701
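The nonlinear equivalence idea can be sketched with Hill-type sigmoid curves, an assumed functional form rather than the paper's exact equations: map a concentration of chemical 1 to its response, then invert chemical 2's curve at that response. The sketch also shows why the equivalency ratio is constant only for "parallel" curves.

```python
def hill(c, ec50, n):
    """Sigmoid (Hill) concentration-response curve; response in (0, 1)."""
    return c ** n / (ec50 ** n + c ** n)

def equivalent_concentration(c1, ec50_1, n1, ec50_2, n2):
    """Concentration of chemical 2 producing the same response as
    concentration c1 of chemical 1 (closed-form inverse of the Hill
    curve; valid for responses strictly between 0 and 1)."""
    r = hill(c1, ec50_1, n1)
    return ec50_2 * (r / (1 - r)) ** (1.0 / n2)
```

When n1 == n2 (parallel curves), c2/c1 reduces to the constant ec50_2/ec50_1, recovering the traditional linear equivalency factor; otherwise the ratio varies with concentration, which is the nonlinearity the paper emphasizes.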
Systematic variational method for statistical nonlinear state and parameter estimation
NASA Astrophysics Data System (ADS)
Ye, Jingxin; Rey, Daniel; Kadakia, Nirag; Eldridge, Michael; Morone, Uriel I.; Rozdeba, Paul; Abarbanel, Henry D. I.; Quinn, John C.
2015-11-01
In statistical data assimilation one evaluates the conditional expected values, conditioned on measurements, of interesting quantities on the path of a model through observation and prediction windows. This often requires working with very high dimensional integrals in the discrete time descriptions of the observations and model dynamics, which become functional integrals in the continuous-time limit. Two familiar methods for performing these integrals include (1) Monte Carlo calculations and (2) variational approximations using the method of Laplace plus perturbative corrections to the dominant contributions. We attend here to aspects of the Laplace approximation and develop an annealing method for locating the variational path satisfying the Euler-Lagrange equations that comprises the major contribution to the integrals. This begins with the identification of the minimum action path starting with a situation where the model dynamics is totally unresolved in state space, and the consistent minimum of the variational problem is known. We then proceed to slowly increase the model resolution, seeking to remain in the basin of the minimum action path, until a path that gives the dominant contribution to the integral is identified. After a discussion of some general issues, we give examples of the assimilation process for some simple, instructive models from the geophysical literature. Then we explore a slightly richer model of the same type with two distinct time scales. This is followed by a model characterizing the biophysics of individual neurons.
Wakefulness estimation only using ballistocardiogram: nonintrusive method for sleep monitoring.
Chung, Gih Sung; Lee, Jeong Su; Hwang, Su Hwan; Lim, Young Kyu; Jeong, Do-Un; Park, Kwang Suk
2010-01-01
To evaluate sleep quality or the autonomic nervous system, many cumbersome electrodes have to be attached to the subject's body. This can disturb comfortable sleep and, because such experiments are very expensive, continuous sleep monitoring is difficult. Since heart rate reflects the autonomic nervous system, it is highly synchronized with sympathetic activation during the transition from non-REM sleep to wakefulness; when the transition occurs, the heart rate increases abruptly, clearly distinguished from other changes. Using this physiology, we tried to classify wakefulness during whole-night sleep. Our final goal is to adapt this method to continuous monitoring in daily life, for which the electrocardiogram (ECG) is not suitable: subjects would have to attach the electrodes themselves at home to obtain an ECG. From that point of view, we used the ballistocardiogram (BCG), a representative method for obtaining heart beats nonintrusively. For ten normal subjects, wakefulness classification using heart rate dynamics was performed. Nine subjects showed substantial agreement with the visually scored method, polysomnography (PSG), and only one subject showed moderate agreement, in terms of Cohen's kappa value. PMID:21096160
Estimation of solar radiation by using modified Heliosat-II method and COMS-MI imagery
NASA Astrophysics Data System (ADS)
Choi, Wonseok; Song, Ahram; Kim, Yongil
2015-10-01
Estimation of solar radiation is important basic research that can be used in solar energy resource estimation, prediction of crop yields, resource-related decision-making, and so on. Accordingly, diverse studies on estimating solar radiation are currently being performed in Korea. The Heliosat-II method is one of the most widely used models for estimating solar irradiance, and its accuracy has been demonstrated by many studies. However, Heliosat-II cannot be applied directly to estimate solar irradiance around Korea, because it is optimized for estimating the solar radiation of Europe: it relies on Meteosat meteorological satellite imagery and statistical data taken around Europe. Because these data do not cover Korea, Heliosat-II must be modified before it can be used to estimate the solar radiation of Korea. The purpose of this study is therefore to modify Heliosat-II for irradiance estimation using imagery from COMS-MI, the weather satellite of Korea. To this end, the error in the albedo was removed from the ground albedo image, which was produced using the apparent albedo and the atmospheric reflectance, and the method of producing the background albedo map used in Heliosat-II was modified to obtain a more precise one. Through the study, ground albedo correction was successfully performed and background albedo maps were successfully derived.
Wu, Zhihong; Lu, Ke; Zhu, Yuan
2015-01-01
The torque output accuracy of the IPMSM in electric vehicles using a state of the art MTPA strategy highly depends on the accuracy of machine parameters, thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on flux estimator with a modified low pass filter is presented. Moreover, by taking into account the non-ideal characteristic of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment. PMID:26114557
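The torque relation underlying such flux-estimator-based methods can be sketched in the stationary reference frame using the standard PMSM electromagnetic torque formula, T = (3/2) · p · (ψ_α·i_β − ψ_β·i_α); the variable names and numbers below are illustrative, and the paper's contributions (the modified low-pass filter and inverter compensation) are not reproduced here.

```python
def pmsm_torque(pole_pairs, psi_alpha, psi_beta, i_alpha, i_beta):
    """Electromagnetic torque of a PMSM from estimated stator flux
    linkages (psi, in Wb) and measured currents (in A), both expressed
    in the stationary alpha-beta frame. Returns torque in N*m."""
    return 1.5 * pole_pairs * (psi_alpha * i_beta - psi_beta * i_alpha)

# Illustrative operating point: 4 pole pairs, flux aligned with alpha.
torque = pmsm_torque(4, 0.1, 0.0, 0.0, 10.0)
```

Because the flux linkages come from integrating back-EMF, any filter distortion or inverter nonideality propagates directly into this product, which is why the paper's filter and inverter corrections matter for accuracy.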
EXPERIMENTAL METHODS TO ESTIMATE ACCUMULATED SOLIDS IN NUCLEAR WASTE TANKS
Duignan, M.; Steeper, T.; Steimke, J.
2012-12-10
devices and techniques were very effective to estimate the movement, location, and concentrations of the solids representing plutonium and are expected to perform well at a larger scale. The operation of the techniques and their measurement accuracies will be discussed as well as the overall results of the accumulated solids test.
Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.
2003-01-01
Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.
Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas
2003-01-01
This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.
A method to estimate weight and dimensions of large and small gas turbine engines
NASA Technical Reports Server (NTRS)
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.
System and Method for Outlier Detection via Estimating Clusters
NASA Technical Reports Server (NTRS)
Iverson, David J. (Inventor)
2016-01-01
An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out of family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined through analyzing a degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy to interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
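A minimal sketch of the cluster-based scoring idea: learn cluster centers from nominal training data, then score a new sample by its distance to the nearest center, so that large scores flag out-of-family behavior. The tiny k-means, its initialization, and the data are illustrative assumptions, not the patented system.

```python
import numpy as np

def fit_clusters(train, k=2, iters=20):
    """Tiny k-means over nominal training data; the centers model
    'normal' system behavior. Initial centers are spread samples."""
    centers = train[:: max(1, len(train) // k)][:k].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute means.
        dists = np.linalg.norm(train[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([train[labels == j].mean(axis=0) for j in range(k)])
    return centers

def outlier_score(x, centers):
    """Deviation score: distance from a sample to its nearest cluster."""
    return float(np.min(np.linalg.norm(centers - x, axis=1)))

# Nominal data forming two operating modes (invented 2-D sensor readings).
train = np.array([[0.0, 0.0], [0.1, 0.1], [-0.1, 0.0],
                  [10.0, 10.0], [10.1, 9.9], [9.9, 10.0]])
centers = fit_clusters(train)
```

A production system would also report per-parameter contributions to the score, as the abstract describes; this sketch only produces the aggregate deviation.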
Statistical classification methods for estimating ancestry using morphoscopic traits.
Hefner, Joseph T; Ousley, Stephen D
2014-07-01
Ancestry assessments using cranial morphoscopic traits currently rely on subjective trait lists and observer experience rather than empirical support. The trait list approach, which is untested, unverified, and in many respects unrefined, is relied upon because of tradition and subjective experience. Our objective was to examine the utility of frequently cited morphoscopic traits and to explore eleven appropriate and novel methods for classifying an unknown cranium into one of several reference groups. Based on these results, artificial neural networks (aNNs), OSSA, support vector machines, and random forest models showed mean classification accuracies of at least 85%. The aNNs had the highest overall classification rate (87.8%), and random forests show the smallest difference between the highest (90.4%) and lowest (76.5%) classification accuracies. The results of this research demonstrate that morphoscopic traits can be successfully used to assess ancestry without relying only on the experience of the observer. PMID:24646108
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, determining a position of the fourth pixel in a fourth image based upon the third motion vector.
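A constant-acceleration (quadratic) fit is one simple non-linear motion model of the kind described; the sketch below extrapolates a pixel's position in a fourth image from three known positions, and is an illustrative model rather than the patent's exact formulation.

```python
def extrapolate_position(p1, p2, p3):
    """Predict a pixel's position in frame 4 from its positions in
    frames 1-3, fitting a quadratic (constant-acceleration) model per
    coordinate: x4 = x3 + velocity + acceleration = 3*x3 - 3*x2 + x1."""
    return tuple(c3 + (c3 - c2) + ((c3 - c2) - (c2 - c1))
                 for c1, c2, c3 in zip(p1, p2, p3))

# A pixel accelerating along both axes: (0,0) -> (1,2) -> (4,8).
p4 = extrapolate_position((0, 0), (1, 2), (4, 8))  # -> (9, 18)
```

The same three motion vectors support interpolation as well: evaluating the fitted quadratic at an intermediate time gives sub-frame positions.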
Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods
NASA Astrophysics Data System (ADS)
Morimoto, Emi; Namerikawa, Susumu
The most notable trend in bidding and pricing behavior in recent years is the increasing number of bids just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; therefore, in Japanese public works bids, it is the difference between the price criteria for low-price bidding investigations and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the general methods for public works; thus, there are two types of standard estimation methods in Japan. In this study, we performed a statistical analysis of the bid information for civil engineering works of the Ministry of Land, Infrastructure, and Transportation in 2008. It raises several issues showing that bidding and pricing behavior is related to the estimation method used in Japanese public works bids. The two standard estimation methods produce different results for the number of bidders (the bid/no-bid decision) and the distribution of bid prices (the markup decision). The comparison of bid-price distributions showed that, for large-sized public works, the percentage of bids concentrated at the criteria for low-price bidding investigations tends to be higher under the unit-price-type estimation method than under the accumulated estimation method. On the other hand, the number of bidders for public works estimated by unit price tends to increase significantly; unit-price estimation is likely to have been one of the factors in construction companies' decisions to participate in bidding.
Olascoaga, Beñat; Mac Arthur, Alasdair; Atherton, Jon; Porcar-Castell, Albert
2016-03-01
Accurate temporal and spatial measurements of leaf optical traits (i.e., absorption, reflectance and transmittance) are paramount to photosynthetic studies. These optical traits are also needed to couple radiative transfer and physiological models to facilitate the interpretation of optical data. However, estimating leaf optical traits in leaves with complex morphologies remains a challenge. Leaf optical traits can be measured using integrating spheres, either by placing the leaf sample in one of the measuring ports (External Method) or by placing the sample inside the sphere (Internal Method). However, in leaves with complex morphology (e.g., needles), the External Method presents limitations associated with gaps between the leaves, and the Internal Method presents uncertainties related to the estimation of total leaf area. We introduce a modified version of the Internal Method, which bypasses the effect of gaps and the need to estimate total leaf area, by painting the leaves black and measuring them before and after painting. We assess and compare the new method with the External Method using a broadleaf and two conifer species. Both methods yielded similar leaf absorption estimates for the broadleaf, but absorption estimates were higher with the External Method for the conifer species. Factors explaining the differences between methods, their trade-offs and their advantages and limitations are also discussed. We suggest that the new method can be used to estimate leaf absorption in any type of leaf independently of its morphology, and be used to study further the impact of gap fraction in the External Method. PMID:26843207
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2015-06-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
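The component-wise idea in this record (use the known time series of all components to estimate the parameters of one component at a time) can be sketched minimally. The sketch below uses direct least squares on the Rössler y-equation rather than the paper's evolutionary algorithm, purely to illustrate the single-component step; the integrator settings and true parameter values are illustrative.

```python
import numpy as np

def rossler_rhs(s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def simulate(n=5000, dt=0.01, s0=(1.0, 1.0, 1.0), **p):
    # fixed-step RK4 integration of the Rossler system
    traj = np.empty((n, 3))
    s = np.array(s0, float)
    for i in range(n):
        traj[i] = s
        k1 = rossler_rhs(s, **p)
        k2 = rossler_rhs(s + 0.5 * dt * k1, **p)
        k3 = rossler_rhs(s + 0.5 * dt * k2, **p)
        k4 = rossler_rhs(s + dt * k3, **p)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

def estimate_a(traj, dt=0.01):
    # single-component estimation: dy/dt = x + a*y, so regress (dy/dt - x) on y
    x, y = traj[:, 0], traj[:, 1]
    dydt = np.gradient(y, dt)
    return np.sum((dydt - x) * y) / np.sum(y * y)

traj = simulate(a=0.2)
a_hat = estimate_a(traj)
```

Each parameter can be recovered stage by stage this way, one component equation at a time, which is what makes the search in each stage low-dimensional.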
Multivariate drought frequency estimation using copula method in Southwest China
NASA Astrophysics Data System (ADS)
Hao, Cui; Zhang, Jiahua; Yao, Fengmei
2015-12-01
Drought over Southwest China occurs frequently and has an obvious seasonal characteristic. Proper management of regional droughts requires knowledge of the expected frequency or probability of specific climate conditions. This study utilized k-means classification and copulas to characterize regional drought occurrence probability and return period based on trivariate drought properties, i.e., drought duration, severity, and peak. A drought event was defined as occurring when the 3-month Standardized Precipitation Evapotranspiration Index (SPEI) was less than -0.99, according to the regional climate characteristics. The region was then classified into six clusters by the k-means method based on annual and seasonal precipitation and temperature, and marginal probability distributions were established for each drought property in each sub-region. Several copula types were tested for best fit, and the Student t copula was recognized as the best one to integrate drought duration, severity, and peak. The results indicated that a proper classification is important for regional drought frequency analysis, and that copulas are useful tools for exploring the associations of correlated drought variables and analyzing drought frequency. The Student t copula was a robust and proper function for drought joint probability and return period analysis, which is important for analyzing and predicting regional drought risks.
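The drought-event definition in this record (a run of 3-month SPEI values below -0.99, summarized by duration, severity, and peak) is standard run theory and can be sketched directly. The SPEI values below are illustrative, not data from the study.

```python
def drought_events(spei, threshold=-0.99):
    """Extract (duration, severity, peak) for each run of SPEI below threshold.

    duration = number of consecutive values below threshold,
    severity = -sum of SPEI over the run, peak = -min SPEI in the run.
    """
    events, run = [], []
    for v in spei:
        if v < threshold:
            run.append(v)
        elif run:
            events.append((len(run), -sum(run), -min(run)))
            run = []
    if run:  # series ends inside a drought
        events.append((len(run), -sum(run), -min(run)))
    return events

spei = [0.3, -1.2, -1.5, -0.4, -1.1, 0.8, -2.0, -1.0, -1.3, 0.1]
evts = drought_events(spei)
# three events; the first has duration 2, severity 2.7, peak 1.5
```

The per-event triples are exactly the trivariate inputs whose marginals and copula the study then fits.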
Brassey, Charlotte A.; Maidment, Susannah C. R.; Barrett, Paul M.
2015-01-01
Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082–2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses. PMID:25740841
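The volumetric convex-hulling step described above reduces to: take the landmark cloud of the mounted skeleton, compute its minimum convex hull volume, and convert volume to mass. A minimal sketch follows, assuming SciPy is available; the density and the soft-tissue expansion factor are illustrative placeholders, not the calibrated values from the study.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_mass(points, density=1000.0, expansion=0.21):
    """Volumetric mass sketch: minimum convex hull of a 3-D point cloud,
    inflated by an assumed soft-tissue expansion factor and multiplied
    by an assumed body density (kg/m^3)."""
    hull = ConvexHull(points)
    return hull.volume * (1.0 + expansion) * density

# unit cube of "skeletal landmarks": hull volume is exactly 1 m^3
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
mass = convex_hull_mass(cube)
```

In practice the hull is computed segment by segment over a digitized skeleton, and the volume-to-mass conversion is calibrated against extant taxa rather than fixed a priori.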
Variable methods to estimate the ionospheric horizontal gradient
NASA Astrophysics Data System (ADS)
Nagarajoo, Karthigesu
2016-06-01
DGPS, or differential Global Positioning System, is a system in which the range error at a reference station (after eliminating errors due to its clock, hardware delay, and multipath) is removed from the range measurement of a user viewing the same satellite, on the presumption that the signal paths to the reference station and the user experience common errors due to the ionosphere, clock errors, etc. Under this assumption, the error due to ionospheric refraction is taken to be the same for two closely spaced paths (such as the 10 km baseline between reference station and user used in the simulations throughout this paper, unless otherwise stated), and the presence of an ionospheric horizontal gradient is ignored. If a user's path is exposed to a drastically large ionospheric gradient, however, the large difference in ionospheric delays between the reference station and the user can result in significant position error for the user. Several examples of extremely large ionospheric gradients capable of causing such errors have been observed. The ionospheric horizontal gradient can instead be obtained from the gradient of the total electron content (TEC) observed from a number of received GPS satellites at one or more reference stations, or from empirical models updated with real-time data. To investigate the former, in this work the dual-frequency method was used to obtain both south-north and east-west gradients from four receiving stations separated in those directions. In addition, observation data from Navy Ionospheric Monitoring System (NIMS) receivers and the TEC contour map from Rutherford Appleton Laboratory (RAL), UK, were used to determine the magnitude and direction of the gradient.
Magnetic Resonance Elastography as a Method to Estimate Myocardial Contractility
Kolipaka, Arunark; Aggarwal, Shivani R.; McGee, Kiaran P.; Anavekar, Nandan; Manduca, Armando; Ehman, Richard L.; Araoz, Philip A.
2012-01-01
Purpose To determine whether increasing epinephrine infusion in an in-vivo pig model is associated with an increase in end-systolic magnetic resonance elastography (MRE)-derived effective stiffness. Methods Finite element modeling (FEM) was performed to determine the range of myocardial wall thicknesses that could be used for analysis. MRE was then performed on 5 pigs to measure end-systolic effective stiffness during epinephrine infusion. Epinephrine was continuously infused intravenously in each pig to increase the heart rate in increments of 20%, and end-systolic effective stiffness was measured using MRE at each increment. In each pig, Student's t-test was used to compare effective end-systolic stiffness at baseline and at initial infusion of epinephrine. Least-squares linear regression was performed to determine the correlation between normalized end-systolic effective stiffness and the increase in heart rate with epinephrine infusion. Results FEM showed that phase gradient inversion could be performed on wall thicknesses of approximately 1.5 cm or greater. In pigs, effective end-systolic stiffness increased significantly from baseline to the first infusion in all pigs (p=0.047). A linear correlation was found between normalized effective end-systolic stiffness and percent increase in heart rate with epinephrine infusion, with R2 ranging from 0.86 to 0.99 in 4 pigs; in the remaining pig the R2 value was 0.1. A linear correlation with R2=0.58 was found between normalized effective end-systolic stiffness and percent increase in heart rate when pooling data points from all pigs. Conclusion Noninvasive MRE-derived end-systolic effective myocardial stiffness may be a surrogate for myocardial contractility. PMID:22334349
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
Modified slanted-edge method and multidirectional modulation transfer function estimation.
Masaoka, Kenichiro; Yamashita, Takayuki; Nishida, Yukihiro; Sugawara, Masayuki
2014-03-10
The slanted-edge method specified in ISO Standard 12233, which measures the modulation transfer function (MTF) by analyzing an image of a slightly slanted knife-edge target, is not robust against noise because it takes the derivative of each data line in the edge-angle estimation. We propose here a modified method that estimates the edge angle by fitting a two-dimensional function to the image data. The method has a higher accuracy, precision, and robustness against noise than the ISO 12233 method and is applicable to any arbitrary pixel array, enabling a multidirectional MTF estimate in a single measurement of a starburst image. PMID:24663939
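The record above concerns the edge-angle estimation step of the slanted-edge method; once the edge is projected and binned into a one-dimensional edge-spread function (ESF), the remaining pipeline is standard and can be sketched. The sketch below shows only that ESF-to-MTF core on a synthetic blurred edge, not the proposed two-dimensional fitting.

```python
import numpy as np

def mtf_from_esf(esf):
    # differentiate the edge-spread function to get the line-spread function
    lsf = np.gradient(esf)
    lsf = lsf * np.hanning(len(lsf))   # window against truncation leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                # normalize so MTF(0) = 1

# synthetic blurred edge standing in for the projected, binned edge profile
x = np.linspace(-8.0, 8.0, 256)
esf = 0.5 * (1.0 + np.tanh(x))
mtf = mtf_from_esf(esf)
```

Because the ESF is built from a derivative of noisy data line by line in the ISO 12233 variant, errors in the estimated edge angle propagate directly into this binning step, which is what the modified method targets.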
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia
Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len
2016-01-01
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements, since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method
Mehta, N C; Scharlemann, E T; Stevens, C G
2001-04-02
Application of a novel transform operator, the sticklet transform, to the quantitative estimation of trace chemicals in industrial effluent plumes is reported. The sticklet transform is a superset of the well-known derivative operator and the Haar wavelet, and is characterized by independently adjustable lobe width and separation. Computer simulations demonstrate that the transform can make accurate and robust concentration estimates of multiple chemical species in industrial effluent plumes in the presence of strong clutter background, interferent chemicals, and random noise. In this paper the authors address the application of the sticklet transform to estimating chemical concentrations in effluent plumes in the presence of atmospheric transmission effects. They show that the transform retains the ability to yield accurate estimates using on-plume/off-plume measurements that represent atmospheric differentials up to 10% of the full atmospheric attenuation.
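The defining property quoted above (a derivative/Haar-like operator with independently adjustable lobe width and separation) can be sketched as a zero-mean two-lobe kernel. This is a reconstruction from the description, not the published operator's exact definition; the spectrum and kernel parameters are illustrative.

```python
import numpy as np

def sticklet(width, gap):
    """Sticklet-style kernel: a +1 lobe and a -1 lobe of adjustable width,
    separated by an adjustable gap. width=1, gap=0 reduces to the two-point
    derivative; equal touching lobes give a Haar-like shape."""
    k = np.concatenate([np.ones(width), np.zeros(gap), -np.ones(width)])
    return k / width            # normalize lobe area to +/-1

k = sticklet(width=4, gap=6)
baseline = np.full(200, 7.0)                     # strong constant clutter
x = np.arange(200.0)
line = np.exp(-0.5 * ((x - 100.0) / 3.0) ** 2)   # a narrow spectral feature

flat_resp = np.convolve(baseline, k, mode="valid")
feat_resp = np.convolve(baseline + line, k, mode="valid")
# the zero-mean kernel nulls the constant background but keeps the feature
```

The zero-mean property is what gives clutter rejection, while the lobe width and separation can be tuned to the width of the spectral features of interest.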
Parameter estimation of analog circuits based on the fractional wavelet method
NASA Astrophysics Data System (ADS)
Yong, Deng; He, Zhang
2015-03-01
Aiming at the problem of parameter estimation in analog circuits, a new approach is proposed. The approach is based on the fractional wavelet to derive the Volterra series model of the circuit under test (CUT). By the gradient search algorithm used in the Volterra model, the unknown parameters in the CUT are estimated and the Volterra model is identified. The simulations show that the parameter estimation results of the proposed method in the paper are better than those of other parameter estimation methods. Project supported by the Key Research Project of Sichuan Provincial Department of Education, China (No. 13ZA0186).
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
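Of the 16 estimators the review covers, the default DerSimonian-Laird moment estimator is the simplest to state: compute Cochran's Q under a fixed-effect fit, then convert its excess over its degrees of freedom into a variance. A minimal sketch with illustrative toy data:

```python
import numpy as np

def tau2_dl(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance.

    y: study effect estimates; v: their within-study variances."""
    y = np.asarray(y, float)
    w = 1.0 / np.asarray(v, float)                # inverse-variance weights
    mu_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)       # truncate at zero

# toy meta-analysis: six study effects and their within-study variances
y = [0.10, 0.30, 0.35, 0.65, 0.45, 0.15]
v = [0.03, 0.03, 0.05, 0.01, 0.05, 0.02]
tau2 = tau2_dl(y, v)
```

The truncation at zero is one source of the estimator's known problems; the Paule-Mandel and REML alternatives recommended in the review solve iteratively rather than in closed form.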
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
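The IRDM core described above is a short procedure: band-filter the impulse response, track its decay envelope, fit a line to the log envelope, and convert the decay rate to a loss factor. A minimal single-band sketch on a synthetic noise-free response follows; the per-cycle peak tracking stands in for a proper envelope (e.g., Hilbert), and the relation slope = -pi*f*eta (equivalently eta = DR/(27.3*f) with DR in dB/s) is the standard conversion.

```python
import numpy as np

def loss_factor_irdm(sig, fs, f):
    """Minimal IRDM sketch: locate per-cycle peaks of the decaying band
    response, fit a line to their logarithm, and convert the decay slope
    to a loss factor via slope = -pi * f * eta."""
    s = np.abs(sig)
    # indices of strict local maxima of |sig| (one or two per cycle)
    peaks = np.flatnonzero((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])) + 1
    t = peaks / fs
    slope = np.polyfit(t, np.log(s[peaks]), 1)[0]
    return -slope / (np.pi * f)

fs, f, eta = 50_000, 500.0, 0.02     # sample rate, band centre, true eta
t = np.arange(int(0.2 * fs)) / fs
sig = np.exp(-np.pi * f * eta * t) * np.sin(2.0 * np.pi * f * t)
eta_hat = loss_factor_irdm(sig, fs, f)
```

The paper's point is what happens when this fit is automated over a limited measurement ensemble on a real structure, where energy returning from connected subsystems makes the decay non-exponential.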
A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.
2006-06-01
Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models
Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen
2012-01-01
Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent moderated structural equation method, (d) a fully Bayesian approach, and (e) marginal maximum likelihood estimation. Of the 5 estimation methods, it was found that overall the methods based on maximum likelihood estimation and the Bayesian approach performed best in terms of bias, root-mean-square error, standard error ratios, power, and Type I error control, although key differences were observed. Similarities as well as disparities among methods are highlighted and general recommendations articulated. As a point of comparison, all 5 approaches were fit to educational reading data using a reparameterized version of the latent quadratic model. PMID:22429193
Using Resampling To Estimate the Precision of an Empirical Standard-Setting Method.
ERIC Educational Resources Information Center
Muijtjens, Arno M. M.; Kramer, Anneke W. M.; Kaufman, David M.; Van der Vleuten, Cees P. M.
2003-01-01
Developed a method to estimate the cutscore precisions for empirical standard-setting methods by using resampling. Illustrated the method with two actual datasets consisting of 86 Dutch medical residents and 155 Canadian medical students taking objective structured clinical examinations. Results show the applicability of the method. (SLD)
Novel and simple non-parametric methods of estimating the joint and marginal densities
NASA Astrophysics Data System (ADS)
Alghalith, Moawia
2016-07-01
We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both the joint and marginal density estimation. In doing so, we do not assume any form of the marginal distribution or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problems. We compare our method to the kernel density method.
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2008-01-01
This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
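The nonparametric bootstrap half of the comparison above is mechanically simple: resample the examinee sample with replacement, recompute the statistic, and take the standard deviation of the replicates. The sketch below uses the sample mean of simulated scores as a stand-in for the equipercentile equating function, purely to show the resampling loop; scores and sizes are illustrative.

```python
import numpy as np

def bootstrap_se(sample, statistic, n_boot=2000, seed=0):
    """Nonparametric bootstrap standard error: resample with replacement,
    recompute the statistic, take the SD of the replicates."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    reps = [statistic(rng.choice(sample, size=sample.size, replace=True))
            for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))

rng = np.random.default_rng(42)
scores = rng.normal(50.0, 10.0, size=300)   # one simulated test form (n=300)
se_mean = bootstrap_se(scores, np.mean)     # ~ sigma/sqrt(n) ~ 0.58 here
```

The parametric variant differs only in the resampling step: replicates are drawn from a fitted model (e.g., a fitted score distribution) rather than from the observed sample itself.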
Feasibility study of a novel method for real-time aerodynamic coefficient estimation
NASA Astrophysics Data System (ADS)
Gurbacki, Phillip M.
In this work, a feasibility study of a novel technique for the real-time identification of uncertain nonlinear aircraft aerodynamic coefficients has been conducted. The major objective is to investigate the feasibility of a system for parameter identification in a real-time flight environment. This system should be able to calculate aerodynamic coefficients and derivative information using typical pilot inputs while ensuring robust, stable, and rapid convergence. The parameter estimator investigated is based on the nonlinear sliding mode control schema; one of the main advantages of the sliding mode estimator is its ability to guarantee stable and robust convergence. Stable convergence is ensured by choosing a sliding surface and function that satisfy the Lyapunov stability criteria. After a proper sliding surface has been chosen, the nonlinear equations of motion for an F-16 aircraft are substituted into the sliding surface, yielding an estimator capable of identifying a single aircraft parameter. Multiple sliding surfaces are then developed, one for each flight parameter to be identified. Sliding surfaces and parameter estimators have been developed and simulated for the pitching moment, lift force, and drag force coefficients of the F-16 aircraft. Comparing the estimated coefficients with the reference coefficients shows rapid and stable convergence for a variety of pilot inputs. Starting with simple doublet and sine wave commands, and continuing with more complicated continuous pilot inputs, the estimated aerodynamic coefficients have been shown to match the actual coefficients with a high degree of accuracy. This estimator is also shown to be superior to model reference or adaptive estimators: it can handle positive and negative estimated parameters and control inputs while guaranteeing Lyapunov stability during convergence. Accurately estimating these aerodynamic parameters in real time during a flight is essential
NASA Astrophysics Data System (ADS)
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method
NASA Astrophysics Data System (ADS)
Ciszkiewicz, A.; Knapczyk, J.
2015-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.
Parameters Estimation for the Spherical Model of the Human Knee Joint Using Vector Method
NASA Astrophysics Data System (ADS)
Ciszkiewicz, A.; Knapczyk, J.
2014-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.
NASA Technical Reports Server (NTRS)
Meier, M. J.; Evans, W. E.
1975-01-01
Snow-covered areas on LANDSAT (ERTS) images of the Santiam River basin, Oregon, and other basins in Washington were measured using several operators and methods. Seven methods were used: (1) Snowline tracing followed by measurement with planimeter, (2) mean snowline altitudes determined from many locations, (3) estimates in 2.5 x 2.5 km boxes of snow-covered area with reference to snow-free images, (4) single radiance-threshold level for entire basin, (5) radiance-threshold setting locally edited by reference to altitude contours and other images, (6) two-band color-sensitive extraction locally edited as in (5), and (7) digital (spectral) pattern recognition techniques. The seven methods are compared in regard to speed of measurement, precision, the ability to recognize snow in deep shadow or in trees, relative cost, and whether useful supplemental data are produced.
ERIC Educational Resources Information Center
Su, Allan Yen-Lun
2007-01-01
This study explores the impact of individual ability and favorable team member scores on student preference of team-based learning and grading methods, and examines the moderating effects of student perception of course importance on student preference of team-based learning and grading methods. The author also investigates the relationship…
ERIC Educational Resources Information Center
Haddock, Maryann
1976-01-01
The experiment studied the differential effectiveness of two methods of blending instruction on the ability of prereaders to decode synthetic words. Findings indicated the superiority of auditory-visual training over auditory, with both methods significantly superior to practice on sound-letter association (the control group task). (RC)
Parameters estimation using the first passage times method in a jump-diffusion model
NASA Astrophysics Data System (ADS)
Khaldi, K.; Meddahi, S.
2016-06-01
This paper makes two contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (the GPT method), for estimating the parameters of a stochastic jump-diffusion process; (2) it compares, on a time series model of gold share prices, the empirical estimation and forecast results obtained with the GPT method against those obtained with the method of moments and the FPT method applied to the Merton jump-diffusion (MJD) model.
A comparison of de-noising methods for differential phase shift and associated rainfall estimation
NASA Astrophysics Data System (ADS)
Hu, Zhiqun; Liu, Liping; Wu, Linlin; Wei, Qing
2015-04-01
Measured differential phase shift ΦDP is known to be a noisy, unstable polarimetric radar variable, such that the quality of ΦDP data has a direct impact on specific differential phase shift KDP estimation and, subsequently, on KDP-based rainfall estimation. Over the past decades, many ΦDP de-noising methods have been developed; however, the de-noising effects of these methods and their impact on KDP-based rainfall estimation lack comprehensive comparative analysis. In this study, simulated noisy ΦDP data were generated and de-noised by using several methods such as finite-impulse response (FIR), Kalman, wavelet, traditional mean, and median filters. The biases were compared between KDP from simulated and observed ΦDP radial profiles after de-noising by these methods. The results suggest that the complicated FIR, Kalman, and wavelet methods have a better de-noising effect than the traditional methods. After ΦDP was de-noised, the accuracy of the KDP-based rainfall estimation increased significantly based on the analysis of three actual rainfall events. The improvement in estimation was more obvious when KDP was estimated with ΦDP de-noised by Kalman, FIR, and wavelet methods when the average rainfall was heavier than 5 mm h−1. However, the improved estimation was not significant when the precipitation intensity further increased to a rainfall rate beyond 10 mm h−1. The performance of wavelet analysis was found to be the most stable of these filters.
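KDP is conventionally obtained as half the range derivative of the differential phase shift (ΦDP), so any de-noising of ΦDP propagates directly into the KDP estimate. A minimal sketch of this pipeline using a traditional median filter, one of the simpler methods compared above; the synthetic profile, gate spacing, and window length are illustrative assumptions, not values from the study:

```python
import numpy as np

def median_filter(x, window=11):
    """Traditional sliding-window median de-noising (odd window length)."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

def kdp_from_phidp(phidp_deg, gate_km=0.25):
    """KDP (deg/km) is half the range derivative of Phi_DP (deg)."""
    return np.gradient(phidp_deg, gate_km) / 2.0

# Synthetic radial profile: linear Phi_DP ramp (true KDP = 2 deg/km) plus noise
rng = np.random.default_rng(0)
r = np.arange(200) * 0.25                               # range gates, km
phidp_noisy = 4.0 * r + rng.normal(0.0, 2.0, r.size)    # deg

kdp_raw = kdp_from_phidp(phidp_noisy)                   # noisy KDP
kdp_filt = kdp_from_phidp(median_filter(phidp_noisy))   # de-noise Phi_DP first
```

Swapping `median_filter` for an FIR, Kalman, or wavelet de-noiser changes only the first stage of this pipeline, which is what makes the comparison in the study possible.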
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Weinman, J. A.; Dalu, G.
1999-01-01
cases is on the average about 15 %. Taking advantage of this ability of our retrieval method, one could derive the latent heat input into the atmosphere over the 760 km wide swath of the TMI radiometer in the tropics.
Methods for estimating monthly streamflow characteristics at ungaged sites in western Montana
Parrett, Charles; Cartier, Kenn D.
1989-01-01
Three methods were developed for estimating monthly streamflow characteristics in western Montana. The first method, based on multiple-regression equations, relates monthly streamflow characteristics to various basin and climatic variables. Standard errors range from 43 to 107%. The equations are generally not applicable to streams that receive or lose water as a result of geology or that have appreciable upstream storage or diversions. The second method, also based on regression equations, relates monthly streamflow characteristics to channel width. Standard errors range from 41 to 111%. The equations are generally not applicable to streams with exposed bedrock, with braided or sand channels, or with recent alterations. The third method requires 12 once-monthly streamflow measurements at an ungaged site. These measurements are then correlated with concurrent flows at a nearby gaged site, and the resulting relation is used to estimate the required monthly streamflow characteristic at the ungaged site. Standard errors range from 19 to 92%. Although generally substantially more reliable than the first or second method, this method may be unreliable if the measurement site and the gage site are not hydrologically similar. A procedure for weighting individual estimates, based on the variance and degree of independence of the individual estimating methods, was also developed. Standard errors range from 15 to 43% when all three methods are used. The weighted-average estimates from all three methods are generally substantially more reliable than any of the individual estimates. (USGS)
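The weighting procedure described in the final step can be sketched as a standard inverse-variance combination. The published procedure also accounts for the degree of independence among the methods, which this sketch omits, and the flow estimates and standard errors below are hypothetical:

```python
def weighted_estimate(estimates, std_errors):
    """Inverse-variance weighted combination of independent estimates.
    (The USGS procedure also weights by degree of independence among
    methods; this sketch assumes full independence.)"""
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_se = (1.0 / total) ** 0.5   # standard error of the combination
    return combined, combined_se

# Hypothetical mean-monthly-flow estimates (cfs) from the three methods,
# with their standard errors expressed in flow units
est, se = weighted_estimate([120.0, 150.0, 100.0], [72.0, 105.0, 30.0])
```

As expected, the combined estimate leans toward the most reliable (measurement-based) method, and its standard error is smaller than that of any single method.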
NASA Astrophysics Data System (ADS)
Yong, Tao; Yong, Wei; Jin, Guofan; Gao, Xuejun
2005-01-01
The study of shaping ability is very important in the assessment of instruments and preparation techniques. Using simulated canals in these studies has many advantages, such as good standardization, good comparability, and transparency. A new computer-assisted measurement system designed specifically for quantitative analysis of the shape of simulated canals is set up and described in this paper. This system can be used for automatic assessment of the shaping ability of instruments, through image pretreatment, feature extraction, registration, fusion, and measurement. Comparing the simulated root canal shape before and after instrumentation in the fused image reduces visual errors and improves the repeatability of results. The registration and measurement precision of the system can reach 0.021 mm or better when the resolution of the original root canal image is 1200 DPI or higher. The shaping ability of stainless steel K-files is evaluated by the system.
Methods to estimate the between-study variance and its uncertainty in meta-analysis.
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia
2016-03-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently used widely by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that the estimator proposed by Paule and Mandel (for both dichotomous and continuous data) and the restricted maximum likelihood estimator (for continuous data) are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
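For concreteness, the DerSimonian and Laird moment estimator discussed above takes only a few lines; the study effect sizes and within-study variances used here are hypothetical:

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance
    tau^2, given study effect sizes y and within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                 # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)            # weighted mean effect
    Q = np.sum(w * (y - ybar) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)     # truncated at zero

# Hypothetical effects from five studies with equal within-study variance
tau2 = dersimonian_laird_tau2([0.1, 0.3, 0.8, -0.2, 0.6], [0.02] * 5)
```

The truncation at zero in the last line is one source of the criticism noted above: when heterogeneity is small, the estimator piles up at exactly zero.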
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
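As a rough illustration of the preemptive-constraining idea described above, the sketch below runs a plain linear Kalman filter predict/update cycle and then clips the state estimate into physical bounds. The system described above uses a full nonlinear EKF of the IGCC plant and also constrains the covariance matrix, which this toy omits; the plant model and numbers are illustrative:

```python
import numpy as np

def constrained_kf_step(x, P, z, F, Q, H, R, lo, hi):
    """One linear Kalman predict/update cycle, followed by clipping the
    state into physical bounds (a toy stand-in for preemptive constraining;
    the covariance is left unconstrained here)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    # Preemptively constrain the state estimate
    x = np.clip(x, lo, hi)
    return x, P

# A scalar plant variable physically confined to [0, 1], with measurements
# that would otherwise drag the estimate out of bounds
F = H = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.1]])
x, P = np.array([0.5]), np.array([[1.0]])
for z in [2.0, 1.5, 3.0]:
    x, P = constrained_kf_step(x, P, np.array([z]), F, Q, H, R, 0.0, 1.0)
```

Without the clipping step the estimate would chase the out-of-range measurements; with it, the estimate stays physically meaningful for the rest of the filter cascade.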
Sloboda, J
1993-01-01
Musical ability is the ability to 'make sense' of music, and develops in most people over the first decade of life through normal enculturation. Whether this ability is developed to a high level usually depends on the decision to start learning a musical instrument, which forces high levels of focused cognitive engagement (practice) with musical materials. Performance ability has both technical and expressive aspects. These aspects are not always developed equally well. Factors contributing to the development of a well-balanced musical performer include (a) lengthy periods of engagement with music through practice and exploration, (b) high levels of material and emotional support from parents and other adults, (c) relationships with early teachers characterized by warmth and mutual liking, and (d) early experiences with music that promote, rather than inhibit, intense sensuous/affective experiences. It is argued that much formal education inhibits the development of musical ability through over-emphasis on assessment, creating performance anxiety, coupled with class and sex stereotyping of approved musical activities. Early free exploration of a medium is a necessity for the development of high levels of musicality. PMID:8168360
New Method for Estimation of Aeolian Sand Transport Rate Using Ceramic Sand Flux Sensor (UD-101)
Udo, Keiko
2009-01-01
In this study, a new method for the estimation of aeolian sand transport rate was developed; the method employs a ceramic sand flux sensor (UD-101). UD-101 detects wind-blown sand impacting on its surface. The method was devised by considering the results of wind tunnel experiments that were performed using a vertical sediment trap and the UD-101. Field measurements to evaluate the estimation accuracy during the prevalence of unsteady winds were performed on a flat backshore. The results showed that aeolian sand transport rates estimated using the developed method were of the same order as those estimated using the existing method for high transport rates, i.e., for transport rates greater than 0.01 kg m−1 s−1. PMID:22291553
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
Simplified Estimating Method for Shock Response Spectrum Envelope of V-Band Clamp Separation Shock
NASA Astrophysics Data System (ADS)
Iwasa, Takashi; Shi, Qinzhong
A simplified estimating method for the Shock Response Spectrum (SRS) envelope at the spacecraft interface near the V-band clamp separation device has been established. This simplified method is based on the pyroshock analysis method with a single-degree-of-freedom (DOF) model proposed in our previous paper. The only parameters required by the estimating method are geometrical information about the interface and the tension of the V-band clamp. Using these parameters, a simplified calculation of the SRS magnitude at the knee frequency is newly proposed. By comparing the estimation results with actual pyroshock test results, it was verified that the SRS envelope estimated with the simplified method appropriately covered the pyroshock test data of actual space satellite systems, except for some specific high-frequency responses.
A comparison of the methods for objective strain estimation from the Fry plots
NASA Astrophysics Data System (ADS)
Kumar, Rajan; Srivastava, Deepak C.; Ojha, Arun K.
2014-06-01
The Fry method is a graphical technique that displays the strain ellipse in the form of a central vacancy on a point distribution, the Fry plot. For objective strain estimation from the Fry plot, the central vacancy must appear as a sharply focused ellipse. In practice, however, the diffused appearance of the central vacancy in Fry plots induces considerable subjectivity in direct strain estimation. Several alternative computer-based methods have recently been proposed for objective strain estimation from Fry plots. The relative merits and limitations of these methods are, however, not yet well understood.
NASA Technical Reports Server (NTRS)
Campbell, John P; Mckinney, Marion O
1952-01-01
A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
NASA Astrophysics Data System (ADS)
Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng
2016-06-01
The maximum entropy distribution, which subsumes various recognized theoretical distributions, is a good candidate for estimating the design thickness of sea ice. The method of moments and the empirical curve-fitting method are commonly used to estimate the parameters of the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter-estimation method for the maximum entropy distribution, which has the advantage of avoiding deviations introduced by the simplifications made in the other methods. We conducted a case study fitting the hindcast thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter-estimation methods. All methods implemented in this study pass the K-S tests at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve-fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea-ice thickness for the same return period. As a result, we recommend the particle swarm optimization method for offshore structures mainly influenced by sea ice in winter, but the empirical curve-fitting method to reduce cost in the design of temporary and economic buildings.
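Particle swarm optimization itself is generic: candidate parameter vectors ("particles") move through the search space, pulled toward both their own best position and the swarm's global best. A minimal global-best PSO on a toy two-parameter objective, standing in for the distribution-fitting error; this is a generic sketch, not the authors' implementation, and the coefficients and bounds are illustrative:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer for low-dimensional
    parameter estimation (inertia 0.7, cognitive/social weights 1.5)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                   # keep inside bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy objective: recover (a, b) = (2, 5) from a quadratic bowl
g, fmin = pso_minimize(lambda p: (p[0] - 2) ** 2 + (p[1] - 5) ** 2,
                       [(0.0, 10.0), (0.0, 10.0)])
```

In the distribution-fitting setting, `f` would be the deviation between the empirical and fitted maximum entropy distribution, avoiding the closed-form simplifications the other two methods rely on.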
Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays.
Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin
2016-01-01
In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301
Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.
2014-01-01
Two studies are reported: a pilot study to demonstrate feasibility, followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, energy intake estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n=24) or Customized Prompts (n=16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating energy intake when Standard (mean±SD = −895±770 kcal/day, p<.0001), but not Customized Prompts (−270±748 kcal/day, p=.22) were used. Error (energy intake from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM’s ability to accurately estimate energy intake in free-living adults (N=50) over six days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living energy intake (−152±694 kcal/day, p=0.16). During laboratory-based meals, estimated energy and macronutrient intake with the RFPM did not differ significantly from directly weighed intake. PMID:22134199
Technology Transfer Automated Retrieval System (TEKTRAN)
Breeding and selection for the traits with polygenic inheritance is a challenging task that can be done by phenotypic selection, by marker-assisted selection or by genome wide selection. We tested predictive ability of four selection models in a biparental population genotyped with 95 SNP markers an...
ERIC Educational Resources Information Center
Brock, Jon; Jarrold, Christopher; Farran, Emily K.; Laws, Glynis; Riby, Deborah M.
2007-01-01
The comparison of cognitive and linguistic skills in individuals with developmental disorders is fraught with methodological and psychometric difficulties. In this paper, we illustrate some of these issues by comparing the receptive vocabulary knowledge and non-verbal reasoning abilities of 41 children with Williams syndrome, a genetic disorder in…
ERIC Educational Resources Information Center
Di Gennaro, Kristen K.
2011-01-01
A growing body of research suggests that the writing ability of international second language learners (IL2) and US-resident second language learners, also referred to as Generation 1.5 (G1.5), differs, despite a dearth of substantial empirical evidence supporting such claims. The present study provides much-needed empirical evidence concerning…
Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques
Melius, J.; Margolis, R.; Ong, S.
2013-12-01
A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. It also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.
COMPARISON AND EVALUATION OF FIELD METHODS (DIRECT AND INDIRECT) TO ESTIMATE SOIL WATER FLUXES.
Technology Transfer Automated Retrieval System (TEKTRAN)
Knowledge of soil water fluxes is critical for evaluating the efficiency and environmental effects of soil and crop management. Indirect methods commonly used to estimate soil water fluxes are currently based on (a) soil water balance, (b) soil water potential measurements with the Darcy-Bucki...
In this paper, we present methods for estimating Freundlich isotherm fitting parameters (K and N) and their joint uncertainty, which have been implemented into the freeware software platforms R and WinBUGS. These estimates were determined by both Frequentist and Bayesian analyse...
49 CFR Appendix B to Part 227 - Methods for Estimating the Adequacy of Hearing Protector Attenuation
Code of Federal Regulations, 2014 CFR
2014-10-01
... Protector Attenuation B Appendix B to Part 227 Transportation Other Regulations Relating to Transportation..., App. B Appendix B to Part 227—Methods for Estimating the Adequacy of Hearing Protector Attenuation... estimate the adequacy of hearing protector attenuation. I. Derate by Type Derate the hearing...
49 CFR Appendix B to Part 227 - Methods for Estimating the Adequacy of Hearing Protector Attenuation
Code of Federal Regulations, 2011 CFR
2011-10-01
... Protector Attenuation B Appendix B to Part 227 Transportation Other Regulations Relating to Transportation..., App. B Appendix B to Part 227—Methods for Estimating the Adequacy of Hearing Protector Attenuation... estimate the adequacy of hearing protector attenuation. I. Derate by Type Derate the hearing...
49 CFR Appendix B to Part 227 - Methods for Estimating the Adequacy of Hearing Protector Attenuation
Code of Federal Regulations, 2013 CFR
2013-10-01
... Protector Attenuation B Appendix B to Part 227 Transportation Other Regulations Relating to Transportation..., App. B Appendix B to Part 227—Methods for Estimating the Adequacy of Hearing Protector Attenuation... estimate the adequacy of hearing protector attenuation. I. Derate by Type Derate the hearing...
49 CFR Appendix B to Part 227 - Methods for Estimating the Adequacy of Hearing Protector Attenuation
Code of Federal Regulations, 2012 CFR
2012-10-01
... Protector Attenuation B Appendix B to Part 227 Transportation Other Regulations Relating to Transportation..., App. B Appendix B to Part 227—Methods for Estimating the Adequacy of Hearing Protector Attenuation... estimate the adequacy of hearing protector attenuation. I. Derate by Type Derate the hearing...
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
ESTIMATING THE RATE OF PLASMID TRANSFER: AN END-POINT METHOD
A method is described for determining the rate parameter of conjugative plasmid transfer, based on single estimates of donor, recipient, and transconjugant densities and on the exponential-phase growth rate of the mating culture. The formula for estimating the plasmid transfer ...
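The end-point formula alluded to above appears to be that of Simonsen et al. (1990); under that assumption, it can be coded directly. The densities and growth rate below are hypothetical:

```python
import math

def endpoint_transfer_rate(D, R, T, N0, N, psi):
    """End-point estimate of the conjugative plasmid transfer rate constant
    (Simonsen et al. 1990): gamma = psi * ln(1 + (T/R) * (N/D)) / (N - N0),
    where D, R, T are final donor, recipient, and transconjugant densities,
    N is the final total density, N0 the initial total density, and psi the
    exponential-phase growth rate of the mating culture."""
    return psi * math.log(1.0 + (T / R) * (N / D)) / (N - N0)

# Hypothetical end-point densities (cells/ml) and growth rate (1/h)
gamma = endpoint_transfer_rate(D=5e8, R=5e8, T=1e4, N0=1e6, N=1.00001e9, psi=1.2)
```

The appeal of the end-point design is visible in the signature: a single set of final densities plus one growth rate suffices, with no time-course sampling of the mating culture.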
One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure dist...
ERIC Educational Resources Information Center
Bauer, Daniel J.; Sterba, Sonya K.
2011-01-01
Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…
49 CFR Appendix B to Part 227 - Methods for Estimating the Adequacy of Hearing Protector Attenuation
Code of Federal Regulations, 2010 CFR
2010-10-01
... Protector Attenuation B Appendix B to Part 227 Transportation Other Regulations Relating to Transportation..., App. B Appendix B to Part 227—Methods for Estimating the Adequacy of Hearing Protector Attenuation... estimate the adequacy of hearing protector attenuation. I. Derate by Type Derate the hearing...
Ball, J.R.; Cohen, S.; Ziegler, E.Z.
1984-10-01
This document provides overall guidance to assist the NRC in preparing the types of cost estimates required by the Regulatory Analysis Guidelines and to assist in the assignment of priorities in resolving generic safety issues. The Handbook presents an overall cost model that allows the cost analyst to develop a chronological series of activities needed to implement a specific regulatory requirement throughout all applicable commercial LWR power plants and to identify the significant cost elements for each activity. References to available cost data are provided along with rules of thumb and cost factors to assist in evaluating each cost element. A suitable code-of-accounts data base is presented to assist in organizing and aggregating costs. Rudimentary cost analysis methods are described to allow the analyst to produce a constant-dollar, lifetime cost for the requirement. A step-by-step example cost estimate is included to demonstrate the overall use of the Handbook.
Yoshida, Kazutaka; Yokoyama, Hidekatsu; Oteki, Takaaki; Matsumoto, Gaku; Aizawa, Koichi; Inakuma, Takahiro
2011-04-13
Although it has been reported that dietary lycopene, the main carotenoid in tomato, improved drug-induced nephropathy, there are no reports on the effect of orally administered lycopene on the in vivo renal reducing (i.e., antioxidant) ability. The radiofrequency electron paramagnetic resonance (EPR) method is a unique technique by which the in vivo reducing ability of an experimental animal can be studied. In this study, the in vivo changes in the renal reducing ability of rats orally administered lycopene were investigated using a 700 MHz EPR spectrometer equipped with a surface-coil-type resonator. Rats were fed either a control diet or a diet containing lycopene. After 2 weeks, in vivo EPR measurements were conducted. The renal reducing ability of lycopene-treated rats was significantly greater than that of the control. This is the first verification of in vivo antioxidant enhancement via dietary lycopene administration. PMID:21381743
Joint estimation of TOA and DOA in IR-UWB system using a successive propagator method
NASA Astrophysics Data System (ADS)
Wang, Fangqiu; Zhang, Xiaofei; Wang, Chenghua; Zhou, Shengkui
2015-10-01
Impulse radio ultra-wideband (IR-UWB) ranging and positioning require accurate estimation of time-of-arrival (TOA) and direction-of-arrival (DOA). With a two-antenna receiver, both TOA and DOA can be estimated via the two-dimensional (2D) propagator method (PM); the 2D spectral peak search it requires, however, incurs a much higher computational cost. This paper proposes a successive PM algorithm for joint TOA and DOA estimation in IR-UWB systems that avoids the 2D spectral peak search. The proposed algorithm first obtains initial TOA estimates at the two antennas from the propagator matrix, then refines the TOAs at the two antennas with successive one-dimensional (1D) local searches, and finally derives the DOA estimates from the TOA difference between the two antennas. Because it requires only 1D local searches, the proposed algorithm avoids the high computational cost of the 2D-PM algorithm. Furthermore, it obtains automatically paired parameters and achieves better joint TOA and DOA estimation performance than the conventional PM algorithm, the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and the matrix pencil algorithm, while its parameter estimates are very close to those of the 2D-PM algorithm. We also derive the mean square error of the proposed algorithm's TOA and DOA estimates and the Cramer-Rao bound for TOA and DOA estimation. The simulation results verify the usefulness of the proposed algorithm.
Mirarab, Siavash; Bayzid, Md Shamsuzzoha; Warnow, Tandy
2016-05-01
Species tree estimation is complicated by processes, such as gene duplication and loss and incomplete lineage sorting (ILS), that cause discordance between gene trees and the species tree. Furthermore, while concatenation, a traditional approach to tree estimation, has excellent performance under many conditions, the expectation is that the best accuracy will be obtained through the use of species tree estimation methods that are specifically designed to address gene tree discordance. In this article, we report on a study to evaluate MP-EST (one of the most popular species tree estimation methods designed to address ILS), as well as concatenation under maximum likelihood, the greedy consensus, and two supertree methods (Matrix Representation with Parsimony and Matrix Representation with Likelihood). Our study shows that several factors impact the absolute and relative accuracy of methods, including the number of gene trees, the accuracy of the estimated gene trees, and the amount of ILS. Concatenation can be more accurate than the best summary methods in some cases (mostly when the gene trees have poor phylogenetic signal or when the level of ILS is low), but summary methods are generally more accurate than concatenation when there are an adequate number of sufficiently accurate gene trees. Our study suggests that coalescent-based species tree methods may be key to estimating highly accurate species trees from multiple loci. PMID:25164915
Cai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L; Bright, Eddie A; Coleman, Phil R
2006-01-01
The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.
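The binomial step described above can be sketched as follows. The abstract does not give the exact formulation, so the function name and the normal-approximation confidence interval are illustrative assumptions: the small-area count is treated as Binomial(n, p), with n the population from the population surface and p the proportion from the age-sex proportion surface.

```python
import math

def age_sex_estimate(n_pop, prop, z=1.96):
    """Point estimate and approximate 95% CI for an age-sex count,
    treating the count as Binomial(n_pop, prop) and using the normal
    approximation to the binomial for the interval."""
    est = n_pop * prop
    se = math.sqrt(n_pop * prop * (1.0 - prop))
    return est, max(0.0, est - z * se), est + z * se

# Example: a custom area of 5,000 people with an estimated
# age-sex proportion of 0.06
est, lo, hi = age_sex_estimate(5000, 0.06)
```

The normal approximation is adequate for the moderately large n typical of Block Group populations; an exact binomial interval would be preferable for very small areas or rare subgroups.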
A revised terrain correction method for forest canopy height estimation using ICESat/GLAS data
NASA Astrophysics Data System (ADS)
Nie, Sheng; Wang, Cheng; Zeng, Hongcheng; Xi, Xiaohuan; Xia, Shaobo
2015-10-01
Although the spaceborne Geoscience Laser Altimeter System (GLAS) can measure forest canopy height directly, the measurement accuracy is often affected by footprint size, shape, and orientation, and by terrain slope. Previous terrain correction methods took into account only the effects of terrain slope and footprint size when estimating forest canopy height. In this study, an improved terrain correction method was proposed to remove the effects of all the aforementioned factors when estimating canopy height over sloped terrain. Validated against canopy heights derived from small-footprint LiDAR data in China, the revised method performed significantly better than the traditional ones, reducing the RMSE of the canopy height estimates by up to 1.2 m. The effect of slope on canopy height estimation is almost eliminated by the proposed method, since slope had little correlation with the canopy heights estimated by the revised method. When the footprint eccentricity is small, the canopy height error due to footprint shape and orientation is small; when the eccentricity is large enough, however, the resulting height estimation error is large. Therefore, it is necessary to take the influence of footprint shape and orientation into account in forest canopy estimation.
NASA Technical Reports Server (NTRS)
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
A simple method to estimate threshold friction velocity of wind erosion in the field
NASA Astrophysics Data System (ADS)
Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Munson, Seth M.; Miller, Mark E.
2010-05-01
This study provides a fast and easy-to-apply method to estimate the threshold friction velocity (TFV) of wind erosion in the field. Wind tunnel experiments and a variety of ground measurements including air gun, pocket penetrometer, torvane, and roughness chain were conducted in Moab, Utah and cross-validated in the Mojave Desert, California. Patterns between TFV and ground measurements were examined to identify the optimum method for estimating TFV. The results show that TFVs were best predicted using the air gun and penetrometer measurements at the Moab sites. This empirical method, however, systematically underestimated TFVs at the Mojave Desert sites. Further analysis showed that TFVs at the Mojave sites can be satisfactorily estimated with a correction for rock cover, which is presumably the main cause of the underestimation of TFVs. The proposed method may also be applied to estimate TFVs in environments where other non-erodible elements, such as postharvest residuals, are found.
NASA Astrophysics Data System (ADS)
Borodachev, S. M.
2016-06-01
A simple derivation of the recursive least squares (RLS) equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
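The RLS recursion viewed as a Kalman filter with a constant state and no process noise can be sketched as follows; the variable names and the initialization scale `delta` are illustrative choices, not the paper's notation.

```python
import numpy as np

def rls(X, y, delta=1000.0):
    """Recursive least squares: process observations one at a time.
    P plays the role of the Kalman state covariance; delta sets its
    initial scale (large delta ~ diffuse prior on the parameters)."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for x, yi in zip(X, y):
        k = P @ x / (1.0 + x @ P @ x)         # gain vector
        theta = theta + k * (yi - x @ theta)  # innovation update
        P = P - np.outer(k, x @ P)            # covariance update
    return theta

# Compare the recursive estimate with the batch least-squares solution
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
theta_rls = rls(X, y)
theta_batch, *_ = np.linalg.lstsq(X, y, rcond=None)
```

For a large initial covariance the recursive estimate converges to the batch solution, which is the point of the derivation: each new observation updates the running estimate without refitting from scratch.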
A novel method for estimating the number of species within a region.
Shtilerman, Elad; Thompson, Colin J; Stone, Lewi; Bode, Michael; Burgman, Mark
2014-03-22
Ecologists are often required to estimate the number of species in a region or designated area. A number of diversity indices are available for this purpose and are based on sampling the area using quadrats or other means, and estimating the total number of species from these samples. In this paper, a novel theory and method for estimating the number of species is developed. The theory involves the use of the Laplace method for approximating asymptotic integrals. The method is shown to be successful by testing random simulated datasets. In addition, several real survey datasets are tested, including forests that contain a large number (tens to hundreds) of tree species, and an aquatic system with a large number of fish species. The method is shown to give accurate results, and in almost all cases found to be superior to existing tools for estimating diversity. PMID:24500169
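The abstract does not reproduce the paper's Laplace-based estimator, so as a point of reference here is the classical Chao1 richness estimator, one of the existing tools such a method would be compared against; this is a standard textbook formula, not the authors' method.

```python
from collections import Counter

def chao1(abundances):
    """Classical Chao1 species-richness estimator: observed species
    plus a correction from the counts of singletons (f1) and
    doubletons (f2) in the sample."""
    counts = [a for a in abundances if a > 0]
    s_obs = len(counts)
    f = Counter(counts)
    f1, f2 = f.get(1, 0), f.get(2, 0)
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    # bias-corrected form when no doubletons are observed
    return s_obs + f1 * (f1 - 1) / 2.0

# 6 species observed; two singletons and one doubleton imply
# unseen species remain
est = chao1([5, 3, 2, 1, 1, 4])
```

Chao1 is a lower bound on true richness, which is one reason integral-approximation approaches like the Laplace method can outperform it on species-rich communities.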
A review and comparison of some commonly used methods of estimating petroleum resource availability
Herbert, J.H.
1982-10-01
The purpose of this pedagogical report is to elucidate the characteristics of the principal methods of estimating the petroleum resource base. Other purposes are to indicate the logical similarities and data requirements of these different methods. The report should serve as a guide for the application and interpretation of the different methods.
Site Effects Estimation by a Transfer-Station Generalized Inversion Method
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Yu, Xiangwei
2016-04-01
Site effect is one of the essential factors in characterizing strong ground motion as well as in earthquake engineering design. In this study, the generalized inversion technique (GIT) is applied to estimate site effects, and the GIT is modified to improve its analytical ability. The GIT needs a reference station as a standard; ideally the reference station is located at a rock site, and its site effect is considered constant. For the same earthquake, the record spectrum of a station of interest is divided by that of the reference station, which eliminates the source term, so the site effects and the attenuation can be acquired. In the GIT process, the earthquake data available for analysis are limited to those recorded by the reference station, and site effects can be estimated only for stations that recorded events in common with the reference station. To overcome this limitation of the GIT, a modified GIT is put forward in this study, namely, the transfer-station generalized inversion method (TSGI). Compared with the GIT, this modified GIT enlarges the data set and increases the number of stations whose site effects can be analyzed, which makes the solution much more stable. To verify the GIT results, a non-reference method, the genetic algorithm (GA), is applied to estimate absolute site effects. On April 20, 2013, an earthquake of magnitude MS 7.0 occurred in the Lushan region, China; it was followed by several hundred aftershocks with ML < 3.0 in this region. The purpose of this paper is to investigate the site effects and Q factor for this area based on aftershock strong-motion records from the China National Strong Motion Observation Network System. Our results show that when the TSGI is applied instead of the GIT, the total number of events used in the inversion increases from 31 to 54 and the total number of stations whose site effect can be estimated
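The spectral-ratio step at the heart of the GIT, dividing a station's record spectrum by the rock-site reference's spectrum of the same event to cancel the common source term, can be sketched as follows; the function signature and the water-level stabilization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def site_spectral_ratio(record, reference, dt):
    """Amplitude-spectrum ratio of a station record to a rock-site
    reference record of the same event. Dividing out the reference
    cancels the shared source term, leaving the relative site response
    (combined with the differential path/attenuation effect). A small
    water level on the reference spectrum avoids division by
    near-zero spectral values."""
    spec = np.abs(np.fft.rfft(record))
    ref = np.abs(np.fft.rfft(reference))
    water = 1e-6 * ref.max()
    freqs = np.fft.rfftfreq(len(record), d=dt)
    return freqs, spec / np.maximum(ref, water)
```

In the full GIT this ratio is formed for every common event and inverted jointly for site and attenuation terms; the TSGI's contribution is to chain such ratios through intermediate "transfer" stations so that events not recorded by the reference station can still contribute.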
A Fast and Reliable Computational Method for Estimating Population Genetic Parameters
Vasco, Daniel A.
2008-01-01
The estimation of ancestral and current effective population sizes in expanding populations is a fundamental problem in population genetics. Recently it has become possible to scan entire genomes of several individuals within a population. These genomic data sets can be used to estimate basic population parameters such as the effective population size and population growth rate. Full-data-likelihood methods potentially offer a powerful statistical framework for inferring population genetic parameters. However, for large data sets, computationally intensive methods based upon full-likelihood estimates may encounter difficulties. First, the computational method may be prohibitively slow or difficult to implement for large data. Second, estimation bias may markedly affect the accuracy and reliability of parameter estimates, as suggested from past work on coalescent methods. To address these problems, a fast and computationally efficient least-squares method for estimating population parameters from genomic data is presented here. Instead of modeling genomic data using a full likelihood, this new approach uses an analogous function, in which the full data are replaced with a vector of summary statistics. Furthermore, these least-squares estimators may show significantly less estimation bias for growth rate and genetic diversity than a corresponding maximum-likelihood estimator for the same coalescent process. The least-squares statistics also scale up to genome-sized data sets with many nucleotides and loci. These results demonstrate that least-squares statistics will likely prove useful for nonlinear parameter estimation when the underlying population genomic processes have complex evolutionary dynamics involving interactions between mutation, selection, demography, and recombination. PMID:18505868
Parameter estimation of copula functions using an optimization-based method
NASA Astrophysics Data System (ADS)
Abdi, Amin; Hassanzadeh, Yousef; Talatahari, Siamak; Fakheri-Fard, Ahmad; Mirabbasi, Rasoul
2016-02-01
Application of copulas can be useful for accurate multivariate frequency analysis of hydrological phenomena. There are many copula functions, and several methods have been proposed for estimating their parameters. Since copula functions are mathematically complicated, estimating the copula parameters can be difficult. In the present study, an optimization-based method (OBM) is proposed to obtain the parameters of copulas. The usefulness of the proposed method is illustrated on drought events. For this purpose, three commonly used copulas of the Archimedean family, namely the Clayton, Frank, and Gumbel copulas, are used to construct the joint probability distribution of drought characteristics at 60 gauging sites located in East-Azarbaijan province, Iran. The performance of the OBM was compared with two conventional methods, namely the method of moments and inference function for margins. The results illustrate the superiority of the OBM in estimating the copula parameters compared with the other considered methods.
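A minimal illustration of fitting a copula parameter by numerical optimization, using the Clayton copula from the Archimedean family named above: the log-likelihood is maximized with a generic bounded optimizer. This is a sketch in the spirit of an optimization-based fit, not the authors' OBM algorithm, and the simulation settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_logpdf(u, v, theta):
    """Log-density of the bivariate Clayton copula (theta > 0)."""
    return (np.log1p(theta)
            - (1.0 + theta) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u**-theta + v**-theta - 1.0))

def fit_clayton(u, v):
    """Estimate theta by numerically maximizing the log-likelihood."""
    nll = lambda th: -np.sum(clayton_logpdf(u, v, th))
    return minimize_scalar(nll, bounds=(1e-3, 20.0), method="bounded").x

# Simulate Clayton-dependent (u, v) pairs via the conditional
# (inverse-CDF) method, then recover the parameter by optimization
rng = np.random.default_rng(1)
theta_true = 2.0
u = rng.uniform(size=2000)
w = rng.uniform(size=2000)
v = (u**-theta_true * (w**(-theta_true / (1.0 + theta_true)) - 1.0)
     + 1.0) ** (-1.0 / theta_true)
theta_hat = fit_clayton(u, v)
```

The same pattern (negative log-likelihood plus a bounded optimizer) extends to the Frank and Gumbel copulas by swapping in their densities; in practice the marginals would first be transformed to uniform scores, as in the inference-function-for-margins approach mentioned above.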