Science.gov

Sample records for ability estimation methods

  1. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
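The Bayesian machinery sketched in this abstract can be illustrated with a small grid approximation: given dichotomous responses and a normal prior, the posterior over ability is evaluated on a theta grid and summarized by its mean (the EAP estimate). The Rasch model, the item difficulties, and the grid bounds below are illustrative assumptions, not details taken from the paper.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_ability(responses, difficulties):
    """Grid-based approximation of the posterior mean (EAP) of ability,
    with a standard-normal prior over theta on [-4, 4]."""
    grid = [-4.0 + 0.1 * i for i in range(81)]
    posterior = []
    for theta in grid:
        prior = math.exp(-0.5 * theta * theta)   # N(0,1), unnormalized
        like = 1.0
        for x, b in zip(responses, difficulties):
            p = rasch_p(theta, b)
            like *= p if x == 1 else (1.0 - p)
        posterior.append(prior * like)
    z = sum(posterior)
    posterior = [w / z for w in posterior]       # normalize
    return sum(t * w for t, w in zip(grid, posterior))

# A hypothetical 5-item test: all items correct vs. all items wrong.
items = [-1.0, -0.5, 0.0, 0.5, 1.0]
high = eap_ability([1, 1, 1, 1, 1], items)
low = eap_ability([0, 0, 0, 0, 0], items)
```

Because the prior keeps the posterior proper, the all-correct pattern yields a finite positive estimate where maximum likelihood would diverge.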

  2. Dental age estimation and different predictive ability of various tooth types in the Czech population: data mining methods.

    PubMed

    Velemínská, Jana; Pilný, Ales; Cepek, Miroslav; Kot'ová, Magdaléna; Kubelková, Radka

    2013-01-01

    Dental development is frequently used to estimate age in many anthropological specializations. The aim of this study was to extract an accurate predictive age system for the Czech population and to discover any differences in the predictive ability of various tooth types and their ontogenetic stability during infancy and adolescence. A cross-sectional panoramic X-ray study was based on assessment of the developmental stages of mandibular teeth (Moorrees et al. 1963) in 1393 individuals aged from 3 to 17 years. Data mining methods, based on nonlinear relationships between the predicted age and the data sets, were used for dental age estimation. Compared with the other tested predictive models, the GAME method predicted age with the highest accuracy. Age-interval estimations between the 10th and 90th percentiles ranged from -1.06 to +1.01 years in girls and from -1.13 to +1.20 years in boys. Accuracy was expressed by the RMS error, the root-mean-square deviation between estimated and chronological age. The predictive value of individual teeth changed during the investigated period from 3 to 17 years. When we evaluated the whole period, the second molars exhibited the best predictive ability. When evaluating partial age periods, we found that the accuracy of biological age prediction declines with increasing age (from 0.52 to 1.20 years in girls and from 0.62 to 1.22 years in boys) and that the predictive importance of tooth types changes, depending on variability and the number of developmental stages in the age interval. GAME is a promising tool for age-interval estimation studies, as it can provide reliable predictive models. PMID:24466642
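The RMS error used here as the accuracy measure is straightforward to compute; the sketch below uses made-up ages, not the study's data.

```python
import math

def rms_error(estimated, chronological):
    """Root-mean-square deviation between estimated and chronological ages."""
    if len(estimated) != len(chronological):
        raise ValueError("age lists must have equal length")
    sq = [(e - c) ** 2 for e, c in zip(estimated, chronological)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical dental-age estimates for four children (years).
est = [7.4, 9.1, 12.0, 15.2]
chron = [7.0, 9.5, 12.5, 15.0]
rmse = rms_error(est, chron)
```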

  3. Estimation of the binding ability of main transport proteins of blood plasma with liver cirrhosis by the fluorescent probe method

    NASA Astrophysics Data System (ADS)

    Korolenko, E. A.; Korolik, E. V.; Korolik, A. K.; Kirkovskii, V. V.

    2007-07-01

    We present results from an investigation of the binding ability of the main transport proteins (albumin, lipoproteins, and α-1-acid glycoprotein) of blood plasma from patients at different stages of liver cirrhosis by the fluorescent probe method. We used the hydrophobic fluorescent probes anionic 8-anilinonaphthalene-1-sulfonate, which interacts in blood plasma mainly with albumin; cationic Quinaldine red, which interacts with α-1-acid glycoprotein; and neutral Nile red, which redistributes between lipoproteins and albumin in whole blood plasma. We show that the binding ability of albumin and α-1-acid glycoprotein to negatively charged and positively charged hydrophobic metabolites, respectively, increases in the compensation stage of liver cirrhosis. As the pathology process deepens and transitions into the decompensation stage, the transport abilities of albumin and α-1-acid glycoprotein decrease whereas the binding ability of lipoproteins remains high.

  4. Combining Climatic Projections and Dispersal Ability: A Method for Estimating the Responses of Sandfly Vector Species to Climate Change

    PubMed Central

    Fischer, Dominik; Moeller, Philipp; Thomas, Stephanie M.; Naucke, Torsten J.; Beierkuhnlein, Carl

    2011-01-01

    Background In the Old World, sandfly species of the genus Phlebotomus are known vectors of Leishmania, Bartonella and several viruses. Recent sandfly catches and autochthonous cases of leishmaniasis hint at a spread of these vectors towards Central Europe. However, studies addressing the potential future distribution of sandflies under a changing European climate are missing. Methodology Here, we modelled bioclimatic envelopes using MaxEnt for five species with proven or assumed vector competence for Leishmania infantum, which are predominantly located in either (south-) western (Phlebotomus ariasi, P. mascittii and P. perniciosus) or south-eastern Europe (P. neglectus and P. perfiliewi). The determined bioclimatic envelopes were transferred to two climate change scenarios (A1B and B1) for Central Europe (Austria, Germany and Switzerland) using data from the regional climate model COSMO-CLM. We detected the most likely route of natural dispersal (“least-cost path”) for each species and hence determined the accessibility of potential future climatically suitable habitats by integrating landscape features, projected changes in climatic suitability and wind speed. Results and Relevance Results indicate that the Central European climate will become increasingly suitable especially for those vector species with a current south-western focus of distribution. In general, the highest suitability of Central Europe is projected for all species in the second half of the 21st century, except for P. perfiliewi. Nevertheless, we show that sandflies will hardly be able to occupy their climatically suitable habitats entirely, due to their limited natural dispersal ability. A northward spread of species with a south-eastern focus of distribution may be constrained, but not completely prevented, by the Alps. Our results can be used to target specific monitoring systems to the projected risk zones of potential sandfly establishment. This is urgently needed for adaptation
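The “least-cost path” dispersal analysis mentioned above can be sketched as a shortest-path search over a cost surface; the toy grid and 4-neighbour movement rule below are illustrative assumptions, not the paper's actual landscape model.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra's algorithm over a 2-D cost grid (4-neighbour moves).
    Entering a cell adds that cell's cost; returns the total cost of the
    cheapest path from start to goal, or None if the goal is unreachable."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

# Toy landscape: low values = climatically suitable, high = barrier (e.g. Alps).
grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
path_cost = least_cost_path(grid, (0, 0), (2, 2))
```

The cheapest route threads the low-cost corridor rather than crossing the barrier cells directly.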

  5. Item-Weighted Likelihood Method for Ability Estimation in Tests Composed of Both Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua

    2012-01-01

    For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
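An item-weighted likelihood can be sketched by multiplying each item's log-likelihood contribution by a weight before maximizing; the Rasch-type items, weights, and grid search below are illustrative assumptions rather than the authors' exact formulation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def weighted_ml_ability(responses, difficulties, weights, step=0.01):
    """Maximize the item-weighted log-likelihood over a theta grid.
    Each item's log-likelihood term is multiplied by its weight, so
    heavily weighted (e.g. polytomous) items pull the estimate harder."""
    best_theta, best_ll = None, -float("inf")
    for i in range(-400, 401):
        theta = i * step
        ll = 0.0
        for x, b, w in zip(responses, difficulties, weights):
            p = sigmoid(theta - b)
            ll += w * (math.log(p) if x == 1 else math.log(1.0 - p))
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

resp = [1, 0, 1]          # two items right, middle item wrong
diffs = [-1.0, 0.0, 1.0]
flat = weighted_ml_ability(resp, diffs, [1.0, 1.0, 1.0])
tilted = weighted_ml_ability(resp, diffs, [1.0, 3.0, 1.0])  # up-weight missed item
```

Up-weighting the missed item pulls the ability estimate downward relative to the unweighted estimate.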

  6. Estimation abilities of large numerosities in Kindergartners.

    PubMed

    Mejias, Sandrine; Schiltz, Christine

    2013-01-01

    The approximate number system (ANS) is thought to be a building block for the elaboration of formal mathematics. However, little is known about how this core system develops and whether it can be influenced by external factors at a young age (before the child enters formal numeracy education). The purpose of this study was to examine numerical magnitude representations of 5-6 year old children at 2 different moments of Kindergarten, considering children's early number competence as well as schools' socio-economic index (SEI). This study investigated estimation abilities with large numerosities using symbolic and non-symbolic output formats (8-64). In addition, we assessed symbolic and non-symbolic early number competence (1-12) at the end of the 2nd (N = 42) and the 3rd (N = 32) Kindergarten grade. By letting children freely produce estimates, we observed surprising estimation abilities at a very young age (from 5 years on), extending far beyond children's explicit symbolic knowledge. Moreover, the time of testing had an impact on ANS accuracy, since 3rd Kindergarteners were more precise in both estimation tasks. Additionally, children who presented better exact symbolic knowledge were also those with the most refined ANS. However, this was true only for 3rd Kindergarteners, who were a few months from receiving math instruction. In a similar vein, a higher SEI positively impacted only the oldest children's estimation abilities, whereas it played a role for exact early number competence already in both Kindergarten grades. Our results support the view that approximate numerical representations are linked to exact number competence in young children before the start of formal math education and might thus serve as building blocks for mathematical knowledge. Since this core number system was also sensitive to external factors such as the SEI, it can most probably be targeted and refined through specific educational strategies from preschool on. PMID:24009591

  7. Ability of 3 extraction methods (BCR, Tessier and protease K) to estimate bioavailable metals in sediments from Huelva estuary (Southwestern Spain).

    PubMed

    Rosado, Daniel; Usero, José; Morillo, José

    2016-01-15

    The bioavailable fraction of metals (Zn, Cu, Cd, Mn, Pb, Ni, Fe, and Cr) in sediments of the Huelva estuary and its littoral of influence was estimated using the most popular sequential extraction methods (BCR and Tessier) and a biomimetic approach (protease K extraction). Results were compared to enrichment factors found in Arenicola marina. The linear correlation coefficients (R(2)) between enrichment factors in A. marina and the fraction mobilized by the first step of the BCR sequential extraction, by the sum of the first and second steps of the Tessier sequential extraction, and by protease K are highest for protease K extraction (0.709), followed by the BCR first step (0.507) and the sum of the first and second Tessier steps (0.465). This suggests that protease K represents the bioavailable fraction more reliably than the traditional methods (BCR and Tessier), which have a similar ability.

  8. Estimating Premorbid Cognitive Abilities in Low-Educated Populations

    PubMed Central

    Apolinario, Daniel; Brucki, Sonia Maria Dozzi; Ferretti, Renata Eloah de Lucena; Farfel, José Marcelo; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Jacob-Filho, Wilson

    2013-01-01

    Objective To develop an informant-based instrument that would provide a valid estimate of premorbid cognitive abilities in low-educated populations. Methods A questionnaire was drafted by focusing on the premorbid period with a 10-year time frame. The initial pool of items was submitted to classical test theory and a factorial analysis. The resulting instrument, named the Premorbid Cognitive Abilities Scale (PCAS), is composed of questions addressing educational attainment, major lifetime occupation, reading abilities, reading habits, writing abilities, calculation abilities, use of widely available technology, and the ability to search for specific information. The validation sample was composed of 132 older Brazilian adults from the following three demographically matched groups: normal cognitive aging (n = 72), mild cognitive impairment (n = 33), and mild dementia (n = 27). The scores of a reading test and a neuropsychological battery were adopted as construct criteria. Post-mortem inter-informant reliability was tested in a sub-study with two relatives from each deceased individual. Results All items presented good discriminative power, with corrected item-total correlation varying from 0.35 to 0.74. The summed score of the instrument presented high correlation coefficients with global cognitive function (r = 0.73) and reading skills (r = 0.82). Cronbach's alpha was 0.90, showing optimal internal consistency without redundancy. The scores did not decrease across the progressive levels of cognitive impairment, suggesting that the goal of evaluating the premorbid state was achieved. The intraclass correlation coefficient was 0.96, indicating excellent inter-informant reliability. Conclusion The instrument developed in this study has shown good properties and can be used as a valid estimate of premorbid cognitive abilities in low-educated populations. The applicability of the PCAS, both as an estimate of premorbid intelligence and cognitive

  9. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  10. Hierarchical state-space estimation of leatherback turtle navigation ability.

    PubMed

    Mills Flemming, Joanna; Jonsen, Ian D; Myers, Ransom A; Field, Christopher A

    2010-01-01

    Remotely sensed tracking technology has revealed remarkable migration patterns that were previously unknown; however, models to optimally use such data have developed more slowly. Here, we present a hierarchical Bayes state-space framework that allows us to combine tracking data from a collection of animals and make inferences at both individual and broader levels. We formulate models that allow the navigation ability of animals to be estimated and demonstrate how information can be combined over many animals to allow improved estimation. We also show how formal hypothesis testing regarding navigation ability can easily be accomplished in this framework. Using Argos satellite tracking data from 14 leatherback turtles, 7 males and 7 females, during their southward migration from Nova Scotia, Canada, we find that the circle of confusion (the radius around an animal's location within which it is unable to determine its location precisely) is approximately 96 km. This estimate suggests that the turtles' navigation does not need to be highly accurate, especially if they are able to use more reliable cues as they near their destination. Moreover, for the 14 turtles examined, there is little evidence to suggest that male and female navigation abilities differ. Because of the minimal assumptions made about the movement process, our approach can be used to estimate and compare navigation ability for many migratory species that are able to carry electronic tracking devices. PMID:21203382

  11. Hierarchical state-space estimation of leatherback turtle navigation ability.

    PubMed

    Mills Flemming, Joanna; Jonsen, Ian D; Myers, Ransom A; Field, Christopher A

    2010-12-28

    Remotely sensed tracking technology has revealed remarkable migration patterns that were previously unknown; however, models to optimally use such data have developed more slowly. Here, we present a hierarchical Bayes state-space framework that allows us to combine tracking data from a collection of animals and make inferences at both individual and broader levels. We formulate models that allow the navigation ability of animals to be estimated and demonstrate how information can be combined over many animals to allow improved estimation. We also show how formal hypothesis testing regarding navigation ability can easily be accomplished in this framework. Using Argos satellite tracking data from 14 leatherback turtles, 7 males and 7 females, during their southward migration from Nova Scotia, Canada, we find that the circle of confusion (the radius around an animal's location within which it is unable to determine its location precisely) is approximately 96 km. This estimate suggests that the turtles' navigation does not need to be highly accurate, especially if they are able to use more reliable cues as they near their destination. Moreover, for the 14 turtles examined, there is little evidence to suggest that male and female navigation abilities differ. Because of the minimal assumptions made about the movement process, our approach can be used to estimate and compare navigation ability for many migratory species that are able to carry electronic tracking devices.

  12. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
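The bootstrap alternative the title refers to can be sketched as: redraw item parameters from their estimated sampling distributions, re-estimate ability each time, and take the standard deviation of the resulting estimates. The Rasch model, the grid search, and all numbers below are assumptions for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ml_ability(responses, difficulties, step=0.01):
    """Grid-search ML ability estimate under a Rasch model."""
    best_t, best_ll = 0.0, -float("inf")
    for i in range(-400, 401):
        t = i * step
        ll = sum(math.log(sigmoid(t - b)) if x == 1
                 else math.log(1.0 - sigmoid(t - b))
                 for x, b in zip(responses, difficulties))
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t

def bootstrap_se(responses, b_hat, b_se, n_boot=200, seed=1):
    """Propagate item-calibration error: redraw difficulties from their
    sampling distribution, re-estimate theta each time, report the SD."""
    rng = random.Random(seed)
    thetas = []
    for _ in range(n_boot):
        b_star = [rng.gauss(b, se) for b, se in zip(b_hat, b_se)]
        thetas.append(ml_ability(responses, b_star))
    mean = sum(thetas) / len(thetas)
    return math.sqrt(sum((t - mean) ** 2 for t in thetas) / (len(thetas) - 1))

resp = [1, 1, 0, 1, 0]
b_hat = [-1.5, -0.5, 0.0, 0.5, 1.5]   # calibrated difficulties (assumed)
se = bootstrap_se(resp, b_hat, [0.2] * 5)
```

The resulting SE reflects only calibration error; in practice it would be combined with the usual sampling error of the ability estimate itself.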

  13. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  14. An effective method for incoherent scattering radar's detecting ability evaluation

    NASA Astrophysics Data System (ADS)

    Lu, Ziqing; Yao, Ming; Deng, Xiaohua

    2016-06-01

    Ionospheric incoherent scatter radar (ISR), used to detect ionospheric electrons and ions, generally has megawatt-class transmission power and an antenna aperture on the order of a hundred meters. The crucial purpose of this detection technology is to obtain ionospheric parameters by acquiring the autocorrelation function and power spectrum of the plasma echoes. Because ISR echoes are very weak, owing to the small radar cross section of the target, estimating detection ability is instructive and meaningful for ISR system design. In this paper, we evaluate detection ability through the signal-to-noise ratio (SNR). A soft-target radar equation applicable to ISR is derived, through which we use data from the International Reference Ionosphere model to simulate the SNR of echoes and then compare the simulation with measured SNR from the European Incoherent Scatter Scientific Association and the Advanced Modular Incoherent Scatter Radar. The simulation results show good consistency with the measured SNR. To our knowledge, this is the first such comparison between calculated SNR and ISR measurements; detection ability can be improved by increasing SNR. This method for evaluating ISR detection ability provides a basis for radar system design.
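The soft-target radar-equation reasoning can be sketched as follows: the target cross-section is the electron scattering cross-section times electron density times the illuminated volume, and SNR is received power over thermal noise power. All parameter values below are rough, assumed numbers for illustration; the paper's actual equation includes refinements (losses, Debye and temperature corrections) omitted here.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.998e8          # speed of light, m/s
SIGMA_E = 1.0e-28    # approx. effective electron cross-section, m^2 (assumed)

def isr_snr(p_t, gain, wavelength, r, n_e, t_sys, bandwidth,
            pulse_len, beamwidth):
    """Order-of-magnitude SNR from the monostatic radar equation with a
    volume-filling soft target: sigma = SIGMA_E * N_e * scattering volume.
    All quantities SI."""
    # Range-gate volume: pulse depth times beam cross-section at range r.
    volume = (C * pulse_len / 2.0) * math.pi * (r * beamwidth / 2.0) ** 2
    sigma = SIGMA_E * n_e * volume
    p_rx = (p_t * gain ** 2 * wavelength ** 2 * sigma) / ((4 * math.pi) ** 3 * r ** 4)
    p_noise = K_B * t_sys * bandwidth
    return p_rx / p_noise

# EISCAT-like numbers (assumed, for illustration only).
common = dict(p_t=1.5e6, gain=10 ** 4.3, wavelength=0.32, n_e=1e11,
              t_sys=100.0, bandwidth=25e3, pulse_len=350e-6, beamwidth=0.01)
snr_300 = isr_snr(r=300e3, **common)
snr_600 = isr_snr(r=600e3, **common)
```

Because the illuminated volume grows as R² while received power falls as R⁻⁴, SNR for a beam-filling target scales as R⁻²: doubling the range quarters the SNR.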

  15. Evaluating methods for estimating existential risks.

    PubMed

    Tonn, Bruce; Stiefel, Dorian

    2013-10-01

    Researchers and commissions contend that the risk of human extinction is high, but none of these estimates have been based upon a rigorous methodology suitable for estimating existential risks. This article evaluates several methods that could be used to estimate the probability of human extinction. Traditional methods evaluated include: simple elicitation; whole evidence Bayesian; evidential reasoning using imprecise probabilities; and Bayesian networks. Three innovative methods are also considered: influence modeling based on environmental scans; simple elicitation using extinction scenarios as anchors; and computationally intensive possible-worlds modeling. Evaluation criteria include: level of effort required by the probability assessors; level of effort needed to implement the method; ability of each method to model the human extinction event; ability to incorporate scientific estimates of contributory events; transparency of the inputs and outputs; acceptability to the academic community (e.g., with respect to intellectual soundness, familiarity, verisimilitude); credibility and utility of the outputs of the method to the policy community; difficulty of communicating the method's processes and outputs to nonexperts; and accuracy in other contexts. The article concludes by recommending that researchers assess the risks of human extinction by combining these methods. PMID:23551083

  16. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
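One common way to estimate a rapidly changing instantaneous frequency, of the kind compared in this report, is phase differencing of the complex signal; the synthetic tone and noise level below are assumptions for illustration, not PDV data.

```python
import cmath
import math
import random

def inst_freq(signal, dt):
    """Estimate instantaneous frequency (Hz) from the phase increment
    between consecutive complex samples: f = d(phase)/dt / (2*pi)."""
    freqs = []
    for a, b in zip(signal, signal[1:]):
        dphi = cmath.phase(b * a.conjugate())  # wrapped to (-pi, pi]
        freqs.append(dphi / (2.0 * math.pi * dt))
    return freqs

# Synthetic heterodyne-style record: 2 MHz tone at 100 MS/s with mild
# additive complex noise (all values invented).
rng = random.Random(0)
dt, f0 = 1e-8, 2.0e6
sig = [cmath.exp(2j * math.pi * f0 * n * dt)
       + 0.05 * complex(rng.gauss(0, 1), rng.gauss(0, 1))
       for n in range(2000)]
est = inst_freq(sig, dt)
mean_f = sum(est) / len(est)
```

Per-sample estimates are noisy, but because the phase errors telescope, the average over the record recovers the tone frequency closely; aliasing limits this estimator to phase steps within ±π per sample.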

  17. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  18. Clinical validation of the General Ability Index--Estimate (GAI-E): estimating premorbid GAI.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Iverson, Grant L; Chelune, Gordon J; Scott, James G; Adams, Russell L

    2006-09-01

    The clinical utility of the General Ability Index--Estimate (GAI-E; Lange, Schoenberg, Chelune, Scott, & Adams, 2005) for estimating premorbid GAI scores was investigated using the WAIS-III standardization clinical trials sample (The Psychological Corporation, 1997). The GAI-E algorithms combine Vocabulary, Information, Matrix Reasoning, and Picture Completion subtest raw scores with demographic variables to predict GAI. Ten GAI-E algorithms were developed combining demographic variables with single subtest scaled scores and with two subtests. Estimated GAI are presented for participants diagnosed with dementia (n = 50), traumatic brain injury (n = 20), Huntington's disease (n = 15), Korsakoff's disease (n = 12), chronic alcohol abuse (n = 32), temporal lobectomy (n = 17), and schizophrenia (n = 44). In addition, a small sample of participants without dementia and diagnosed with depression (n = 32) was used as a clinical comparison group. The GAI-E algorithms provided estimates of GAI that closely approximated scores expected for a healthy adult population. The greatest differences between estimated GAI and obtained GAI were observed for the single subtest GAI-E algorithms using the Vocabulary, Information, and Matrix Reasoning subtests. Based on these data, recommendations for the use of the GAI-E algorithms are presented.

  19. Development of the WAIS-III general ability index estimate (GAI-E).

    PubMed

    Lange, Rael T; Schoenberg, Mike R; Chelune, Gordon J; Scott, James G; Adams, Russell L

    2005-02-01

    The WAIS-III General Ability Index (GAI; Tulsky, Saklofske, Wilkins, & Weiss, 2001) is a recently developed, 6-subtest measure of global intellectual functioning. However, clinical use of the GAI is currently limited by the absence of a method to estimate premorbid functioning as measured by this index. The purpose of this study was to develop regression equations to estimate GAI scores from demographic variables and WAIS-III subtest performance. Participants consisted of those subjects in the WAIS-III standardization sample with complete demographic data (N=2,401) and were randomly divided into two groups. The first group (n=1,200) was used to develop the formulas (i.e., the Development group) and the second group (n=1,201) was used to validate the prediction algorithms (i.e., the Validation group). Demographic variables included age, education, ethnicity, gender and region of country. Subtest variables included Vocabulary, Information, Picture Completion, and Matrix Reasoning raw scores. Ten regression algorithms designed to estimate GAI were generated. The GAI-Estimate (GAI-E) algorithms accounted for 58% to 82% of the variance. The standard error of estimate ranged from 6.44 to 9.57. Correlations between actual and estimated GAI ranged from r=.76 to r=.90. These algorithms provided accurate estimates of GAI in the WAIS-III standardization sample. Implications for estimating GAI in patients with known or suspected neurological dysfunction are discussed and future research is proposed. PMID:15814479
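The development/validation regression design can be sketched with a single predictor: fit ordinary least squares on one half of synthetic data, then evaluate R² on the held-out half. The predictor, coefficients, and sample sizes below are invented for illustration, not the study's algorithms.

```python
import random

def fit_ols(xs, ys):
    """Closed-form simple linear regression: y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def r_squared(xs, ys, a, b):
    """Proportion of variance explained by the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Synthetic "GAI vs. predictor" data, split into development and
# validation halves as in the paper's design (all names hypothetical).
rng = random.Random(42)
x = [rng.uniform(0, 20) for _ in range(400)]        # e.g. a subtest raw score
y = [60 + 2.5 * xi + rng.gauss(0, 6) for xi in x]   # e.g. GAI
dev_x, dev_y = x[:200], y[:200]
val_x, val_y = x[200:], y[200:]
a, b = fit_ols(dev_x, dev_y)
r2_val = r_squared(val_x, val_y, a, b)
```

Evaluating R² on the held-out half guards against the optimism of in-sample fit, which is why the study splits its standardization sample.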

  20. Robust Estimation of Ability in the Rasch Model.

    ERIC Educational Resources Information Center

    Wainer, Howard; Wright, Benjamin D.

    The pure Rasch model was compared with four modifications of the model in a number of different simulations in order to ascertain the comparative efficiencies of the parameter estimations of these modifications. Because there is always noise in test score data, some individuals may have response patterns that do not fit the model and their…

  1. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation, relevant to assessing the performance of a ground-based network of solar observatories, are described. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories; criteria for station site selection are gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing the network's observing duty cycle are discussed. The duty cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine its effect on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  2. Career Interests and Self-Estimated Abilities of Young Adults with Disabilities

    ERIC Educational Resources Information Center

    Turner, Sherri; Unkefer, Lesley Craig; Cichy, Bryan Ervin; Peper, Christine; Juang, Ju-Ping

    2011-01-01

    The purpose of this study was to ascertain vocational interests and self-estimated work-relevant abilities of young adults with disabilities. Results showed that young adults with both low incidence and high incidence disabilities have a wide range of interests and self-estimated work-relevant abilities that are comparable to those in the general…

  3. Density estimation with non-parametric methods

    NASA Astrophysics Data System (ADS)

    Fadda, D.; Slezak, E.; Bijaoui, A.

    1998-01-01

    One key issue in several astrophysical problems is the evaluation of the density probability function underlying an observational discrete data set. We here review two non-parametric density estimators which recently appeared in the astrophysical literature, namely the adaptive kernel density estimator and the Maximum Penalized Likelihood technique, and describe another method based on the wavelet transform. The efficiency of these estimators is tested by using extensive numerical simulations in the one-dimensional case. The results are in good agreement with theoretical functions and the three methods appear to yield consistent estimates. However, the Maximum Penalized Likelihood suffers from a lack of resolution and high computational cost due to its dependency on a minimization algorithm. The small differences between kernel and wavelet estimates are mainly explained by the ability of the wavelet method to take into account local gaps in the data distribution. This new approach is very promising, since smaller structures superimposed onto a larger one are detected only by this technique, especially when small samples are investigated. Thus, wavelet solutions appear to be better suited for subclustering studies. Nevertheless, kernel estimates seem more robust and are reliable solutions although some small-scale details can be missed. In order to check these estimators with respect to previous studies, two galaxy redshift samples, related to the galaxy cluster A3526 and to the Corona Borealis region, have been analyzed. In both these cases claims for bimodality are confirmed at a high confidence level. The complete version of this paper with the whole set of figures can be accessed from the electronic version of the A&A Suppl. Ser. managed by Editions de Physique as well as from the SISSA database (astro-ph/9704096).
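Of the estimators reviewed, the kernel method is the simplest to sketch: each data point contributes a Gaussian bump, and the bumps are summed. The toy bimodal sample and fixed bandwidth below are assumptions (the paper's adaptive kernel varies the bandwidth locally).

```python
import math

def gaussian_kde(data, bandwidth, grid):
    """Fixed-bandwidth Gaussian kernel density estimate on a grid."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - x) / bandwidth) ** 2)
                       for x in data)
            for g in grid]

# Two separated clumps, as in a bimodal redshift sample (toy numbers).
data = [1.0, 1.1, 1.2, 0.9, 3.0, 3.1, 2.9, 3.2]
grid = [i * 0.05 for i in range(0, 101)]   # 0.0 .. 5.0
dens = gaussian_kde(data, bandwidth=0.3, grid=grid)
area = sum(d * 0.05 for d in dens)         # ~1 if the grid covers the support
```

The estimate integrates to roughly one and shows a dip between the two clumps, the kind of bimodality signature discussed for the redshift samples.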

  4. A Study of Frequency Estimation Equipercentile Equating When There Are Large Ability Differences. Research Report. ETS RR-09-45

    ERIC Educational Resources Information Center

    Guo, Hongwen; Oh, Hyeonjoo J.

    2009-01-01

    In operational equating, frequency estimation (FE) equipercentile equating is often excluded from consideration when the old and new groups have a large ability difference. This convention may, in some instances, cause the exclusion of one competitive equating method from the set of methods under consideration. In this report, we study the…
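Equipercentile equating itself (without the frequency-estimation adjustment for group ability differences studied here) can be sketched as mapping each score on one form to the score on the other form with the same percentile rank; the score distributions below are invented.

```python
def percentile_ranks(freqs):
    """Percentile rank at each score: cumulative frequency below the score
    plus half the frequency at the score, as a fraction of the total."""
    total = sum(freqs)
    ranks, below = [], 0
    for f in freqs:
        ranks.append((below + 0.5 * f) / total)
        below += f
    return ranks

def equipercentile(freq_x, freq_y):
    """Map each score on form X to the form-Y score with the same
    percentile rank, interpolating linearly between Y ranks."""
    px = percentile_ranks(freq_x)
    py = percentile_ranks(freq_y)
    out = []
    for p in px:
        if p <= py[0]:
            out.append(0.0)
            continue
        for j in range(1, len(py)):
            if p <= py[j]:
                frac = (p - py[j - 1]) / (py[j] - py[j - 1])
                out.append(j - 1 + frac)
                break
        else:
            out.append(float(len(py) - 1))
    return out

# Identical forms should equate (nearly) to the identity mapping.
f = [2, 5, 9, 5, 2]
ident = equipercentile(f, f)
```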

  5. Improving the Quality of Ability Estimates through Multidimensional Scoring and Incorporation of Ancillary Variables

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2009-01-01

    For one reason or another, various sources of information, namely, ancillary variables and correlational structure of the latent abilities, which are usually available in most testing situations, are ignored in ability estimation. A general model that incorporates these sources of information is proposed in this article. The model has a general…

  6. The Asymptotic Distribution of Ability Estimates: Beyond Dichotomous Items and Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2015-01-01

    The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…

  7. Longitudinal and Concurrent Relations among Temperament, Ability Estimation, and Injury Proneness.

    ERIC Educational Resources Information Center

    Schwebel, David C.; Plumert, Jodie M.

    1999-01-01

    Examined relations between temperament, ability estimation, and injury proneness from toddlerhood through school age. Found that children scoring high on Extroversion and low on Inhibitory Control as toddlers and preschoolers tended to overestimate their physical abilities and have more unintentional injuries at age 6. Children low on Extroversion…

  8. Smoothing Methods for Estimating Test Score Distributions.

    ERIC Educational Resources Information Center

    Kolen, Michael J.

    1991-01-01

    Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: kernel method, strong true-score model-based method, and method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…

  9. [Estimation of combining ability of specialized types of the Big White breeds].

    PubMed

    Berezovskiĭ, N D; Giria, V N

    1992-01-01

    The combining ability of the specialized intrabreed types of Estonian Big White (EBW-1) and Ukrainian Big White (UBW) pig breeding has been studied from their productivity results using the first Griffing method (1956). The close agreement of theoretical and practical indices for the studied traits proves that this method can be used for the prediction of interline hybridization of pigs.

  10. Ten Years of GLAPHI Method Developing Scientific Research Abilities

    NASA Astrophysics Data System (ADS)

    Vega-Carrillo, Hector R.

    2006-12-01

    During the past ten years we have applied our method, GLAPHI, to teach how to do scientific research. The method has been applied to everyone from freshman students up to PhD professionals. It is based on the search and analysis of scientific literature, the scientific question or problem, the formulation of the hypothesis and objective, and the estimation of the project cost and timetable. It also includes statistics for research, author rights, ethics in research, publication of scientific papers, writing of scientific reports, and meeting presentations. In this work the successes and failures of the GLAPHI method will be discussed. Work partially supported by CONACyT (Mexico) under contract: SEP-2004-C01-46893

  11. Ability of Sagittal Kinematic Variables to Estimate Ground Reaction Forces and Joint Kinetics in Running

    PubMed Central

    Wille, Christa; Lenhart, Rachel; Wang, Sijian; Thelen, Darryl; Heiderscheit, Bryan

    2015-01-01

    Study Design: Controlled laboratory study, cross-sectional design. Objective: To determine if sagittal kinematic variables can be used to estimate select running kinetics. Background: Excessive loading during running has been implicated in a variety of injuries, yet this information is typically not assessed during a standard clinical examination. Developing a clinically feasible strategy to estimate ground reaction forces and joint kinetics may improve the ability to identify those at an increased risk of injury. Methods: Three-dimensional kinematics and ground reaction forces of 45 participants were recorded during treadmill running at self-selected speed. Kinematic variables used to estimate specific kinetic metrics included: vertical excursion of the center of mass, foot inclination angle at initial contact, horizontal distance between the center of mass and heel at initial contact, knee flexion angle at initial contact, and peak knee flexion angle during stance. Linear mixed effects models were fitted to explore the association between the kinetic and kinematic measures, including step rate and gender, with final models created using backward variable selection. Results: Models were developed to estimate peak knee extensor moment (R2=0.43), energy absorbed at the knee during loading response (R2=0.58), peak patellofemoral joint reaction force (R2=0.55), peak vertical ground reaction force (R2=0.48), braking impulse (R2=0.50), and average vertical loading rate (R2=0.04). Conclusions: Our findings suggest that insights into important running kinetics can be obtained from a subset of sagittal plane kinematics common to a clinical running analysis. Of note, the limb posture at initial contact influenced subsequent loading patterns in stance. PMID:25156183

  12. Estimated maximal and current brain volume predict cognitive ability in old age.

    PubMed

    Royle, Natalie A; Booth, Tom; Valdés Hernández, Maria C; Penke, Lars; Murray, Catherine; Gow, Alan J; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E; Deary, Ian J; Wardlaw, Joanna M

    2013-12-01

    Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age, and that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability in normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging.

  13. A meta-analysis of sex differences in physical ability: revised estimates and strategies for reducing differences in selection contexts.

    PubMed

    Courtright, Stephen H; McCormick, Brian W; Postlethwaite, Bennett E; Reeves, Cody J; Mount, Michael K

    2013-07-01

    Despite the wide use of physical ability tests for selection and placement decisions in physically demanding occupations, research has suggested that there are substantial male-female differences in the scores on such tests, contributing to adverse impact. In this study, we present updated, revised meta-analytic estimates of sex differences in physical abilities and test three moderators of these differences (selection system design, specificity of measurement, and training) in order to provide insight into possible methods of reducing sex differences in physical ability test scores. Findings revealed that males score substantially better on muscular strength and cardiovascular endurance tests but that there are no meaningful sex differences on movement quality tests. These estimates differ in several ways from past estimates. Results showed that sex differences are similar across selection systems that emphasize basic ability tests versus job simulations. Results also showed that sex differences are smaller for narrow dimensions of muscular strength and that there is substantial variance in the sex differences in muscular strength across different body regions. Finally, we found that training led to greater increases in performance for women than for men on both muscular strength and cardiovascular endurance tests. However, training reduced the male-female differences on muscular strength tests only modestly and actually increased male-female differences in cardiovascular endurance. We discuss the implications of these findings for research on physical ability testing and adverse impact, as well as the practical implications of the results. PMID:23731029

  14. A Longitudinal Analysis of Estimation, Counting Skills, and Mathematical Ability across the First School Year

    ERIC Educational Resources Information Center

    Muldoon, Kevin; Towse, John; Simms, Victoria; Perra, Oliver; Menzies, Victoria

    2013-01-01

    In response to claims that the quality (and in particular linearity) of children's mental representation of number acts as a constraint on number development, we carried out a longitudinal assessment of the relationships between number line estimation, counting, and mathematical abilities. Ninety-nine 5-year-olds were tested on 4 occasions at 3…

  15. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  16. TH-SCORE: A Program for Obtaining Ability Estimates under Different Psychometric Models.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Lorenzo, Urbano

    1998-01-01

    A program for obtaining ability estimates and their standard errors under a variety of psychometric models is documented. The general models considered are (1) classical test theory; (2) item factor analysis for continuous censored responses; and (3) unidimensional and multidimensional item response theory graded response models. (SLD)

  17. Brief Report: Use of DQ for Estimating Cognitive Ability in Young Children with Autism

    ERIC Educational Resources Information Center

    Delmolino, Lara M.

    2006-01-01

    The utility of Developmental Quotients (DQ) from the Psychoeducational Profile--Revised (PEP-R) to estimate cognitive ability in young children with autism was assessed. DQ scores were compared to scores from the Stanford-Binet Intelligence Scales--Fourth Edition (SB-FE) for 27 preschool students with autism. Overall and domain DQ's on the PEP-R…

  18. The Stability of "g" across Different Methods of Estimation.

    ERIC Educational Resources Information Center

    Ree, Malcolm James; Earles, James A.

    1991-01-01

    Fourteen estimates were made of "g" (general cognitive ability) from the normative sample of a multiple-aptitude test battery with a weighted sample representing 25,409,193 men and women. The methods, which included principal components, unrotated principal factors, and hierarchical factor analysis, are equivalent for this test. (SLD)

  19. Longitudinal and concurrent relations among temperament, ability estimation, and injury proneness.

    PubMed

    Schwebel, D C; Plumert, J M

    1999-01-01

    This study examined longitudinal and concurrent relations between temperament, ability estimation, and injury proneness. Longitudinal assessments of Inhibitory Control were collected through a behavioral battery at toddler (33 months) and preschool ages (46 months). Parent-reported measures of Inhibitory Control and Extraversion also were obtained at those ages. At school age (76 months), children participated in a set of tasks to assess overestimation and underestimation of physical abilities. Parents provided reports of children's temperament and injury history at school age. Results showed that children who were high on Extraversion and low on Inhibitory Control as toddlers and preschoolers tended to overestimate their physical abilities and to have more unintentional injuries at age 6. Children low on Extraversion and high on Inhibitory Control tended to underestimate their physical abilities. Implications for injury prevention are discussed. PMID:10368916

  20. Estimating Turbulent Surface Fluxes from Small Unmanned Aircraft: Evaluation of Current Abilities

    NASA Astrophysics Data System (ADS)

    de Boer, G.; Lawrence, D.; Elston, J.; Cassano, J. J.; Mack, J.; Wildmann, N.; Nigro, M. A.; Ivey, M.; Wolfe, D. E.; Muschinski, A.

    2014-12-01

    Heat transfer between the atmosphere and Earth's surface is a key component of the Earth energy balance, making it important in understanding and simulating climate. Arguably, the oceanic air-sea interface and the Polar sea-ice-air interface are amongst the most challenging settings in which to measure these fluxes. This difficulty results partially from the challenges of deploying infrastructure on these surfaces and partially from the inability to obtain spatially representative values over a potentially inhomogeneous surface. Traditionally, sensible (temperature) and latent (moisture) fluxes are estimated using one of several techniques. A preferred method involves eddy correlation, where the cross-correlation between anomalies in vertical motion (w) and temperature (T) or moisture (q) is used to estimate heat transfer. High-frequency measurements of these quantities can be derived using tower-mounted instrumentation. Such systems have historically been deployed over land surfaces or on ships and buoys to calculate fluxes at the air-land or air-sea interface, but such deployments are expensive and challenging to execute, resulting in a lack of spatially diverse measurements. A second ("bulk") technique involves observing horizontal wind speed, temperature, and moisture at a given altitude over an extended time period in order to estimate the surface fluxes. Small Unmanned Aircraft Systems (sUAS) represent a unique platform from which to derive these fluxes. These sUAS can be small (~1 m), lightweight (~700 g), low cost (~$2000), and relatively easy to deploy to remote locations and over inhomogeneous surfaces. We will give an overview of the ability of sUAS to provide the measurements necessary for estimating surface turbulent fluxes. This discussion is based on flights in the vicinity of the 1000 ft Boulder Atmospheric Observatory (BAO) tower and over the US Department of Energy facility at Oliktok Point, Alaska. We will present initial comparisons.

  1. The choice of the ability estimate with asymptotically correct standardized person-fit statistics.

    PubMed

    Sinharay, Sandip

    2016-05-01

    Snijders (2001, Psychometrika, 66, 331) suggested a statistical adjustment to obtain the asymptotically correct standardized versions of a specific class of person-fit statistics. His adjustment has been used to obtain the asymptotically correct standardized versions of several person-fit statistics including the lz statistic (Drasgow et al., 1985, Br. J. Math. Stat. Psychol., 38, 67), the infit and outfit statistics (e.g., Wright & Masters, 1982, Rating scale analysis, Chicago, IL: Mesa Press), and the standardized extended caution indices (Tatsuoka, 1984, Psychometrika, 49, 95). Snijders (2001), van Krimpen-Stoop and Meijer (1999, Appl. Psychol. Meas., 23, 327), Magis et al. (2012, J. Educ. Behav. Stat., 37, 57), Magis et al. (2014, J. Appl. Meas., 15, 82), and Sinharay (2015b, Psychometrika, doi:10.1007/s11336-015-9465-x, 2016b, Corrections of standardized extended caution indices, Unpublished manuscript) have used the maximum likelihood estimate, the weighted likelihood estimate, and the posterior mode of the examinee ability with the adjustment of Snijders (2001). This paper broadens the applicability of the adjustment of Snijders (2001) by showing how other ability estimates such as the expected a posteriori estimate, the biweight estimate (Mislevy & Bock, 1982, Educ. Psychol. Meas., 42, 725), and the Huber estimate (Schuster & Yuan, 2011, J. Educ. Behav. Stat., 36, 720) can be used with the adjustment. A simulation study is performed to examine the Type I error rate and power of two asymptotically correct standardized person-fit statistics with several ability estimates. A real data illustration follows.
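As background for the ability estimates this paper compares, a minimal quadrature sketch of one of them, the expected a posteriori (EAP) estimate under a 2PL model with a standard normal prior, might look like the following. The item parameters are hypothetical, and this is not the paper's implementation.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response to one item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_estimate(responses, a, b, nodes=81):
    """Expected a posteriori ability under a standard normal prior,
    computed on a fixed quadrature grid over [-4, 4]."""
    grid = [-4.0 + 8.0 * i / (nodes - 1) for i in range(nodes)]
    post = []
    for t in grid:
        w = math.exp(-0.5 * t * t)  # unnormalized N(0, 1) prior density
        for u, ai, bi in zip(responses, a, b):
            p = p_2pl(t, ai, bi)
            w *= p if u == 1 else 1.0 - p
        post.append(w)
    total = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / total

# Five hypothetical items; the examinee misses only the hardest one
a = [1.0, 1.2, 0.8, 1.5, 1.0]
b = [-1.0, -0.5, 0.0, 0.5, 1.0]
theta_hat = eap_estimate([1, 1, 1, 1, 0], a, b)
```

Unlike the MLE, the EAP estimate is finite even for all-correct or all-incorrect response patterns, which is one reason papers such as this one consider it alongside the MLE and weighted likelihood estimates.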

  2. The Effects of Answer Copying on the Ability Level Estimates of Cheater Examinees in Answer Copying Pairs

    ERIC Educational Resources Information Center

    Zopluoglu, Cengiz; Davenport, Ernest C., Jr.

    2011-01-01

    The purpose of this study was to examine the effects of answer copying on the ability level estimates of cheater examinees in answer copying pairs. The study generated answer copying pairs for each of 1440 conditions, source ability (12) x cheater ability (12) x amount of copying (10). The average difference between the ability level estimates…

  3. An assessment of vapour pressure estimation methods.

    PubMed

    O'Meara, Simon; Booth, Alastair Murray; Barley, Mark Howard; Topping, David; McFiggans, Gordon

    2014-09-28

    Laboratory measurements of vapour pressures for atmospherically relevant compounds were collated and used to assess the accuracy of vapour pressure estimates generated by seven estimation methods and the impacts on predicted secondary organic aerosol. Of the vapour pressure estimation methods that were applicable to all the test set compounds, the Lee-Kesler [Reid et al., The Properties of Gases and Liquids, 1987] method showed the lowest mean absolute error and the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2008, 269, 117-133] method showed the lowest mean bias error (when both used normal boiling points estimated using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] method). The effect of varying vapour pressure estimation methods on secondary organic aerosol (SOA) mass loading and composition was investigated using an absorptive partitioning equilibrium model. The Myrdal and Yalkowsky [Myrdal and Yalkowsky, Ind. Eng. Chem. Res., 1997, 36, 2494-2499] vapour pressure estimation method using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] normal boiling point gave the most accurate estimation of SOA loading despite not being the most accurate for vapour pressures alone. PMID:25105180
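The two headline metrics in this comparison, mean absolute error and mean bias error, are straightforward to compute. The log10 vapour pressure values below are hypothetical, not from the paper's test set.

```python
def mean_absolute_error(estimated, observed):
    """Average magnitude of the estimation error."""
    return sum(abs(e - o) for e, o in zip(estimated, observed)) / len(observed)

def mean_bias_error(estimated, observed):
    """Average signed error; a negative value indicates systematic underestimation."""
    return sum(e - o for e, o in zip(estimated, observed)) / len(observed)

# Hypothetical log10 vapour pressures (atm) for three test compounds
observed = [-5.2, -6.8, -4.1]
estimated = [-5.0, -7.1, -4.1]
mae = mean_absolute_error(estimated, observed)  # ~0.17
mbe = mean_bias_error(estimated, observed)      # ~-0.03
```

MAE rewards closeness regardless of direction, while MBE exposes a systematic tendency to over- or under-predict, which is why the paper reports the two winners separately.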

  4. [Estimation of combining ability of specialized types of the big white breed].

    PubMed

    Berezovskiĭ, N D; Giria, V N

    1991-01-01

    The combining ability of the specialized intrabreed types of Estonian Big White (EBW-1) and Ukrainian Big White (UBW) selections of pigs has been studied from their productivity results using the first Griffing method (1956). Close agreement of theoretical and practical indices for the characters under study proves the possibility of applying this method to predict the efficiency of interline hybridization.

  5. Estimation of avidin activity by two methods.

    PubMed

    Borza, B; Marcheş, F; Repanovici, R; Burducea, O; Popa, L M

    1991-01-01

    The biological activity of avidin was estimated by two different methods. The spectrophotometric method used titration of avidin with biotin in the presence of 4-hydroxyazobenzene-2'-carboxylic acid as an indicator. In the radioisotopic determination, titration with tritiated biotin was performed. Both methods led to the same results, but the spectrophotometric one consumes less avidin and is more rapid, making it more convenient.

  6. [Bayesian methods for genomic breeding value estimation].

    PubMed

    Wang, Chonglong; Ding, Xiangdong; Liu, Jianfeng; Yin, Zongjun; Zhang, Qin

    2014-02-01

    Estimation of genomic breeding values is the key step in genomic selection. The successful application of genomic selection depends on the accuracy of genomic estimated breeding values, which is mostly determined by the estimation method. Bayes-type and BLUP-type methods are the two main methods which have been widely studied and used. Here, we systematically introduce the currently proposed Bayesian methods, and summarize their effectiveness and improvements. Results from both simulated and real data showed that the accuracies of Bayesian methods are higher than those of BLUP methods, especially for the traits which are influenced by QTL with large effect. Because the theories and computation of Bayesian methods are relatively complicated, their use in practical breeding is less common than BLUP methods. However, with the development of fast algorithms and the improvement of computer hardware, the computational problem of Bayesian methods is expected to be solved. In addition, further studies on the genetic architecture of traits will provide Bayesian methods more accurate prior information, which will make their advantage in accuracy of genomic estimated breeding values more prominent. Therefore, the application of Bayesian methods will be more extensive.

  7. Estimating the level of functional ability of children identified as likely to have an intellectual disability.

    PubMed

    Murray, Aja; McKenzie, Karen; Booth, Tom; Murray, George

    2013-11-01

    Screening tools can provide an indication of whether a child may have an intellectual disability (ID). Item response theory (IRT) analyses can be used to assess whether the statistical properties of the tools are such that their utility extends beyond their use as a screen for ID. We used non-parametric IRT scaling analyses to investigate whether the Child and Adolescent Intellectual Disability Screening Questionnaire (CAIDS-Q) possessed the statistical properties that would suggest its use could be extended to estimate levels of functional ability and to estimate which (if any) features associated with intellectual impairment are consistently indicative of lower or higher levels of functional ability. The validity of the two proposed applications was assessed by evaluating whether the CAIDS-Q conformed to the properties of the Monotone Homogeneity Model (MHM), characterised by uni-dimensionality, local independence and latent monotonicity and the Double Monotone Model (DMM), characterised by the assumptions of the MHM and, in addition, of non-intersecting item response functions. We analysed these models using CAIDS-Q data from 319 people referred to child clinical services. Of these, 148 had a diagnosis of ID. The CAIDS-Q was found to conform to the properties of the MHM but not the DMM. In practice, this means that the CAIDS-Q total scores can be used to quickly estimate the level of a person's functional ability. However, items of the CAIDS-Q did not show invariant item ordering, precluding the use of individual items in isolation as accurate indices of a person's level of functional ability. PMID:24036121

  8. The Effects of Three Instructional Strategies on Prospective Teachers' Ability to Transfer Estimation Skills for Metric Length and Area.

    ERIC Educational Resources Information Center

    Attivo, Barbara; Trueblood, Cecil R.

    The purpose of this study was to investigate how the nature of metric estimation skill instruction affects prospective elementary and special education teachers' abilities to estimate metric length, area, and volume. Four types of estimation skills were identified by an estimation matrix. Three instructional strategies were selected: (1) a…

  9. Developing Writing-Reading Abilities through Semiglobal Methods

    ERIC Educational Resources Information Center

    Macri, Cecilia; Bocos, Musata

    2013-01-01

    This research was intended to underline the importance of the semi-global strategies used within thematic projects for developing writing/reading abilities in first-grade pupils. Four different coordinates were chosen as the main variables of this research: the level of phonological awareness, the degree in which writing-reading…

  10. Imagining the Music: Methods for Assessing Musical Imagery Ability

    ERIC Educational Resources Information Center

    Clark, Terry; Williamon, Aaron

    2012-01-01

    Timing profiles of live and imagined performances were compared with the aim of creating a context-specific measure of musicians' imagery ability. Thirty-two advanced musicians completed imagery use and vividness surveys, and then gave two live and two mental performances of a two-minute musical excerpt, tapping along with the beat of the mental…

  11. A simple method to estimate interwell autocorrelation

    SciTech Connect

    Pizarro, J.O.S.; Lake, L.W.

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
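The semivariogram models named in this abstract have standard closed forms; a brief sketch of the spherical and exponential models follows (the paper's chart-based estimation procedure itself is not reproduced, and the lag, range, and sill values are arbitrary).

```python
import math

def spherical(h, rng, sill):
    """Spherical semivariogram: rises from 0 and reaches the sill at lag h = rng."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

def exponential(h, rng, sill):
    """Exponential semivariogram: approaches the sill asymptotically
    (practical range is about 3 * rng)."""
    return sill * (1.0 - math.exp(-h / rng))

gamma_near = spherical(10.0, 100.0, 2.0)   # small at short lags
gamma_far = spherical(250.0, 100.0, 2.0)   # equals the sill beyond the range
```

Fitting one of these models to data in the vertical direction supplies the vertical range and variance that the paper's charts take as inputs.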

  12. Computer Adaptive Practice of Maths Ability Using a New Item Response Model for on the Fly Ability and Difficulty Estimation

    ERIC Educational Resources Information Center

    Klinkenberg, S.; Straatemeier, M.; van der Maas, H. L. J.

    2011-01-01

    In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of…
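The core Elo-style update behind such a system can be sketched as follows. This is a minimal illustration: the published model adds an explicit scoring rule incorporating response time and uncertainty-dependent K factors, which are omitted here, and the K values below are arbitrary.

```python
import math

def elo_update(theta, beta, score, k_person=0.3, k_item=0.3):
    """One Elo-style update after a scored response.

    theta: person ability estimate, beta: item difficulty estimate,
    score: observed result in [0, 1] (1 = correct).
    """
    expected = 1.0 / (1.0 + math.exp(-(theta - beta)))  # Rasch-like success probability
    theta_new = theta + k_person * (score - expected)   # ability moves up on a "win"
    beta_new = beta - k_item * (score - expected)       # difficulty moves the other way
    return theta_new, beta_new

# Equal ratings, correct answer: ability rises, difficulty falls
theta, beta = elo_update(0.0, 0.0, 1.0)
```

Because both the person and the item ratings are updated after every response, ability and difficulty are estimated on the fly as children practice, with no separate calibration phase.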

  13. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2010-01-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the variation of the inclination angle using the fixed trace method, a recursive least squares technique. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, separated by 10 s stationary intervals, with their neck, hip, and knee joints fixed, and then to return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) an eyes-open posture for the healthy condition, 2) an eyes-closed posture for visual impairment, and 3) a one-legged posture for lower-extremity muscle weakness. The estimated parameters KP and KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The results indicate that KP, KD, and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also reflects the effect of visual impairment. These findings suggest that the proposed method is valid for quantitative assessment of standing postural control ability.
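A generic recursive least squares update of the kind described can be sketched as below, here with plain exponential forgetting rather than the paper's fixed trace variant (which rescales the covariance to keep its trace constant). The gains KP = 800 and KD = 50 and the sinusoidal regressors are hypothetical, chosen only to show convergence on noiseless data.

```python
import math

def rls_step(w, P, x, y, lam=0.98):
    """One recursive least squares update with forgetting factor lam.

    w: parameter estimates [KP, KD], P: 2x2 covariance matrix,
    x: regressors [angle, angular velocity], y: measured torque.
    """
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    denom = lam + x[0] * Px[0] + x[1] * Px[1]
    k = [Px[0] / denom, Px[1] / denom]         # gain vector
    err = y - (w[0] * x[0] + w[1] * x[1])      # prediction error
    w = [w[0] + k[0] * err, w[1] + k[1] * err]
    P = [[(P[0][0] - k[0] * Px[0]) / lam, (P[0][1] - k[0] * Px[1]) / lam],
         [(P[1][0] - k[1] * Px[0]) / lam, (P[1][1] - k[1] * Px[1]) / lam]]
    return w, P

# Noiseless demo: recover hypothetical true gains KP = 800, KD = 50
w, P = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]
for i in range(200):
    x = [math.sin(0.1 * i), math.cos(0.1 * i)]  # persistently exciting regressors
    y = 800.0 * x[0] + 50.0 * x[1]
    w, P = rls_step(w, P, x, y)
```

The forgetting factor discounts old samples exponentially, which is what lets the estimates track the temporal parameter changes the study is after.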

  14. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2009-12-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the variation of the inclination angle using the fixed trace method, a recursive least squares technique. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, separated by 10 s stationary intervals, with their neck, hip, and knee joints fixed, and then to return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) an eyes-open posture for the healthy condition, 2) an eyes-closed posture for visual impairment, and 3) a one-legged posture for lower-extremity muscle weakness. The estimated parameters KP and KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The results indicate that KP, KD, and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also reflects the effect of visual impairment. These findings suggest that the proposed method is valid for quantitative assessment of standing postural control ability.

  15. Variational bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach using a variational inference method, the so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared for estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed a strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  16. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into colonies. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without the need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, both of which contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 on data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200.
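The basic serial-dilution arithmetic underlying such estimates can be sketched as follows. A simple 25-250 countable-range rule stands in for the paper's colony-size and plate-area weighting, and all counts are hypothetical.

```python
def estimate_cfu(colony_count, dilution_factor, plated_volume_ml):
    """Point estimate of viable count (CFU/ml) from a single plate."""
    return colony_count * dilution_factor / plated_volume_ml

def best_plate(counts, low=25, high=250):
    """Index of the first plate in the conventional countable range,
    or None if no plate qualifies."""
    for i, c in enumerate(counts):
        if low <= c <= high:
            return i
    return None

# Ten-fold series: plate i received the 10**(i + 1) dilution, 0.1 ml plated
counts = [2800, 310, 28, 3]
i = best_plate(counts)                             # the plate with 28 colonies
cfu = estimate_cfu(counts[i], 10 ** (i + 1), 0.1)  # about 2.8e5 CFU/ml
```

The paper's contribution is precisely in replacing the fixed countable-range heuristic with an optimal-count criterion, so this sketch shows the baseline it improves upon.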

  17. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  18. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.

  19. Implicit solvent methods for free energy estimation

    PubMed Central

    Decherchi, Sergio; Masetti, Matteo; Vyalov, Ivan; Rocchia, Walter

    2014-01-01

    Solvation is a fundamental contribution in many biological processes and especially in molecular binding. Its estimation can be performed by means of several computational approaches. The aim of this review is to give an overview of existing theories and methods to estimate solvent effects, with a specific focus on the category of implicit solvent models and their use in Molecular Dynamics. In many of these models, the solvent is considered as a homogeneous continuum medium, while the solute can be represented at atomic detail and at different levels of theory. Despite their degree of approximation, implicit methods are still widely employed due to their trade-off between accuracy and efficiency. Their derivation is rooted in statistical mechanics and integral equation theory, some of the related details being provided here. Finally, methods that combine implicit solvent models and molecular dynamics simulation are briefly described. PMID:25193298
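    The simplest member of the implicit-solvent family discussed above is the Born model, which treats an ion as a charge in a spherical cavity inside a continuum dielectric. The sketch below evaluates the Born electrostatic solvation free energy; the Born radius chosen for Na+ is an assumed illustrative value, since Born radii are themselves parameterization choices:

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_AVOGADRO = 6.02214076e23   # 1/mol

def born_solvation_energy(charge_e, radius_angstrom, eps_r=78.5):
    """Born model: electrostatic solvation free energy (kJ/mol) of one ion,
    of charge `charge_e` elementary charges in a cavity of the given radius,
    transferred from vacuum into a dielectric of relative permittivity eps_r."""
    q = charge_e * E_CHARGE
    a = radius_angstrom * 1e-10
    dg_joule = -(q ** 2) / (8 * math.pi * EPS0 * a) * (1 - 1 / eps_r)
    return dg_joule * N_AVOGADRO / 1000.0  # kJ/mol

# Na+ with an assumed Born radius of 1.68 Å in water (eps_r ~ 78.5)
dg = born_solvation_energy(1, 1.68)
```

    The result is a few hundred kJ/mol of stabilization, the right order of magnitude for monovalent ion hydration; note the characteristic quadratic scaling with charge.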

  20. Disentangling the relationship between cognitive estimation abilities and executive functions: a study on patients with Parkinson's disease.

    PubMed

    D'Aniello, Guido Edoardo; Scarpina, Federica; Albani, Giovanni; Castelnuovo, Gianluca; Mauro, Alessandro

    2015-08-01

    The cognitive estimation test (CET) measures cognitive estimation abilities: it assesses the ability to apply reasoning strategies to answer questions that usually cannot lead to a clear and exact reply. Since it requires the activation of an intricate ensemble of cognitive functions, there is an ongoing debate in the literature regarding whether the CET represents a measurement of global cognitive abilities or a pure measure of executive functions. In the present study, the CET together with a neuropsychological assessment focused on executive functions was administered to thirty patients with Parkinson's disease without signs of dementia. The CET correlated with measures of verbal working memory and semantic knowledge, but not with other dimensions of executive domains, such as verbal phonemic fluency, ability to manage real-world interferences, or visuospatial reasoning. According to our results, cognitive estimation abilities appeared to engage a defined cognitive path that includes executive functions, namely working memory and semantic knowledge. PMID:25791888

  2. A method for estimating soil moisture availability

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1985-01-01

    A method for estimating values of soil moisture based on measurements of infrared surface temperature is discussed. A central element in the method is a boundary layer model. Although it has been shown that soil moistures determined by this method using satellite measurements do correspond in a coarse fashion to the antecedent precipitation, the accuracy and exact physical interpretation (with respect to ground water amounts) are not well known. This area of ignorance, which currently impedes the practical application of the method to problems in hydrology, meteorology and agriculture, is largely due to the absence of corresponding surface measurements. Preliminary field measurements made over France have led to the development of a promising vegetation formulation (Taconet et al., 1985), which has been incorporated in the model. It is necessary, however, to test the vegetation component, and the entire method, over a wide variety of surface conditions and crop canopies.

  3. Drop jumping as a training method for jumping ability.

    PubMed

    Bobbert, M F

    1990-01-01

    Vertical jumping ability is of importance for good performance in sports such as basketball and volleyball. Coaches are in need of exercises that consume only little time and still help to improve their players' jumping ability, without involving a high risk of injury. Drop jumping is assumed to satisfy these requirements. This assumption is supported by a review of results of training studies. However, it appears that regular jumping exercises can be just as helpful. The same holds for exercises with weights, provided the subjects have no weight-training history. In fact, for unskilled jumpers who have no weight-training history, the effects of training programmes utilising these different exercises are additive. The most effective, efficient and safe way for a coach to improve the jumping achievement of his athletes may well be to submit them first to a training programme utilising regular jumps, then to a weight-training programme and finally to a drop jump training programme. In drop jump training programmes themselves, the improvement in jumping height varies greatly among studies. This variation cannot be explained satisfactorily with the information available on subjects and training programmes. Given the current state of knowledge, coaches seem to have no other option than to strictly copy a programme which has proved to be very effective. Obviously there is a need for more systematic research of the relationship between design and effect of drop jump training programmes. The most important variable to be controlled is drop jumping technique. From a review of biomechanical studies of drop jumping, it becomes clear that jumping technique strongly affects the mechanical output of muscles. The biomechanics of 2 techniques are discussed. In the bounce drop jump the downward movement after the drop is reversed as soon as possible into an upward push-off, while in the countermovement drop jump this is done more gradually by increasing the amplitude of the

  5. Improvement of Source Number Estimation Method for Single Channel Signal

    PubMed Central

    Du, Bolun; He, Yunze

    2016-01-01

    Source number estimation methods for single-channel signals are investigated and improvements for each method are suggested in this work. First, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin’s disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), performs better than GDE at low SNR; however, it cannot handle signals containing colored noise. Conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is unsatisfactory. To resolve these complementary weaknesses, this work improves both methods: a diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is applied to optimize the data covariance matrix and thereby improve the performance of the GDE method. Simulation results show that the performance of both original methods is substantially improved. PMID:27736959
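    Two of the building blocks named above, the delay process that turns a single channel into pseudo-multichannel data and diagonal loading of the sample covariance, can be sketched directly. This is a minimal illustration, not the authors' implementation; the loading factor is an assumed value:

```python
import math

def delay_embed(x, m):
    # Convert a single-channel sequence into m pseudo-channels by delays.
    n = len(x) - m + 1
    return [[x[t + i] for t in range(n)] for i in range(m)]

def loaded_covariance(channels, loading=0.1):
    m, n = len(channels), len(channels[0])
    means = [sum(ch) / n for ch in channels]
    cov = [[sum((channels[i][t] - means[i]) * (channels[j][t] - means[j])
                for t in range(n)) / n
            for j in range(m)] for i in range(m)]
    # Diagonal loading: add a small multiple of the average channel power to
    # the diagonal to stabilise the eigenvalue spread before applying an
    # ITC criterion such as MDL (loading value is an assumption).
    avg_power = sum(cov[i][i] for i in range(m)) / m
    for i in range(m):
        cov[i][i] += loading * avg_power
    return cov

channels = delay_embed([math.sin(0.3 * t) for t in range(200)], 4)
cov = loaded_covariance(channels)
```

    An MDL or GDE step would then operate on the eigenvalues of `cov`; the loading simply keeps the small eigenvalues away from zero.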

  6. Expanding the WAIS-III Estimate of Premorbid Ability for Canadians (EPAC).

    PubMed

    Lange, Rael T; Schoenberg, Mike R; Saklofske, Donald H; Woodward, Todd S; Brickell, Tracey A

    2006-07-01

    Since the release of the Canadian WAIS-III normative data in 2001 (Wechsler, 2001), the clinical application of these norms has been limited by the absence of a method to estimate premorbid functioning. However, Lange, Schoenberg, Woodward, and Brickell (2005) recently developed regression algorithms that estimate premorbid FSIQ, VIQ and PIQ scores for use with the Canadian WAIS-III norms. The purpose of this study was to expand work by Lange and colleagues by developing regression algorithms to estimate premorbid GAI (Saklofske et al., 2005), VCI, and POI scores. Participants were the Canadian WAIS-III standardization sample (n = 1,105). The sample was randomly divided into two groups (Development and Validation group). Using the Development group, a total of 14 regression algorithms were generated to estimate GAI, VCI, and POI scores by combining subtest performance (i.e., Vocabulary, Information, Matrix Reasoning, and Picture Completion) with demographic variables (i.e., age, education, ethnicity, region of the country, and gender). The algorithms accounted for a maximum of 77% of the variance in GAI, 78% of the variance in VCI, and 63% of the variance in POI. In the Validation Group, correlations between predicted and obtained scores were high (GAI = .70 to .88; VCI = .87 to .88; POI = .71 to .80). Evaluation of prediction errors revealed that the majority of estimated GAI, VCI, and POI scores fell within a 95% CI band (93.5% to 97.0%) and within 10 points of obtained index scores (72.3% to 85.6%) depending on the subtests used. These algorithms provide a promising means for estimating premorbid GAI, VCI, and POI scores using the Canadian WAIS-III norms. PMID:16723324
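    The algorithms above are regression equations combining subtest scores with demographics. As a toy illustration of the general idea only (the calibration data, coefficients, and single-predictor form below are hypothetical, not the published Canadian WAIS-III algorithms), a premorbid index estimate can be read off a least-squares line:

```python
def fit_ols(xs, ys):
    """One-predictor least squares: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical calibration sample: Vocabulary scaled scores vs VCI scores.
vocab = [4, 6, 8, 10, 12, 14, 16]
vci = [72, 83, 90, 101, 108, 118, 128]
a, b = fit_ols(vocab, vci)
# Estimated "premorbid" VCI for a patient with a Vocabulary score of 11:
estimate = a + b * 11
```

    The published algorithms extend this with multiple subtests and demographic predictors, and report variance explained and confidence bands as in the abstract.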

  7. On methods of estimating cosmological bulk flows

    NASA Astrophysics Data System (ADS)

    Nusser, Adi

    2016-01-01

    We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B under one of these definitions, which coincide only for a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it would be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard Maximum Likelihood estimation of B for the entire sample, independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
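    The standard Maximum Likelihood estimator mentioned above has a simple closed form: minimizing the weighted misfit between observed radial velocities and the projection of a constant flow B reduces to a 3x3 linear system. The sketch below solves it with Cramer's rule on synthetic, noise-free data (all names and the toy directions are illustrative):

```python
def bulk_flow_ml(directions, radial_velocities, sigmas):
    """Maximum-likelihood bulk flow from radial peculiar velocities.
    Minimises sum_i (u_i - B . rhat_i)^2 / sigma_i^2 by solving the
    3x3 normal equations A B = c."""
    A = [[0.0] * 3 for _ in range(3)]
    c = [0.0] * 3
    for rhat, u, s in zip(directions, radial_velocities, sigmas):
        w = 1.0 / (s * s)
        for i in range(3):
            c[i] += w * u * rhat[i]
            for j in range(3):
                A[i][j] += w * rhat[i] * rhat[j]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(A)
    B = []
    for k in range(3):  # Cramer's rule: replace column k with c
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = c[i]
        B.append(det3(Ak) / D)
    return B

# Synthetic check: radial velocities generated from a known bulk flow
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.6, 0.8, 0.0), (0.0, 0.6, 0.8)]
B_true = (300.0, -100.0, 50.0)
us = [sum(b * r for b, r in zip(B_true, d)) for d in dirs]
B_est = bulk_flow_ml(dirs, us, [1.0] * len(dirs))
```

    With noise-free, consistent data the estimator recovers B exactly; the abstract's point is how the prior and the radial weighting modify this baseline.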

  8. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
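    Parametric weight-cost relationships of the kind examined here are typically fit as power laws (cost estimating relationships, CERs) by least squares in log-log space. The sketch below fits such a CER; the weights and the exponent are synthetic, not values from the paper:

```python
import math

def fit_power_law(weights, costs):
    """Fit cost = a * weight**b by least squares on the logarithms."""
    lx = [math.log(w) for w in weights]
    ly = [math.log(c) for c in costs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical historical programs: dry weight (kg) vs development cost ($M),
# generated here from a known exponent so the fit can be checked.
w = [500, 1200, 3000, 7500, 15000]
c = [2.0 * x ** 0.7 for x in w]
a, b = fit_power_law(w, c)
```

    An exponent below 1 reproduces the familiar economy of scale in such CERs: doubling weight less than doubles cost.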

  9. An Analytical Method of Estimating Turbine Performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1948-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.

  10. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.

  11. The Study on Educational Technology Abilities Evaluation Method

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Traditional evaluation methods often fail to measure what a test is actually intended to measure, so their results cannot serve as a sound basis for evaluation, and conclusions drawn from them carry little weight. The system described here makes full use of educational technology and is grounded in educational and psychological theory. Taking the educational technology abilities of primary and secondary school teachers as the evaluation goal, it is organized around the objects being evaluated and supporting evaluation tools, and applies a variety of evaluation methods from multiple angles to establish an informal evaluation system.

  12. A Novel Method for Estimating Linkage Maps

    PubMed Central

    Tan, Yuan-De; Fu, Yun-Xin

    2006-01-01

    The goal of linkage mapping is to find the true order of loci on a chromosome. Since the number of possible orders is large even for a modest number of loci, finding the optimal solution is an NP-hard problem, akin to the traveling salesman problem (TSP). Although a number of algorithms are available, many either recover the true order of loci with low accuracy or require tremendous amounts of computational resources, making them difficult to use for reconstructing a large-scale map. In this article we developed a novel method called unidirectional growth (UG) to help solve this problem. The UG algorithm sequentially constructs the linkage map on the basis of novel results about additive distance. It not only is fast but also has a very high accuracy in recovering the true order of loci according to our simulation studies. Since the UG method requires n − 1 cycles to estimate the ordering of n loci, it is particularly useful for estimating linkage maps consisting of hundreds or even thousands of linked codominant loci on a chromosome. PMID:16783016
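    The key property exploited above is additivity: for loci on a line, pairwise distances determine the order. The sketch below is not the UG algorithm itself, but a minimal demonstration of that principle: the two most distant loci are the chromosome ends, and every other locus falls into place by its distance from one end. Loci names and positions are invented:

```python
from itertools import combinations

def order_from_distances(loci, dist):
    d = lambda a, b: dist[(a, b)]
    # For perfectly additive (collinear) distances, the most distant pair
    # are the two ends of the map.
    a0, _ = max(combinations(loci, 2), key=lambda p: d(*p))
    # Sorting by distance from one end recovers the linear order.
    return sorted(loci, key=lambda x: 0 if x == a0 else d(a0, x))

# Toy example with known (hidden) positions
pos = {'A': 0, 'B': 3, 'C': 7, 'D': 12, 'E': 20}
loci = ['C', 'A', 'E', 'B', 'D']
dist = {(x, y): abs(pos[x] - pos[y]) for x in loci for y in loci if x != y}
ordering = order_from_distances(loci, dist)
```

    Real recombination-based distances are noisy and only approximately additive, which is why robust sequential constructions such as UG are needed.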

  13. Demographic estimation methods for plants with dormancy

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2004-01-01

    Demographic studies in plants appear simple because unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life-cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life-states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as 0VFVVF000. A zero means that an individual was not seen in a year and a letter denotes its state for years when it was seen aboveground. V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as 0VF00F000 may then occur where the meaning of zeroes becomes ambiguous. A zero can either mean a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared among each other. In our talk and in Kéry et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state-transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kéry et al., submitted) and Cypripedium reginae (Kéry & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620

  15. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when estimating the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool with a variety of sound probabilistic methods a user can employ, and that the new LHS module is a valuable enhancement of the program.
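    The comparison above rests on the variance-reduction property of Latin Hypercube Sampling: stratifying the unit interval before mapping through the inverse CDF gives lower-variance estimates of distribution parameters than plain Monte Carlo at the same sample size. A minimal one-dimensional sketch (toy linear response, not a NESSUS test case):

```python
import random
import statistics

def lhs_sample(n, rng):
    # 1-D Latin Hypercube: one stratified uniform draw per equal-probability
    # bin, shuffled, then mapped through the standard-normal inverse CDF.
    u = [(k + rng.random()) / n for k in range(n)]
    rng.shuffle(u)
    return [statistics.NormalDist().inv_cdf(p) for p in u]

def mc_sample(n, rng):
    # Plain Monte Carlo draws from the standard normal, for comparison.
    return [statistics.NormalDist().inv_cdf(rng.random()) for _ in range(n)]

rng = random.Random(42)
response = lambda x: 3.0 * x + 10.0   # toy stochastic response
lhs_vals = [response(x) for x in lhs_sample(500, rng)]
mean_est = statistics.fmean(lhs_vals)
```

    Repeating both samplers many times and comparing the spread of `mean_est` across repetitions would reproduce, in miniature, the estimation-error comparison reported for NESSUS.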

  16. Molecular-clock methods for estimating evolutionary rates and timescales.

    PubMed

    Ho, Simon Y W; Duchêne, Sebastián

    2014-12-01

    The molecular clock presents a means of estimating evolutionary rates and timescales using genetic data. These estimates can lead to important insights into evolutionary processes and mechanisms, as well as providing a framework for further biological analyses. To deal with rate variation among genes and among lineages, a diverse range of molecular-clock methods have been developed. These methods have been implemented in various software packages and differ in their statistical properties, ability to handle different models of rate variation, capacity to incorporate various forms of calibrating information and tractability for analysing large data sets. Choosing a suitable molecular-clock model can be a challenging exercise, but a number of model-selection techniques are available. In this review, we describe the different forms of evolutionary rate heterogeneity and explain how they can be accommodated in molecular-clock analyses. We provide an outline of the various clock methods and models that are available, including the strict clock, local clocks, discrete clocks and relaxed clocks. Techniques for calibration and clock-model selection are also described, along with methods for handling multilocus data sets. We conclude our review with some comments about the future of molecular clocks.
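    The strictest of the clock models surveyed above reduces to simple arithmetic: a fossil calibration fixes the rate, and that rate dates uncalibrated nodes. The numbers below are invented for illustration:

```python
def strict_clock_rate(distance, calib_time):
    """Substitution rate per lineage under a strict clock, from the genetic
    distance between two taxa and a calibrated divergence time (the factor
    of 2 accounts for the two lineages since their split)."""
    return distance / (2.0 * calib_time)

def divergence_time(distance, rate):
    """Date an uncalibrated node by reusing the clock rate."""
    return distance / (2.0 * rate)

# Calibration: two taxa 0.12 substitutions/site apart diverged 30 Myr ago.
rate = strict_clock_rate(0.12, 30.0)   # substitutions/site/Myr per lineage
t = divergence_time(0.09, rate)        # date for a second, uncalibrated pair
```

    Local, discrete, and relaxed clocks generalize this by letting `rate` vary across branches, which is what the model-selection machinery in the review is choosing between.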

  17. A Critique of Raju and Oshima's Prophecy Formulas for Assessing the Reliability of Item Response Theory-Based Ability Estimates

    ERIC Educational Resources Information Center

    Wang, Wen-Chung

    2008-01-01

    Raju and Oshima (2005) proposed two prophecy formulas based on item response theory in order to predict the reliability of ability estimates for a test after change in its length. The first prophecy formula is equivalent to the classical Spearman-Brown prophecy formula. The second prophecy formula is misleading because of an underlying false…

  18. Effect of Person Cluster on Accuracy of Ability Estimation of Computerized Adaptive Testing in K-12 Education Assessment

    ERIC Educational Resources Information Center

    Wang, Shudong; Jiao, Hong; He, Wei

    2011-01-01

    The ability estimation procedure is one of the most important components in a computerized adaptive testing (CAT) system. Currently, all CATs that provide K-12 student scores are based on the item response theory (IRT) model(s); while such application directly violates the assumption of independent sample of a person in IRT models because ability…

  19. Empirical Power and Type I Error Rates for an IRT Fit Statistic That Considers the Precision of Ability Estimates.

    ERIC Educational Resources Information Center

    Stone, Clement A.

    2003-01-01

    Developed and investigated a goodness-of-fit statistic that considers the uncertainty with which ability is estimated and a resampling-based hypothesis testing procedure. Simulation study results indicate that the procedure should be useful for evaluating goodness-of-fit item response theory models for most testing applications when uncertainty in…

  20. The Confounding Effects of Ability, Item Difficulty, and Content Balance within Multiple Dimensions on the Estimation of Unidimensional Thetas

    ERIC Educational Resources Information Center

    Matlock, Ki Lynn

    2013-01-01

    When test forms that have equal total test difficulty and number of items vary in difficulty and length within sub-content areas, an examinee's estimated score may vary across equivalent forms, depending on how well his or her true ability in each sub-content area aligns with the difficulty of items and number of items within these areas.…

  1. Using optimal estimation method for upper atmospheric Lidar temperature retrieval

    NASA Astrophysics Data System (ADS)

    Zou, Rongshi; Pan, Weilin; Qiao, Shuai

    2016-07-01

    Conventional ground-based Rayleigh lidar temperature retrievals use an integration technique, which has the limitation that temperatures retrieved at the greatest heights must be discarded because a seed value has to be assumed to initialize the integration at the highest altitude. Here we suggest a method that can incorporate information from various sources to improve the quality of the retrieval. This approach inverts the lidar equation via the optimal estimation method (OEM), based on Bayesian theory together with a Gaussian statistical model. It presents many advantages over the conventional approach: 1) it can incorporate information from multiple heterogeneous sources; 2) it provides diagnostic information about retrieval quality; 3) it can determine the vertical resolution and the maximum height up to which the retrieval is mostly independent of the a priori profile. This paper compares one-hour temperature profiles retrieved using the conventional and optimal estimation methods at Golmud, Qinghai province, China. The OEM results agree better with the SABER profile than the conventional results do; in some regions, however, the retrieved temperature is much lower than the SABER profile, a very different result from previous studies, and further work is needed to explain this. The success of applying OEM to temperature retrieval validates its use as a retrieval framework for large synthetic observation systems that include various active remote sensing instruments, incorporating all available measurement information into the model and analyzing groups of measurements simultaneously to improve the results.
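    At its core, the Gaussian OEM update combines the a priori state and the measurement weighted by their inverse variances. A scalar sketch (single temperature value rather than a full profile; the numbers are illustrative, not Golmud data) also shows where the diagnostic information comes from:

```python
def oem_scalar(prior, prior_var, measurement, meas_var):
    """Scalar optimal estimation for the Gaussian case: inverse-variance
    weighted combination of an a priori value and a measurement."""
    w_prior, w_meas = 1.0 / prior_var, 1.0 / meas_var
    posterior = (w_prior * prior + w_meas * measurement) / (w_prior + w_meas)
    posterior_var = 1.0 / (w_prior + w_meas)
    # Averaging-kernel-like diagnostic: fraction of the estimate coming from
    # the measurement rather than the a priori value.
    gain = w_meas / (w_prior + w_meas)
    return posterior, posterior_var, gain

# A priori 210 K +/- 15 K; lidar-derived estimate 230 K +/- 5 K
T, var, gain = oem_scalar(210.0, 15.0**2, 230.0, 5.0**2)
```

    A gain near 1 marks altitudes where the retrieval is essentially independent of the a priori profile, which is exactly the diagnostic the abstract credits to OEM; the full method replaces these scalars with profile vectors and covariance matrices.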

  2. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
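    The first of the four methods relies on the state-space representation of the DAFS model, for which the Kalman filter yields factor-score estimates. The sketch below is a one-factor scalar version with an assumed model form (one indicator, AR(1) factor), purely to illustrate the filtering recursion:

```python
def kalman_factor_scores(ys, phi, q, lam, r, f0=0.0, p0=1.0):
    """Scalar Kalman filter for a one-factor state-space model:
        factor:      f_t = phi * f_{t-1} + w_t,  w_t ~ N(0, q)
        observation: y_t = lam * f_t + e_t,      e_t ~ N(0, r)
    Returns the filtered factor scores."""
    f, p, scores = f0, p0, []
    for y in ys:
        # Predict step
        f_pred = phi * f
        p_pred = phi * phi * p + q
        # Update step
        k = p_pred * lam / (lam * lam * p_pred + r)   # Kalman gain
        f = f_pred + k * (y - lam * f_pred)
        p = (1 - k * lam) * p_pred
        scores.append(f)
    return scores

# Constant latent signal observed with noise variance 0.1
scores = kalman_factor_scores([2.0] * 50, phi=1.0, q=0.0, lam=1.0, r=0.1)
```

    Maximum likelihood estimation of `phi`, `q`, `lam`, and `r` (the second method in the abstract) would wrap this recursion in a likelihood evaluation.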

  3. How Smart Do You Think You Are? A Meta-Analysis on the Validity of Self-Estimates of Cognitive Ability

    ERIC Educational Resources Information Center

    Freund, Philipp Alexander; Kasten, Nadine

    2012-01-01

    Individuals' perceptions of their own level of cognitive ability are expressed through self-estimates. They play an important role in a person's self-concept because they facilitate an understanding of how one's own abilities relate to those of others. People evaluate their own and other persons' abilities all the time, but self-estimates are also…

  4. Statistical methods of estimating mining costs

    USGS Publications Warehouse

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
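
    The first step described above, re-estimating "Taylor's Rule" relating operating rate to available ore tonnage, is a power-law fit of the form capacity = a * tonnage^b. A sketch of such a fit by ordinary least squares in log-log space (the tonnages, coefficients, and noise below are synthetic, not USGS data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    tonnage = 10.0 ** rng.uniform(5, 9, size=40)        # ore tonnage, tonnes
    capacity = 0.014 * tonnage ** 0.75                   # assumed true relation
    capacity *= np.exp(rng.normal(0.0, 0.05, size=40))   # multiplicative noise

    # Fit log(capacity) = log(a) + b * log(tonnage)
    X = np.column_stack([np.ones(40), np.log(tonnage)])
    coef, *_ = np.linalg.lstsq(X, np.log(capacity), rcond=None)
    log_a, b = coef   # b should recover the assumed exponent of 0.75
    ```

    Fitting in log space is the natural choice here because mining cost and capacity data typically show multiplicative (percentage) scatter rather than additive error.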

  5. A QUALITATIVE METHOD TO ESTIMATE HSI DISPLAY COMPLEXITY

    SciTech Connect

    Jacques Hugo; David Gertman

    2013-04-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.


  6. Further Explorations of Perceptual Speed Abilities in the Context of Assessment Methods, Cognitive Abilities, and Individual Differences during Skill Acquisition

    ERIC Educational Resources Information Center

    Ackerman, Phillip L.; Beier, Margaret E.

    2007-01-01

    Measures of perceptual speed ability have been shown to be an important part of assessment batteries for predicting performance on tasks and jobs that require a high level of speed and accuracy. However, traditional measures of perceptual speed ability sometimes have limited cost-effectiveness because of the requirements for administration and…

  7. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  8. The ability of atmospheric data to resolve discrepancies in wetland methane estimates over North America

    NASA Astrophysics Data System (ADS)

    Miller, S. M.; Andrews, A. E.; Benmergui, J.; Commane, R.; Dlugokencky, E. J.; Janssens-Maenhout, G.; Melton, J. R.; Michalak, A. M.; Sweeney, C.; Worthy, D. E. J.

    2015-06-01

    Existing estimates of methane fluxes from North American wetlands vary widely in both magnitude and distribution. In light of these disagreements, this study uses atmospheric methane observations from the US and Canada to analyze seven different bottom-up, wetland methane estimates reported in a recent model comparison project. We first use synthetic data to explore how well atmospheric observations can constrain wetland fluxes. We find that observation sites can identify an atmospheric pattern from Canadian wetlands but not reliably from US wetlands. The network can also identify the spatial distribution of fluxes in Canada at multi-province spatial scales. Based upon these results, we then use real data to evaluate the magnitude, temporal distribution, and spatial distribution of each model estimate. Most models overestimate the magnitude of fluxes across Canada. Most predict a seasonality that is too narrow, potentially indicating an over-sensitivity to air or soil temperatures. In addition, the LPJ-Bern model has a spatial distribution that is most consistent with atmospheric observations. Unlike most models, LPJ-Bern utilizes land cover maps, not just remote sensing inundation data, to estimate wetland coverage. A flux model with a constant spatial distribution outperforms most other existing flux estimates across Canada.

  9. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  10. Two Prophecy Formulas for Assessing the Reliability of Item Response Theory-Based Ability Estimates

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Oshima, T.C.

    2005-01-01

    Two new prophecy formulas for estimating item response theory (IRT)-based reliability of a shortened or lengthened test are proposed. Some of the relationships between the two formulas, one of which is identical to the well-known Spearman-Brown prophecy formula, are examined and illustrated. The major assumptions underlying these formulas are…
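
    The well-known Spearman-Brown prophecy formula referenced above predicts the reliability of a test lengthened (or shortened) by a factor k from the reliability rho of the original test. A minimal sketch:

    ```python
    def spearman_brown(rho, k):
        """Predicted reliability of a test whose length is scaled by k."""
        return k * rho / (1.0 + (k - 1.0) * rho)

    # Doubling a test with reliability 0.60:
    r_doubled = spearman_brown(0.60, 2)   # -> 0.75

    # Halving a test with reliability 0.80:
    r_halved = spearman_brown(0.80, 0.5)
    ```

    The paper's contribution is IRT-based analogues of this classical formula; the classical version shown here is the special case one of their formulas reduces to.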

  11. Simultaneous Estimation of Overall and Domain Abilities: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Song, Hao

    2009-01-01

    Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…

  12. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  13. Bin mode estimation methods for Compton camera imaging

    NASA Astrophysics Data System (ADS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-10-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods.
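
    The EM algorithm for maximum likelihood estimation mentioned above is, at its core, the multiplicative MLEM update for Poisson emission data. A minimal sketch on a toy system (the system matrix and image below are invented; the paper's accelerated and MAP variants build on this basic iteration):

    ```python
    import numpy as np

    def mlem(y, T, n_iter=500):
        """Basic MLEM: T[i, j] = probability an emission from image bin j
        is recorded in data bin i; y holds the measured counts."""
        lam = np.ones(T.shape[1])             # flat initial image
        sens = T.sum(axis=0)                  # per-bin sensitivity
        for _ in range(n_iter):
            ratio = y / (T @ lam)             # measured / predicted counts
            lam = lam * (T.T @ ratio) / sens  # multiplicative EM update
        return lam

    # Noise-free data from a known two-bin image.
    T = np.array([[0.8, 0.1],
                  [0.1, 0.7],
                  [0.1, 0.2]])
    lam_true = np.array([100.0, 50.0])
    y = T @ lam_true
    lam = mlem(y, T)
    ```

    Each iteration preserves non-negativity and total counts, which is why this update is the standard starting point for emission tomography and Compton imaging alike.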

  14. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.
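
    The baseline idea, differencing re-solved optima to estimate how the optimal solution moves with a problem parameter, can be sketched on a toy problem with a known answer (the problem and its closed-form optimizer are invented for illustration; the paper's RQP-based method avoids re-solving from scratch):

    ```python
    # Toy problem: min_x (x - p**2)**2 has optimum x*(p) = p**2,
    # so the exact sensitivity at p = 3 is dx*/dp = 2p = 6.
    def solve(p):
        """Stand-in for an optimizer returning the optimum x*(p)."""
        return p ** 2

    def sensitivity(p, h=1e-4):
        """Central-difference estimate of dx*/dp."""
        return (solve(p + h) - solve(p - h)) / (2.0 * h)

    s = sensitivity(3.0)   # -> approximately 6.0
    ```

    In realistic settings each `solve` call is an expensive nonlinear optimization, which is the cost the RQP-based approach is designed to reduce.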

  15. Development of the WAIS-III estimate of premorbid ability for Canadians (EPAC).

    PubMed

    Lange, Rael T; Schoenberg, Mike R; Woodward, Todd S; Brickell, Tracey A

    2005-12-01

    This study developed regression algorithms for estimating IQ scores using the Canadian WAIS-III norms. Participants were the Canadian WAIS-III standardization sample (n = 1,105). The sample was randomly divided into two groups (Development and Validation groups). The Development group was used to generate 12 regression algorithms for FSIQ and three algorithms each for VIQ and PIQ. Algorithms combined demographic variables with WAIS-III subtest raw scores. The algorithms accounted for 48-78% of the variance in FSIQ, 70-71% in VIQ, and 45-55% in PIQ. In the Validation group, the majority of the sample had predicted IQs that fell within a 95% CI band (FSIQ=92-94%; VIQ=93-95%; PIQ=94-94%). These algorithms yielded reasonably accurate estimates of FSIQ, VIQ, and PIQ in this healthy adult population. It is anticipated that these algorithms will be useful as a means for estimating premorbid IQ scores in a clinical population. However, prior to clinical use, these algorithms must be validated for this purpose. PMID:16087311
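
    A generic sketch of the kind of regression algorithm described: predict an IQ score from demographic variables combined with a subtest raw score by least squares. The predictors, coefficients, and data below are synthetic and hypothetical, not the published Canadian algorithms:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    age = rng.uniform(18, 80, n)            # hypothetical demographics
    education = rng.uniform(8, 20, n)
    vocab_raw = rng.uniform(10, 70, n)      # hypothetical subtest raw score
    fsiq = (60 + 0.05 * age + 1.5 * education + 0.4 * vocab_raw
            + rng.normal(0, 5, n))          # assumed generating model

    # Fit the regression algorithm by ordinary least squares.
    X = np.column_stack([np.ones(n), age, education, vocab_raw])
    beta, *_ = np.linalg.lstsq(X, fsiq, rcond=None)
    pred = X @ beta
    r2 = 1 - np.sum((fsiq - pred) ** 2) / np.sum((fsiq - np.mean(fsiq)) ** 2)
    ```

    The variance-accounted-for figures quoted in the abstract (48-78% for FSIQ) correspond to this r2 statistic computed on held-out validation data rather than the fitting sample.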

  16. The Stability of Four Methods for Estimating Item Bias.

    ERIC Educational Resources Information Center

    Bezruczko, Nikolaus; And Others

    The stability of bias estimates from J. Schueneman's chi-square method, the transformed Delta method, Rasch's one-parameter residual analysis, and the Mantel-Haenszel procedure, were compared across small and large samples for a data set of 30,000 cases. Bias values for 30 samples were estimated for each method, and means and variances of item…

  17. Effect of methods of evaluation on sealing ability of mineral trioxide aggregate apical plug

    PubMed Central

    Nikhil, Vineeta; Jha, Padmanabh; Suri, Navleen Kaur

    2016-01-01

    Aim: The purpose of the study was to evaluate and compare the sealing ability of mineral trioxide aggregate (MTA) with three different methods. Materials and Methods: Forty single canal teeth were decoronated, and root canals were enlarged to simulate immature apex. The samples were randomly divided into Group MD = MTA-angelus mixed with distilled water and Group MC = MTA-angelus mixed with 2% chlorhexidine, and apical seal was recorded with the glucose penetration, fluid filtration, and dye penetration methods and compared. Results: The three methods of evaluation gave different results. The glucose penetration method showed that Group MD sealed better than Group MC, but the difference was statistically insignificant (P > 0.05). The fluid filtration method showed that Group MC was superior to Group MD, although the difference was statistically insignificant (P > 0.05). The dye penetration method showed that Group MC sealed significantly better than Group MD. Conclusion: No correlation was found among the results obtained with the three methods of evaluation. Addition of chlorhexidine enhanced the sealing ability of MTA according to the fluid filtration test and dye leakage, while according to the glucose penetration test, chlorhexidine did not enhance the sealing ability of MTA. This study showed that relying on the results of apical sealing obtained by only one method can be misleading. PMID:27217635

  18. Morphological method for estimation of simian virus 40 infectious titer.

    PubMed

    Landau, S M; Nosach, L N; Pavlova, G V

    1982-01-01

    The cytomorphologic method previously reported for titration of adenoviruses has been employed for estimating the infectious titer of simian virus 40 (SV 40). Infected cells forming intranuclear inclusions were determined. The method examined possesses a number of advantages over virus titration by plaque assay and cytopathic effect. The virus titer estimated by the method of inclusion counting and expressed as IFU/ml (Inclusion Forming Units/ml) corresponds to that estimated by plaque count and expressed as PFU/ml.

  19. Advancing Methods for Estimating Cropland Area

    NASA Astrophysics Data System (ADS)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the growing emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service's Cropland Data Layer (CDL) soy map shows strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, showing a nearly 1-to-1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype, which has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.

  20. Reading Ability as an Estimator of Premorbid Intelligence: Does It Remain Stable Among Ethnically Diverse HIV+ Adults?

    PubMed Central

    Olsen, J. Pat; Fellows, Robert P.; Rivera-Mindt, Monica; Morgello, Susan; Byrd, Desiree A.

    2015-01-01

    The Wide Range Achievement Test, 3rd edition, Reading-Recognition subtest (WRAT-3 RR) is an established measure of premorbid ability. However, its long-term reliability is not well documented, particularly in diverse populations with CNS-relevant disease. Objective: We examined test-retest reliability of the WRAT-3 RR over time in an HIV+ sample of predominantly racial/ethnic minority adults. Method: Participants (N = 88) completed a comprehensive neuropsychological battery, including the WRAT-3 RR, on at least two separate study visits. Intraclass correlation coefficients (ICCs) were computed using scores from baseline and follow-up assessments to determine the test-retest reliability of the WRAT-3 RR across racial/ethnic groups and changes in medical (immunological) and clinical (neurocognitive) factors. Additionally, Fisher’s Z tests were used to determine the significance of the differences between ICCs. Results: The average test-retest interval was 58.7 months (SD=36.4). The overall WRAT-3 RR test-retest reliability was high (r = .97, p < .001), and remained robust across all demographic, medical, and clinical variables (all r’s > .92). Intraclass correlation coefficients did not differ significantly between the subgroups tested (all Fisher’s Z p’s > .05). Conclusions: Overall, this study supports the appropriateness of word-reading tests, such as the WRAT-3 RR, for use as stable premorbid IQ estimates among ethnically diverse groups. Moreover, this study supports the reliability of this measure in the context of change in health and neurocognitive status, and in lengthy inter-test intervals. These findings offer strong rationale for reading as a “hold” test, even in the presence of a chronic, variable disease such as HIV. PMID:26689235
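
    A sketch of a two-way consistency intraclass correlation, ICC(3,1), of the general kind used for test-retest data like the above: rows are participants, columns are the two visits. The formula follows the standard Shrout-Fleiss decomposition; the scores below are made up for illustration:

    ```python
    import numpy as np

    def icc_3_1(Y):
        """Two-way mixed, single-measure, consistency ICC for an
        n-subjects x k-raters (or visits) score matrix Y."""
        n, k = Y.shape
        grand = Y.mean()
        ss_total = np.sum((Y - grand) ** 2)
        ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)  # between subjects
        ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)  # between visits
        ms_rows = ss_rows / (n - 1)
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # Hypothetical reading scores at baseline and follow-up: perfectly
    # consistent apart from a uniform practice effect, so ICC(3,1) = 1.0.
    Y = np.array([[95, 97], [100, 102], [105, 107],
                  [110, 112], [120, 122]], dtype=float)
    icc = icc_3_1(Y)   # -> 1.0
    ```

    A consistency ICC like this one ignores uniform shifts between visits, which suits a "hold" measure where practice effects should not count against stability.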

  1. Research on evaluation methods for water regulation ability of dams in the Huai River Basin

    NASA Astrophysics Data System (ADS)

    Shan, G. H.; Lv, S. F.; Ma, K.

    2016-08-01

    Water environment protection is a global and urgent problem that requires correct and precise evaluation. Evaluation methods have been studied for many years; however, there is a lack of research on methods of assessing the water regulation ability of dams. Currently, evaluating the ability of dams has become a practical and significant research orientation, driven by the global water crisis and the lack of effective ways to manage dams' regulation ability. This paper first constructs seven evaluation factors and then develops two evaluation approaches to implement the factors according to the features of the problem. Dams of the Yin Shang ecological control section in the Huai He River basin are selected as an example to demonstrate the method. The results show that the evaluation approaches can produce better and more practical suggestions for dam managers.

  2. Estimation of Convective Momentum Fluxes Using Satellite-Based Methods

    NASA Astrophysics Data System (ADS)

    Jewett, C.; Mecikalski, J. R.

    2009-12-01

    as defined by Austin and Houze (1973). However, this method only considers climatological updraft speeds determined from cloud base and cloud top heights. Fortunately, this project also incorporates the unique dataset provided by the spaceborne cloud radar CloudSat. However, with CloudSat pointing only at nadir, it is limited in its ability to compute a three-dimensional draft tilt. Nevertheless, this instrument can provide critical information toward estimating CMFs. Efforts are currently being made to correlate the Ice Water Content (IWC; from product 2B-CWC-RO) of convective storms to vertical velocities. It is hypothesized that a positive correlation exists between IWC and vertical velocity (Li 2006). With a positive correlation, vertical velocity estimates can be applied to CloudSat data. These vertical velocity estimates will be included in the TRMM algorithm to create a synergistic approach to estimating convective momentum fluxes. This approach also considers the sub-cloud base fluxes from QuikScat data, derived using the divergence along the surface and calculating vertical motion with the continuity equation.

  3. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70–0.75. The boundaries of the interval with the most probable correlation values are 0.6–0.9 for a decay time of 240 h.
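
    Of the three TSMs, the PSCF is the simplest to state: for each grid cell it is the fraction m_ij / n_ij of trajectory endpoints in that cell that belong to "polluted" receptor samples. A minimal sketch with invented endpoints:

    ```python
    import numpy as np

    def pscf(row_idx, col_idx, polluted, shape):
        """PSCF field: m/n per grid cell, NaN where no endpoints fall."""
        n = np.zeros(shape)   # all trajectory endpoints per cell
        m = np.zeros(shape)   # endpoints from polluted samples per cell
        for i, j, p in zip(row_idx, col_idx, polluted):
            n[i, j] += 1
            m[i, j] += bool(p)
        ratio = m / np.maximum(n, 1)
        return np.where(n > 0, ratio, np.nan)

    # Three endpoints in cell (0, 0), two from polluted samples,
    # plus one clean endpoint in cell (1, 1).
    row_idx = [0, 0, 0, 1]
    col_idx = [0, 0, 0, 1]
    polluted = [True, True, False, False]
    field = pscf(row_idx, col_idx, polluted, shape=(2, 2))
    ```

    The accuracy study above effectively asks how well maps like this `field` rank-correlate with the true source distribution across many random source configurations.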

  4. An assessment of the ability of the obstruction-scaling model to estimate solute diffusion coefficients in hydrogels.

    PubMed

    Hadjiev, Nicholas A; Amsden, Brian G

    2015-02-10

    The ability to estimate the diffusion coefficient of a solute within hydrogels has important application in the design and analysis of hydrogels used in drug delivery, tissue engineering, and regenerative medicine. A number of mathematical models have been derived for this purpose; however, they often rely on fitted parameters and so have limited predictive capability. Herein we assess the ability of the obstruction-scaling model to provide reasonable estimates of solute diffusion coefficients within hydrogels, as well as the assumption that a hydrogel can be represented as an entangled polymer solution of an equivalent concentration. Fluorescein isothiocyanate dextran solutes were loaded into sodium alginate solutions as well as hydrogels of different polymer volume fractions formed from photoinitiated cross-linking of methacrylate sodium alginate. The tracer diffusion coefficients of these solutes were measured using fluorescence recovery after photobleaching (FRAP). The measured diffusion coefficients were then compared to the values predicted by the obstruction-scaling model. The model predictions were within ±15% of the measured values, suggesting that the model can provide useful estimates of solute diffusion coefficients within hydrogels and solutions. Moreover, solutes diffusing in both sodium alginate solutions and hydrogels were demonstrated to experience the same degree of solute mobility restriction given the same effective polymer concentration, supporting the assumption that a hydrogel can be represented as an entangled polymer solution of equivalent concentration.


  6. Seismic Methods of Identifying Explosions and Estimating Their Yield

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.

    2014-12-01

    Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However a lesson from the three DPRK declared nuclear explosions since 2006, is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and accurately estimate their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites including the DPRK and the former Nevada Test Site (now the Nevada National Security Site (NNSS)). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and used to understand those differences in term of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally we are also exploring the value of combining seismic information with other technologies including acoustic and InSAR techniques to better understand the source characteristics. Our goal is to improve our explosion models

  7. A Monte Carlo method for variance estimation for estimators based on induced smoothing

    PubMed Central

    Jin, Zhezhen; Shao, Yongzhao; Ying, Zhiliang

    2015-01-01

    An important issue in statistical inference for semiparametric models is how to provide reliable and consistent variance estimation. Brown and Wang (2005. Standard errors and covariance matrices for smoothed rank estimators. Biometrika 92, 732–746) proposed a variance estimation procedure based on an induced smoothing for non-smooth estimating functions. Herein a Monte Carlo version is developed that does not require any explicit form for the estimating function itself, as long as numerical evaluation can be carried out. A general convergence theory is established, showing that any one-step iteration leads to a consistent variance estimator and continuation of the iterations converges at an exponential rate. The method is demonstrated through the Buckley–James estimator and the weighted log-rank estimators for censored linear regression, and rank estimation for multiple event times data. PMID:24812418

  8. Evaluation of Two Methods to Estimate and Monitor Bird Populations

    PubMed Central

    Taylor, Sandra L.; Pollard, Katherine S.

    2008-01-01

    Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and short-comings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775
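
    The core capture-recapture idea behind independent double-observer (DO) estimation can be sketched with a Lincoln-Petersen-style estimator: from the counts x_a and x_b detected by each observer, of which x_ab were detected by both, estimate abundance and the per-observer detection probabilities. (The DO protocols evaluated in the paper are more elaborate; this shows only the basic idea, with invented counts.)

    ```python
    def double_observer_estimate(x_a, x_b, x_ab):
        """Lincoln-Petersen-style abundance and detection estimates from
        two independent observers' counts."""
        n_hat = x_a * x_b / x_ab   # estimated true abundance
        p_a = x_ab / x_b           # observer A's detection probability
        p_b = x_ab / x_a           # observer B's detection probability
        return n_hat, p_a, p_b

    # Observer A counts 80 birds, B counts 60, and 48 are seen by both.
    n_hat, p_a, p_b = double_observer_estimate(80, 60, 48)
    # -> n_hat = 100.0, p_a = 0.8, p_b = 0.6
    ```

    The simulation finding that DO underestimates abundance at low detection probabilities is consistent with this estimator's known small-sample behavior when the overlap count x_ab becomes small.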

  9. Novel method of screening the oxidation and reduction abilities of photocatalytic materials.

    PubMed

    Katayama, K; Takeda, Y; Shimaoka, K; Yoshida, K; Shimizu, R; Ishiwata, T; Nakamura, A; Kuwahara, S; Mase, A; Sugita, T; Mori, M

    2014-04-21

    Two analytical methods for the evaluation of photocatalytic oxidation and reduction abilities were developed using a photocatalytic microreactor; one is product analysis and the other is reaction rate analysis. Two simple organic conversion reactions were selected for the oxidation and reduction. Since the reactions were one-to-one conversions from the reactant species to the product species, the product analysis was simply performed using gas chromatography, and the reactions were monitored in situ in the photocatalytic microreactor using the UV absorption spectra. The partial oxidation and reduction abilities for each functional group can be judged from the yield and selectivity, and the corresponding reaction rate, while the total oxidation ability can be judged from the conversion. We demonstrated the application of these methods for several kinds of visible light photocatalysts.

  10. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The estimator is called a robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. Its development was motivated by the need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, both a nominal model and a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. The first is based on the well-developed field of time-domain parameter estimation. The second finds parameter estimates by a weighted least-squares fit to a frequency-domain estimated model. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
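
    The second estimator's idea, fitting a parametric model to frequency-response data by least squares, can be sketched for a first-order system G(s) = b/(s + a) using a Levy-style linearization (the system, frequencies, and noise level below are illustrative, not from the report):

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 2.0, 5.0

# "Measured" frequency response of G(s) = b/(s + a), lightly corrupted
w = np.logspace(-1, 2, 40)
G = b_true / (1j * w + a_true)
G = G + 0.001 * (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))

# Levy linearization: b - a*G(jw) = jw*G(jw), which is linear in (b, a).
# Stack real and imaginary parts into one real least-squares problem.
A = np.vstack([np.column_stack([np.ones_like(w), -G.real]),
               np.column_stack([np.zeros_like(w), -G.imag])])
rhs = np.concatenate([-w * G.imag, w * G.real])
b_hat, a_hat = np.linalg.lstsq(A, rhs, rcond=None)[0]
print(round(a_hat, 2), round(b_hat, 2))
```

    A frequency-dependent weighting of the stacked rows would turn this into the weighted fit the abstract mentions; the unweighted version is kept here for brevity.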

  11. System and method for motor parameter estimation

    SciTech Connect

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processing unit determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  12. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  13. Adjoint method for estimating Jiles-Atherton hysteresis model parameters

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Hansen, Paul C.; Neustock, Lars T.; Padhy, Punnag; Hesselink, Lambertus

    2016-09-01

    A computationally efficient method for identifying the parameters of the Jiles-Atherton hysteresis model is presented. Adjoint analysis is used in conjunction with an accelerated gradient descent optimization algorithm. The proposed method is used to estimate the Jiles-Atherton model parameters of two different materials. The obtained results are found to be in good agreement with the reported values. Compared with existing methods of model parameter estimation, the proposed method is found to be computationally efficient and fast converging.

  14. Carbon footprint: current methods of estimation.

    PubMed

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing global warming and its associated consequences. Following the rule that only what is measurable is manageable, measurement of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving, and the carbon footprint is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among studies. There are disagreements over the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Because carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications, its standardization at the international level is necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues. PMID:20848311

  15. Estimate octane numbers using an enhanced method

    SciTech Connect

    Twu, C.H.; Coon, J.E.

    1997-03-01

    An improved model, based on the Twu-Coon method, is not only internally consistent, but also retains the same level of accuracy as the previous model in predicting octanes of gasoline blends. The enhanced model applies the same binary interaction parameters to components in each gasoline cut and their blends. Thus, the enhanced model can blend gasoline cuts in any order, in any combination or from any splitting of gasoline cuts and still yield the identical value of octane number for blending the same number of gasoline cuts. Setting binary interaction parameters to zero for identical gasoline cuts during the blending process is not required. The new model changes the old model's methodology so that the same binary interaction parameters can be applied between components inside a gasoline cut as are applied to the same components between gasoline cuts. The enhanced model is more consistent in methodology than the original model, but it has equal accuracy for predicting octane numbers of gasoline blends, and it has the same number of binary interaction parameters. The paper discusses background, enhancement of the Twu-Coon interaction model, and three examples: a blend of 2 identical gasoline cuts, a blend of 3 gasoline cuts, and a blend of the same 3 gasoline cuts in a different order.
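
    Why blending order cannot matter once interactions depend only on the final composition can be illustrated with a generic symmetric quadratic blending rule (the octane numbers and interaction values below are made up; this is not the Twu-Coon model):

```python
import numpy as np

rng = np.random.default_rng(2)
on = np.array([88.0, 92.5, 96.0])        # pure-cut octane numbers (made up)
K = rng.uniform(-2.0, 2.0, (3, 3))
K = (K + K.T) / 2                        # symmetric binary interactions
np.fill_diagonal(K, 0.0)                 # k_ii = 0: a cut blended with itself

def octane(x):
    """Quadratic blending rule: ON = sum(x*ON) + sum_{i<j} x_i x_j k_ij."""
    return on @ x + 0.5 * x @ K @ x

def mix(xa, va, xb, vb):
    """Composition of a blend of two streams (volume-weighted)."""
    return (va * xa + vb * xb) / (va + vb)

e = np.eye(3)
one_shot = octane(np.array([0.2, 0.3, 0.5]))
# Staged: blend cuts 1 and 2 first, then add cut 3 - the result is identical
x12 = mix(e[0], 0.2, e[1], 0.3)
staged = octane(mix(x12, 0.5, e[2], 0.5))
print(np.isclose(one_shot, staged))   # -> True
```

    Tracking compositions (which combine linearly) rather than intermediate octane values is what makes the result independent of blending order or grouping.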

  17. Simplified sampling methods for estimating levels of lactobacilli in saliva in dental clinical practice.

    PubMed

    Gabre, P; Martinsson, T; Gahnberg, L

    1999-08-01

    The aim of the present study was to evaluate whether estimation of lactobacilli was possible with simplified saliva sampling methods. Dentocult LB (Orion Diagnostica AB, Trosa, Sweden) was used to estimate the number of lactobacilli in saliva sampled by 3 different methods from 96 individuals: (i) collecting and pouring stimulated saliva over a Dentocult dip-slide; (ii) direct licking of the Dentocult LB dip-slide; (iii) contaminating a wooden spatula with saliva and pressing it against the Dentocult dip-slide. The first method was in accordance with the manufacturer's instructions and was selected as the 'gold standard'; the other 2 methods were compared with this result. The 2 simplified methods for estimating levels of lactobacilli in saliva showed good reliability and specificity. Sensitivity, defined as the ability to detect individuals with a high number of lactobacilli in saliva, was sufficient for the licking method (85%) but significantly reduced for the wooden spatula method (52%).

  18. Dental age estimation in Egyptian children, comparison between two methods.

    PubMed

    El-Bakary, Amal A; Hammad, Shaza M; Mohammed, Fatma

    2010-10-01

    The need to estimate the age of living individuals is becoming increasingly important in both forensic science and clinical dentistry. The study of the morphological parameters of teeth on dental radiographs is more reliable than most other methods for age estimation. The Willems and Cameriere methods are two recently introduced approaches. The aim of this work was to evaluate the applicability of these methods to Egyptian children. Digitalized panoramas taken from 286 Egyptian children (134 boys, 152 girls) with an age range from 5 to 16 years were analyzed. The seven left permanent mandibular teeth were evaluated using the two methods. The results of this research showed that dental age estimated by both methods was significantly correlated with real age. However, the Willems method was slightly more accurate (98.62%) than the Cameriere method (98.02%). Therefore, both methods can be recommended for practical application in clinical dentistry and forensic procedures on the Egyptian population.

  19. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands of the predicted values than those of the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
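
    A minimal sketch of Bayesian estimation for a height-diameter curve, here a Chapman-Richards-type form H = 1.3 + a(1 - e^(-bD)) fitted with flat priors and a random-walk Metropolis sampler on synthetic data (the model form, priors, parameter values, and sampler settings are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data from H = 1.3 + a*(1 - exp(-b*D)), a common candidate form
a_true, b_true, sigma = 20.0, 0.08, 1.0
D = rng.uniform(5.0, 40.0, 150)
H = 1.3 + a_true * (1.0 - np.exp(-b_true * D)) + rng.normal(0.0, sigma, D.size)

def log_post(a, b):
    # Flat priors on a box; Gaussian likelihood with known sigma for brevity
    if not (0.0 < a < 60.0 and 0.0 < b < 1.0):
        return -np.inf
    mu = 1.3 + a * (1.0 - np.exp(-b * D))
    return -0.5 * np.sum((H - mu) ** 2) / sigma**2

# Random-walk Metropolis over (a, b)
a, b = 10.0, 0.03
lp = log_post(a, b)
draws = []
for i in range(15000):
    a_p, b_p = a + rng.normal(0.0, 0.15), b + rng.normal(0.0, 0.002)
    lp_p = log_post(a_p, b_p)
    if np.log(rng.uniform()) < lp_p - lp:
        a, b, lp = a_p, b_p, lp_p
    if i >= 3000:
        draws.append((a, b))

a_mean, b_mean = np.mean(draws, axis=0)
print(round(a_mean, 1), round(b_mean, 3))
```

    The retained draws approximate the posterior; as the abstract notes, such a posterior can serve as an informative prior for the next data set, which is what narrows the credible bands.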

  20. On the estimation of robustness and filtering ability of dynamic biochemical networks under process delays, internal parametric perturbations and external disturbances.

    PubMed

    Chen, Bor-Sen; Chen, Po-Wei

    2009-12-01

    Inherently, biochemical regulatory networks suffer from process delays, internal parametric perturbations and external disturbances. Robustness is the ability to maintain the functions of intracellular biochemical regulatory networks despite these perturbations. In this study, system and signal processing theories are employed to measure the robust stability and filtering ability of linear and nonlinear time-delay biochemical regulatory networks. First, based on Lyapunov stability theory, the robust stability of a biochemical network is measured by its tolerance of additional process delays and additive internal parameter fluctuations. Then the filtering ability, i.e., the attenuation of additive external disturbances, is estimated for time-delay biochemical regulatory networks. To overcome the difficulty of solving the Hamilton-Jacobi inequality (HJI), the global linearization technique is employed to simplify the measurement procedure to a simple linear matrix inequality (LMI) method. Finally, an example is given in silico to illustrate how to measure the robust stability and filtering ability of a nonlinear time-delay perturbative biochemical network. This robust stability and filtering ability measurement has potential applications in synthetic biology, gene therapy and drug design. PMID:19788895

  1. Sedimentary phosphate method for estimating paleosalinities: a paleontological assumption.

    PubMed

    Guber, A L

    1969-11-01

    Paleosalinity values in certain rocks determined by the sedimentary phosphate method differ from salinity estimates based upon contained fossil assemblages, geochemical methods, and existing stratigraphic controls. Some anomalous values are related to the abundance of fossil organisms known to be concentrators of calcium phosphate. Because of the abundance and diversity of organisms which might introduce significant errors into paleosalinity estimates, the sedimentary phosphate method seemingly is of limited applicability.

  2. Validity of Using Two Numerical Analysis Techniques To Estimate Item and Ability Parameters via MMLE: Gauss-Hermite Quadrature Formula and Mislevy's Histogram Solution.

    ERIC Educational Resources Information Center

    Seong, Tae-Je

    The similarity of item and ability parameter estimations was investigated using two numerical analysis techniques via marginal maximum likelihood estimation (MMLE) with a large simulated data set (n=1,000 examinees) and changing the number of quadrature points. MMLE estimation uses a numerical analysis technique to integrate examinees' abilities…

  3. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to estimate the source number in array signal processing, where multiple sensors are available, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. By a delay process, the single-sensor data are converted to a multi-dimensional form, and the data covariance matrix is constructed; the estimation algorithms used in array signal processing can then be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number from the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number from single-sensor received data.
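
    The delay-process idea can be sketched as follows: reshape the single-channel record into pseudo-multichannel snapshots, form the covariance matrix, and apply the MDL criterion to its eigenvalues (a standard ITC formulation on simulated white-noise data; the signal parameters and embedding depth are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 1000.0, 4000
t = np.arange(n) / fs

# Single-channel record: two complex exponentials plus white noise
x = (np.exp(2j * np.pi * 71.0 * t)
     + 0.8 * np.exp(2j * np.pi * 203.0 * t)
     + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Delay embedding: cut the record into non-overlapping length-M snapshots
M = 8
Y = x[: (n // M) * M].reshape(-1, M).T       # shape (M, n // M)
R = Y @ Y.conj().T / Y.shape[1]              # sample covariance matrix
lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
N = Y.shape[1]

def mdl(k):
    # MDL criterion on the trailing (noise) eigenvalues
    tail = lam[k:]
    ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)
    return -N * (M - k) * np.log(ratio) + 0.5 * k * (2 * M - k) * np.log(N)

k_hat = min(range(M), key=mdl)
print(k_hat)   # -> 2
```

    Under colored noise the trailing eigenvalues are no longer nearly equal, which is why the abstract reports that ITC-based criteria fail there while the Gerschgorin-disk approach remains usable.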

  4. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  5. Predictive ability of genomic selection models for breeding value estimation on growth traits of Pacific white shrimp Litopenaeus vannamei

    NASA Astrophysics Data System (ADS)

    Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai

    2016-10-01

    Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near to zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
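
    Of the three models, RR-BLUP reduces to a ridge-regression solve for the marker effects; a toy sketch on simulated genotypes (the sizes, heritability, and in-sample accuracy check are illustrative simplifications, not the paper's SLAF-seq data or its cross-validation protocol):

```python
import numpy as np

rng = np.random.default_rng(5)
n_ind, n_snp, n_qtl, h2 = 300, 1000, 40, 0.5

# Simulated genotypes coded 0/1/2, then column-centered
X = rng.binomial(2, 0.5, (n_ind, n_snp)).astype(float)
X -= X.mean(axis=0)

# True breeding values from a sparse set of QTL
beta = np.zeros(n_snp)
beta[rng.choice(n_snp, n_qtl, replace=False)] = rng.normal(0.0, 1.0, n_qtl)
g = X @ beta
y = g + rng.normal(0.0, g.std() * np.sqrt((1 - h2) / h2), n_ind)

# RR-BLUP: marker effects from a ridge solve, lambda ~ sigma_e^2 / sigma_u^2
lam = n_snp * (1 - h2) / h2
u = np.linalg.solve(X.T @ X + lam * np.eye(n_snp), X.T @ y)
gebv = X @ u

r = np.corrcoef(g, gebv)[0, 1]   # accuracy proxy: corr(true BV, GEBV)
print(round(r, 2))
```

    The accuracy here is measured in-sample for brevity; the reliabilities of roughly 0.3-0.4 reported in the abstract come from predicting individuals held out of the training population, a much harder task.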

  6. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars significantly. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments validate the proposed method, and the results demonstrate that it achieves better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to improving the precision of LFMCW radars effectively.
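
    The phase-match algorithm itself is the paper's contribution; for context, the classic autocorrelation-phase estimator that methods such as TSA build on reads the frequency from the angle of the lag-1 autocorrelation (the sample rate, tone, and SNR below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
fs, f0, n = 1.0e6, 123_456.0, 4096
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f0 * t) + 0.05 * (rng.standard_normal(n)
                                          + 1j * rng.standard_normal(n))

# Lag-1 autocorrelation; its phase advances by 2*pi*f0/fs per sample
r1 = np.sum(x[1:] * np.conj(x[:-1]))
f_hat = np.angle(r1) * fs / (2 * np.pi)
print(round(f_hat))   # close to 123456
```

    The estimate is unambiguous as long as f0 is below fs/2, since the per-sample phase increment then stays within (-pi, pi).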

  7. Evaluating methods for estimating local effective population size with and without migration.

    PubMed

    Gilbert, Kimberly J; Whitlock, Michael C

    2015-08-01

    Effective population size is a fundamental parameter in population genetics, evolutionary biology, and conservation biology, yet its estimation can be fraught with difficulties. Several methods to estimate Ne from genetic data have been developed that take advantage of various approaches for inferring Ne . The ability of these methods to accurately estimate Ne , however, has not been comprehensively examined. In this study, we employ seven of the most cited methods for estimating Ne from genetic data (Colony2, CoNe, Estim, MLNe, ONeSAMP, TMVP, and NeEstimator including LDNe) across simulated datasets with populations experiencing migration or no migration. The simulated population demographies are an isolated population with no immigration, an island model metapopulation with a sink population receiving immigrants, and an isolation by distance stepping stone model of populations. We find considerable variance in performance of these methods, both within and across demographic scenarios, with some methods performing very poorly. The most accurate estimates of Ne can be obtained by using LDNe, MLNe, or TMVP; however each of these approaches is outperformed by another in a differing demographic scenario. Knowledge of the approximate demography of population as well as the availability of temporal data largely improves Ne estimates. PMID:26118738

  9. Comparisons of Four Methods for Evapotranspiration Estimates in Jordan

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Gorelick, S.; Yoon, J.

    2014-12-01

    We compared evapotranspiration (ET) estimates in Jordan calculated by four theoretically-different methods. The first method was the FAO Single Crop Coefficient method. Our calculation took into account 20 dominant crop species in Jordan, utilized the global Climate Forecast System Reanalysis (CFSR) data set, and generated spatially heterogeneous crop coefficients. The second approach was the Surface Energy Balance Algorithms for Land (SEBAL) method. It was used with Landsat TM/ETM+ images to calculate instantaneous ET at the moment of satellite overpass, and the results of multiple images were combined to derive seasonal and annual ET estimates. The third method was based on the 1-km land surface ET product from MODIS, which was calculated using MODIS-observed land cover and photosynthetically active radiation. The fourth method was based on the SWAT model, which combines the Penman-Monteith equation and vegetation growth to estimate daily ET rates at the watershed scale. The results show substantial differences in both magnitude and spatiotemporal patterns of ET estimates across different regions from the four methods. Such differences were particularly evident in the Highlands region, where irrigation plays a critical role in local water balance. Results also suggest that land cover data is a major source of uncertainty in estimating regional ET rates. Although it is difficult to conclude which method was more reliable due to the limited availability of validation data, the results suggest caution in developing and interpreting ET estimates in this arid environment.
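
    The crop-coefficient method scales a reference evapotranspiration, commonly the FAO-56 Penman-Monteith ET0; a sketch of that daily equation (the input values are illustrative guesses for a hot, dry, windy day, not CFSR data):

```python
import numpy as np

def fao56_et0(T, u2, Rn, G, rh, z):
    """Daily FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    T: mean air temperature (C), u2: wind speed at 2 m (m/s),
    Rn: net radiation (MJ/m2/day), G: soil heat flux (MJ/m2/day),
    rh: relative humidity (0-1), z: elevation (m)."""
    es = 0.6108 * np.exp(17.27 * T / (T + 237.3))     # sat. vapour pressure, kPa
    ea = rh * es                                       # actual vapour pressure
    delta = 4098 * es / (T + 237.3) ** 2               # slope of saturation curve
    P = 101.3 * ((293 - 0.0065 * z) / 293) ** 5.26     # atmospheric pressure, kPa
    gamma = 0.000665 * P                               # psychrometric constant
    num = 0.408 * delta * (Rn - G) + gamma * 900 / (T + 273) * u2 * (es - ea)
    return num / (delta + gamma * (1 + 0.34 * u2))

# Illustrative summer-day inputs for an elevated arid site
et0 = fao56_et0(T=28.0, u2=3.0, Rn=20.0, G=0.0, rh=0.3, z=800.0)
print(round(et0, 1))
```

    Actual crop ET is then Kc * ET0 with a crop coefficient Kc per species and growth stage, which is where the spatially heterogeneous coefficients of the first method enter.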

  10. Evaluating the ability of Bayesian clustering methods to detect hybridization and introgression using an empirical red wolf data set.

    PubMed

    Bohling, Justin H; Adams, Jennifer R; Waits, Lisette P

    2013-01-01

    Bayesian clustering methods have emerged as a popular tool for assessing hybridization using genetic markers. Simulation studies have shown these methods perform well under certain conditions; however, these methods have not been evaluated using empirical data sets with individuals of known ancestry. We evaluated the performance of two clustering programs, baps and structure, with genetic data from a reintroduced red wolf (Canis rufus) population in North Carolina, USA. Red wolves hybridize with coyotes (C. latrans), and a single hybridization event resulted in introgression of coyote genes into the red wolf population. A detailed pedigree has been reconstructed for the wild red wolf population that includes individuals of 50-100% red wolf ancestry, providing an ideal case study for evaluating the ability of these methods to estimate admixture. Using 17 microsatellite loci, we tested the programs using different training set compositions and varying numbers of loci. structure was more likely than baps to detect an admixed genotype and correctly estimate an individual's true ancestry composition. However, structure was more likely to misclassify a pure individual as a hybrid. Both programs were outperformed by a maximum-likelihood-based test designed specifically for this system, which never misclassified a hybrid (50-75% red wolf) as a red wolf or vice versa. Training set composition and the number of loci both had an impact on accuracy but their relative importance varied depending on the program. Our findings demonstrate the importance of evaluating methods used for detecting admixture in the context of endangered species management. PMID:23163531
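
    The maximum-likelihood classification idea can be sketched generically: score a multilocus genotype against candidate ancestry fractions using mixed allele frequencies q = alpha*p_wolf + (1-alpha)*p_coyote. The loci and frequencies below are simulated biallelic markers, not the study's 17 microsatellites, and treating each allele as a draw from the mixed frequency is an approximation:

```python
import numpy as np

rng = np.random.default_rng(7)
L = 30                                # simulated biallelic loci
p_wolf = rng.uniform(0.6, 0.9, L)     # made-up parental allele frequencies
p_coy = rng.uniform(0.1, 0.4, L)

def log_lik(geno, alpha):
    """Log-likelihood of 0/1/2 genotypes given ancestry fraction alpha."""
    q = alpha * p_wolf + (1.0 - alpha) * p_coy
    het = np.where(geno == 1, np.log(2.0), 0.0)   # heterozygote multiplicity
    return np.sum(het + geno * np.log(q) + (2 - geno) * np.log(1.0 - q))

def estimate(geno, grid=np.linspace(0.0, 1.0, 5)):
    return grid[np.argmax([log_lik(geno, a) for a in grid])]

wolf = rng.binomial(2, p_wolf)                          # pure "red wolf"
coyote = rng.binomial(2, p_coy)                         # pure "coyote"
f1 = rng.binomial(1, p_wolf) + rng.binomial(1, p_coy)   # first-generation hybrid
print(estimate(wolf), estimate(coyote), estimate(f1))
```

    A test tailored to the pedigree's discrete ancestry classes, as in the study, evaluates only the biologically possible fractions, which is part of why it outperformed the general-purpose clustering programs.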

  12. Food portion estimation by children with obesity: the effects of estimation method and food type.

    PubMed

    Friedman, Alinda; Bennett, Tesia G; Barbarich, Bobbi N; Keaschuk, Rachel A; Ball, Geoff D C

    2012-02-01

    Several factors influence children's ability to report accurate information about their dietary intake. To date, one understudied area of dietary assessment research relates to children's ability to estimate portion sizes of food. The purpose of this cross-sectional research was to examine food portion size estimation accuracy in 7- to 18-year-old children with obesity. Two within-subject experiments (Experiment 1: n=28, Experiment 2: n=27) were conducted in Edmonton, Alberta, Canada, during 2007-2008. Three types of portion size measurement aids (PSMAs) (eg, measuring cups and spoons, household objects [full and half-sized], and modeling clay) were counterbalanced in a Latin square design for participants to estimate four types of foods (ie, solid, liquid, amorphous pieces, and amorphous masses). Analyses of variance conducted on percent signed and absolute errors yielded significant PSMA type×food type interactions (P<0.01) in both experiments. Across all food types, in Experiments 1 and 2, measuring cups and spoons produced the least accurate estimates with respect to absolute error (54.2% and 53.1%, respectively), whereas modeling clay produced the most accurate estimates (40.6% and 33.2%, respectively). Half-sized household objects also yielded enhanced accuracy (47.9% to 37.2%). Finally, there were significant differences in accuracy between amorphous pieces (eg, grapes) vs amorphous masses (eg, mashed potatoes; P<0.01), indicating that there are qualitative differences in how different amorphous foods are estimated. These data are relevant when collecting food intake data from children with obesity and indicate that different PSMAs may be needed to optimize food portion size estimation accuracy for different food types. PMID:22732463

  13. SAR imaging via modern 2-D spectral estimation methods.

    PubMed

    DeGraaf, S R

    1998-01-01

    This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared with what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, the minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR), and space variant apodization (SVA), to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric context, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.
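
    Of the algorithms compared, the minimum variance method is the easiest to sketch in 1-D: the Capon spectrum P(f) = 1/(a(f)^H R^{-1} a(f)) sharpens closely spaced peaks relative to the periodogram (a generic 1-D illustration with made-up parameters, not SAR processing):

```python
import numpy as np

rng = np.random.default_rng(8)
n, M = 2048, 16
t = np.arange(n)

# Two closely spaced complex sinusoids in noise (normalized frequencies)
f1, f2 = 0.10, 0.16
x = (np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)
     + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Sample covariance from non-overlapping length-M snapshots
Y = x[: (n // M) * M].reshape(-1, M).T
R = Y @ Y.conj().T / Y.shape[1]
Rinv = np.linalg.inv(R)

def capon(f):
    """MVM (Capon) spectrum P(f) = 1 / (a^H R^-1 a)."""
    a = np.exp(2j * np.pi * f * np.arange(M))
    return 1.0 / np.real(a.conj() @ Rinv @ a)

# Peaks at the two tones stand well above the midpoint between them
p1, pmid, p2 = capon(f1), capon((f1 + f2) / 2), capon(f2)
print(p1 > pmid and p2 > pmid)   # -> True
```

    The 2D SAR variants in the paper apply the same quadratic form with 2D steering vectors; the adaptive weighting is what suppresses sidelobes and speckle relative to the Fourier image.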

  14. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  15. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars

    PubMed Central

    Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao

    2016-01-01

    Traditional monopulse angle estimation is mainly based on phase comparison and amplitude comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has received little attention in previous work. Because noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in accumulating energy from the high resolution range profile (HRRP) of the monopulse channels. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate that the proposed algorithm performs similarly to the traditional amplitude comparison method, indicating that it is a viable approach to angle estimation. With the proposed method, future radars may need only wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen anti-jamming capability. More importantly, the estimated angle does not become ambiguous at any angle, which significantly extends the usable angle range in wideband radars. PMID:27271629
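The core idea, converting angle estimation into frequency estimation, can be sketched under simplifying assumptions (an assumed aperture separation d, ideal noiseless channels, and a conjugate product standing in for the paper's HRRP cross-correlation): the two channels' LFM echoes differ by a delay proportional to the off-boresight angle, so their conjugate product is a tone at the beat frequency f_b = K·tau, where K is the chirp rate.

```python
import numpy as np

c, fs = 3e8, 200e6                 # wave speed, sample rate
B, T = 50e6, 20e-6                 # LFM bandwidth and duration (assumed)
K = B/T                            # chirp rate (Hz/s)
d = 1.0                            # assumed aperture separation (m)
theta = np.deg2rad(2.0)            # true off-boresight angle
tau = d*np.sin(theta)/c            # inter-channel delay

t = np.arange(0, T, 1/fs)
s1 = np.exp(1j*np.pi*K*t**2)             # channel 1 (LFM echo model)
s2 = np.exp(1j*np.pi*K*(t - tau)**2)     # channel 2, delayed by tau

# conjugate product is a tone at f_b = K*tau; estimate via mean phase increment
v = s1*np.conj(s2)
dphi = np.angle(np.mean(v[1:]*np.conj(v[:-1])))
f_b = dphi*fs/(2*np.pi)
theta_hat = np.rad2deg(np.arcsin(f_b*c/(K*d)))
```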

  16. Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence

    ERIC Educational Resources Information Center

    Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.

    2008-01-01

    This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…

  18. A bootstrap method for estimating uncertainty of water quality trends

    USGS Publications Warehouse

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.
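WBT itself is a block bootstrap tailored to WRTDS; the sketch below only illustrates the generic residual-bootstrap idea on a synthetic linear water-quality trend (all data and parameters are made up for illustration): resample residuals, re-fit the trend, and read uncertainty off the bootstrap distribution of slopes.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1995, 2015, 0.25)              # quarterly samples (assumed)
true_slope = -0.03                               # mg/L per year (assumed)
conc = 2.0 + true_slope*(years - years[0]) + 0.2*rng.standard_normal(years.size)

coef = np.polyfit(years, conc, 1)                # [slope, intercept]
fitted = np.polyval(coef, years)
resid = conc - fitted

# residual bootstrap: re-fit the trend on fitted values plus resampled residuals
boot = np.array([np.polyfit(years, fitted + rng.choice(resid, resid.size), 1)[0]
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])        # 95% interval for the slope
```

A trend whose bootstrap interval excludes zero would be reported as likely upward or downward, which mirrors how WBT communicates trend likelihood.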

  19. A new FOA estimation method in SAR/GALILEO system

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Bing; Li, Jilin

    2007-11-01

    The European Galileo plan will include a Search and Rescue (SAR) transponder that will become part of the future MEOSAR (medium Earth orbit search and rescue) system. The new SAR system can improve localization accuracy by measuring the frequency of arrival (FOA) and time of arrival (TOA) of beacons; FOA estimation is one of the most important parts. In this paper, we aim to find a good FOA algorithm with minimal estimation error, which must be less than 0.1 Hz. We propose a new method, called the Kay algorithm, for the SAR/Galileo system by comparing several frequency estimation methods, including those currently used in the COSPAS-SARSAT system, and by analyzing the distress beacon in terms of signal structure and spectrum characteristics. Simulation proves that the Kay method is the better choice for FOA estimation.
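The Kay algorithm named in the abstract is, in its standard form (Kay, 1989), a weighted average of the phase increments of the complex baseband signal; a sketch under assumed signal parameters (not the actual SAR/Galileo beacon format):

```python
import numpy as np

def kay_foa(z, fs):
    """Kay's weighted phase-difference frequency estimator for a single tone."""
    N = len(z)
    i = np.arange(1, N)
    w = 6*i*(N - i)/(N*(N**2 - 1))            # parabolic weights, sum to 1
    dphi = np.angle(z[1:]*np.conj(z[:-1]))    # per-sample phase increments
    return fs*np.sum(w*dphi)/(2*np.pi)

fs, f0, N = 1000.0, 123.4, 512                # assumed sample rate and tone
rng = np.random.default_rng(7)
t = np.arange(N)/fs
z = np.exp(2j*np.pi*f0*t) + 0.05*(rng.standard_normal(N) + 1j*rng.standard_normal(N))
f_hat = kay_foa(z, fs)
```

At high SNR the estimator approaches the Cramér-Rao bound, which is why sub-0.1 Hz accuracy is plausible for beacon-length bursts.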

  20. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  1. Evapotranspiration: Mass balance measurements compared with flux estimation methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...

  2. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
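A generic (not RQP-based) way to picture sensitivity estimation by differencing is to re-solve the optimization at perturbed parameter values. A toy sketch with an assumed objective f(x; p) = exp(x) - p*x, whose minimizer x* = ln p has the known sensitivity dx*/dp = 1/p:

```python
import numpy as np

def solve(p, x0=0.0, tol=1e-12):
    """Minimize f(x; p) = exp(x) - p*x by Newton's method (analytic x* = ln p)."""
    x = x0
    for _ in range(100):
        step = (np.exp(x) - p)/np.exp(x)     # f'(x)/f''(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def sensitivity(p, h=1e-5):
    """Central-difference estimate of dx*/dp, re-solving at p +/- h."""
    return (solve(p + h) - solve(p - h))/(2*h)

dxdp = sensitivity(2.0)                      # analytic value: 1/p = 0.5
```

For constrained problems this simple re-solve becomes expensive and fails when the active set changes between p - h and p + h, which is exactly the situation the abstract's deflection algorithm addresses.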

  3. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  4. Two-dimensional location and direction estimating method.

    PubMed

    Haga, Teruhiro; Tsukamoto, Sosuke; Hoshino, Hiroshi

    2008-01-01

    In this paper, a method of estimating both the position and the rotation angle of an object on a measurement stage is proposed. The system utilizes radio communication technology and the directivity of an antenna. As a prototype system, a measurement stage (a circle 240 mm in diameter) with 36 antennas placed at 10-degree intervals was developed. Two transmitter antennas are set at a right angle on the stage as the target object, and the position and rotation angle are estimated by measuring the radio communication efficiency at each of the 36 antennas. The experimental results revealed that even when the estimated location is not very accurate (about a 30 mm error), the rotation angle is accurately estimated (about 2.33 degrees of error on average). The results suggest that the proposed method will be useful for estimating the location and direction of an object.

  5. A Channelization-Based DOA Estimation Method for Wideband Signals.

    PubMed

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
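A minimal sketch of the channelize-then-estimate idea, in the spirit of Channelization-ISM's incoherent averaging, though using a two-sensor acoustic analogy rather than the paper's radar array (the geometry and all parameters are assumptions): split the signals into frequency bins with an FFT, take a narrowband DOA per bin from the cross-spectrum phase, and average the per-bin estimates.

```python
import numpy as np

c, fs, d = 343.0, 8000.0, 0.04      # wave speed, sample rate, sensor spacing (assumed)
theta = np.deg2rad(25.0)            # true direction of arrival
tau = d*np.sin(theta)/c             # inter-sensor delay

rng = np.random.default_rng(3)
N = 4096
x1 = rng.standard_normal(N)                      # wideband source at sensor 1
X1 = np.fft.rfft(x1)
f = np.fft.rfftfreq(N, 1/fs)
X2 = X1*np.exp(-2j*np.pi*f*tau)                  # sensor 2: exact fractional delay

# per-sub-channel narrowband DOA from the cross-spectrum phase, then average
band = slice(50, 1500)                           # sub-channels used (avoids DC)
phase = np.angle(X1[band]*np.conj(X2[band]))     # = 2*pi*f*tau per bin
theta_bins = np.arcsin(np.clip(c*phase/(2*np.pi*f[band]*d), -1, 1))
theta_hat = np.rad2deg(np.mean(theta_bins))
```

The spacing and band are chosen so the per-bin phase never wraps; a real design must enforce the same condition per sub-channel.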

  7. Children with developmental coordination disorder demonstrate a spatial mismatch when estimating coincident-timing ability with tools.

    PubMed

    Caçola, Priscila; Ibana, Melvin; Ricard, Mark; Gabbard, Carl

    2016-01-01

    Coincident timing, or interception ability, can be defined as the capacity to precisely time sensory input and motor output. This study compared the accuracy of typically developing (TD) children and those with Developmental Coordination Disorder (DCD) on a task involving estimation of coincident timing with their arm and various tool lengths. Forty-eight (48) participants performed two experiments in which they imagined intercepting a target moving toward them (Experiment 1) and a target moving away from them (Experiment 2) in 5 conditions of arm and tool length: arm, 10, 20, 30, and 40 cm. In Experiment 1, the DCD group overestimated interception points approximately twice as much as the TD group, and both groups overestimated consistently regardless of the tool used. Results for Experiment 2 revealed that those with DCD underestimated about three times as much as the TD group, with the exception of when no tool was used. Overall, these results indicate that children with DCD are less accurate in estimating coincident timing, which might in part explain their difficulties with common motor activities such as catching a ball or striking a baseball pitch.

  8. A robust method for rotation estimation using spherical harmonics representation.

    PubMed

    Althloothi, Salah; Mahoor, Mohammad H; Voyles, Richard M

    2013-06-01

    This paper presents a robust method for 3D object rotation estimation using spherical harmonics representation and the unit quaternion vector. The proposed method provides a closed-form solution for rotation estimation without recurrence relations or searching for point correspondences between two objects. The rotation estimation problem is cast as a minimization problem, which finds the optimum rotation angles between two objects of interest in the frequency domain. The optimum rotation angles are obtained by calculating the unit quaternion vector from a symmetric matrix, which is constructed from the two sets of spherical harmonics coefficients using an eigendecomposition technique. Our experimental results on hundreds of 3D objects show that the proposed method is very accurate in rotation estimation, is robust to noisy data and missing surface points, and can handle intra-class variability between 3D objects. PMID:23475364
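The quaternion-from-symmetric-matrix step described in the abstract is closely related to Horn's closed-form absolute-orientation method. The sketch below shows that eigendecomposition step on corresponding 3D point sets rather than spherical-harmonics coefficients (an assumption made here for illustration): the unit quaternion is the eigenvector of a 4x4 symmetric matrix belonging to its largest eigenvalue.

```python
import numpy as np

def rotation_from_correspondences(A, B):
    """Optimal rotation R with B ~= R @ A (Horn's unit-quaternion method).
    A, B: 3xN arrays of corresponding points, assumed centered."""
    M = A @ B.T                                    # 3x3 correlation of the two sets
    Sxx, Sxy, Sxz = M[0]; Syx, Syy, Syz = M[1]; Szx, Szy, Szz = M[2]
    N = np.array([
        [Sxx+Syy+Szz, Syz-Szy,      Szx-Sxz,      Sxy-Syx],
        [Syz-Szy,     Sxx-Syy-Szz,  Sxy+Syx,      Szx+Sxz],
        [Szx-Sxz,     Sxy+Syx,     -Sxx+Syy-Szz,  Syz+Szy],
        [Sxy-Syx,     Szx+Sxz,      Syz+Szy,     -Sxx-Syy+Szz]])
    q0, q1, q2, q3 = np.linalg.eigh(N)[1][:, -1]   # eigenvector of largest eigenvalue
    return np.array([                              # quaternion -> rotation matrix
        [1-2*(q2*q2+q3*q3), 2*(q1*q2-q0*q3),   2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),   1-2*(q1*q1+q3*q3), 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),   2*(q2*q3+q0*q1),   1-2*(q1*q1+q2*q2)]])

ang = np.deg2rad(40.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
A = np.random.default_rng(5).standard_normal((3, 10))
R_est = rotation_from_correspondences(A, R_true @ A)
```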

  9. A Fast Estimation Method of Railway Passengers' Flow

    NASA Astrophysics Data System (ADS)

    Nagasaki, Yusaku; Asuka, Masashi; Komaya, Kiyotoshi

    To evaluate a train schedule from the viewpoint of passengers' convenience, it is important to know each passenger's choice of trains and transfer stations en route to his/her destination. Because such passenger behavior is difficult to measure, estimation methods for railway passenger flow have been proposed to carry out this kind of evaluation. However, a train schedule planning system equipped with those methods is impractical because the estimation takes too much time. In this article, the authors propose a fast passenger-flow estimation method that exploits features of the passenger-flow graph, using a preparative search based on each train's arrival time at each station. The authors also show the results of the method applied to a railway in an urban area.

  10. Analytic study of the Tadoma method: language abilities of three deaf-blind subjects.

    PubMed

    Chomsky, C

    1986-09-01

    This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.

  11. Demographic estimation methods for plants with unobservable life-states

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.; Schaub, M.

    2005-01-01

    Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10 year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time of death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous

  12. Non-Linear Transformation of IRT Scale To Account for the Effect of Non-Normal Ability Distribution of the Item Parameter Estimation.

    ERIC Educational Resources Information Center

    Yamamoto, Kentaro; Muraki, Eiji

    The extent to which properties of the ability scale and the form of the latent trait distribution influence the estimated item parameters of item response theory (IRT) was investigated using real and simulated data. Simulated data included 5,000 ability values randomly drawn from the standard normal distribution. Real data included the results for…

  13. A Didactic Presentation of Snijders's "l[subscript z]*" Index of Person Fit with Emphasis on Response Model Selection and Ability Estimation

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles; Beland, Sebastien

    2012-01-01

    This paper focuses on two likelihood-based indices of person fit, the index "l[subscript z]" and the Snijders's modified index "l[subscript z]*". The first one is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample ability estimates. The…
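For reference, the uncorrected l_z of Drasgow et al. (which Snijders's l_z* modifies to account for estimated rather than true ability) can be sketched for the Rasch model as follows; the item difficulties and response patterns are illustrative assumptions.

```python
import numpy as np

def lz(u, theta, b):
    """Standardized log-likelihood person-fit index (Rasch model).
    u: 0/1 responses, theta: ability, b: item difficulties."""
    p = 1/(1 + np.exp(-(theta - b)))                      # P(correct) per item
    l0 = np.sum(u*np.log(p) + (1 - u)*np.log(1 - p))      # observed log-likelihood
    mean = np.sum(p*np.log(p) + (1 - p)*np.log(1 - p))    # its model-implied mean
    var = np.sum(p*(1 - p)*np.log(p/(1 - p))**2)          # and variance
    return (l0 - mean)/np.sqrt(var)

b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # item difficulties (assumed)
consistent = np.array([1, 1, 1, 0, 0])      # pattern expected at theta = 0
aberrant   = np.array([0, 0, 0, 1, 1])      # Guttman-reversed (misfitting) pattern
```

Large negative values flag aberrant response patterns; the paper's point is that referring this statistic to N(0, 1) is only valid with the l_z* correction when theta is itself estimated.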

  14. Comparison of volume estimation methods for pancreatic islet cells

    NASA Astrophysics Data System (ADS)

    Dvořák, JiřÃ.­; Å vihlík, Jan; Habart, David; Kybic, Jan

    2016-03-01

    In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in the quality control. The individual islet volume distribution is also of interest -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
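The simple shape-based estimators mentioned can be sketched directly. The prolate-ellipsoid convention (rotation about the major axis of the fitted 2D ellipse) is one possible assumption, used here for illustration only.

```python
import numpy as np

def sphere_volume(area):
    """Volume assuming a spherical islet, from its projected 2D area."""
    r = np.sqrt(area/np.pi)              # equivalent-circle radius
    return (4/3)*np.pi*r**3

def ellipsoid_volume(a, b):
    """Volume assuming a prolate ellipsoid rotated about its major axis
    (semi-axes a >= b from the fitted 2D ellipse)."""
    return (4/3)*np.pi*a*b*b

v_sph = sphere_volume(np.pi*100.0**2)    # islet with 100 um projected radius
v_ell = ellipsoid_volume(120.0, 80.0)    # elongated islet, semi-axes in um
```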

  15. Motion estimation using point cluster method and Kalman filter.

    PubMed

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuation, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
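A minimal scalar sketch of the Kalman filtering step (a constant-position model with assumed noise variances, not the full PCT pipeline): at each sample the filter blends its prediction with the noisy marker measurement, weighted by the Kalman gain.

```python
import numpy as np

def kalman_filter(z, q=1e-4, r=0.25):
    """Scalar constant-position Kalman filter.
    q: assumed process variance, r: assumed measurement variance."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                  # predict: uncertainty grows by process noise
        K = p/(p + r)              # Kalman gain
        x = x + K*(zk - x)         # update toward the measurement
        p = (1 - K)*p
        out[k] = x
    return out

rng = np.random.default_rng(0)
true = 1.0                          # underlying quantity (e.g., a marker coordinate)
meas = true + 0.5*rng.standard_normal(300)
est = kalman_filter(meas)
```

Unlike a fixed low-pass filter, the gain K adapts to the current uncertainty, which is the property the abstract credits for reduced signal distortion.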

  16. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and an estimate of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated with maximum depth of burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
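A hedged sketch of the Magara-style extrapolation: fit an exponential compaction trend to shale sonic transit times and solve for the depth shift (eroded section) that reconciles the fitted intercept with an assumed uncompacted surface value. All numbers are illustrative, and the data are noiseless for clarity.

```python
import numpy as np

# synthetic shale compaction: dt(z) = dt0 * exp(-c*(z + E)), E = eroded section
dt0, c_comp, E_true = 200.0, 4e-4, 800.0   # surface transit time (us/ft), 1/m, m
z = np.linspace(500, 2500, 30)             # present-day burial depths (m)
dt = dt0*np.exp(-c_comp*(z + E_true))      # observed sonic transit times

# fit ln(dt) = ln(dt0) - c*z - c*E, then back out E from the intercept
slope, intercept = np.polyfit(z, np.log(dt), 1)
c_hat = -slope
E_hat = (np.log(dt0) - intercept)/c_hat    # estimated eroded overburden (m)
```

In practice dt0 must itself be assumed or calibrated regionally, which is why the abstract's regression-calibration extension is useful when single-well gradients are unreliable.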

  17. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  18. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that apply to the management, control, and improvement of processes, for the purpose of analyzing technical measurement results. This paper describes an analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results were constructed.

  19. A Study of Methods for Estimating Distributions of Test Scores.

    ERIC Educational Resources Information Center

    Cope, Ronald T.; Kolen, Michael J.

    This study compared five density estimation techniques applied to samples from a population of 272,244 examinees' ACT English Usage and Mathematics Usage raw scores. Unsmoothed frequencies, kernel method, negative hypergeometric, four-parameter beta compound binomial, and Cureton-Tukey methods were applied to 500 replications of random samples of…

  20. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  1. Precision of two methods for estimating age from burbot otoliths

    USGS Publications Warehouse

    Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.

    2011-01-01

Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (x̄ = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths; the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.

  2. Increasing confidence in mass discharge estimates using geostatistical methods.

    PubMed

    Cai, Zuansi; Wilson, Ryan D; Cardiff, Michael A; Kitanidis, Peter K

    2011-01-01

Mass discharge is one metric rapidly gaining acceptance for assessing the performance of in situ groundwater remediation systems. Multilevel sampling transects provide the data necessary to make such estimates, often using the Thiessen Polygon method. This method, however, does not provide a direct estimate of uncertainty. We introduce a geostatistical mass discharge estimation approach that involves a rigorous analysis of data spatial variability and selection of an appropriate variogram model. High-resolution interpolation was applied to create a map of measurements across a transect, and the magnitude and uncertainty of mass discharge were quantified by conditional simulation. An important benefit of the approach is quantified uncertainty of the mass discharge estimate. We tested the approach on data from two sites monitored using multilevel transects. We also used the approach to explore the effect of lower spatial monitoring resolution on the accuracy and uncertainty of mass discharge estimates. This process revealed two important findings: (1) appropriate monitoring resolution is that which yields an estimate comparable with the full dataset value, and (2) high-resolution sampling yields a more representative spatial data structure descriptor, which can then be used via conditional simulation to make subsequent mass discharge estimates from lower resolution sampling of the same transect. The implication of the latter is that a high-resolution multilevel transect needs to be sampled only once to obtain the necessary spatial data descriptor for a contaminant plume exhibiting minor temporal variability, and thereafter at lower spatial intensity to reduce costs.
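    The conditional-simulation idea can be sketched in one dimension with an assumed exponential covariance model; the transect positions, flux-density values, variogram parameters, and units below are all hypothetical, not the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expcov(h, sill=1.0, corr_len=2.0):
        # Exponential covariance model (an assumed variogram choice)
        return sill * np.exp(-np.abs(h) / corr_len)

    # Hypothetical multilevel-transect data: position (m), mass flux density (g/m^2/d)
    x_obs = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
    v_obs = np.array([0.2, 1.5, 3.0, 1.2, 0.3])
    mu = v_obs.mean()

    x_grid = np.linspace(0.0, 10.0, 101)
    C_oo = expcov(x_obs[:, None] - x_obs[None, :])
    C_go = expcov(x_grid[:, None] - x_obs[None, :])
    C_gg = expcov(x_grid[:, None] - x_grid[None, :])

    K = np.linalg.solve(C_oo, C_go.T)              # kriging weights
    cond_mean = mu + K.T @ (v_obs - mu)            # interpolated map along the transect
    cond_cov = C_gg - C_go @ K
    cond_cov = (cond_cov + cond_cov.T) / 2         # symmetrize against roundoff
    L = np.linalg.cholesky(cond_cov + 1e-6 * np.eye(x_grid.size))

    # Each realization honours the data; integrating each one yields a discharge sample
    reals = cond_mean + (L @ rng.standard_normal((x_grid.size, 500))).T
    dx = np.diff(x_grid)
    discharges = ((reals[:, 1:] + reals[:, :-1]) / 2 * dx).sum(axis=1)
    print(f"mass discharge = {discharges.mean():.2f} +/- {discharges.std():.2f}")
    ```

    The spread of `discharges` is exactly the uncertainty figure the Thiessen Polygon method cannot supply; a real 2-D transect works the same way with a 2-D covariance model fitted from the data.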

  3. A simple and reliable method for estimating haemoglobin.

    PubMed Central

    Stott, G. J.; Lewis, S. M.

    1995-01-01

A new colour scale has been devised for estimating haemoglobin levels by matching blood samples against ten levels of haemoglobin (3, 4, 5, 6, 7, 8, 9, 10, 12, and 14 g/dl) on the scale. Preliminary results show good correlations with spectrophotometric readings. The new device is being field tested and, if the initial promise is confirmed, will provide a simple and reliable method for estimating haemoglobin where laboratory facilities are not available. PMID:7614669
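    As a toy illustration of the matching step, a numeric reading can be snapped to the nearest of the ten scale levels listed in the abstract (the function and reading are hypothetical; the real device matches colours by eye):

    ```python
    # The ten haemoglobin levels printed on the scale, from the abstract (g/dl)
    SCALE = [3, 4, 5, 6, 7, 8, 9, 10, 12, 14]

    def match_level(reading_g_dl):
        """Snap a colorimetric reading to the closest scale level."""
        return min(SCALE, key=lambda level: abs(level - reading_g_dl))

    print(match_level(11.2))  # -> 12
    ```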

  4. Time domain attenuation estimation method from ultrasonic backscattered signals

    PubMed Central

    Ghoshal, Goutam; Oelze, Michael L.

    2012-01-01

    Ultrasonic attenuation is important not only as a parameter for characterizing tissue but also for compensating other parameters that are used to classify tissues. Several techniques have been explored for estimating ultrasonic attenuation from backscattered signals. In the present study, a technique is developed to estimate the local ultrasonic attenuation coefficient by analyzing the time domain backscattered signal. The proposed method incorporates an objective function that combines the diffraction pattern of the source/receiver with the attenuation slope in an integral equation. The technique was assessed through simulations and validated through experiments with a tissue mimicking phantom and fresh rabbit liver samples. The attenuation values estimated using the proposed technique were compared with the attenuation estimated using insertion loss measurements. For a data block size of 15 pulse lengths axially and 15 beamwidths laterally, the mean attenuation estimates from the tissue mimicking phantoms were within 10% of the estimates using insertion loss measurements. With a data block size of 20 pulse lengths axially and 20 beamwidths laterally, the error in the attenuation values estimated from the liver samples were within 10% of the attenuation values estimated from the insertion loss measurements. PMID:22779499

  5. Improved method for estimating tree crown diameter using high-resolution airborne data

    NASA Astrophysics Data System (ADS)

    Brovkina, Olga; Latypov, Iscander Sh.; Cienciala, Emil; Fabianek, Tomas

    2016-04-01

Automatic mapping of tree crown size (radius, diameter, or width) from remote sensing can provide a major benefit for practical and scientific purposes, but requires the development of accurate methods. This study presents an improved method for average tree crown diameter estimation at the forest plot level from high-resolution airborne data. The improved method combines a window binarization procedure with a granulometric algorithm, and avoids the complicated crown delineation procedure that is currently used to estimate crown size. The improved method also corrects the systematic error in average crown diameter estimates. The method is tested on coniferous, beech, and mixed-species forest plots using airborne images of various spatial resolutions. The absolute (quantitative) accuracy of the improved crown diameter estimates is comparable to or higher than that of current methods for both monospecies and mixed-species plots. The ability of the improved method to produce good estimates of average crown diameter for monocultures and mixed species, to use remote sensing data of various spatial resolutions, and to operate in automatic mode suggests its applicability to a wide range of forest systems.

  6. Estimating Population Size Using the Network Scale Up Method

    PubMed Central

    Maltiel, Rachael; Raftery, Adrian E.; McCormick, Tyler H.; Baraff, Aaron J.

    2015-01-01

    We develop methods for estimating the size of hard-to-reach populations from data collected using network-based questions on standard surveys. Such data arise by asking respondents how many people they know in a specific group (e.g. people named Michael, intravenous drug users). The Network Scale up Method (NSUM) is a tool for producing population size estimates using these indirect measures of respondents’ networks. Killworth et al. (1998a,b) proposed maximum likelihood estimators of population size for a fixed effects model in which respondents’ degrees or personal network sizes are treated as fixed. We extend this by treating personal network sizes as random effects, yielding principled statements of uncertainty. This allows us to generalize the model to account for variation in people’s propensity to know people in particular subgroups (barrier effects), such as their tendency to know people like themselves, as well as their lack of awareness of or reluctance to acknowledge their contacts’ group memberships (transmission bias). NSUM estimates also suffer from recall bias, in which respondents tend to underestimate the number of members of larger groups that they know, and conversely for smaller groups. We propose a data-driven adjustment method to deal with this. Our methods perform well in simulation studies, generating improved estimates and calibrated uncertainty intervals, as well as in back estimates of real sample data. We apply them to data from a study of HIV/AIDS prevalence in Curitiba, Brazil. Our results show that when transmission bias is present, external information about its likely extent can greatly improve the estimates. The methods are implemented in the NSUM R package. PMID:26949438
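    The fixed-degree estimator of Killworth et al., which the paper extends, is simple enough to sketch directly (all counts below are invented for illustration):

    ```python
    def nsum_estimate(known_in_group, network_sizes, total_population):
        """Basic fixed-degree NSUM MLE: hidden-group size ~ T * sum(y_i) / sum(d_i)."""
        return total_population * sum(known_in_group) / sum(network_sizes)

    y = [2, 0, 1, 3, 0, 1]               # contacts each respondent knows in the group
    d = [300, 250, 400, 500, 150, 350]   # each respondent's personal network size
    print(round(nsum_estimate(y, d, 1_000_000)))  # -> 3590
    ```

    The paper's contribution replaces the fixed degrees d_i with random effects and adds corrections for barrier, transmission, and recall biases; those require the NSUM R package rather than this one-line ratio.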

  7. Fault detection in electromagnetic suspension systems with state estimation methods

    SciTech Connect

Sinha, P.K.; Zhou, F.B.; Kutiyal, R.S. (Dept. of Engineering)

    1993-11-01

High-speed maglev vehicles need a high level of safety that depends on the reliability of the whole vehicle system. There are many ways of attaining high reliability for the system. The conventional method uses redundant hardware with majority-vote logic circuits. Hardware redundancy costs more, weighs more, and occupies more space than analytical redundancy. Analytically redundant systems use parameter identification and state estimation methods based on system models to detect and isolate faults in instruments (sensors), actuators, and components. In this paper the authors use the Luenberger observer to estimate three state variables of the electromagnetic suspension system: position (airgap), vehicle velocity, and vertical acceleration. These estimates are compared with the corresponding sensor outputs for fault detection. In this paper, they consider fault detection and isolation (FDI) of the accelerometer, the sensor which provides the ride quality.
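    A minimal observer of this kind is easy to sketch. The toy plant below is a discrete double integrator standing in for the suspension dynamics, and the gain matrix is an assumed stabilizing choice, not the paper's design:

    ```python
    import numpy as np

    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    C = np.array([[1.0, 0.0]])             # only the airgap-like position is sensed
    L = np.array([[0.5], [5.0]])           # assumed gain; eigenvalues of A-LC ~ 0.86, 0.64

    x = np.zeros((2, 1))
    x_hat = np.array([[0.1], [0.0]])       # deliberately wrong initial estimate
    u = 1.0
    for _ in range(500):                   # healthy operation: innovation decays to ~0
        y = C @ x
        innov = float((y - C @ x_hat)[0, 0])
        x = A @ x + B * u
        x_hat = A @ x_hat + B * u + L * innov

    # A biased sensor makes the innovation jump at fault onset -> flag a fault
    y_faulty = C @ x + 0.05
    fault_innov = abs(float((y_faulty - C @ x_hat)[0, 0]))
    print(f"steady innovation {abs(innov):.2e}, at fault onset {fault_innov:.3f}")
    ```

    Comparing the innovation (measurement minus predicted measurement) against a threshold is the residual test the abstract describes; with several observers, the faulty sensor can also be isolated.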

  8. A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields

    NASA Astrophysics Data System (ADS)

    Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.

    2014-12-01

Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law that links reflectivity to rainfall intensity have to date precluded the widespread use of radars for quantitative rainfall estimates in hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity. More importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the Mean Square Error (MSE) of the estimated rainfall by 16% compared to a traditionally fitted Z-R relation. Rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula-based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates than the SIR method, particularly in areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method. The weight for each time period at each location is calculated based on the expected estimation error of each method.
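    The non-parametric idea can be illustrated with a Nadaraya-Watson conditional mean, a close relative of the kernel-density approach the abstract describes (synthetic reflectivity-rain pairs, assumed Marshall-Palmer-style constants a=200, b=1.6):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic reflectivity (dBZ) and rain rate (mm/h) with multiplicative scatter
    Z = rng.uniform(20, 50, 400)
    R = (10 ** (Z / 10) / 200) ** (1 / 1.6) * np.exp(0.3 * rng.standard_normal(Z.size))

    def nw_rain(z0, bandwidth=2.0):
        """Kernel-weighted conditional mean E[R | Z = z0] (Gaussian kernel in dBZ)."""
        w = np.exp(-0.5 * ((Z - z0) / bandwidth) ** 2)
        return float((w * R).sum() / w.sum())

    print(f"estimated rain rate at 40 dBZ: {nw_rain(40.0):.1f} mm/h")
    ```

    Unlike a single fitted power law, the kernel estimate lets the Z-R scatter vary with reflectivity, which is the property the NPZR method exploits.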

  9. Models and estimation methods for clinical HIV-1 data

    NASA Astrophysics Data System (ADS)

    Verotta, Davide

    2005-12-01

    Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability in respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum posterior approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods, or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drugs regimens.
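    Stage (i) can be sketched with a single-exponential viral decay fitted per patient by log-linear least squares, followed by summary statistics across patients; the decay rates, noise level, and sampling times below are hypothetical, and the paper's models are considerably richer:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    t = np.array([0.0, 2.0, 4.0, 7.0, 10.0, 14.0])   # sampling days
    true_lams = [0.30, 0.45, 0.38, 0.52]             # hypothetical per-patient decay rates

    estimates = []
    for lam in true_lams:
        # Viral load V(t) = V0 * exp(-lam * t) with multiplicative measurement noise
        v = 1e5 * np.exp(-lam * t) * np.exp(0.1 * rng.standard_normal(t.size))
        slope, _ = np.polyfit(t, np.log(v), 1)       # stage 1: individual fit
        estimates.append(-slope)

    mean_lam = np.mean(estimates)                    # stage 2: pooled summary
    sd_lam = np.std(estimates, ddof=1)
    print(f"decay rate {mean_lam:.2f} +/- {sd_lam:.2f} per day")
    ```

    Mixed-effects or Bayesian fitting replaces stage 2 with a joint model, which is what avoids the simplifications the abstract mentions.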

  10. Sealing Ability of Orthograde MTA and CEM Cement in Apically Resected Roots Using Bacterial Leakage Method

    PubMed Central

    Moradi, Saeed; Disfani, Reza; Ghazvini, Kiarash; Lomee, Mahdi

    2013-01-01

Introduction The aim of this in vitro study was to determine the sealing ability of orthograde ProRoot mineral trioxide aggregate (MTA) and calcium enriched mixture (CEM) cement as root-end filling materials. Materials and Methods Fifty-four extracted single-rooted human teeth were used. The samples were randomly divided into 3 experimental groups. In groups A and B, 4 mm of WMTA and CEM cement, respectively, were placed in an orthograde manner and 3 mm of the apices were resected after 24 hours. In group C the apical 3 mm of each root was resected, the root-end was prepared with ultrasonic tips to a depth of 3 mm, and subsequently filled with MTA. Apical sealing ability was assessed with the bacterial leakage method. Statistical analysis was carried out with the Chi-square test. Results There were no significant differences in the extent of bacterial leakage between the three experimental groups (P>0.05). Conclusion Within the limitations of this in vitro study, we concluded that MTA and CEM cement can be placed in an orthograde manner when there is a potential need for root-end surgery. PMID:23922571

  11. Evaluating the ability of current energy use assessment methods to study contrasting livestock production systems.

    PubMed

    Vigne, Mathieu; Vayssières, Jonathan; Lecomte, Philippe; Peyraud, Jean-Louis

    2012-12-15

Environmental impact assessment of agriculture has received increased attention over recent decades, leading to development of numerous methods. Among them, three deal with energy use: Energy Analysis (EA), Ecological Footprint (EF) and Emergy synthesis (Em). Based on a review of 197 references applying them to a variety of agricultural systems, this paper evaluates their ability to assess energy use. While EF assesses energy use as land use via a global accounting approach in which energy is only one component of the assessment, EA and Em are energy-focused and appear more appropriate to highlight ways to increase energy-use efficiency. EA presents a clear methodology via fossil energy use and its associated impacts but does not consider all energy sources. With inclusion of natural and renewable resources, Em focuses on other energy resources, such as solar radiation and energy from labour, but does not present impact indicators nor establish a clear link between activities and their environmental impacts. Improvements of the EA and Em methods could increase their ability to perform realistic and unbiased energy analysis for the diversity of livestock systems encountered in the world. First, to consider all energy sources, as Em does, EA could include solar radiation received by farm surfaces and energy expenditure by humans and animals to accomplish farm operations. Second, the boundaries of the studied system in EA and Em must include draft animals, humans and communal grazing lands. Third, special attention should be given to updating and locally adapting energy coefficients and transformities.

  12. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
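    The tail-fitting step can be sketched with SciPy: draw synthetic nearest-approach distances, fit a Weibull density (a standard limiting form for minima), and read off a close-approach probability. The distributional choice and all numbers are illustrative, not the paper's fitted model:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Synthetic "nearest-debris distance at a random time" samples, in km
    d_min = stats.weibull_min.rvs(c=2.0, scale=50.0, size=2000, random_state=rng)

    # Fit the analytical density to the empirical minima (location pinned at 0)
    c, loc, scale = stats.weibull_min.fit(d_min, floc=0.0)

    # Probability that the separation falls below a 1 km critical radius
    p_close = stats.weibull_min.cdf(1.0, c, loc, scale)
    print(f"shape {c:.2f}, scale {scale:.1f} km, P(d < 1 km) = {p_close:.2e}")
    ```

    Extrapolating into the far tail from the fitted density, rather than counting rare simulated approaches directly, is what makes the approach practical for small collision probabilities.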

  13. A stochastic optimization method to estimate the spatial distribution of a pathogen from a sample.

    PubMed

    Parnell, S; Gottwald, T R; Irey, M S; Luo, W; van den Bosch, F

    2011-10-01

    Information on the spatial distribution of plant disease can be utilized to implement efficient and spatially targeted disease management interventions. We present a pathogen-generic method to estimate the spatial distribution of a plant pathogen using a stochastic optimization process which is epidemiologically motivated. Based on an initial sample, the method simulates the individual spread processes of a pathogen between patches of host to generate optimized spatial distribution maps. The method was tested on data sets of Huanglongbing of citrus and was compared with a kriging method from the field of geostatistics using the well-established kappa statistic to quantify map accuracy. Our method produced accurate maps of disease distribution with kappa values as high as 0.46 and was able to outperform the kriging method across a range of sample sizes based on the kappa statistic. As expected, map accuracy improved with sample size but there was a high amount of variation between different random sample placements (i.e., the spatial distribution of samples). This highlights the importance of sample placement on the ability to estimate the spatial distribution of a plant pathogen and we thus conclude that further research into sampling design and its effect on the ability to estimate disease distribution is necessary. PMID:21916625
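    The kappa statistic used to score the maps is quick to compute for binary presence/absence grids (the values below are toy data, not the citrus dataset):

    ```python
    def kappa(pred, obs):
        """Cohen's kappa for two binary sequences of equal length."""
        n = len(pred)
        po = sum(p == o for p, o in zip(pred, obs)) / n          # observed agreement
        p_yes = (sum(pred) / n) * (sum(obs) / n)                 # chance agreement on 1s
        p_no = (1 - sum(pred) / n) * (1 - sum(obs) / n)          # chance agreement on 0s
        pe = p_yes + p_no
        return (po - pe) / (1 - pe)

    pred = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]   # predicted infected patches
    obs  = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]   # observed infected patches
    print(round(kappa(pred, obs), 2))  # -> 0.58
    ```

    Kappa discounts agreement expected by chance, which is why it is preferred over raw accuracy when disease prevalence is low.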

  14. Estimation Method of Body Temperature from Upper Arm Temperature

    NASA Astrophysics Data System (ADS)

    Suzuki, Arata; Ryu, Kazuteru; Kanai, Nobuyuki

This paper proposes a method for estimating body temperature by using the relation between upper arm temperature and atmospheric temperature. Conventional measurements are taken at the armpit or orally, because body-surface temperature is influenced by the atmospheric temperature. However, there is a correlation between body surface temperature and atmospheric temperature, and by using this correlation the body temperature can be estimated from the body surface temperature. The proposed method makes it possible to measure body temperature with a temperature sensor embedded in the cuff of a blood pressure monitor, so that blood pressure and body temperature can be measured simultaneously. The effectiveness of the proposed method is verified through an experiment with actual body temperatures. The proposed method might help reduce the workload of medical staff in home medical care.
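    The correlation-based correction can be sketched as an ordinary least-squares map from surface and ambient temperature to core temperature; every number below is hypothetical calibration data, not from the paper:

    ```python
    import numpy as np

    surface = np.array([34.1, 34.8, 33.5, 35.2, 34.4])   # upper-arm readings, deg C
    ambient = np.array([22.0, 25.0, 18.0, 27.0, 23.0])   # room temperature, deg C
    core    = np.array([36.6, 36.7, 36.5, 36.9, 36.6])   # reference (axillary), deg C

    # Fit core ~ a*surface + b*ambient + c by least squares
    X = np.column_stack([surface, ambient, np.ones_like(surface)])
    coef, *_ = np.linalg.lstsq(X, core, rcond=None)

    def estimate_core(t_surface, t_ambient):
        return coef[0] * t_surface + coef[1] * t_ambient + coef[2]

    print(f"estimated body temperature: {estimate_core(34.0, 21.0):.1f} C")
    ```

    In a cuff-embedded implementation the same regression would be calibrated once against reference thermometer readings and then applied at measurement time.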

  15. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nonetheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option in estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. PMID:24988194
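    Threshold-type breakpoints, whether from V-slope gas-exchange data or filtered EMG amplitude, are often located as the knee of a two-segment linear fit; a grid search over candidate break indices is a simple way to find it (synthetic signal with a true break at t = 60, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(0, 120)
    # Slope change at t = 60 mimics the threshold; Gaussian noise mimics measurement scatter
    signal = np.where(t < 60, 0.05 * t, 0.05 * 60 + 0.25 * (t - 60))
    signal = signal + rng.normal(0, 0.3, t.size)

    def sse(x, y):
        """Sum of squared errors of a straight-line fit to one segment."""
        p = np.polyfit(x, y, 1)
        return float(((np.polyval(p, x) - y) ** 2).sum())

    best = min(range(5, 115), key=lambda k: sse(t[:k], signal[:k]) + sse(t[k:], signal[k:]))
    print(f"estimated threshold at t = {t[best]}")
    ```

    Shorter filtering intervals give noisier `signal` and hence a more variable breakpoint, which is the consistency question the study examines across its five intervals.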

  16. A review of action estimation methods for galactic dynamics

    NASA Astrophysics Data System (ADS)

    Sanders, Jason L.; Binney, James

    2016-04-01

    We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent, or iterative, methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.

  17. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    NASA Astrophysics Data System (ADS)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed: we combined the estimated spatial forest age maps and the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  18. Global parameter estimation methods for stochastic biochemical systems

    PubMed Central

    2010-01-01

    Background The importance of stochasticity in cellular processes having low number of molecules has resulted in the development of stochastic models such as chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter estimation methodologies
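    The cumulative-density-distance idea can be sketched with a toy model in which a Poisson count stands in for the (far costlier) chemical-master-equation simulation; the grid, sample sizes, and true rate below are all illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    data = rng.poisson(4.0, size=200)                  # "single-cell" counts, true rate 4
    grid = np.arange(20)

    def ecdf(samples):
        """Empirical CDF evaluated on the common support grid."""
        return np.array([(samples <= k).mean() for k in grid])

    target = ecdf(data)

    def cdf_dist(lam):
        # Finite Monte Carlo ensemble, as in the paper's finite-sampling setting
        sim = rng.poisson(lam, size=2000)
        return float(np.abs(ecdf(sim) - target).sum())

    candidates = np.linspace(1.0, 8.0, 71)
    best = min(candidates, key=cdf_dist)
    print(f"estimated rate ~ {best:.1f}")
    ```

    Matching cumulative rather than binned densities sidesteps histogram-binning choices, which is one reason the CDF-distance variant proved the most robust in the paper's case studies.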

  19. A method for reliability estimation of heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Mihalache, Alin; Guérin, Fabrice; Barreau, Mihaela; Todoskoff, Alexis; Bacivarov, Ioan; Bacivarov, Angelica

    2009-01-01

Reliability estimation is becoming an important issue in the design process of complex heterogeneous systems. The concept of reliability is frequently seen as one of the least controlled points, and for some as the critical point. Since these systems are very complex to study, the evaluation of their reliability is extremely difficult. In this paper, we propose a global method to estimate mechatronic system reliability using operating field data. Since only a small amount of data is available, we use an estimation method called the Bayesian Restoration Maximization (BRM) method, thus increasing the estimation accuracy. The BRM method requires some prior knowledge to be defined. For this purpose, we propose to define the prior distribution using a Monte-Carlo simulation based on a stochastic Petri net (SPN) model and on the operating field data. The stochastic PN model describes the functional and dysfunctional behaviours. In this study, we deal with the case of n repairable systems observed until a deterministic censoring time (for example, this censoring time may be the warranty period of an ABS system). We consider repair as the replacement of the failing component by an identical one in the case of electronic and mechanical subsystems; in the case of software, the defect is rectified on all the subsystems. We simulate the failure times and compute the confidence interval. The proposed method allows reliability to be evaluated both for n mechatronic systems and for their different subsystems.

  20. A method of estimating optimal catchment model parameters

    NASA Astrophysics Data System (ADS)

    Ibrahim, Yaacob; Liong, Shie-Yui

    1993-09-01

    A review of a calibration method developed earlier (Ibrahim and Liong, 1992) is presented. The method generates optimal values for single events. It entails randomizing the calibration parameters over bounds such that a system response under consideration is bounded. Within the bounds, which are narrow and generated automatically, explicit response surface representation of the response is obtained using experimental design techniques and regression analysis. The optimal values are obtained by searching on the response surface for a point at which the predicted response is equal to the measured response and the value of the joint probability density function at that point in a transformed space is the highest. The method is demonstrated on a catchment in Singapore. The issue of global optimal values is addressed by applying the method on wider bounds. The results indicate that the optimal values arising from the narrow set of bounds are, indeed, global. Improvements which are designed to achieve comparably accurate estimates but with less expense are introduced. A linear response surface model is used. Two approximations of the model are studied. The first is to fit the model using data points generated from simple Monte Carlo simulation; the second is to approximate the model by a Taylor series expansion. Very good results are obtained from both approximations. Two methods of obtaining a single estimate from the individual event's estimates of the parameters are presented. The simulated and measured hydrographs of four verification storms using these estimates compare quite well.

  1. Comparison of methods of estimating body fat in normal subjects and cancer patients

    SciTech Connect

    Cohn, S.H.; Ellis, K.J.; Vartsky, D.; Sawitsky, A.; Gartenhaus, W.; Yasumura, S.; Vaswani, A.N.

    1981-12-01

    Total body fat can be indirectly estimated by the following noninvasive techniques: determination of lean body mass by measurement of body potassium or body water, and determination of density by underwater weighing or by skinfold measurements. The measurement of total body nitrogen by neutron activation provides another technique for estimating lean body mass and hence body fat. The nitrogen measurement can also be combined with the measurement of total body potassium in a two compartment model of the lean body mass from which another estimate of body fat can be derived. All of the above techniques are subject to various errors and are based on a number of assumptions, some of which are incompletely validated. These techniques were applied to a population of normal subjects and to a group of cancer patients. The advantages and disadvantages of each method are discussed in terms of their ability to estimate total body fat.

  2. The deposit size frequency method for estimating undiscovered uranium deposits

    USGS Publications Warehouse

    McCammon, R.B.; Finch, W.I.

    1993-01-01

    The deposit size frequency (DSF) method has been developed as a generalization of the method that was used in the National Uranium Resource Evaluation (NURE) program to estimate the uranium endowment of the United States. The DSF method overcomes difficulties encountered during the NURE program when geologists were asked to provide subjective estimates of (1) the endowed fraction of an area judged favorable (factor F) for the occurrence of undiscovered uranium deposits and (2) the tons of endowed rock per unit area (factor T) within the endowed fraction of the favorable area. Because the magnitudes of factors F and T were unfamiliar to nearly all of the geologists, most geologists responded by estimating the number of undiscovered deposits likely to occur within the favorable area and the average size of these deposits. The DSF method combines factors F and T into a single factor (F·T) that represents the tons of endowed rock per unit area of the undiscovered deposits within the favorable area. Factor F·T, provided by the geologist, is the estimated number of undiscovered deposits per unit area in each of a number of specified deposit-size classes. The number of deposit-size classes and the size interval of each class are based on the data collected from the deposits in known (control) areas. The DSF method affords greater latitude in making subjective estimates than the NURE method and emphasizes more of the everyday experience of exploration geologists. Using the DSF method, new assessments have been made for the "young, organic-rich" surficial uranium deposits in Washington and Idaho and for the solution-collapse breccia pipe uranium deposits in the Grand Canyon region in Arizona and adjacent Utah. © 1993 Oxford University Press.
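The arithmetic of the F·T factor reduces to a weighted sum over size classes. The sketch below uses entirely hypothetical class midpoints, deposit densities, and areas, just to make the combination explicit:

```python
# Hypothetical deposit-size classes (tons of endowed rock per deposit) and a
# geologist's estimated number of undiscovered deposits per unit area (here,
# per 100 km^2) in each class -- all figures illustrative, not from the paper.
size_class_midpoints = [5_000, 50_000, 500_000]   # tons per deposit
deposits_per_unit_area = [0.40, 0.10, 0.01]       # deposits per 100 km^2
favorable_area_units = 12.0                       # favorable area, 100 km^2 units

# Factor F*T: tons of endowed rock per unit area, summed over size classes.
ft = sum(n * s for n, s in zip(deposits_per_unit_area, size_class_midpoints))

# Total endowment of the favorable area.
endowment_tons = ft * favorable_area_units
print(ft, endowment_tons)   # 12000.0 tons per unit area, 144000.0 tons total
```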

  3. Estimation of uncertainty for contour method residual stress measurements

    DOE PAGES

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
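The Monte Carlo style of error propagation behind such a spatially varying uncertainty map can be illustrated in one dimension. Everything below is a toy analogue with made-up magnitudes, and the moving-average smoother merely stands in for the method's actual surface fitting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1-D analogue of a random-error analysis: propagate
# displacement-measurement noise through a fixed smoothing operator by
# Monte Carlo and report a per-point standard deviation.
n = 60
x = np.linspace(0.0, 1.0, n)
true_displacement = 1e-3 * np.sin(np.pi * x)      # mm
noise_sigma = 5e-5                                # mm, measurement noise

def smooth(y, k=5):
    # moving-average smoother standing in for the method's surface fitting
    return np.convolve(y, np.ones(k) / k, mode="same")

# Monte Carlo: repeatedly perturb the data, smooth, and collect the spread.
trials = np.array([smooth(true_displacement + rng.normal(0.0, noise_sigma, n))
                   for _ in range(500)])
uncertainty = trials.std(axis=0)   # spatially varying uncertainty estimate

# Smoothing averages k points, so interior uncertainty sits near sigma/sqrt(k).
print(float(uncertainty[n // 2]))
```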

  5. Estimating Agricultural Water Use using the Operational Simplified Surface Energy Balance Evapotranspiration Estimation Method

    NASA Astrophysics Data System (ADS)

    Forbes, B. T.

    2015-12-01

    Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.

  6. Detecting diversity: emerging methods to estimate species diversity.

    PubMed

    Iknayan, Kelly J; Tingley, Morgan W; Furnas, Brett J; Beissinger, Steven R

    2014-02-01

    Estimates of species richness and diversity are central to community and macroecology and are frequently used in conservation planning. Commonly used diversity metrics account for undetected species primarily by controlling for sampling effort. Yet the probability of detecting an individual can vary among species, observers, survey methods, and sites. We review emerging methods to estimate alpha, beta, gamma, and metacommunity diversity through hierarchical multispecies occupancy models (MSOMs) and multispecies abundance models (MSAMs) that explicitly incorporate observation error in the detection process for species or individuals. We examine advantages, limitations, and assumptions of these detection-based hierarchical models for estimating species diversity. Accounting for imperfect detection using these approaches has influenced conclusions of comparative community studies and creates new opportunities for testing theory. PMID:24315534
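Why imperfect detection biases richness estimates can be shown with a toy simulation. This is a crude stand-in for a full hierarchical MSOM, with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of the detection problem: each of 40 species present at a
# site is found by any single survey only with probability p, so raw species
# counts understate richness. All numbers are illustrative.
true_richness, p, n_visits = 40, 0.4, 3

detections = rng.random((true_richness, n_visits)) < p
observed_richness = int(np.any(detections, axis=1).sum())

# A simple detection-based correction (not a full MSOM): the probability a
# present species is detected at least once in J visits is 1 - (1 - p)^J.
p_star = 1.0 - (1.0 - p) ** n_visits
corrected = observed_richness / p_star
print(observed_richness, round(corrected, 1))
```

In practice p is unknown and varies by species, observer, and site, which is exactly what the hierarchical multispecies models reviewed here estimate from the repeat-visit data themselves.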

  7. Inverse method for estimating shear stress in machining

    NASA Astrophysics Data System (ADS)

    Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.

    2016-01-01

    An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.

  8. A method of complex background estimation in astronomical images

    NASA Astrophysics Data System (ADS)

    Popowicz, A.; Smolka, B.

    2015-09-01

    In this paper, we present a novel approach to the estimation of strongly varying backgrounds in astronomical images by means of small-objects removal and subsequent missing pixels interpolation. The method is based on the analysis of a pixel local neighbourhood and utilizes the morphological distance transform. In contrast to popular background-estimation techniques, our algorithm allows for accurate extraction of complex structures, like galaxies or nebulae. Moreover, it does not require multiple tuning parameters, since it relies on physical properties of CCD image sensors - the gain and the readout noise characteristics. The comparison with other widely used background estimators revealed higher accuracy of the proposed technique. The superiority of the novel method is especially significant for the most challenging fluctuating backgrounds. The size of filtered-out objects is tunable; therefore, the algorithm may eliminate a wide range of foreground structures, including the dark current impulses, cosmic rays or even entire galaxies in deep field images.

  9. Estimation of Melanin and Hemoglobin Using Spectral Reflectance Images Reconstructed from a Digital RGB Image by the Wiener Estimation Method

    PubMed Central

    Nishidate, Izumi; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2013-01-01

    A multi-spectral diffuse reflectance imaging method based on a single snapshot of red-green-blue (RGB) images acquired with an exposure time of 65 ms (15 fps) was investigated for estimating melanin concentration, blood concentration, and oxygen saturation in human skin tissue. The technique utilizes the Wiener estimation method to deduce spectral reflectance images instantaneously from an RGB image. Using the resultant absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are numerically deduced in advance by the Monte Carlo simulations for light transport in skin. Oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on fingers during upper limb occlusion demonstrated the ability of the method to evaluate physiological reactions of human skin. PMID:23783740
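The Wiener estimation step, recovering a spectrum from three RGB values via a training ensemble, is linear algebra. The sketch below uses invented Gaussian camera sensitivities and an invented training set of smooth spectra; it is not the paper's calibration, only the standard Wiener construction W = Cs Rᵀ (R Cs Rᵀ)⁻¹ with noise neglected:

```python
import numpy as np

rng = np.random.default_rng(3)

n_bands = 31                                  # e.g. 400-700 nm at 10 nm steps
wl = np.linspace(400, 700, n_bands)

# Hypothetical RGB spectral sensitivities (Gaussian-shaped, illustrative).
def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)
R = np.stack([gauss(450, 30), gauss(550, 35), gauss(610, 30)])   # 3 x 31

# Training spectra: smooth random mixtures (stand-in for measured spectra).
basis = np.stack([np.ones(n_bands), wl / 700.0, gauss(560, 60)])
train = rng.random((500, 3)) @ basis                              # 500 x 31

# Wiener matrix from the training covariance, noise term omitted for brevity.
Cs = np.cov(train, rowvar=False)
W = Cs @ R.T @ np.linalg.inv(R @ Cs @ R.T)

# Reconstruct a held-out spectrum from its simulated RGB response.
s_true = np.array([0.3, 0.5, 0.2]) @ basis
s_hat = W @ (R @ s_true)
print(float(np.max(np.abs(s_hat - s_true))))
```

Because the test spectrum lies in the same low-dimensional subspace as the training ensemble, recovery here is essentially exact; real spectra are only approximately low-dimensional, which is where the estimator's statistical character matters.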

  10. Optimal Input Signal Design for Data-Centric Estimation Methods

    PubMed Central

    Deshpande, Sunil; Rivera, Daniel E.

    2013-01-01

    Data-centric estimation methods such as Model-on-Demand and Direct Weight Optimization form attractive techniques for estimating unknown functions from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point with the process repeated at each new operating point. This paper examines the design of optimal input signals formulated to produce informative data to be used by local modeling procedures. The proposed method specifically addresses the distribution of the regressor vectors. The design is examined for a linear time-invariant system under amplitude constraints on the input. The resulting optimization problem is solved using semidefinite relaxation methods. Numerical examples show the benefits in comparison to a classical PRBS input design. PMID:24317042
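The PRBS benchmark the paper compares against is easy to generate. Below is a standard maximal-length 7-bit linear-feedback shift register with taps (7, 6), giving a period of 2⁷ − 1 = 127 and a ±1 amplitude-constrained signal; the register length and taps are a common textbook choice, not taken from the paper:

```python
# Minimal maximum-length PRBS generator using a 7-bit Fibonacci LFSR.
def prbs7(n_samples, seed=0x7F):
    state, out = seed, []
    for _ in range(n_samples):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at bits 7 and 6
        state = ((state << 1) | new_bit) & 0x7F
        out.append(1.0 if state & 1 else -1.0)        # amplitude-constrained +/-1
    return out

signal = prbs7(254)
# The sequence repeats with period 127 and is nearly balanced over one period.
print(signal[:127] == signal[127:254], sum(signal[:127]))
```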

  11. Inertial sensor-based methods in walking speed estimation: a systematic review.

    PubMed

    Yang, Shuozhi; Li, Qingguo

    2012-01-01

    Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention for the past two decades, and the trend is continuing due to the improvement of performance and decrease in cost of the miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic search engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm. PMID:22778632

  13. Boundary estimation method for ultrasonic 3D imaging

    NASA Astrophysics Data System (ADS)

    Ohashi, Gosuke; Ohya, Akihisa; Natori, Michiya; Nakajima, Masato

    1993-09-01

    The authors developed a new method for automatically and efficiently estimating the boundaries of soft tissue and amniotic fluid, and for obtaining a fine three-dimensional image of the fetus from information given by ultrasonic echo images. The aim of this boundary estimation is to provide clear three-dimensional images by shading the surface of the fetus and uterine wall using the Lambert shading method. Normally a random granular pattern called 'speckle' appears on an ultrasonic echo image. It is therefore difficult to estimate the soft-tissue boundary satisfactorily via a simple method such as threshold-value processing. Accordingly, the authors devised a method for classifying voxels into three categories using a neural network: soft tissue, amniotic fluid, and boundary. The shape of the grey-level histogram, computed over the peripheral region of the voxel, was the standard for judgment. Its application to clinical data has shown a fine estimation of the boundary between the fetus or the uterine wall and the amniotic fluid, enabling the details of the three-dimensional structure to be observed.

  14. Characterization of optical traps using on-line estimation methods

    NASA Astrophysics Data System (ADS)

    Gorman, Jason J.; LeBrun, Thomas W.; Balijepalli, Arvind; Gagnon, Cedric; Lee, Dongjin

    2005-08-01

    System identification methods are presented for the estimation of the characteristic frequency of an optically trapped particle. These methods are more amenable to automated on-line measurements and are believed to be less prone to erroneous results compared to techniques based on thermal noise analysis. Optical tweezers have been shown to be an effective tool in measuring the complex interactions of micro-scale particles with piconewton resolution. However, the accuracy of the measurements depends heavily on knowledge of the trap stiffness and the viscous drag coefficient for the trapped particle. The most commonly referenced approach to measuring the trap stiffness is the power spectrum method, which provides the characteristic frequency for the trap based on the roll-off of the frequency response of a trapped particle excited by thermal fluctuations. However, the reliance on thermal fluctuations to excite the trapping dynamics results in a large degree of uncertainty in the estimated characteristic frequency. These issues are addressed by two parameter estimation methods which can be implemented on-line for fast trap characterization. The first is a frequency domain system identification approach which combines swept-sine frequency testing with a least-squares transfer function fitting algorithm. The second is a recursive least-squares parameter estimation scheme. The algorithms and results from simulation studies are discussed in detail.
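The recursive least-squares idea can be illustrated on a discretized first-order model of the trapped bead, x[k+1] = a·x[k] + w[k] with a = exp(−2π·fc·dt), from which the characteristic frequency is recovered. All parameter values below are illustrative, and the scalar RLS recursion is the textbook form, not the paper's exact implementation:

```python
import math
import random

random.seed(4)

# Simulate thermally driven bead positions under a first-order trap model.
fc_true, dt = 500.0, 1e-4            # characteristic frequency (Hz), step (s)
a_true = math.exp(-2 * math.pi * fc_true * dt)
x = [0.0]
for _ in range(20000):
    x.append(a_true * x[-1] + random.gauss(0.0, 1e-3))

# Scalar recursive least squares: update a_hat from (x[k], x[k+1]) pairs.
a_hat, P, lam = 0.0, 1e6, 1.0        # estimate, covariance, forgetting factor
for k in range(len(x) - 1):
    phi = x[k]
    gain = P * phi / (lam + phi * P * phi)
    a_hat += gain * (x[k + 1] - a_hat * phi)
    P = (P - gain * phi * P) / lam

# Invert the discretization to recover the characteristic frequency.
fc_hat = -math.log(a_hat) / (2 * math.pi * dt)
print(round(fc_hat, 1))
```

Run on-line, the recursion refines `a_hat` with every new sample, which is what makes this style of estimator attractive for fast automated trap characterization.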

  15. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
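The superelevation (forced-vortex) relation commonly used for such back-calculations is v = √(k·g·Rc·Δh/b), where Rc is the bend's radius of curvature, Δh the flow-surface superelevation, b the flow width, and k a correction factor often taken near 1. The input values below are illustrative, and the sketch shows the sensitivity to Rc that the study highlights:

```python
import math

def superelevation_velocity(radius_m, dh_m, width_m, k=1.0, g=9.81):
    # v = sqrt(k * g * Rc * dh / b), the standard forced-vortex relation
    return math.sqrt(k * g * radius_m * dh_m / width_m)

# A 2x spread in the subjective radius-of-curvature estimate translates into
# a sqrt(2) (~1.4x) spread in back-calculated velocity.
v_low = superelevation_velocity(radius_m=20.0, dh_m=1.5, width_m=8.0)
v_high = superelevation_velocity(radius_m=40.0, dh_m=1.5, width_m=8.0)
print(round(v_low, 2), round(v_high, 2))   # 6.07 8.58 (m/s)
```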

  16. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.

  17. Developing methods for timely and relevant mission impact estimation

    NASA Astrophysics Data System (ADS)

    Grimaila, Michael R.; Fortson, Larry W., Jr.; Sutton, Janet L.; Mills, Robert F.

    2009-05-01

    Military organizations embed information systems and networking technologies into their core mission processes as a means to increase operational efficiency, improve decision making quality, and shorten the "kill chain". Unfortunately, this dependence can place the mission at risk when the loss or degradation of the confidentiality, integrity, availability, non-repudiation, or authenticity of a critical information resource or flow occurs. Since the accuracy, conciseness, and timeliness of the information used in command decision making processes impacts the quality of these decisions, and hence, the operational mission outcome; it is imperative to explicitly recognize, quantify, and document critical mission-information dependencies in order to gain a true appreciation of operational risk. We conjecture that what is needed is a structured process to provide decision makers with real-time awareness of the status of critical information resources and timely notification of estimated mission impact, from the time an information incident is declared, until the incident is fully remediated. In this paper, we discuss our initial research towards the development of a mission impact estimation engine which fuses information from subject matter experts, historical mission impacts, and explicit mission models to provide the ability to estimate the mission impacts resulting from an information incident in real-time.

  18. Stress intensity estimates by a computer assisted photoelastic method

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1977-01-01

    Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

  19. Nonparametric methods for drought severity estimation at ungauged sites

    NASA Astrophysics Data System (ADS)

    Sadri, S.; Burn, D. H.

    2012-12-01

    The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.

  20. Three Different Methods of Estimating LAI in a Small Watershed

    NASA Astrophysics Data System (ADS)

    Speckman, H. N.; Ewers, B. E.; Beverly, D.

    2015-12-01

    Leaf area index (LAI) is a critical input of models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI, most of which are labor intensive, and all often fail to converge on similar estimates. Recent large-scale bark beetle induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates due to the lack of differentiation between live and dead branches in dense canopy. Our study aims to quantify LAI using three different LAI methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle-infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI using in-situ hemispherical canopy photographs that were then analyzed with Hemisfer software. The second technique used the Kaufmann (1982) allometrics from forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark beetle driven mortality. The final technique used airborne light detection and ranging (LIDAR) first DMS returns, which were used to estimate canopy heights and crown area. LIDAR final returns provided topographical information and were ground-truthed during forest inventories. Once the data were collected, an analysis was conducted comparing the three methods. Species composition was driven by slope position and elevation. Ultimately, the three techniques provided very different estimations of LAI, but each had its advantages: estimates from hemispherical photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR were able to quickly and

  1. A New Method for Deriving Global Estimates of Maternal Mortality.

    PubMed

    Wilmoth, John R; Mizoguchi, Nobuko; Oestergaard, Mikkel Z; Say, Lale; Mathers, Colin D; Zureick-Brown, Sarah; Inoue, Mie; Chou, Doris

    2012-07-13

    Maternal mortality is widely regarded as a key indicator of population health and of social and economic development. Its levels and trends are monitored closely by the United Nations and others, inspired in part by the UN's Millennium Development Goals (MDGs), which call for a three-fourths reduction in the maternal mortality ratio between 1990 and 2015. Unfortunately, the empirical basis for such monitoring remains quite weak, requiring the use of statistical models to obtain estimates for most countries. In this paper we describe a new method for estimating global levels and trends in maternal mortality. For countries lacking adequate data for direct calculation of estimates, we employed a parametric model that separates maternal deaths related to HIV/AIDS from all others. For maternal deaths unrelated to HIV/AIDS, the model consists of a hierarchical linear regression with three predictors and variable intercepts for both countries and regions. The uncertainty of estimates was assessed by simulating the estimation process, accounting for variability both in the data and in other model inputs. The method was used to obtain the most recent set of UN estimates, published in September 2010. Here, we provide a concise description and explanation of the approach, including a new analysis of the components of variability reflected in the uncertainty intervals. Final estimates provide evidence of a more rapid decline in the global maternal mortality ratio than suggested by previous work, including another study published in April 2010. We compare findings from the two recent studies and discuss topics for further research to help resolve differences. PMID:24416714

  2. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distributions (Log-Pearson III and Weibull) had lower mean square errors than did the Box-Cox transformation method or the log-Boughton method, which is based on a fit of plotting positions.
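The bootstrap idea here, resampling the observed flow series to assess an estimator's variability without assuming its true distribution, can be sketched compactly. The flow series, the lognormal fit (a simple stand-in for Log-Pearson III), and all numbers below are illustrative:

```python
import math
import random
import statistics

random.seed(5)

# Illustrative annual 7-day low-flow series, m^3/s.
flows = [12.1, 9.8, 15.3, 8.7, 11.2, 10.5, 7.9, 13.4, 9.1, 14.0,
         10.8, 8.3, 12.9, 11.7, 9.5]

def q10_lognormal(sample):
    # 10-year low flow taken as the 10th percentile of a fitted lognormal;
    # -1.2816 is the standard-normal 10th-percentile deviate.
    logs = [math.log(v) for v in sample]
    return math.exp(statistics.fmean(logs) - 1.2816 * statistics.stdev(logs))

# Bootstrap: resample the series with replacement and re-estimate each time.
estimates = []
for _ in range(2000):
    resample = [random.choice(flows) for _ in flows]
    estimates.append(q10_lognormal(resample))

# The spread of the bootstrap estimates quantifies the estimator's error.
print(round(statistics.fmean(estimates), 2), round(statistics.stdev(estimates), 2))
```

Comparing the bootstrap spread (and bias) of several candidate estimators on the same resamples is the essence of the paper's method comparison.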

  3. Statistical estimation of mineral age by K-Ar method

    SciTech Connect

    Vistelius, A.B.; Drubetzkoy, E.R.; Faas, A.V.

    1989-11-01

    Statistical estimation of age of ⁴⁰Ar/⁴⁰K ratios may be considered a result of convolution of uniform and normal distributions with different weights for different minerals. Data from Gul'shad Massif (Nearbalkhash, Kazakhstan, USSR) indicate that ⁴⁰Ar/⁴⁰K ratios reflecting the intensity of geochemical processes can be resolved using convolutions. Loss of ⁴⁰Ar in biotites is shown, whereas hornblende retained the original content of ⁴⁰Ar throughout the geological history of the massif. Results demonstrate that different estimation methods must be used for different minerals and different rocks when radiometric ages are employed for dating.

  4. Estimation of Defect's Geometric Parameters with a Thermal Method

    NASA Astrophysics Data System (ADS)

    Protasov, A.; Sineglazov, V.

    2003-03-01

    The problem of estimating flaw parameters has been solved in two stages. At the first stage, the relationship between the temperature difference on a heated sample's surface and the geometrical parameters of the flaw was estimated. For this purpose we solved a direct heat conduction problem for various combinations of the geometrical sizes of the flaw. At the second stage, we solved an inverse heat conduction problem using the H-infinity method of identification. The results have shown good convergence to the real parameters.

  5. A new method for estimating growth transition matrices.

    PubMed

    Hillary, R M

    2011-03-01

    The vast majority of population models work using age or stage, not length, but there are many cases where animals cannot be aged sensibly or accurately. For these cases length-based models form the logical alternative, but there has been little work done to develop and compare different methods of estimating growth transition matrices to be used in such models. This article demonstrates how a consistent Bayesian framework for estimating growth parameters and a novel method for constructing length transition matrices account for variation in growth in a clear and consistent manner and avoid potential subjective choices required using more established methods. The inclusion of the resultant growth uncertainty in population assessment models and the potential impact on management decisions is also addressed.
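A common construction of a length transition matrix (not necessarily the paper's exact method) combines a von Bertalanffy mean increment with normally distributed growth variability, then bins the resulting next-length distribution. All parameter values below are illustrative:

```python
import math

# von Bertalanffy growth with normal variability around the mean increment.
Linf, K, sigma = 60.0, 0.3, 2.0                 # cm, 1/yr, cm
edges = [float(v) for v in range(10, 62, 2)]    # 2-cm length bins, 10-60 cm
mids = [(a + b) / 2 for a, b in zip(edges, edges[1:])]

def normal_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

matrix = []
for L in mids:
    mean_next = L + (Linf - L) * (1.0 - math.exp(-K))   # expected length in 1 yr
    # probability mass of the next-length distribution falling in each bin
    row = [normal_cdf(b, mean_next, sigma) - normal_cdf(a, mean_next, sigma)
           for a, b in zip(edges, edges[1:])]
    total = sum(row)
    matrix.append([p / total for p in row])             # renormalize truncation

# Each row is a probability distribution over next year's length class.
print(len(matrix), len(matrix[0]), round(sum(matrix[0]), 6))
```

The subjective choices the paper seeks to avoid live in exactly these steps: the bin widths, the spread model, and how truncated tail mass is handled.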

  6. Noninvasive method of estimating human newborn regional cerebral blood flow

    SciTech Connect

    Younkin, D.P.; Reivich, M.; Jaggi, J.; Obrist, W.; Delivoria-Papadopoulos, M.

    1982-12-01

    A noninvasive method of estimating regional cerebral blood flow (rCBF) in premature and full-term babies has been developed. Based on a modification of the ¹³³Xe inhalation rCBF technique, this method uses eight extracranial NaI scintillation detectors and an i.v. bolus injection of ¹³³Xe (approximately 0.5 mCi/kg). Arterial xenon concentration was estimated with an external chest detector. Cerebral blood flow was measured in 15 healthy, neurologically normal premature infants. Using Obrist's method of two-compartment analysis, normal values were calculated for flow in both compartments, relative weight and fractional flow in the first compartment (gray matter), initial slope of gray matter blood flow, mean cerebral blood flow, and initial slope index of mean cerebral blood flow. The application of this technique to newborns, its relative advantages, and its potential uses are discussed.

  7. An aerial survey method to estimate sea otter abundance

    USGS Publications Warehouse

    Bodkin, J.L.; Udevitz, M.S.

    1999-01-01

    Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
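
The availability/sightability adjustment can be sketched as a simple ratio estimator in which intensive searches are treated as complete counts; this is a deliberate simplification of the survey- and observer-specific corrections described above, with made-up counts:

```python
def detection_corrected_estimate(strip_counts, intensive_pairs):
    """strip_counts: otters counted on standard strip transects.
    intensive_pairs: (standard_count, intensive_count) for subunits searched
    both ways; intensive counts are assumed complete (illustrative)."""
    seen_standard = sum(s for s, _ in intensive_pairs)
    seen_intensive = sum(i for _, i in intensive_pairs)
    p_hat = seen_standard / seen_intensive   # detection probability on standard strips
    n_hat = sum(strip_counts) / p_hat        # corrected abundance on surveyed strips
    return n_hat, p_hat

n_hat, p_hat = detection_corrected_estimate([120, 95, 140], [(30, 50), (22, 30)])
```

A full survey estimator would additionally expand to unsurveyed area and propagate the variance of p_hat into the reported standard error.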

  8. A new analytical method for groundwater recharge and discharge estimation

    NASA Astrophysics Data System (ADS)

    Liang, Xiuyu; Zhang, You-Kuan

    2012-07-01

    A new analytical method was proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h², where h is the water table elevation, with a time-dependent source term. The solution derived was validated with numerical simulation and was shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well during a period of 100 days, we showed that the method proposed in this study can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as the lateral drainage. The total R was reasonably estimated with a water-table fluctuation (WTF) method if water table measurements away from a fixed-head boundary were used, but the total ET was overestimated and the total net recharge was underestimated because the WTF method does not account for lateral drainage and aquifer storage.
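
The WTF baseline mentioned above can be sketched as follows, crediting recharge only for water-table rises (specific yield and water levels are made-up values; the paper's analytical solution additionally accounts for lateral drainage and ET, which this sketch omits):

```python
def wtf_recharge(levels, sy):
    """Water-table fluctuation estimate: R = Sy * (sum of water-table
    rises); declines are attributed to drainage/ET and ignored - an
    illustrative simplification of the WTF method."""
    rises = [b - a for a, b in zip(levels, levels[1:]) if b > a]
    return sy * sum(rises)

# daily water-table elevations (m) and an assumed specific yield of 0.2
R = wtf_recharge([10.00, 10.05, 10.12, 10.08, 10.20], sy=0.2)
```

Ignoring the decline terms is exactly why, per the abstract, the plain WTF method can misattribute lateral drainage and storage changes.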

  9. Method to Estimate the Dissolved Air Content in Hydraulic Fluid

    NASA Technical Reports Server (NTRS)

    Hauser, Daniel M.

    2011-01-01

    In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature, but could theoretically be carried out at higher pressures and elevated temperatures.
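
The partial-pressure-to-dissolved-air calculation can be sketched as below. The structure (Henry's law for O₂, N₂ inferred via the air-proportional assumption) follows the abstract, but the coefficient values and the 79/21 scaling convention are illustrative assumptions, not the report's numbers:

```python
def dissolved_air_fraction(p_o2, h_o2, h_n2):
    """Estimate total dissolved air (vol gas per vol fluid) from a measured
    O2 partial pressure p_o2 (atm). h_o2, h_n2 are assumed Henry's-law
    solubilities (vol/vol/atm); N2 partial pressure is assumed to scale
    with O2 as in air (79/21) - all values illustrative."""
    v_o2 = h_o2 * p_o2                        # dissolved O2 by Henry's law
    v_n2 = h_n2 * p_o2 * (0.79 / 0.21)        # N2 at air-proportional saturation
    return v_o2 + v_n2

# illustrative numbers: measured pO2 of 0.21 atm, assumed solubilities
total = dissolved_air_fraction(0.21, h_o2=0.3, h_n2=0.15)
```
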

  10. Dental age estimation using Willems method: A digital orthopantomographic study

    PubMed Central

    Mohammed, Rezwana Begum; Krishnamraju, P. V.; Prasanth, P. S.; Sanghvi, Praveen; Lata Reddy, M. Asha; Jyotsna, S.

    2014-01-01

    In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who met the study criteria were obtained. Development of the mandibular teeth (from the central incisor to the second molar in the left quadrant) was assessed and DA was estimated using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P < 0.001), while for females it was 0.08 ± 1.34 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years, which is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate a mean DA using the Willems method, and also an estimated age range for an individual of unknown CA. PMID:25191076

  11. Methods of Mmax Estimation East of the Rocky Mountains

    USGS Publications Warehouse

    Wheeler, Russell L.

    2009-01-01

    Several methods have been used to estimate the magnitude of the largest possible earthquake (Mmax) in parts of the Central and Eastern United States and adjacent Canada (CEUSAC). Each method has pros and cons. The largest observed earthquake in a specified area provides an unarguable lower bound on Mmax in the area. Beyond that, all methods are undermined by the enigmatic nature of geologic controls on the propagation of large CEUSAC ruptures. Short historical-seismicity records decrease the defensibility of several methods that are based on characteristics of small areas in most of CEUSAC. Methods that use global tectonic analogs of CEUSAC encounter uncertainties in understanding what 'analog' means. Five of the methods produce results that are inconsistent with paleoseismic findings from CEUSAC seismic zones or individual active faults.

  12. Estimation of quality factors by energy ratio method

    NASA Astrophysics Data System (ADS)

    Wang, Zong-Jun; Cao, Si-Yuan; Zhang, Hao-Ran; Qu, Ying-Ming; Yuan, Dian; Yang, Jin-Hao; Shao, Guan-Ming

    2015-03-01

    The quality factor Q, which reflects the energy attenuation of seismic waves in subsurface media, is a diagnostic tool for hydrocarbon detection and reservoir characterization. In this paper, we propose a new Q extraction method based on the energy ratio before and after the wavelet attenuation, named the energy-ratio method (ERM). The proposed method uses multipoint signal data in the time domain to estimate the wavelet energy without invoking the source wavelet spectrum, which is necessary in conventional Q extraction methods, and is applicable to any source wavelet spectrum; however, it requires high-precision seismic data. Forward zero-offset VSP modeling suggests that the ERM can be used for reliable Q inversion after nonintrinsic attenuation (geometric dispersion, reflection, and transmission loss) compensation. The application to real zero-offset VSP data shows that the Q values extracted by the ERM and spectral ratio methods are identical, which proves the reliability of the new method.
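
As a point of reference, the spectral-ratio method that the ERM is validated against can be sketched: under constant-Q attenuation, ln(A₂/A₁) = −πfΔt/Q, so Q follows from the slope of the log amplitude ratio against frequency. The data below are synthetic and noise-free; this is not the authors' ERM implementation:

```python
import math

def q_spectral_ratio(freqs, amp1, amp2, traveltime):
    """Spectral-ratio Q estimate: fit ln(A2/A1) = slope * f by least
    squares, then Q = -pi * traveltime / slope."""
    y = [math.log(a2 / a1) for a1, a2 in zip(amp1, amp2)]
    n = len(freqs)
    fbar, ybar = sum(freqs) / n, sum(y) / n
    slope = (sum((f - fbar) * (yi - ybar) for f, yi in zip(freqs, y))
             / sum((f - fbar) ** 2 for f in freqs))
    return -math.pi * traveltime / slope

# synthetic check: true Q = 50, interval traveltime = 0.5 s
freqs = [10.0, 20.0, 30.0, 40.0, 50.0]
amp1 = [1.0] * 5
amp2 = [math.exp(-math.pi * f * 0.5 / 50.0) for f in freqs]
Q = q_spectral_ratio(freqs, amp1, amp2, traveltime=0.5)
```

The ERM replaces the spectral division with time-domain energy ratios, which avoids needing the source-wavelet spectrum.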

  13. Point estimation of simultaneous methods for solving polynomial equations

    NASA Astrophysics Data System (ADS)

    Petkovic, Miodrag S.; Petkovic, Ljiljana D.; Rancic, Lidija Z.

    2007-08-01

    The construction of computationally verifiable initial conditions which provide both the guaranteed and fast convergence of the numerical root-finding algorithm is one of the most important problems in solving nonlinear equations. Smale's "point estimation theory" from 1981 was a great advance in this topic; it treats convergence conditions and the domain of convergence in solving an equation f(z)=0 using only the information of f at the initial point z0. The study of a general problem of the construction of initial conditions of practical interest providing guaranteed convergence is very difficult, even in the case of algebraic polynomials. In the light of Smale's point estimation theory, an efficient approach based on some results concerning localization of polynomial zeros and convergent sequences is applied in this paper to iterative methods for the simultaneous determination of simple zeros of polynomials. We state new, improved initial conditions which provide the guaranteed convergence of frequently used simultaneous methods for solving algebraic equations: Ehrlich-Aberth's method, Ehrlich-Aberth's method with Newton's correction, Borsch-Supan's method with Weierstrass' correction and Halley-like (or Wang-Zheng) method. The introduced concept offers not only a clear insight into the convergence analysis of sequences generated by the considered methods, but also explicitly gives their order of convergence. The stated initial conditions are of significant practical importance since they are computationally verifiable; they depend only on the coefficients of a given polynomial, its degree n and initial approximations to polynomial zeros.

  14. The Lyapunov dimension and its estimation via the Leonov method

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. V.

    2016-06-01

    Along with widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula effectively. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems, an analog of the Leonov method is suggested. The connection between the Leonov method and the key related works is presented here in a simple but rigorous way: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
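
The Kaplan-Yorke definition referenced above can be computed directly from an ordered Lyapunov spectrum: D = j + (λ₁ + … + λⱼ)/|λⱼ₊₁|, where j is the largest index with a non-negative partial sum. The Lorenz-system exponents below are approximate literature values used only as a sanity check:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a list of Lyapunov
    exponents: sort descending, accumulate while the partial sum stays
    non-negative, then interpolate into the next exponent."""
    lams = sorted(exponents, reverse=True)
    partial, j = 0.0, 0
    for lam in lams:
        if partial + lam >= 0.0:
            partial += lam
            j += 1
        else:
            break
    if j == len(lams):
        return float(j)          # all partial sums non-negative
    return j + partial / abs(lams[j])

# classic Lorenz attractor exponents (approximate literature values)
D = kaplan_yorke_dimension([0.906, 0.0, -14.572])
```
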

  15. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low-temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demand when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
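
The kernel-density idea can be sketched with a Gaussian-kernel CDF and a bisection search for the temperature whose non-exceedance probability corresponds to one day in N years. The bandwidth, bracketing interval, daily-probability convention, and toy data below are illustrative assumptions, not the thesis's calibrated choices:

```python
import math

def kde_cdf(x, data, h):
    """Gaussian kernel density CDF: the average of normal CDFs centred
    on the data points, with bandwidth h."""
    return sum(0.5 * (1.0 + math.erf((x - xi) / (h * math.sqrt(2.0))))
               for xi in data) / len(data)

def one_in_n_threshold(data, n_years, h=1.5, lo=-80.0, hi=40.0):
    """Find the temperature whose KDE non-exceedance probability equals
    1/(365.25 * n_years) of days, by bisection on the monotone CDF."""
    target = 1.0 / (365.25 * n_years)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if kde_cdf(mid, data, h) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

data = [float(t) for t in range(-10, 11)]  # toy daily wind-adjusted temps (deg C)
threshold = one_in_n_threshold(data, n_years=10)
```

Because the Gaussian kernel has unbounded support, the estimated one-in-N threshold can fall below the coldest observed value, which is what makes the approach usable for extrapolating rare cold events.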

  16. Vegetation index methods for estimating evapotranspiration by remote sensing

    USGS Publications Warehouse

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-01-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.

  17. Visuospatial ability, accuracy of size estimation, and bulimic disturbance in a noneating-disordered college sample: a neuropsychological analysis.

    PubMed

    Thompson, J K; Spana, R E

    1991-08-01

    The relationship between visuospatial ability and size accuracy in perception was assessed in 69 normal college females. In general, correlations indicated small associations between visuospatial defects and size overestimation and little relationship between visuospatial ability and level of bulimic disturbance. Implications for research on the size overestimation of body image are addressed.

  18. Impedance-estimation methods, modeling methods, articles of manufacture, impedance-modeling devices, and estimated-impedance monitoring systems

    DOEpatents

    Richardson, John G.

    2009-11-17

    An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.

  19. Comparison of the performance of two methods for height estimation.

    PubMed

    Edelman, Gerda; Alberink, Ivo; Hoogeboom, Bart

    2010-03-01

    In the case study, two methods of performing body height measurements in images are compared based on projective geometry and 3D modeling of the crime scene. Accuracy and stability of height estimations are tested using reconstruction images of test persons of known height. Given unchanged camera settings, predictions of both methods are accurate. However, as the camera had been moved in the case, new vanishing points and camera matches had to be created for the reconstruction images. 3D modeling still yielded accurate and stable estimations. Projective geometry produced incorrect predictions for test persons and unstable intervals for questioned persons. The latter is probably caused by the straight lines in the field of view being hard to discern. With the quality of material presented, which is representative for our case practice, using vanishing points may thus yield unstable results. The results underline the importance of performing validation experiments in casework. PMID:20158593

  20. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in acoustic designs of building and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques were developed; one of them uses 2 microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics. PMID:21939599

  1. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry-Sauer scheme on the Lorenz-96 example.

  2. Uncertainty in streamflow records - a comparison of multiple estimation methods

    NASA Astrophysics Data System (ADS)

    Kiang, Julie; Gazoorian, Chris; Mason, Robert; Le Coz, Jerome; Renard, Benjamin; Mansanarez, Valentin; McMillan, Hilary; Westerberg, Ida; Petersen-Øverleir, Asgeir; Reitan, Trond; Sikorska, Anna; Siebert, Jan; Coxon, Gemma; Freer, Jim; Belleville, Arnaud; Hauet, Alexandre

    2016-04-01

    Stage-discharge rating curves are used to relate streamflow discharge to continuously measured river stage readings in order to create a continuous record of streamflow discharge. The stage-discharge relationship is estimated and refined using discrete streamflow gaugings over time, during which both the discharge and stage are measured. The resulting rating curve has uncertainty due to multiple factors, including the curve-fitting process, assumptions on the form of the model used, the changeable nature of natural channels, and the approaches used to extrapolate the rating equation beyond available observations. A number of different methods have been proposed for estimating rating curve uncertainty, differing in mathematical rigour, in the assumptions made about the component errors, and in the information required to implement the method at any given site. This study compares several methods that range from simple LOWESS fits to more complicated Bayesian methods that consider hydraulic principles directly. We evaluate these different methods when applied to a single gauging station using the same information (channel characteristics, hydrographs, and streamflow gaugings). We quantify the resultant spread of the stage-discharge curves and compare the level of uncertainty attributed to the streamflow record by the different methods.
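
The simplest member of the family being compared, a power-law rating curve Q = a(h − h₀)^b fitted by least squares in log space with an assumed-known cease-to-flow stage h₀, can be sketched as below; the study's methods additionally quantify the uncertainty that this sketch omits:

```python
import math

def fit_rating_curve(stages, discharges, h0):
    """Fit Q = a * (h - h0)^b by ordinary least squares on
    log(Q) = log(a) + b * log(h - h0); h0 is assumed known here,
    which is an illustrative simplification."""
    x = [math.log(h - h0) for h in stages]
    y = [math.log(q) for q in discharges]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = math.exp(ybar - b * xbar)
    return a, b

# synthetic gaugings generated from a = 2.5, b = 1.6, h0 = 0.3
stages = [0.8, 1.2, 1.7, 2.4, 3.1]
discharges = [2.5 * (h - 0.3) ** 1.6 for h in stages]
a, b = fit_rating_curve(stages, discharges, h0=0.3)
```

With noisy real gaugings, the spread of plausible (a, b, h₀) triples is exactly the rating-curve uncertainty the compared methods try to characterise.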

  3. Methods of evaluating the spermatogenic ability of male raccoons (Procyon lotor).

    PubMed

    Uno, Taiki; Kato, Takuya; Seki, Yoshikazu; Kawakami, Eiichi; Hayama, Shin-ichi

    2014-01-01

    Feral raccoons (Procyon lotor) have been growing in number in Japan, and they are becoming a problematic invasive species. Consequently, they are commonly captured and killed in pest control programs. For effective population control of feral raccoons, it is necessary to understand their reproductive physiology and ecology. Although the reproductive traits of female raccoons are well known, those of the males are not well understood because specialized knowledge and facilities are required to study them. In this study, we first used a simple evaluation method to assess spermatogenesis and the presence of spermatozoa in the tail of the epididymis of feral male raccoons by histologically examining the testis and epididymis. We then evaluated the possibility of using seven variables (body weight, body length, body mass index, testicular weight, epididymal weight, testicular size, and gonadosomatic index (GSI)) to estimate spermatogenesis and the presence of spermatozoa in the tail of the epididymis. GSI and body weight were chosen as criteria for spermatogenesis, and GSI was chosen as the criterion for the presence of spermatozoa in the tail of the epididymis. Because GSI is calculated from body weight and testicular weight, this model can be used to estimate the reproductive state of male raccoons regardless of season and age when just these two parameters are known. In this study, GSI was demonstrated to be an index of reproductive state in male raccoons. To our knowledge, this is the first report of such a use for GSI in a member of the Carnivora.
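
GSI itself is straightforward to compute from the two parameters named above; the sketch below uses the common percentage definition (gonad mass over body mass), and the study's actual cutoff values for spermatogenesis are not reproduced here:

```python
def gonadosomatic_index(testicular_weight_g, body_weight_g):
    """Gonadosomatic index as commonly defined: gonad mass as a
    percentage of body mass (illustrative; the paper's classification
    thresholds are not given here)."""
    return 100.0 * testicular_weight_g / body_weight_g

gsi = gonadosomatic_index(testicular_weight_g=8.0, body_weight_g=8000.0)
```
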

  4. Estimation of race admixture--a new method.

    PubMed

    Chakraborty, R

    1975-05-01

    The contribution of a parental population in the gene pool of a hybrid population which arose by hybridization with one or more other populations is estimated here at the population level from the probability of gene identity. The dynamics of accumulation of such admixture is studied incorporating the fluctuations due to finite size of the hybrid population. The method is illustrated with data on admixture in Cherokee Indians. PMID:1146991
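
For intuition, the classical single-locus admixture estimator, which is simpler than Chakraborty's gene-identity formulation, treats the hybrid allele frequency as a mixture of the two parental frequencies:

```python
def admixture_proportion(p_hybrid, p_parent1, p_parent2):
    """If p_h = m * p_1 + (1 - m) * p_2, then the admixture proportion
    from parent 1 is m = (p_h - p_2) / (p_1 - p_2). Requires the
    parental frequencies to differ (illustrative single-locus version,
    not Chakraborty's gene-identity estimator)."""
    return (p_hybrid - p_parent2) / (p_parent1 - p_parent2)

# e.g. hybrid frequency 0.35 between parental frequencies 0.5 and 0.25
m = admixture_proportion(0.35, 0.5, 0.25)
```

In practice such estimates are averaged over many loci; the gene-identity approach in the abstract works at the population level and incorporates drift in the finite hybrid population.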

  5. A Sensitivity Analysis of a Thin Film Conductivity Estimation Method

    SciTech Connect

    McMasters, Robert L; Dinwiddie, Ralph Barton

    2010-01-01

    An analysis method was developed for determining the thermal conductivity of a thin film on a substrate of known thermal properties using the flash diffusivity method. In order to determine the thermal conductivity of the film using this method, the volumetric heat capacity of the film must be known, as determined in a separate experiment. Additionally, the thermal properties of the substrate must be known, including conductivity and volumetric heat capacity. The ideal conditions for the experiment are a low-conductivity film adhered to a higher-conductivity substrate. As the film becomes thinner with respect to the substrate, or as the conductivity of the film approaches that of the substrate, the estimation of the thermal conductivity of the film becomes more difficult. The present research examines the effect of inaccuracies in the known parameters on the estimation of the parameter of interest, the thermal conductivity of the film. As such, perturbations are introduced into the other parameters in the experiment, which are assumed to be known, to find the effect on the estimated thermal conductivity of the film. A baseline case is established with the following parameters: substrate thermal conductivity, 1.0 W/m-K; substrate volumetric heat capacity, 10⁶ J/m³-K; substrate thickness, 0.8 mm; film thickness, 0.2 mm; film volumetric heat capacity, 10⁶ J/m³-K; film thermal conductivity, 0.01 W/m-K; convection coefficient, 20 W/m²-K; magnitude of heat absorbed during the flash, 1000 J/m². Each of these parameters, with the exception of the film thermal conductivity (the parameter of interest), is varied from its baseline value, in succession, and placed into a synthetic experimental data file. Each of these data files is individually analyzed by the program to determine the effect on the estimated film conductivity, thus quantifying the vulnerability of the method to measurement errors.

  6. A robust method for estimating landfill methane emissions.

    PubMed

    Figueroa, Veronica K; Mackie, Kevin R; Guarriello, Nick; Cooper, C David

    2009-08-01

    Because municipal solid waste (MSW) landfills emit significant amounts of methane, a potent greenhouse gas, there is considerable interest in quantifying surficial methane emissions from landfills. The authors present a method to estimate methane emissions, using ambient air volatile organic compound (VOC) measurements taken above the surface of the landfill. Using a hand-held monitor, hundreds of VOC concentrations can be taken easily in a day, and simple meteorological data can be recorded at the same time. The standard Gaussian dispersion equations are inverted and solved by matrix methods to determine the methane emission rates at hundreds of point locations throughout a MSW landfill. These point emission rates are then summed to give the total landfill emission rate. This method is tested on a central Florida MSW landfill using data from 3 different days, taken 6 and 12 months apart. A sensitivity study is conducted, and the emission estimates are most sensitive to the input meteorological parameters of wind speed and stability class. Because of the many measurements that are used, the results are robust. When the emission estimates were used as inputs into a dispersion model, a reasonable scatterplot fit of the individual concentration measurement data resulted. PMID:19728486
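
Structurally, the inversion described above amounts to solving a linear system whose coefficients come from the Gaussian dispersion equations: measured concentrations c relate to unknown point emission rates q through c = Aq. The sketch below uses an arbitrary synthetic coefficient matrix rather than real plume kernels, so it shows only the matrix-inversion step, not the authors' dispersion model:

```python
import numpy as np

def invert_emissions(A, c):
    """Recover point emission rates q from receptor concentrations c,
    given a receptor-by-source coefficient matrix A (here assumed
    precomputed from dispersion equations), by least squares."""
    q, *_ = np.linalg.lstsq(A, c, rcond=None)
    return q

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(12, 4))    # 12 receptors, 4 source cells (synthetic)
q_true = np.array([2.0, 0.5, 1.5, 3.0])    # synthetic emission rates
c = A @ q_true                             # noise-free "measurements"
q_est = invert_emissions(A, c)
```

Summing the recovered point rates gives the total landfill emission rate; with hundreds of measurements the least-squares solution becomes robust to individual measurement errors, as the abstract notes.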

  7. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but is only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.
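
The matrix-probing baseline that the paper improves on can be sketched with Rademacher probes: the elementwise product of a random ±1 vector with the operator applied to it averages to the diagonal. The inference-based smoothing step is not reproduced here:

```python
import numpy as np

def probe_diagonal(apply_op, n, n_probes, seed=0):
    """Stochastic diagonal estimation: diag(M) ~ mean over probes of
    v * (M v) with Rademacher vectors v, using only matrix-vector
    products (the baseline the paper's Wiener-filter step refines)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += v * apply_op(v)   # elementwise: diagonal plus zero-mean cross terms
    return acc / n_probes

# check on a small explicit matrix (in practice M is only a routine)
M = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.01 * np.ones((4, 4))
est = probe_diagonal(lambda v: M @ v, n=4, n_probes=20000)
```

The estimator's error comes from the off-diagonal cross terms, which is why exploiting continuity of the diagonal (the paper's contribution) can drastically cut the number of expensive probes needed.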

  8. Geometric estimation method for x-ray digital intraoral tomosynthesis

    NASA Astrophysics Data System (ADS)

    Li, Liang; Yang, Yao; Chen, Zhiqiang

    2016-06-01

    It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.

  9. The segmented-beat modulation method for ECG estimation.

    PubMed

    Agostinelli, A; Giuliani, C; Fioretti, S; Di Nardo, F; Burattini, L

    2015-08-01

    Electrocardiographic (ECG) tracings corrupted by noise with frequency components in the ECG frequency band may be useless unless appropriately processed. The estimation of the clean ECG from such recordings, however, is quite challenging, since linear filtering is inappropriate. In the common situations in which the R peaks are detectable, template-based techniques have been proposed to estimate the ECG by template-beat concatenation. However, such techniques have the major limitation of not being able to reproduce physiological heart-rate and morphological variability. Thus, the aim of the present study was to propose the segmented-beat modulation method (SBMM) as a technique that overcomes this limitation. The SBMM is an improved template-based technique that provides good-quality estimations of ECG tracings characterized by some heart-rate and morphological variability. It segments the template ECG beat into QRS and TUP segments and then, before concatenation, applies a modulation/demodulation process to the TUP segment so that the estimated-beat duration and morphology adjust to those of the corresponding original beat. To test its performance, the SBMM was applied to 19 ECG tracings from normal subjects. There were no errors in estimating the R-peak location, and the errors in the QRS and TUP segments were low (≤65 μV and ≤30 μV, respectively), with the former being significantly higher than the latter. Moreover, TUP errors tended to increase with increasing heart-rate variability (correlation coefficient: 0.59, P < 10⁻²). In conclusion, the new SBMM proved to be a useful tool for providing good-quality ECG estimations of tracings characterized by heart-rate and morphological variability.
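
The modulation step amounts to resampling the template TUP segment to the duration of the current beat before concatenation. A linear-interpolation sketch of that one step (not the authors' exact implementation, and with a toy waveform standing in for a real TUP template):

```python
import numpy as np

def modulate_segment(template_seg, target_len):
    """Stretch or compress a template segment to target_len samples by
    linear-interpolation resampling - the simplest form of the SBMM
    modulation/demodulation idea (illustrative)."""
    old = np.linspace(0.0, 1.0, len(template_seg))
    new = np.linspace(0.0, 1.0, target_len)
    return np.interp(new, old, template_seg)

tup = np.sin(np.linspace(0.0, np.pi, 120))   # toy template TUP segment
stretched = modulate_segment(tup, 150)       # adapt to a longer beat
```

Because only the TUP segment is modulated while the QRS is kept at fixed duration, the concatenated estimate tracks beat-to-beat RR variability without distorting the QRS morphology.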

  10. SCoPE: an efficient method of Cosmological Parameter Estimation

    SciTech Connect

    Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in

    2014-07-01

    Markov Chain Monte Carlo (MCMC) samplers are widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of MCMC sampling, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching to let an individual chain run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to compute and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One current research interest in cosmology is quantifying the nature of dark energy; we analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess how well the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis help us understand the workability of SCoPE and, at the same time, provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
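SCoPE's automatic covariance update resembles standard adaptive Metropolis, where the proposal covariance is periodically re-estimated from the chain history. A minimal single-chain sketch of that ingredient (delayed rejection, pre-fetching, and the inter-chain update are omitted; this is generic adaptive Metropolis, not SCoPE itself):

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_steps=5000, adapt_every=200, seed=0):
    """Random-walk Metropolis whose proposal covariance is re-estimated
    from the chain history as sampling progresses."""
    rng = np.random.default_rng(seed)
    dim = len(x0)
    cov = np.eye(dim) * 0.1                      # initial proposal covariance
    chain = [np.asarray(x0, float)]
    lp = log_post(chain[-1])
    for i in range(1, n_steps):
        prop = rng.multivariate_normal(chain[-1], cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1].copy())
        if i % adapt_every == 0 and i > adapt_every:
            hist = np.array(chain)
            # scaled empirical covariance plus a small jitter for stability
            cov = np.cov(hist.T) * 2.38**2 / dim + 1e-8 * np.eye(dim)
    return np.array(chain)
```

The 2.38²/d scaling is the usual optimal random-walk factor; SCoPE adds delayed rejection and pre-fetching on top of ideas like this to raise the acceptance rate and parallelize a single chain.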

  11. Methods for estimating low-flow statistics for Massachusetts streams

    USGS Publications Warehouse

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. 
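The drainage-area ratio method described above amounts to scaling the index-station statistic by the ratio of drainage areas, with the report's 0.3 to 1.5 applicability range as a guard. A minimal sketch (variable names are illustrative):

```python
def drainage_area_ratio_estimate(q_index, area_ungaged, area_index):
    """Estimate a low-flow statistic at an ungaged site by scaling the
    index-station value by the drainage-area ratio. The report advises
    this method only when the ratio is between about 0.3 and 1.5."""
    ratio = area_ungaged / area_index
    if not (0.3 <= ratio <= 1.5):
        raise ValueError(f"area ratio {ratio:.2f} outside recommended 0.3-1.5 range")
    return q_index * ratio
```

Outside that ratio range, the report's regression equations are the recommended alternative.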

  12. The composite method: An improved method for stream-water solute load estimation

    USGS Publications Warehouse

    Aulenbach, Brent T.; Hooper, R.P.

    2006-01-01

    The composite method is an alternative method for estimating stream-water solute loads, combining aspects of two commonly used methods: the regression-model method (which is used by the composite method to predict variations in concentrations between collected samples) and a period-weighted approach (which is used by the composite method to apply the residual concentrations from the regression model over time). The extensive dataset collected at the outlet of the Panola Mountain Research Watershed (PMRW) near Atlanta, Georgia, USA, was used in data analyses for illustrative purposes. A bootstrap (subsampling) experiment (using the composite method and the PMRW dataset along with various fixed-interval and large storm sampling schemes) obtained load estimates for the 8-year study period with a magnitude of the bias of less than 1%, even for estimates that included the fewest number of samples. Precisions were always <2% on a study period and annual basis, and <2% precisions were obtained for quarterly and monthly time intervals for estimates that had better sampling. The bias and precision of composite-method load estimates varies depending on the variability in the regression-model residuals, how residuals systematically deviated from the regression model over time, sampling design, and the time interval of the load estimate. The regression-model method did not estimate loads precisely during shorter time intervals, from annually to monthly, because the model could not explain short-term patterns in the observed concentrations. Load estimates using the period-weighted approach typically are biased as a result of sampling distribution and are accurate only with extensive sampling. The formulation of the composite method facilitates exploration of patterns (trends) contained in the unmodelled portion of the load. Published in 2006 by John Wiley & Sons, Ltd.
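In outline, the composite method predicts concentration with a regression model and then corrects the prediction by carrying the residuals observed at sample times across the intervening record. A simplified sketch assuming a log-log concentration-discharge regression (the PMRW model itself includes more terms, e.g. season and trend):

```python
import numpy as np

def composite_load(t, flow, t_samp, conc_samp, dt):
    """Estimate total solute load: a regression on log flow predicts
    log concentration; residuals at sample times are linearly
    interpolated over time and added back (period-weighted residuals)."""
    # fit log C = b0 + b1 * log Q at the sampled times
    q_s = np.interp(t_samp, t, flow)
    b1, b0 = np.polyfit(np.log(q_s), np.log(conc_samp), 1)
    pred = b0 + b1 * np.log(flow)               # modelled log concentration
    resid_s = np.log(conc_samp) - (b0 + b1 * np.log(q_s))
    resid = np.interp(t, t_samp, resid_s)       # residuals carried over time
    conc = np.exp(pred + resid)
    return np.sum(conc * flow) * dt             # load = sum(C * Q * dt)
```

The interpolated-residual term is what distinguishes the composite method from a pure regression-model load estimate.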

  13. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil-water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors: the octanol/water partition coefficient (Kow), molecular connectivity (mXt), and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that the Koc correlation is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two commonly used class-specific correlations to determine which methods are most suitable.
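The reported regression can be applied directly, and the retardation-factor example can be sketched with the conventional linear-sorption expression R = 1 + (ρ_b/θ)·f_oc·K_oc (the standard formula is assumed here to match the paper's usage):

```python
def koc_from_kow(log_kow):
    """Koc estimate from the paper's regression:
    log Koc = 0.903 log Kow + 0.094 (calibrated for 1.7 <= log Kow <= 7.0)."""
    if not (1.7 <= log_kow <= 7.0):
        raise ValueError("regression calibrated only for log Kow in [1.7, 7.0]")
    return 10 ** (0.903 * log_kow + 0.094)

def retardation_factor(koc, f_oc, bulk_density, porosity):
    """Conventional linear-sorption retardation factor:
    R = 1 + (rho_b / theta) * f_oc * Koc."""
    return 1.0 + (bulk_density / porosity) * f_oc * koc
```

For example, a chemical with log Kow = 3.0 gives Koc of roughly 6.4 x 10^2 L/kg under this regression.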

  14. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Celebi, Mehmet

    1990-01-01

    A method to estimate the center of rigidity of buildings from vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimal at the center of rigidity. Since coherence is a function of frequency, a gross but frequency-independent measure of coherency is defined as the integral of the coherence function over frequency; the center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples of the method are presented: a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
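The minimum-coherence criterion can be sketched numerically: for a rigid floor, translation at any candidate point is a linear combination of two edge records, and the point whose translation is least coherent with the inferred rotation is the estimate. A one-dimensional simplification of the paper's two-dimensional formulation (the interpolation setup and names are illustrative):

```python
import numpy as np
from scipy.signal import coherence

def center_of_rigidity(u1, u2, L, fs, n_grid=101):
    """Scan candidate positions s along a floor of length L.
    Translation at s is interpolated from edge records u1 and u2;
    rotation is approximated by (u2 - u1) / L. The estimate minimizes
    the coherence between translation and rotation averaged over
    frequency."""
    theta = (u2 - u1) / L                        # rotational motion proxy
    best_s, best_c = None, np.inf
    for s in np.linspace(0.0, L, n_grid):
        u_s = u1 + (s / L) * (u2 - u1)           # rigid-body interpolation
        _, coh = coherence(u_s, theta, fs=fs, nperseg=256)
        c = float(np.mean(coh))                  # frequency-averaged coherence
        if c < best_c:
            best_s, best_c = s, c
    return best_s
```

With independent translational and rotational excitations, the averaged coherence drops sharply at the true center, so a coarse grid search suffices.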

  15. Estimates of tropical bromoform emissions using an inversion method

    NASA Astrophysics Data System (ADS)

    Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.

    2013-08-01

    Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remain uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to use the first multi-annual set of CHBr3 measurements from this region, and an inversion method, to reduce this uncertainty. We find that local measurements of a short-lived gas like CHBr3 can only be used to constrain emissions from a relatively small, sub-regional domain. We then obtain detailed estimates of both the distribution and magnitude of CHBr3 emissions within this area. Our estimates appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. This estimate is consistent with other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.

  16. Reliability of field methods for estimating body fat.

    PubMed

    Loenneke, Jeremy P; Barnes, Jeremy T; Wilson, Jacob M; Lowery, Ryan P; Isaacs, Melissa N; Pujol, Thomas J

    2013-09-01

    When health professionals measure the fitness levels of clients, body composition is usually estimated. In practice, the reliability of the measurement may be more important than its validity, as reliability determines how much change is needed to be considered meaningful. Therefore, the purpose of this study was to determine the reliability of two bioelectrical impedance analysis (BIA) devices (in athlete and non-athlete mode) and compare them to 3-site skinfold (SKF) readings. Twenty-one college students attended the laboratory on two occasions and had their measurements taken in the following order: body mass, height, SKF, Tanita BF-350, and Omron HBF-306C. There were no significant pairwise differences between Visit 1 and Visit 2 for any of the estimates (P>0.05). The Pearson product-moment correlations ranged from r = 0.933 for the HBF-350 in athlete mode (A) to r = 0.994 for SKF. The ICCs ranged from 0.93 for the HBF-350(A) to 0.992 for SKF, and the minimal differences (MDs) ranged from 1.8% for SKF to 5.1% for the BF-350(A). The current study found that SKF and the HBF-306C(A) were the most reliable (<2%) methods of estimating BF%, with the other methods (BF-350, BF-350(A), HBF-306C) producing minimal differences greater than 2%. In conclusion, the SKF method showed the best reliability because of its low minimal difference, suggesting it may be the best field method to track changes over time with an experienced tester. However, if technical error is a concern, the practitioner may use the HBF-306C(A) because it had a minimal difference value comparable to SKF. PMID:23701358
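Minimal difference values like those quoted above are conventionally derived from the ICC via the standard error of measurement (SEM); assuming the study used that standard formulation (Weir's), the calculation is:

```python
import math

def minimal_difference(sd, icc, z=1.96):
    """Smallest change exceeding measurement error at 95% confidence:
    SEM = SD * sqrt(1 - ICC); MD = SEM * z * sqrt(2).
    (Assumes the conventional Weir formulation; the paper's exact
    computation is not shown in the abstract.)"""
    sem = sd * math.sqrt(1.0 - icc)
    return sem * z * math.sqrt(2.0)
```

For instance, a between-subject SD of 3% body fat with ICC = 0.99 yields an MD of about 0.8%, which is why high ICCs alone do not guarantee a small minimal difference when the SD is large.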

  17. A Monte Carlo Simulation Investigating the Validity and Reliability of Ability Estimation in Item Response Theory with Speeded Computer Adaptive Tests

    ERIC Educational Resources Information Center

    Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.

    2010-01-01

    Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…

  18. The Effects on Parameter Estimation of Correlated Dimensions and a Differentiated Ability in a Two-Dimensional, Two-Parameter Item Response Model.

    ERIC Educational Resources Information Center

    Batley, Rose-Marie; Boss, Marvin W.

    The purpose of this study was to assess the effects of correlated dimensions and differential ability on one dimension on parameter estimation when using a two-dimensional item response theory model. Multidimensional analysis of simulated two-dimensional item response data fitting the M2PL model of M. D. Reckase (1985, 1986) was conducted using…

  19. A Novel Method Testing the Ability to Imitate Composite Emotional Expressions Reveals an Association with Empathy

    PubMed Central

    Williams, Justin H. G.; Nicolson, Andrew T. A.; Clephan, Katie J.; de Grauw, Haro; Perrett, David I.

    2013-01-01

    Social communication relies on intentional control of emotional expression. Its variability across cultures suggests important roles for imitation in developing control over enactment of subtly different facial expressions and therefore skills in emotional communication. Both empathy and the imitation of an emotionally communicative expression may rely on a capacity to share both the experience of an emotion and the intention or motor plan associated with its expression. Therefore, we predicted that facial imitation ability would correlate with empathic traits. We built arrays of visual stimuli by systematically blending three basic emotional expressions in controlled proportions. Raters then assessed accuracy of imitation by reconstructing the same arrays using photographs of participants’ attempts at imitations of the stimuli. Accuracy was measured as the mean proximity of the participant photographs to the target stimuli in the array. Levels of performance were high, and rating was highly reliable. More empathic participants, as measured by the empathy quotient (EQ), were better facial imitators and, in particular, performed better on the more complex, blended stimuli. This preliminary study offers a simple method for the measurement of facial imitation accuracy and supports the hypothesis that empathic functioning may utilise motor control mechanisms which are also used for emotional expression. PMID:23626756

  20. A Method to Determine the Ability of Drugs to Diffuse through the Blood-Brain Barrier

    NASA Astrophysics Data System (ADS)

    Seelig, Anna; Gottschlich, Rudolf; Devant, Ralf M.

    1994-01-01

    A method has been devised for predicting the ability of drugs to cross the blood-brain barrier. The criteria depend on the amphiphilic properties of a drug as reflected in its surface activity. The assessment was made with various drugs that either penetrate or do not penetrate the blood-brain barrier. The surface activity of these drugs was quantified by their Gibbs adsorption isotherms in terms of three parameters: (i) the onset of surface activity, (ii) the critical micelle concentration, and (iii) the surface area requirement of the drug at the air/water interface. A calibration diagram is proposed in which the critical micelle concentration is plotted against the concentration required for the onset of surface activity. Three different regions are easily distinguished in this diagram: a region of very hydrophobic drugs which fail to enter the central nervous system because they remain adsorbed to the membrane, a central area of less hydrophobic drugs which can cross the blood-brain barrier, and a region of relatively hydrophilic drugs which do not cross the blood-brain barrier unless applied at high concentrations. This diagram can be used to predict reliably the central nervous system permeability of an unknown compound from a simple measurement of its Gibbs adsorption isotherm.

  1. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries

    PubMed Central

    Cunningham, Marc; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-01-01

    -based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Conclusions: Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. PMID:26374805
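The CYP model referenced above converts quantities of contraceptives distributed into couple-years of protection using published conversion factors. A sketch using commonly cited USAID factors (assumed for illustration; the study's exact factors and denominators may differ):

```python
# Commonly cited USAID CYP conversion factors (units distributed per
# one couple-year of protection); assumed values for illustration.
CYP_FACTORS = {
    "condoms": 120,        # pieces per CYP
    "pills": 15,           # cycles per CYP
    "injectables_3mo": 4,  # doses per CYP
}

def cyp_estimate(method, quantity_distributed):
    """Couple-years of protection implied by logistics data."""
    return quantity_distributed / CYP_FACTORS[method]

def prevalence_from_cyp(cyp, women_of_reproductive_age):
    """Rough public-sector prevalence proxy: CYP per woman of
    reproductive age, expressed as a percentage."""
    return 100.0 * cyp / women_of_reproductive_age
```

This is the "simplest calculation method" the conclusion refers to; the regression models trade that simplicity for accuracy.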

  2. A Generalized, Likelihood-Free Method for Posterior Estimation

    PubMed Central

    Turner, Brandon M.; Sederberg, Per B.

    2014-01-01

    Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds, and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the Linear Ballistic Accumulator model, which has a known likelihood, and the Leaky Competing Accumulator model whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics – a feat that was not previously possible. PMID:24258272
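For context, the baseline these authors improve on is rejection ABC, which does depend on a summary statistic and a tolerance threshold. A minimal sketch of that baseline (the paper's own algorithm avoids both limitations):

```python
import numpy as np

def rejection_abc(observed, simulate, prior_sample, n_draws=20000, eps=0.1, seed=1):
    """Classic rejection ABC: draw theta from the prior, simulate data,
    and keep theta when the summary statistic (here, the mean) of the
    simulated data falls within eps of the observed summary."""
    rng = np.random.default_rng(seed)
    s_obs = observed.mean()
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        sim = simulate(theta, rng)
        if abs(sim.mean() - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)
```

The accepted draws approximate the posterior only as well as the chosen summary statistic and tolerance allow, which is precisely the dependence the article's method removes.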

  3. A projection and density estimation method for knowledge discovery.

    PubMed

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that instead uses a flexible set of assumptions, allowing a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated with two very different real-world examples. The first is a data mining software tool that allows fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized that achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
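The idea of building a high-dimensional density from one-dimensional estimations only can be illustrated in its simplest special case, a product of per-feature 1D kernel density estimates (the paper's framework uses more flexible 1d-decompositions; full feature independence is an assumption made here for brevity):

```python
import numpy as np
from scipy.stats import gaussian_kde

class Product1DDensity:
    """Density model built entirely from 1D estimations: one kernel
    density estimate per feature, multiplied together. This sidesteps
    the curse of dimensionality at the cost of assuming independence."""
    def __init__(self, data):
        # one 1D KDE per feature column
        self.kdes = [gaussian_kde(col) for col in np.asarray(data).T]

    def pdf(self, x):
        x = np.atleast_2d(x)
        p = np.ones(len(x))
        for j, kde in enumerate(self.kdes):
            p *= kde(x[:, j])
        return p
```

Each 1D estimate is cheap and well behaved regardless of the ambient dimension; the framework's contribution is replacing the rigid independence assumption with problem-specific decompositions.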

  4. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed with which processes in the software development field have changed makes forecasting the overall cost of a software project very difficult. Many researchers have considered this task unachievable, but others hold that it can be addressed using well-known mathematical methods (e.g., multiple linear regression) together with newer techniques such as genetic programming and neural networks. This paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in the research area of estimating overall project costs and describes the existing software development process models. The last part proposes a basic mathematical model based on genetic programming, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked to the current reality of software development, taking as its basis the software product life cycle and the current challenges and innovations in the field. Based on the authors' experience and an analysis of the existing models and the product life cycle, it is concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
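The COCOMO 81 model underlying the PROMISE datasets estimates effort as a power law of program size. The basic-model form, with its published coefficients for the three project modes, is:

```python
# Basic COCOMO 81: effort (person-months) = a * KLOC**b,
# with published coefficients per project mode.
COCOMO81_BASIC = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of the given
    size (thousands of delivered source lines) and mode."""
    a, b = COCOMO81_BASIC[mode]
    return a * kloc ** b
```

A genetic-programming approach like the one proposed here would, in effect, search for replacements of these fixed coefficients and functional form that better fit the PROMISE data.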

  6. A method to estimate groundwater depletion from confining layers

    USGS Publications Warehouse

    Konikow, L.F.; Neuzil, C.E.

    2007-01-01

    Although depletion of storage in low-permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.

  7. Causes and methods to estimate cryptic sources of fishing mortality.

    PubMed

    Gilman, E; Suuronen, P; Hall, M; Kennelly, S

    2013-10-01

    Cryptic, not readily detectable, components of fishing mortality are not routinely accounted for in fisheries management because of a lack of adequate data, and for some components, a lack of accurate estimation methods. Cryptic fishing mortalities can cause adverse ecological effects, are a source of wastage, reduce the sustainability of fishery resources and, when unaccounted for, can cause errors in stock assessments and population models. Sources of cryptic fishing mortality are (1) pre-catch losses, where catch dies from the fishing operation but is not brought onboard when the gear is retrieved, (2) ghost-fishing mortality by fishing gear that was abandoned, lost or discarded, (3) post-release mortality of catch that is retrieved and then released alive but later dies as a result of stress and injury sustained from the fishing interaction, (4) collateral mortalities indirectly caused by various ecological effects of fishing and (5) losses due to synergistic effects of multiple interacting sources of stress and injury from fishing operations, or from cumulative stress and injury caused by repeated sub-lethal interactions with fishing operations. To fill a gap in international guidance on best practices, causes and methods for estimating each component of cryptic fishing mortality are described, and considerations for their effective application are identified. Research priorities to fill gaps in understanding the causes and estimating cryptic mortality are highlighted. PMID:24090548

  9. Comparative study of age estimation using dentinal translucency by digital and conventional methods

    PubMed Central

    Bommannavar, Sushma; Kulkarni, Meena

    2015-01-01

    Introduction: Estimating age from the dentition plays a significant role in identifying individuals in forensic cases. Teeth are among the most durable and strongest structures in the human body. The morphology and arrangement of teeth vary from person to person and are unique to an individual, as are fingerprints. Therefore, the dentition is the method of choice in identification of the unknown. Root dentin translucency is considered one of the best parameters for dental age estimation. Traditionally, root dentin translucency was measured using calipers; recently, the use of custom-built software programs has been proposed for the same purpose. Objectives: The present study describes a method to measure root dentin translucency on sectioned teeth using the custom-built software program Adobe Photoshop 7.0 (Adobe Systems Inc., Mountain View, California). Materials and Methods: A total of 50 single-rooted teeth were sectioned longitudinally to a uniform thickness of 0.25 mm, and root dentin translucency was measured and compared using the digital and caliper methods. The Gustafson morphohistologic approach was used in this study. Results: Correlation coefficients of translucency measurements with age were statistically significant for both methods (P < 0.125), and linear regression equations derived from both methods revealed a better ability of the digital method to assess age. Conclusion: The software program used in the present study is commercially available and widely used image-editing software. Furthermore, this method is easy to use and less time consuming, and the measurements obtained are more precise, allowing more accurate age estimation. Considering these benefits, the present study recommends the digital method of assessing translucency for age estimation. PMID:25709325

  10. Estimation of regionalized compositions: A comparison of three methods

    USGS Publications Warehouse

    Pawlowsky, V.; Olea, R.A.; Davis, J.C.

    1995-01-01

    A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretation of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; and (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
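The additive-log-ratio transform at the heart of the third approach maps a D-part composition to unconstrained (D-1)-dimensional space, where standard geostatistical estimation applies, and maps results back onto the simplex. A minimal sketch:

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform (Aitchison): maps a D-part
    composition to R^(D-1) using the last part as the divisor."""
    x = np.asarray(x, float)
    return np.log(x[..., :-1] / x[..., -1:])

def alr_inv(y):
    """Inverse transform: back to a composition summing to 1."""
    y = np.asarray(y, float)
    z = np.exp(np.concatenate([y, np.zeros(y.shape[:-1] + (1,))], axis=-1))
    return z / z.sum(axis=-1, keepdims=True)
```

Kriging the alr-transformed variables and back-transforming avoids both the spurious correlation and the singular cokriging system described above.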

  11. Estimating Bacterial Diversity for Ecological Studies: Methods, Metrics, and Assumptions

    PubMed Central

    Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake

    2015-01-01

    Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA gene to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice, and PCR amplification are well documented, here we address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity, and rank-abundance distributions. We used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence-similarity thresholds during sequence clustering and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of the 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756
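
    Two of the metrics compared above, observed richness and Shannon diversity, are one-liners once an OTU count vector is in hand. A small self-contained sketch (the toy OTU table is invented for illustration):

```python
import numpy as np

def richness(counts):
    """Observed species richness: number of OTUs with nonzero counts."""
    return int(np.count_nonzero(counts))

def shannon(counts):
    """Shannon diversity H' = -sum(p * ln p) over observed OTUs."""
    c = np.asarray(counts, dtype=float)
    p = c[c > 0] / c.sum()
    return float(-(p * np.log(p)).sum())

otu_counts = np.array([25, 25, 25, 25, 0])  # toy OTU table for one sample
S = richness(otu_counts)   # 4 observed OTUs
H = shannon(otu_counts)    # ln(4) for a perfectly even 4-OTU community
```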

  12. Estimating Return on Investment in Translational Research: Methods and Protocols

    PubMed Central

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Science Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706

  13. Estimating return on investment in translational research: methods and protocols.

    PubMed

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Science Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  14. Ability of LANDSAT-8 Oli Derived Texture Metrics in Estimating Aboveground Carbon Stocks of Coppice Oak Forests

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sohrabi, H.

    2016-06-01

    The role of forests as a carbon reservoir has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since measurement of aboveground carbon stocks of forests is a destructive, costly, and time-consuming activity, aerial and satellite remote sensing techniques have attracted much attention in this field. Although using aerial data to predict aboveground carbon stocks has proved highly accurate, there are challenges related to high acquisition costs, small area coverage, and limited availability of these data. These challenges are more critical for non-commercial forests located in low-income countries. The Landsat program provides repetitive acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of multispectral Landsat 8 Operational Land Imager (OLI) derived texture metrics in quantifying aboveground carbon stocks of coppice oak forests in the Zagros Mountains, Iran. We used four window sizes (3×3, 5×5, 7×7, and 9×9) and four offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured, and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from the derived metrics. Results showed that, in general, larger window sizes for deriving texture metrics yielded models with better fit statistics. In addition, the correlation of the spectral bands used for deriving texture metrics in the regression models was ranked as b4>b3>b2>b5. The best offset was [1,-1]. Among the different metrics, mean and entropy entered most of the regression models. Overall, different models based on derived texture metrics
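
    The texture metrics above come from a gray-level co-occurrence matrix (GLCM) built for a chosen offset. A minimal NumPy sketch (not the authors' code) for one offset, computing three of the nine metrics as examples:

```python
import numpy as np

def cooccurrence(img, dy, dx, levels):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx)."""
    H, W = img.shape
    i = img[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
    j = img[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
    counts = np.bincount((i * levels + j).ravel(), minlength=levels * levels)
    P = counts.reshape(levels, levels).astype(float)
    return P / P.sum()

def texture_metrics(P):
    """Contrast, homogeneity and entropy of a normalized GLCM."""
    lv = P.shape[0]
    i, j = np.indices((lv, lv))
    contrast = float((P * (i - j) ** 2).sum())
    homogeneity = float((P / (1.0 + (i - j) ** 2)).sum())
    nz = P[P > 0]
    entropy = float(-(nz * np.log(nz)).sum())
    return contrast, homogeneity, entropy

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 1, 1]])                 # tiny 2-level "band" window
P = cooccurrence(img, 0, 1, levels=2)       # offset [0,1]
contrast, homogeneity, entropy = texture_metrics(P)
```

In a real workflow these metrics would be computed in a sliding window (3×3 to 9×9, as in the study) over each Landsat band before the stepwise regression step.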

  15. A novel state of health estimation method of Li-ion battery using group method of data handling

    NASA Astrophysics Data System (ADS)

    Wu, Ji; Wang, Yujie; Zhang, Xu; Chen, Zonghai

    2016-09-01

    In this paper, control theory is applied to assist the estimation of state of health (SoH), a key parameter in battery management. A battery can be treated as a system whose internal state, e.g. SoH, can be observed through certain system output data. Based on the philosophy of human health and athletic-ability assessment, variables from a specific process, namely a constant-current charge subprocess, are obtained to depict battery SoH. These variables are selected according to a differential geometric analysis of the battery terminal voltage curves. Moreover, the relationship between the differential geometric properties and battery SoH is modelled by a group method of data handling (GMDH) polynomial neural network. Thus, battery SoH can be estimated by GMDH with the voltage-curve properties as inputs. Experiments have been conducted on different types of Li-ion batteries, and the results show that the proposed method is valid for SoH estimation.
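
    The building block of a GMDH network is a quadratic "partial description" of two inputs fitted by least squares and scored on held-out data. The sketch below implements one such unit on synthetic voltage-curve features (feature definitions and data are invented for illustration; a full GMDH stacks many such units across layers and keeps the best):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features for a set of cells (illustrative only):
# x1 ~ slope of the constant-current charge curve, x2 ~ charge duration.
n = 200
x1 = rng.uniform(0.5, 1.5, n)
x2 = rng.uniform(0.8, 1.2, n)
soh = 0.4 + 0.3 * x1 - 0.2 * x2 + 0.1 * x1 * x2 + rng.normal(0, 0.01, n)

def design(x1, x2):
    """Quadratic GMDH partial description: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Fit on a training split, score on a validation split (external criterion).
tr, va = slice(0, 150), slice(150, None)
w, *_ = np.linalg.lstsq(design(x1[tr], x2[tr]), soh[tr], rcond=None)
pred = design(x1[va], x2[va]) @ w
rmse = float(np.sqrt(np.mean((pred - soh[va]) ** 2)))
```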

  16. Some Features of the Sampling Distribution of the Ability Estimate in Computerized Adaptive Testing According to Two Stopping Rules.

    ERIC Educational Resources Information Center

    Blais, Jean-Guy; Raiche, Gilles

    This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics allow the judgment of the meaning to be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…

  17. The Graphical Display of Simulation Results, with Applications to the Comparison of Robust IRT Estimators of Ability.

    ERIC Educational Resources Information Center

    Thissen, David; Wainer, Howard

    Simulation studies of the performance of (potentially) robust statistical estimation produce large quantities of numbers in the form of performance indices of the various estimators under various conditions. This report presents a multivariate graphical display used to aid in the digestion of the plentiful results in a current study of Item…

  18. Spectrophotometric estimation of tamsulosin hydrochloride by acid-dye method

    PubMed Central

    Shrivastava, Alankar; Saxena, Prachi; Gupta, Vipin B.

    2011-01-01

    A new spectrophotometric method for the estimation of tamsulosin hydrochloride in pharmaceutical dosage forms has been developed and validated. The method is based on the reaction between the drug and bromophenol blue, and the resulting complex was measured at 421 nm. The slope, intercept, and correlation coefficient were found to be 0.054, -0.020, and 0.999, respectively. The developed method can be used to determine the drug in both tablet and capsule formulations. The reaction was optimized using four parameters: concentration of the dye, pH of the buffer, volume of the buffer, and shaking time. Maximum stability of the chromophore was achieved using pH 2 and a 2 ml volume of buffer; the shaking time was 2 min, and the dye was used as 2 ml of a 0.05% w/v solution. The method was validated in terms of specificity, linearity, range, precision, accuracy, LOD, and LOQ, and the stoichiometry of the reaction was established using the mole-ratio method and Job's method of continuous variation. The benzenoid form of the dye (blue color) ionizes into the quinonoid form (purple color) in the presence of the buffer and reacts with the protonated form of the drug in a 1:1 ratio to form an ion-pair complex (yellow color). PMID:23781431
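
    Using the slope and intercept reported above (0.054 and -0.020), reading a concentration off the calibration line is a one-line inversion. A sketch, assuming absorbance A = slope*C + intercept with concentration C in hypothetical units (the abstract does not state them):

```python
# Linear calibration reported in the abstract: A = 0.054*C - 0.020
# (concentration units are assumed for illustration).
SLOPE, INTERCEPT = 0.054, -0.020

def absorbance(conc):
    """Predicted absorbance at 421 nm for a given concentration."""
    return SLOPE * conc + INTERCEPT

def concentration(a):
    """Invert the calibration line to estimate concentration."""
    return (a - INTERCEPT) / SLOPE

c = concentration(0.520)   # a sample reading of 0.520 a.u.
```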

  19. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    SciTech Connect

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.

    1994-07-01

    that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage-function approach; the others discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC) on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of the study. They describe the (non-radiological) atmospheric dispersion modeling that the study uses; review much of the relevant literature on ecological and health effects and on the economic valuation of those impacts; examine several of the more complex and contentious issues in estimating externalities; and describe a method for depicting the quality of the scientific information that a study uses. The analytical methods and issues discussed generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each focusing on a different subject area.

  20. Streamflow-Characteristic Estimation Methods for Unregulated Streams of Tennessee

    USGS Publications Warehouse

    Law, George S.; Tasker, Gary D.; Ladd, David E.

    2009-01-01

    Streamflow-characteristic estimation methods for unregulated rivers and streams of Tennessee were developed by the U.S. Geological Survey in cooperation with the Tennessee Department of Environment and Conservation. Streamflow estimates are provided for 1,224 stream sites. Streamflow characteristics include the 7-consecutive-day, 10-year recurrence-interval low flow, the 30-consecutive-day, 5-year recurrence-interval low flow, the mean annual and mean summer flows, and the 99.5-, 99-, 98-, 95-, 90-, 80-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent flow durations. Estimation methods include regional regression (RRE) equations and the region-of-influence (ROI) method. Both methods use zero-flow probability screening to estimate zero-flow quantiles. A low flow and flow duration (LFFD) computer program (TDECv301) performs zero-flow screening and calculation of nonzero-streamflow characteristics using the RRE equations and ROI method and provides quality measures including the 90-percent prediction interval and equivalent years of record. The U.S. Geological Survey StreamStats geographic information system automates the calculation of basin characteristics and streamflow characteristics. In addition, basin characteristics can be manually input to the stand-alone version of the computer program (TDECv301) to calculate streamflow characteristics in Tennessee. The RRE equations were computed using multivariable regression analysis. The two regions used for this study, the western part of the State (West) and the central and eastern part of the State (Central+East), are separated by the Tennessee River as it flows south to north from Hardin County to Stewart County. The West region uses data from 124 of the 1,224 streamflow sites, and the Central+East region uses data from 893 of the 1,224 streamflow sites. The study area also includes parts of the adjacent States of Georgia, North Carolina, Virginia, Alabama, Kentucky, and Mississippi. Total drainage area, a geology
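
    Regional regression equations (RRE) of the kind described above are typically log-linear in basin characteristics. A sketch of evaluating one such equation, with invented coefficients (the published Tennessee equations use more basin characteristics and different values):

```python
import math

def rre_low_flow(drainage_area_sqmi, b0=-1.0, b1=1.05):
    """Hypothetical regional regression equation of the usual log-linear
    form log10(Q) = b0 + b1*log10(DA). Coefficients are illustrative,
    not the published Tennessee values."""
    return 10.0 ** (b0 + b1 * math.log10(drainage_area_sqmi))

q7_10 = rre_low_flow(100.0)   # 7Q10-style estimate for a 100 mi^2 basin
```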

  1. A Quantitative Method for Estimating Probable Public Costs of Hurricanes.

    PubMed

    BOSWELL; DEYLE; SMITH; BAKER

    1999-04-01

    A method is presented for estimating probable public costs resulting from damage caused by hurricanes, measured as local government expenditures approved for reimbursement under the Stafford Act Section 406 Public Assistance Program. The method employs a multivariate model developed through multiple regression analysis of an array of independent variables that measure meteorological, socioeconomic, and physical conditions related to the landfall of hurricanes within a local government jurisdiction. From the regression analysis we chose a log-log (base 10) model that explains 74% of the variance in the expenditure data using population and wind speed as predictors. We illustrate application of the method for a local jurisdiction: Lee County, Florida, USA. The results show that potential public costs range from $4.7 million for a category 1 hurricane with winds of 137 kilometers per hour (85 miles per hour) to $130 million for a category 5 hurricane with winds of 265 kilometers per hour (165 miles per hour). Based on these figures, we estimate expected annual public costs of $2.3 million. These cost estimates: (1) provide useful guidance for anticipating the magnitude of the federal, state, and local expenditures that would be required for the array of possible hurricanes that could affect that jurisdiction; (2) allow policy makers to assess the implications of alternative federal and state policies for providing public assistance to jurisdictions that experience hurricane damage; and (3) provide information needed to develop a contingency fund or other financial mechanism for assuring that the community has sufficient funds available to meet its obligations. KEY WORDS: Hurricane; Public costs; Local government; Disaster recovery; Disaster response; Florida; Stafford Act
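
    The model form above, log10(cost) regressed on log10(population) and log10(wind speed), can be sketched with ordinary least squares on synthetic data (the "true" coefficients and jurisdictions below are invented for the sketch, not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic jurisdictions: population and landfall wind speed (km/h).
pop = rng.uniform(1e4, 1e6, 80)
wind = rng.uniform(120, 270, 80)

# Assumed log-log relationship plus noise (coefficients invented).
log_cost = 0.5 + 0.8 * np.log10(pop) + 3.0 * np.log10(wind) \
    + rng.normal(0, 0.05, 80)

# Ordinary least squares in log10 space, matching the paper's model form.
X = np.column_stack([np.ones_like(pop), np.log10(pop), np.log10(wind)])
beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)

def public_cost(population, wind_kmh):
    """Predicted cost (same arbitrary units as the synthetic data)."""
    return 10 ** (beta[0] + beta[1] * np.log10(population)
                  + beta[2] * np.log10(wind_kmh))
```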

  2. Analytical method to estimate resin cement diffusion into dentin

    NASA Astrophysics Data System (ADS)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa

    2016-05-01

    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. Evolution of peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C–O–C, 1113 cm-1) present in the cements, and the mineral content (P–O, 961 cm-1) in dentin were sigmoid shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
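
    Fitting the Boltzmann function to a sigmoid intensity profile can be done without a nonlinear solver when the plateau values are known, by linearizing the logistic term. A sketch on synthetic Raman-style data (plateau, center, and width values are illustrative, not the study's measurements):

```python
import numpy as np

# Boltzmann sigmoid used to model the monomer Raman intensity across the
# cement-dentin interface: y = A2 + (A1 - A2)/(1 + exp((x - x0)/dx)).
A1, A2 = 1.0, 0.0            # plateau intensities (cement side, dentin side)
x0_true, dx_true = 2.0, 0.5  # center and width (um), synthetic values

x = np.linspace(0.8, 3.2, 25)   # positions across the interface (um)
y = A2 + (A1 - A2) / (1.0 + np.exp((x - x0_true) / dx_true))

# Linearize: ln((A1 - y)/(y - A2)) = (x - x0)/dx, then fit a line.
z = np.log((A1 - y) / (y - A2))
slope, intercept = np.polyfit(x, z, 1)
dx_fit = 1.0 / slope            # recovered width parameter
x0_fit = -intercept * dx_fit    # recovered interface center
```

The diffusion-zone thickness reported in the abstract would then be derived from the fitted width parameter of the sigmoid.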

  4. Uncertainty Quantification in State Estimation using the Probabilistic Collocation Method

    SciTech Connect

    Lin, Guang; Zhou, Ning; Ferryman, Thomas A.; Tuffner, Francis K.

    2011-03-23

    In this study, a new efficient uncertainty quantification technique, the probabilistic collocation method (PCM) on sparse grid points, is employed to enable the evaluation of uncertainty in state estimation. The PCM allows us to use just a small number of ensembles to quantify the uncertainty in estimating the state variables of power systems. With sparse grid points, the PCM approach can handle a large number of uncertain parameters in power systems at relatively low computational cost compared with classic Monte Carlo (MC) simulations. The algorithm and procedure are outlined, and we demonstrate the capability of the sparse-grid PCM approach by applying it to uncertainty quantification in state estimation of the IEEE 14-bus model. MC simulations have also been conducted to verify the accuracy of the PCM approach. By comparing the results obtained from MC simulations with the PCM results for the mean and standard deviation of the uncertain parameters, it is evident that the PCM approach is computationally more efficient than MC simulation.
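
    The core idea of probabilistic collocation, evaluating the model only at quadrature nodes of the input distribution instead of at thousands of Monte Carlo samples, can be shown in one dimension. The sketch below uses Gauss-Hermite nodes for a standard-normal parameter and a toy response function (not a power-system state estimator):

```python
import numpy as np

# Probabilistic collocation in 1D: propagate a standard-normal uncertain
# parameter xi through a model using 5 Gauss-Hermite nodes, and compare
# the collocation mean with the analytic value. The "model" here is a
# toy response g(xi) = exp(0.3*xi).
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
weights = weights / weights.sum()      # normalize to a probability measure

g = np.exp(0.3 * nodes)                # evaluate the model at the nodes
mean_pcm = float(weights @ g)
var_pcm = float(weights @ (g - mean_pcm) ** 2)

mean_exact = np.exp(0.3 ** 2 / 2)      # E[exp(0.3*xi)] for xi ~ N(0,1)
```

Five model evaluations reproduce the mean to high accuracy, whereas plain Monte Carlo would need orders of magnitude more samples; sparse grids extend the same idea to many uncertain parameters.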

  5. A method for sex estimation using the proximal femur.

    PubMed

    Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia

    2016-09-01

    The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis. PMID:27373600
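
    One of the two classifiers used above, logistic regression on the two femoral measurements, can be sketched end to end with NumPy. The group means and spreads below are invented for the sketch, not the Luís Lopes collection data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic femoral neck width (FNW) and axis length (FNAL) in mm.
# Group means/SDs are hypothetical, chosen only to give overlap.
n = 300
fnw_f = rng.normal(29.0, 2.0, n); fnal_f = rng.normal(85.0, 4.0, n)
fnw_m = rng.normal(33.0, 2.0, n); fnal_m = rng.normal(95.0, 4.0, n)

X = np.column_stack([np.r_[fnw_f, fnw_m], np.r_[fnal_f, fnal_m]])
y = np.r_[np.zeros(n), np.ones(n)]          # 0 = female, 1 = male

# Standardize, add intercept, fit logistic regression by gradient descent.
X = (X - X.mean(0)) / X.std(0)
X = np.column_stack([np.ones(len(X)), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

accuracy = float(np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == (y == 1)))
```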

  7. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be obtained equivalently using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
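
    For concreteness, the multiple coherence can be read directly off the blocks of the cross-spectral density matrix at a given frequency. A sketch for a two-input/one-output system with exact (not estimated) spectra; the transfer-function and noise values are illustrative:

```python
import numpy as np

# Exact cross-spectral density (CSD) matrix blocks at one frequency for
# y = h1*x1 + h2*x2 + n, with unit input spectra, uncorrelated inputs,
# and noise power Snn (values illustrative).
h = np.array([1.0, 0.5])
S1, S2, Snn = 1.0, 1.0, 0.25

Gxx = np.diag([S1, S2])                       # input CSD block
Gxy = np.array([h[0] * S1, h[1] * S2])        # E[x y*] (real h here)
Gyy = h[0] ** 2 * S1 + h[1] ** 2 * S2 + Snn   # output autospectrum

# Multiple coherence: fraction of output power explained by both inputs.
gamma2 = float(Gxy @ np.linalg.solve(Gxx, Gxy) / Gyy)
```

In practice the CSD matrix would be estimated from records (e.g. by Welch averaging), and the paper's point is that the Cholesky or SVD factorization of that full matrix yields all the coherence functions at once.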

  8. Residual fatigue life estimation using a nonlinear ultrasound modulation method

    NASA Astrophysics Data System (ADS)

    Piero Malfense Fierro, Gian; Meo, Michele

    2015-02-01

    Predicting the residual fatigue life of a material is not a simple task: it requires developing and relating many variables that can be difficult to determine as standalone quantities. This work develops a modulated nonlinear elastic wave spectroscopy method for evaluating a metallic component's residual fatigue life. An aluminium specimen (AA6082-T6) was tested at predetermined fatigue stages throughout its fatigue life using a dual-frequency ultrasound method. A modulated nonlinear parameter was derived, which describes the relationship between the modulated (sideband) responses generated by a dual-frequency signal and the linear response. The sideband generation from the dual-frequency (two-signal output) system was shown to increase as the residual fatigue life decreased, and as a standalone measurement it can be used to show an increase in a material's damage. A baseline-free method was developed by linking a theoretical model, obtained by combining the Paris law and the Nazarov-Sutin crack equation, to the experimental nonlinear modulation measurements. The results showed good correlation between the derived theoretical model and the modulated nonlinear parameter, allowing for baseline-free estimation of a material's residual fatigue life. Advantages and disadvantages of these methods are discussed, and further methods that would increase the accuracy of residual fatigue life detection are presented.
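
    The sideband measurement underlying such a modulated nonlinear parameter can be sketched with an FFT: a nonlinearity mixes a low-frequency pump into sidebands around the high-frequency probe, and the parameter compares sideband amplitude to the carrier. The signal model and the ratio definition below are a generic illustration, not the paper's exact formulation:

```python
import numpy as np

fs, T = 1000.0, 1.0                    # sample rate (Hz), duration (s)
t = np.arange(int(fs * T)) / fs
f1, f2, m = 50.0, 5.0, 0.3            # probe, pump, modulation depth

# Amplitude-modulated response: the nonlinearity mixes the pump into
# sidebands at f1 +/- f2 with amplitude m/2 each.
s = np.sin(2 * np.pi * f1 * t) * (1.0 + m * np.sin(2 * np.pi * f2 * t))

spec = 2.0 * np.abs(np.fft.rfft(s)) / len(s)   # single-sided amplitudes

def amp(f):
    """Amplitude at frequency f (integer cycle counts, so no leakage)."""
    return spec[int(round(f * T))]

# Generic modulation parameter: sideband amplitude relative to carrier.
mod_param = (amp(f1 - f2) + amp(f1 + f2)) / amp(f1)
```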

  9. Dental age estimation in Brazilian HIV children using Willems' method.

    PubMed

    de Souza, Rafael Boschetti; da Silva Assunção, Luciana Reichert; Franco, Ademir; Zaroni, Fábio Marzullo; Holderbaum, Rejane Maria; Fernandes, Ângela

    2015-12-01

    The notification of Human Immunodeficiency Virus (HIV) infection in Brazilian children was first reported in 1984. Since that time, more than 21 thousand children have become infected. Approximately 99.6% of the children aged less than 13 years are vertically infected. In this context, many of these children are abandoned after birth or lose their relatives soon thereafter, growing up with uncertain identification. The present study aims to estimate the dental age of Brazilian HIV patients against healthy patients paired by age and gender. The sample consisted of 160 panoramic radiographs of male (n: 80) and female (n: 80) patients aged between 4 and 15 years (mean age: 8.88 years), divided into HIV (n: 80) and control (n: 80) groups. The sample was analyzed by three trained examiners using Willems' method (2001). The Intraclass Correlation Coefficient (ICC) was applied to test intra- and inter-examiner agreement, and Student's paired t-test was used to determine the age association between the HIV and control groups. Intra-examiner (ICC: from 0.993 to 0.997) and inter-examiner (ICC: from 0.991 to 0.995) agreement tests indicated high reproducibility of the method between the examiners (P<0.01). Willems' method revealed a slight statistical overestimation in the HIV (2.86 months; P=0.019) and control (1.90 months; P=0.039) groups. However, analysis stratified by gender indicated that the overestimation was concentrated in male HIV (3.85 months; P=0.001) and control (2.86 months; P=0.022) patients. The statistically significant differences are not clinically relevant, since only a few months of discrepancy are detected when applying Willems' method in a Brazilian HIV sample, making the method highly recommended for dental age estimation of both HIV-infected and healthy children of unknown age.

  10. Method for estimating absolute lung volumes at constant inflation pressure.

    PubMed

    Hills, B A; Barrow, R E

    1979-10-01

    A method has been devised for measuring functional residual capacity in the intact killed animal, or absolute lung volume in any excised lung preparation, without changing the inflation pressure. This is achieved by titrating the absolute pressure of a chamber in which the preparation is compressed until a known volume of air has entered the lungs. The technique was used to estimate, by means of Boyle's law, the volumes of five intact rabbit lungs and five rigid containers of known dimensions. Results agreed to within +/- 1% with values determined by alternative methods. The discussion emphasizes the advantage of determining absolute lung volumes at almost any stage of a study of lung mechanics without the determination itself changing the inflation pressure and, hence, the lung volume. PMID:511699
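
    The Boyle's-law arithmetic behind such a titration can be sketched in a few lines. This is an illustrative simplification (isothermal compression, ideal gas), not the paper's full apparatus protocol; the pressures and volumes are invented:

```python
def trapped_volume(p1_kpa, p2_kpa, delta_v_ml):
    """Boyle's-law estimate of the gas volume V at initial absolute
    pressure p1: compressing isothermally to p2 shrinks the trapped gas
    by delta_v (the titrated volume of air that enters the lungs), so
    p1*V = p2*(V - delta_v)  =>  V = delta_v * p2 / (p2 - p1)."""
    return delta_v_ml * p2_kpa / (p2_kpa - p1_kpa)

# Chamber titrated from 100 kPa to 110 kPa while 50 mL of air enters:
v = trapped_volume(100.0, 110.0, 50.0)   # 550 mL
```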

  11. Methods for estimating the population contribution to environmental change.

    PubMed

    Raskin, P D

    1995-12-01

    "This paper introduces general methods for quantitative analysis of the role of population in environmental change. The approach is applicable over a wide range of environmental issues, and arbitrary regions and time periods. First, a single region is considered, appropriate formulae derived, and the limitations to quantitative approaches discussed. The approach is contrasted to earlier formulations, and shown to avoid weaknesses in a common approximation. Next, the analysis is extended to the multiple region problem. An apparent paradox in aggregating regional estimates is illuminated, and the risk of misleading results is underscored. The methods are applied to the problem of climate change with two case studies, an historical period and a future scenario, used to illustrate the results. The contribution of change in population to change in green house gas emissions is shown to be significant, but not dominant in both industrialized and developing regions."

  12. A variable circular-plot method for estimating bird numbers

    USGS Publications Warehouse

    Reynolds, R.T.; Scott, J.M.; Nussbaum, R.A.

    1980-01-01

    A bird census method is presented that is designed for tall, structurally complex vegetation types and rugged terrain. With this method the observer counts all birds seen or heard around a station and estimates the horizontal distance from the station to each bird. Count periods at stations vary according to the avian community and the structural complexity of the vegetation. The density of each species is determined by inspecting a histogram of the number of individuals per unit area in concentric bands of predetermined widths about the stations, choosing the band (with outside radius x) where the density begins to decline, and summing the number of individuals counted within the circle of radius x and dividing by its area (πx²). Although all observations beyond radius x are rejected with this procedure, coefficients of maximum distance.
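
    The band-histogram procedure above can be sketched directly. The band edges and counts are invented, and the truncation radius is taken here as the outer radius of the last band before the density declines (one reading of the rule; the paper's worked examples should be consulted for edge cases):

```python
import numpy as np

# Concentric-band analysis: per-band density, truncation radius x where
# density begins to decline, then density from all birds within x.
edges = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # band radii (m)
counts = np.array([6, 19, 32, 10])                # detections per band

band_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
band_density = counts / band_area

# Index of the first band whose density drops below the previous one;
# x is the outer radius of the last non-declining band.
decline = int(np.argmax(band_density[1:] < band_density[:-1])) + 1
x = float(edges[decline])
density = counts[:decline].sum() / (np.pi * x ** 2)   # birds per m^2
```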

  13. A new assimilation method with physical mechanism to estimate evapotranspiration

    NASA Astrophysics Data System (ADS)

    Ye, Wen; Xu, Xinyi

    2016-04-01

    The accurate estimation of regional evapotranspiration has been a research hotspot in the field of hydrology and water resources, both domestically and abroad. A new assimilation method with a physical mechanism, which is easier to apply, was proposed to estimate evapotranspiration. Based on the evapotranspiration (ET) calculation method with soil-moisture recurrence relations in the Distributed Time Variant Gain Model (DTVGM) and the Ensemble Kalman Filter (EnKF), we constructed an assimilation system for recursive calculation of evapotranspiration, using as "observation values" the evapotranspiration retrieved through the Two-Layer Remote Sensing Model. By updating the filter in the model with assimilated evapotranspiration, synchronous correction of the model estimate was achieved and more accurate, temporally continuous series of evapotranspiration were obtained. Through verification against observations at Xiaotangshan Observatory and hydrological stations in the basin, the correlation coefficient between remote-sensing-retrieved evapotranspiration and actual evapotranspiration reached 0.97, and the NS efficiency coefficient of the DTVGM model was 0.80. Using the typical daily evapotranspiration from remote sensing and the data from the DTVGM model, we assimilated the hydrological simulation processes with the DTVGM in the Shahe Basin in Beijing to obtain a continuous evapotranspiration time series. The results showed that the average relative error between the remote sensing values and the DTVGM simulations is about 12.3%, while that between the remote sensing retrievals and the assimilated values is 4.5%, which proved that the assimilation results of the Ensemble Kalman Filter (EnKF) were closer to the "true" data and better than the evapotranspiration simulated by DTVGM without any improvement. Keywords: evapotranspiration assimilation; Ensemble Kalman Filter; distributed hydrological model; Two-Layer Remote Sensing Model.
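
    A single EnKF analysis step of the kind used in such a system can be sketched for a scalar ET state: the model forecast ensemble is nudged toward the remote-sensing "observation" in proportion to the ensemble spread. All numbers are illustrative, not the DTVGM/Shahe-basin values:

```python
import numpy as np

rng = np.random.default_rng(7)

# One EnKF analysis step for a scalar state (daily ET, mm/day).
ens = rng.normal(4.0, 0.8, 100)      # forecast ensemble from the model
obs, obs_err = 3.2, 0.3              # remote-sensing ET and its std. dev.

P = np.var(ens, ddof=1)              # forecast error variance
K = P / (P + obs_err ** 2)           # Kalman gain

# Perturbed-observation EnKF update: each member sees a noisy copy of
# the observation, preserving the correct analysis spread.
perturbed = obs + rng.normal(0.0, obs_err, ens.size)
analysis = ens + K * (perturbed - ens)
```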

  14. Method of Estimating Continuous Cooling Transformation Curves of Glasses

    NASA Technical Reports Server (NTRS)

    Zhu, Dongmei; Zhou, Wancheng; Ray, Chandra S.; Day, Delbert E.

    2006-01-01

    A method is proposed for estimating the critical cooling rate and continuous cooling transformation (CCT) curve from isothermal TTT data of glasses. The critical cooling rates and CCT curves estimated through this method for a group of lithium disilicate glasses containing different amounts of Pt as a nucleating agent are compared with experimentally measured values. By analysis of the experimental and calculated data for the lithium disilicate glasses, a simple relationship was found between the amount crystallized in the glasses during continuous cooling, X, and the undercooling, ΔT: X = A R^(-4) exp(B ΔT), where ΔT is the temperature difference between the theoretical melting point of the glass composition and the temperature under discussion, R is the cooling rate, and A and B are constants. The relation between the amounts of crystallisation during continuous cooling and during an isothermal hold can be expressed as X_cT / X_iT = (4/B)^4 ΔT^(-4), where X_cT stands for the crystallised amount in a glass during continuous cooling for a time t when the temperature reaches T, and X_iT is the crystallised amount during an isothermal hold at temperature T for a time t.
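    The continuous-cooling/isothermal ratio quoted above is straightforward to evaluate numerically; the values of B and ΔT below are hypothetical, chosen only for illustration:

```python
def cct_to_isothermal_ratio(B, delta_T):
    """Ratio X_cT / X_iT between the crystallised amounts during continuous
    cooling and during an isothermal hold, per the relation in the abstract:
    X_cT / X_iT = (4/B)^4 * delta_T^(-4)."""
    return (4.0 / B) ** 4 * delta_T ** -4

# Hypothetical constants: B = 0.05 K^-1, undercooling of 100 K
ratio = cct_to_isothermal_ratio(0.05, 100.0)
print(f"X_cT / X_iT = {ratio:.4f}")
```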

  15. Study on color difference estimation method of medicine biochemical analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. A urine test paper shows a corresponding color for each detection target and degree of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, enabling further analysis and diagnosis of the urine. Color is a three-dimensional physical variable with a psychological component, while reflectance is a one-dimensional variable; therefore, a color-difference estimation method for urine tests can offer better precision and convenience than the conventional one-dimensional reflectance test, and can support a more accurate diagnosis. A digital camera can easily capture an image of the urine test paper, making the urine biochemical analysis convenient. In the experiment, the color image of the urine test paper was taken by a popular color digital camera and saved on a computer on which simple color-space conversion (RGB → XYZ → L*a*b*) and calculation software were installed. Test samples are graded according to intelligent, quantitative color detection. The images taken at each test were saved on the computer, so the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses related to color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
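    The RGB → XYZ → L*a*b* chain described above, followed by a color-difference computation, can be sketched as below. The matrix and white point are the standard sRGB/D65 values, and the sample and reference colors are hypothetical, not actual test-pad thresholds:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIE L*a*b* (D65) via the RGB -> XYZ -> L*a*b*
    chain described in the abstract."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # sRGB -> XYZ (D65 reference white)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    xn, yn, zn = 0.95047, 1.0, 1.08883      # D65 white point
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

# Compare a test-pad colour against a hypothetical reference threshold
sample = srgb_to_lab(180, 120, 60)
reference = srgb_to_lab(200, 130, 50)
print(f"dE = {delta_e(sample, reference):.2f}")
```

    Grading a sample then amounts to finding the reference color with the smallest ΔE.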

  16. Application of Common Mid-Point Method to Estimate Asphalt

    NASA Astrophysics Data System (ADS)

    Zhao, Shan; Al-Aadi, Imad

    2015-04-01

    3-D radar is a multi-array stepped-frequency ground-penetrating radar (GPR) that can measure at a very close sampling interval in both the in-line and cross-line directions. Constructing asphalt layers to specified thicknesses is crucial for pavement structural capacity and pavement performance. The common mid-point (CMP) method is a multi-offset measurement method that can improve the accuracy of asphalt layer thickness estimation. In this study, the viability of using 3-D radar to predict asphalt concrete pavement thickness with an extended CMP method was investigated. GPR signals were collected on asphalt pavements of various thicknesses. The time-domain resolution of the 3-D radar was improved by applying a zero-padding technique in the frequency domain. The performance of the 3-D radar was then compared to that of an air-coupled horn antenna. The study concluded that 3-D radar can accurately predict asphalt layer thickness using the CMP method when the layer thickness is larger than 0.13 m. The limited time-domain resolution of 3-D radar can be addressed by frequency-domain zero-padding. Keywords: asphalt pavement thickness, 3-D radar, stepped-frequency, common mid-point method, zero padding.
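    Frequency-domain zero-padding of the kind used here can be illustrated with a toy pulse: padding the spectrum interpolates the time-domain trace onto a finer grid (it adds no true bandwidth, but sharpens the picking of reflection arrival times). This is a generic FFT sketch, not the radar's actual processing chain:

```python
import numpy as np

n, pad_factor = 64, 4
t = np.arange(n)
signal = np.exp(-0.5 * ((t - 20.0) / 2.0) ** 2)   # toy radar pulse at t = 20

spectrum = np.fft.fft(signal)
# Zero-pad between the positive- and negative-frequency halves
padded = np.zeros(n * pad_factor, dtype=complex)
half = n // 2
padded[:half] = spectrum[:half]
padded[-half:] = spectrum[-half:]
interp = np.fft.ifft(padded).real * pad_factor    # rescale amplitude

# The interpolated trace has pad_factor times as many samples,
# with the pulse peak now near index 20 * pad_factor = 80
print(len(signal), len(interp), np.argmax(interp))
```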

  17. Comparison of carbon and biomass estimation methods for European forests

    NASA Astrophysics Data System (ADS)

    Neumann, Mathias; Mues, Volker; Harkonen, Sanna; Mura, Matteo; Bouriaud, Olivier; Lang, Mait; Achten, Wouter; Thivolle-Cazat, Alain; Bronisz, Karol; Merganicova, Katarina; Decuyper, Mathieu; Alberdi, Iciar; Astrup, Rasmus; Schadauer, Klemens; Hasenauer, Hubert

    2015-04-01

    National and international reporting systems, as well as research, enterprises, and political stakeholders, require information on the carbon stocks of forests. Terrestrial assessment systems such as forest inventory data, in combination with carbon calculation methods, are often used for this purpose. To assess the effect of the calculation method used, a comparative analysis was done using the carbon calculation methods from 13 European countries and the research plots of ICP Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests). These methods were applied to five European tree species (Fagus sylvatica L., Quercus robur L., Betula pendula Roth, Picea abies (L.) Karst. and Pinus sylvestris L.) using a standardized theoretical tree dataset to avoid biases due to data collection and sample design. The carbon calculation methods use allometric biomass and volume functions, carbon and biomass expansion factors, or a combination thereof. The analysis shows high variation in the results for total tree carbon as well as for carbon in the individual tree compartments. The same pattern is found when comparing the respective volume estimates. This is consistent across all five tree species, and the variation remains when the results are grouped according to the European forest regions. Possible explanations are differences in the sample material used for the biomass models, in the model variables, or in the definition of tree compartments. The analysed carbon calculation methods thus have a strong effect on the results, both for single trees and for forest stands. To avoid misinterpretation, the calculation method has to be chosen carefully and accompanied by quality checks, and it needs particular consideration in comparative studies to avoid biased and misleading conclusions.

  18. The Mayfield method of estimating nesting success: A model, estimators and simulation results

    USGS Publications Warehouse

    Hensler, G.L.; Nichols, J.D.

    1981-01-01

    Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
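    The Mayfield estimator and its asymptotic interval can be sketched as follows; the loss count, exposure-days, and 25-day nesting period are hypothetical:

```python
import math

def mayfield(losses, exposure_days):
    """Mayfield m.l.e. of daily nest survival, with an asymptotic standard
    error and an approximate 95% confidence interval."""
    p = 1.0 - losses / exposure_days               # daily survival rate
    # Asymptotic variance of the m.l.e. (binomial form over exposure-days)
    se = math.sqrt(p * (1.0 - p) / exposure_days)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Hypothetical data: 12 nest losses over 400 exposure-days
p, ci = mayfield(12, 400)
nest_success = p ** 25          # survival over an assumed 25-day nesting period
print(f"daily survival {p:.3f}, 25-day success {nest_success:.3f}")
```

    Raising daily survival to the power of the nesting-period length is what makes the Mayfield approach superior to the traditional "apparent success" ratio, which ignores exposure.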

  19. Evaluation of non-destructive methods for estimating biomass in marshes of the upper Texas, USA coast

    USGS Publications Warehouse

    Whitbeck, M.; Grace, J.B.

    2006-01-01

    The estimation of aboveground biomass is important in the management of natural resources. Direct measurements by clipping, drying, and weighing of herbaceous vegetation are time-consuming and costly. Therefore, non-destructive methods for efficiently and accurately estimating biomass are of interest. We compared two non-destructive methods, visual obstruction and light penetration, for estimating aboveground biomass in marshes of the upper Texas, USA coast. Visual obstruction was estimated using the Robel pole method, which primarily measures the density and height of the canopy. Light penetration through the canopy was measured using a Decagon light wand, with readings taken above the vegetation and at the ground surface. Clip plots were also taken to provide direct estimates of total aboveground biomass. Regression relationships between estimated and clipped biomass were significant using both methods. However, the light penetration method was much more strongly correlated with clipped biomass under these conditions (R² value of 0.65 compared to 0.35 for the visual obstruction approach). The primary difference between the two methods in this situation was the ability of the light-penetration method to account for variations in plant litter. These results indicate that light-penetration measurements may be better for estimating biomass in marshes when plant litter is an important component. We advise that, in all cases, investigators should calibrate their methods against clip plots to evaluate applicability to their situation. © 2006, The Society of Wetland Scientists.
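    The calibration against clip plots that the authors recommend is an ordinary regression; the sketch below fits hypothetical light-penetration readings to clipped biomass and reports R², the statistic used to compare the two methods:

```python
import numpy as np

# Hypothetical calibration data: a light-penetration index (fraction of
# light blocked by the canopy) against clipped biomass (g/m^2)
light_index = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
clipped = np.array([120.0, 260.0, 410.0, 520.0, 700.0, 830.0])

slope, intercept = np.polyfit(light_index, clipped, 1)
predicted = slope * light_index + intercept
ss_res = np.sum((clipped - predicted) ** 2)
ss_tot = np.sum((clipped - clipped.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"biomass ~ {slope:.0f} * light + {intercept:.0f}, R^2 = {r_squared:.3f}")
```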

  20. Estimation of Anthocyanin Content of Berries by NIR Method

    NASA Astrophysics Data System (ADS)

    Zsivanovits, G.; Ludneva, D.; Iliev, A.

    2010-01-01

    Anthocyanin contents of fruits were estimated by VIS spectrophotometer and compared with spectra measured by NIR spectrophotometer (600-1100 nm, step 10 nm). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol using NIR is easier, faster and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, realized on the basis of a monochromator and peripherals from Bentham Instruments Ltd (GB), and a photometric camera created at the Canning Research Institute were used. An important feature of this camera is the possibility it offers for simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed by CAMO Unscrambler 9.1 software, with PCA, PLS and PCR methods. Based on the analyzed spectra, quality- and quantity-sensitive calibrations were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.

  1. Estimation of Anthocyanin Content of Berries by NIR Method

    SciTech Connect

    Zsivanovits, G.; Ludneva, D.; Iliev, A.

    2010-01-21

    Anthocyanin contents of fruits were estimated by VIS spectrophotometer and compared with spectra measured by NIR spectrophotometer (600-1100 nm, step 10 nm). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol using NIR is easier, faster and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, realized on the basis of a monochromator and peripherals from Bentham Instruments Ltd (GB), and a photometric camera created at the Canning Research Institute were used. An important feature of this camera is the possibility it offers for simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed by CAMO Unscrambler 9.1 software, with PCA, PLS and PCR methods. Based on the analyzed spectra, quality- and quantity-sensitive calibrations were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.

  2. Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods

    USGS Publications Warehouse

    Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.

    2002-01-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.

  3. Estimating recharge at Yucca Mountain, Nevada, USA: comparison of methods

    NASA Astrophysics Data System (ADS)

    Flint, Alan L.; Flint, Lorraine E.; Kwicklis, Edward M.; Fabryka-Martin, June T.; Bodvarsson, Gudmundur S.

    2002-02-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.

  4. Estimating recharge at yucca mountain, nevada, usa: comparison of methods

    SciTech Connect

    Flint, A. L.; Flint, L. E.; Kwicklis, E. M.; Fabryka-Martin, J. T.; Bodvarsson, G. S.

    2001-11-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.

  5. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    NASA Astrophysics Data System (ADS)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximative covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
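    The comparison the abstract describes, Monte Carlo spread versus the approximate covariance near the optimum, can be sketched with a toy linear forward map standing in for the FEM solution of the Helmholtz equation (all matrices and noise levels below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward map: diffraction efficiencies as a linear function of two
# "geometry" parameters (the real map is a nonlinear FEM solve)
A = np.array([[1.0, 0.3],
              [0.2, 1.1],
              [0.7, 0.5],
              [0.4, 0.9]])
true_params = np.array([2.0, -1.0])
clean = A @ true_params

sigma = 0.01                       # measurement uncertainty
measured = clean + rng.normal(0.0, sigma, clean.size)

# Monte Carlo: re-solve the least-squares inverse problem for many
# perturbed copies of the measurement and look at the parameter spread
fits = np.array([
    np.linalg.lstsq(A, measured + rng.normal(0.0, sigma, clean.size),
                    rcond=None)[0]
    for _ in range(2000)
])
mc_std = fits.std(axis=0)

# Analytic counterpart: covariance from the (linearised) normal equations
cov = sigma ** 2 * np.linalg.inv(A.T @ A)
print(mc_std, np.sqrt(np.diag(cov)))
```

    For a well-conditioned mapping the two uncertainty estimates agree closely; large discrepancies flag strong nonlinearity or poor local conditioning.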

  6. A comparison of spectral estimation methods for the analysis of sibilant fricatives

    PubMed Central

    Reidy, Patrick F.

    2015-01-01

    It has been argued that, to ensure accurate spectral feature estimates for sibilants, the spectral estimation method should include a low-variance spectral estimator; however, no empirical evaluation of estimation methods in terms of feature estimates has been given. The spectra of /s/ and /ʃ/ were estimated with different methods that varied the pre-emphasis filter and estimator. These methods were evaluated in terms of effects on two features (centroid and degree of sibilance) and on the detection of four linguistic contrasts within these features. Estimation method affected the spectral features but none of the tested linguistic contrasts. PMID:25920873

  7. Estimating Earth's modal Q with epicentral stacking method

    NASA Astrophysics Data System (ADS)

    Chen, X.; Park, J. J.

    2014-12-01

    The attenuation rates of Earth's normal modes are the most important constraints on the anelastic state of Earth's deep interior. Yet current measurements of Earth's attenuation rates suffer from three sources of bias: the mode coupling effect, the beating effect, and background noise, which together lead to significant uncertainties in the attenuation rates. In this research, we present a new technique to estimate the attenuation rates of Earth's normal modes - the epicentral stacking method. Rather than using the conventional geographical coordinate system, we instead deal with Earth's normal modes in the epicentral coordinate system, in which only 5 singlets rather than 2l+1 are excited. By stacking records from the same events at a series of time lags, we are able to recover the time-varying amplitudes of the 5 excited singlets, and thus measure their attenuation rates. The advantage of our method is that it enhances the SNR through stacking and minimizes the effect of background noise, yet it avoids the beating problem commonly associated with the conventional multiplet stacking method by singling out the singlets. The attenuation rates measured with our epicentral stacking method appear reliable in that: a) the measured attenuation rates are generally consistent among the 10 large events we used, except for a few events with unexplained larger attenuation rates; b) the plot of the log of singlet amplitude against time lag is very close to a straight line, suggesting an accurate estimation of the attenuation rate. The Q measurements from our method are consistently lower than previous modal Q measurements, but closer to the PREM model. For example, for mode 0S25, whose Coriolis force coupling is negligible, our measured Q is between 190 and 210 depending on the event, while the PREM modal Q of 0S25 is 205, and previous modal Q measurements are as high as 242.
The difference between our results and previous measurements might be due to the lower
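    Fitting the log of singlet amplitude against time lag, as described above, recovers Q from the decay A(t) = A0·exp(−ωt/2Q). The sketch below uses synthetic, noise-free amplitudes with hypothetical values (Q = 200, a period of roughly 544 s), not the actual stacked records:

```python
import numpy as np

q_true = 200.0
period = 544.0                       # seconds (hypothetical, near 0S25)
omega = 2.0 * np.pi / period
t = np.arange(0.0, 3.0e5, 1.0e4)     # time lags, s
amp = 1.0 * np.exp(-omega * t / (2.0 * q_true))   # singlet amplitude decay

# log A is linear in time lag; the slope gives Q = -omega / (2 * slope)
slope = np.polyfit(t, np.log(amp), 1)[0]
q_est = -omega / (2.0 * slope)
print(f"estimated Q = {q_est:.1f}")
```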

  8. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    The Fry method is a graphical technique that uses the relative movement of material points, typically grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain-centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets in packing density, (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations, (iii) estimation of strain in each distorted point set by the Fry method, and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain-centre distributions show stronger clustering and a greater degree of anisotropy as the packing density decreases. As compared to the degree of sorting alone, a

  9. Effects of Using Invention Learning Approach on Inventive Abilities: A Mixed Method Study

    ERIC Educational Resources Information Center

    Wongkraso, Paisan; Sitti, Somsong; Piyakun, Araya

    2015-01-01

    This study aims to enhance inventive abilities for secondary students by using the Invention Learning Approach. Its activities focus on creating new inventions based on the students' interests by using constructional tools. The participants were twenty secondary students who took an elective science course that provided instructional units…

  10. A Study of Raters' Scoring Tendency of Speaking Ability through Verbal Report Methods and Questionnaire Analysis.

    ERIC Educational Resources Information Center

    Nakamura, Yuji

    1996-01-01

    To find ways to improve the rater reliability of a tape-mediated speaking test for Japanese university students of English as a Second Language, two studies gathered information on how raters actually made their choices on rating sheets of students' speaking ability and on what criteria teachers think they use and actually use in rating…

  11. Placing Gifted Students At-Risk in Mixed-Ability Classrooms: A Sequential Mixed Methods Analysis

    ERIC Educational Resources Information Center

    Butterworth, Daniel B.

    2010-01-01

    Teachers are held responsible for equitable and excellent education in classrooms that are increasingly diverse culturally and student academic ability. The purpose of this study was to better understand the attitudes and experiences of teachers in heterogeneous classrooms regarding teacher preparation in order to implement new research-based…

  12. Estimates of tropical bromoform emissions using an inversion method

    NASA Astrophysics Data System (ADS)

    Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.

    2014-01-01

    Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remains uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region, and an inversion process, to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.

  13. A practical method of estimating energy expenditure during tennis play.

    PubMed

    Novas, A M P; Rowbottom, D G; Jenkins, D G

    2003-03-01

    This study aimed to develop a practical method of estimating energy expenditure (EE) during tennis. Twenty-four elite female tennis players first completed a tennis-specific graded test in which five different intensity levels were applied randomly. Each intensity level was intended to simulate a "game" of singles tennis and comprised six 14 s periods of activity alternated with 20 s of active rest. Oxygen consumption (VO2) and heart rate (HR) were measured continuously, and each player's rating of perceived exertion (RPE) was recorded at the end of each intensity level. The rate of energy expenditure (EE(VO2)) during the test was calculated using the sum of VO2 during play and the 'O2 debt' during recovery, divided by the duration of the activity. There were significant individual linear relationships between EE(VO2) and RPE, and between EE(VO2) and HR (r ≥ 0.89 and r ≥ 0.93; p < 0.05). On a second occasion, six players completed a 60-min singles tennis match during which VO2, HR and RPE were recorded; EE(VO2) was compared with EE predicted from the previously derived RPE and HR regression equations. Analysis found that EE(VO2) was overestimated by EE(RPE) (92 ± 76 kJ·h⁻¹) and EE(HR) (435 ± 678 kJ·h⁻¹), but the error of estimation for EE(RPE) (t = -3.01; p = 0.03) was less than 5%, whereas for EE(HR) the error was 20.7%. The results of the study show that RPE can be used to estimate the energetic cost of playing tennis.

  14. The Effects on Parameter Estimation of Correlated Abilities Using a Two-Dimensional, Two-Parameter Logistic Item Response Model.

    ERIC Educational Resources Information Center

    Batley, Rose-Marie; Boss, Marvin W.

    The effects of correlated dimensions on parameter estimation were assessed, using a two-dimensional item response theory model. Past research has shown the inadequacies of the unidimensional analysis of multidimensional item response data. However, few studies have reported multidimensional analysis of multidimensional data, and, in those using…

  15. A robust method for estimating optimal treatment regimes.

    PubMed

    Zhang, Baqun; Tsiatis, Anastasios A; Laber, Eric B; Davidian, Marie

    2012-12-01

    A treatment regime is a rule that assigns a treatment, among a set of possible treatments, to a patient as a function of his/her observed characteristics, hence "personalizing" treatment to the patient. The goal is to identify the optimal treatment regime that, if followed by the entire population of patients, would lead to the best outcome on average. Given data from a clinical trial or observational study, for a single treatment decision, the optimal regime can be found by assuming a regression model for the expected outcome conditional on treatment and covariates, where, for a given set of covariates, the optimal treatment is the one that yields the most favorable expected outcome. However, treatment assignment via such a regime is suspect if the regression model is incorrectly specified. Recognizing that, even if misspecified, such a regression model defines a class of regimes, we instead consider finding the optimal regime within such a class by finding the regime that optimizes an estimator of overall population mean outcome. To take into account possible confounding in an observational study and to increase precision, we use a doubly robust augmented inverse probability weighted estimator for this purpose. Simulations and application to data from a breast cancer clinical trial demonstrate the performance of the method. PMID:22550953
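    A minimal sketch of the doubly robust AIPW value estimator for a fixed regime, evaluated on simulated randomized-trial data. This illustrates only the value estimator, not the authors' full search over a class of regimes, and all data-generating choices below are invented:

```python
import numpy as np

def aipw_value(y, a, prop_treat, rule, mu1, mu0):
    """Doubly robust (AIPW) estimate of the mean outcome if the whole
    population followed `rule` (0/1 recommended treatment per patient).

    y: outcomes; a: observed treatments; prop_treat: P(A=1 | X);
    mu1, mu0: outcome-regression predictions under treatments 1 and 0."""
    prop_rule = np.where(rule == 1, prop_treat, 1.0 - prop_treat)
    follows = (a == rule).astype(float)
    mu_rule = np.where(rule == 1, mu1, mu0)
    # Inverse-probability term plus outcome-regression augmentation
    return np.mean(follows / prop_rule * (y - mu_rule) + mu_rule)

# Simulated trial: treatment helps when x = 1, harms when x = 0
rng = np.random.default_rng(7)
n = 20000
x = rng.integers(0, 2, n)
a = rng.integers(0, 2, n)                  # randomized, P(A=1) = 0.5
y = a * (2 * x - 1) + rng.normal(0, 1, n)

rule = x                                   # regime: treat only if x = 1
value = aipw_value(y, a, np.full(n, 0.5), rule,
                   mu1=2.0 * x - 1.0, mu0=np.zeros(n))
print(f"estimated value of regime: {value:.3f}")   # true value is 0.5
```

    The double robustness comes from the augmentation term: the estimator stays consistent if either the propensity model or the outcome regression is correct.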

  16. Estimating rotavirus vaccine effectiveness in Japan using a screening method

    PubMed Central

    Araki, Kaoru; Hara, Megumi; Sakanishi, Yuta; Shimanoe, Chisato; Nishida, Yuichiro; Matsuo, Muneaki; Tanaka, Keitaro

    2016-01-01

Rotavirus gastroenteritis is a highly contagious, acute viral disease that imposes a significant health burden worldwide. In Japan, rotavirus vaccines have been commercially available since 2011 for voluntary vaccination, but vaccine coverage and effectiveness have not been evaluated. In the absence of a vaccination registry in Japan, vaccination coverage in the general population was estimated according to the number of vaccines supplied by the manufacturer, the number of children who received financial support for vaccination, and the size of the target population. Patients with rotavirus gastroenteritis were identified by reviewing the medical records of all children who consulted 6 major hospitals in Saga Prefecture with gastroenteritis symptoms. Vaccination status among these patients was investigated by reviewing their medical records or interviewing their guardians by telephone. Vaccine effectiveness was determined using a screening method. Vaccination coverage increased with time, and it was 2-times higher in municipalities where the vaccination fee was supported. In the 2012/13 season, vaccination coverage in Saga Prefecture was 14.9% whereas the proportion of patients vaccinated was 5.1% among those with clinically diagnosed rotavirus gastroenteritis and 1.9% among those hospitalized for rotavirus gastroenteritis. Thus, vaccine effectiveness was estimated as 69.5% and 88.8%, respectively. This is the first study to evaluate rotavirus vaccination coverage and effectiveness in Japan since vaccination began. PMID:26680277
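The screening method compares the odds of being vaccinated among cases with the odds of being vaccinated in the population. A minimal sketch of the standard formula is below; plugging in the coverage and case proportions reported above approximately reproduces the stated effectiveness figures (small differences reflect rounding of the inputs):

```python
def screening_ve(pcv, ppv):
    """Screening-method vaccine effectiveness:
    VE = 1 - odds(vaccinated | case) / odds(vaccinated | population),
    where pcv = proportion of cases vaccinated and
    ppv = proportion of the population vaccinated (coverage)."""
    case_odds = pcv / (1.0 - pcv)
    pop_odds = ppv / (1.0 - ppv)
    return 1.0 - case_odds / pop_odds

ve_clinic = screening_ve(0.051, 0.149)  # clinically diagnosed cases
ve_hosp = screening_ve(0.019, 0.149)    # hospitalized cases
```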

  17. Predictive methods for estimating pesticide flux to air

    SciTech Connect

    Woodrow, J.E.; Seiber, J.N.

    1996-10-01

Published evaporative flux values for pesticides volatilizing from soil, plants, and water were correlated with compound vapor pressures (VP), modified by compound properties appropriate to the treated matrix (e.g., soil adsorption coefficient [Koc], water solubility [Sw]). These correlations were formulated as ln-ln plots with correlation coefficients (r^2) in the range 0.93-0.99: (1) soil surface - ln flux vs ln (VP/[Koc x Sw]); (2) soil incorporation - ln flux vs ln [(VP x AR)/(Koc x Sw x d)] (AR = application rate, d = incorporation depth); (3) plants - ln flux vs ln VP; and (4) water - ln (flux/water conc) vs ln (VP/Sw). Using estimated flux values from the plant correlation as source terms in the EPA's SCREEN-2 dispersion model gave downwind concentrations that agreed to within 65-114% with measured concentrations. Further validation using other treated matrices is in progress. These predictive methods for estimating flux, when coupled with downwind dispersion modeling, provide tools for limiting downwind exposures.
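Fitting such a ln-ln correlation is an ordinary least-squares problem. The sketch below uses entirely hypothetical compound properties and a made-up generating law (slope 0.9, intercept 2.0) simply to show the soil-surface form, ln flux vs ln(VP/[Koc x Sw]), and how the fitted line is then used predictively:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical compound properties: vapor pressure VP (Pa), soil
# adsorption coefficient Koc, water solubility Sw (mg/L)
vp = 10.0 ** rng.uniform(-5, -1, 30)
koc = 10.0 ** rng.uniform(1, 4, 30)
sw = 10.0 ** rng.uniform(0, 3, 30)

# synthetic "observed" ln flux following a linear ln-ln law plus scatter
x = np.log(vp / (koc * sw))
ln_flux = 0.9 * x + 2.0 + rng.normal(0, 0.3, 30)

slope, intercept = np.polyfit(x, ln_flux, 1)
r2 = np.corrcoef(x, ln_flux)[0, 1] ** 2

# the fitted correlation then predicts flux for a new compound
def predict_flux(vp_new, koc_new, sw_new):
    return np.exp(intercept + slope * np.log(vp_new / (koc_new * sw_new)))
```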

  18. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, people used the rule-based algorithms to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied Simulated Annealing (SA) technique to optimize the parameters of GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both ICE2 and UBIRIS dataset, verifying the effectiveness and importance of our proposed method for iris occlusion estimation. PMID:22868651
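The core idea — model valid and occluded pixels with separate probability densities and label each pixel by the larger likelihood — can be sketched with a single Gaussian per class (the degenerate one-component case; the paper's FJ-GMM variant fits full mixtures and selects the number of components automatically). The two-dimensional "features" below are synthetic stand-ins for Gabor-filter responses:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins for two Gabor-filter responses per pixel:
# valid iris texture vs. occluded regions (eyelids, lashes, reflections)
valid = rng.normal(0.0, 0.5, size=(500, 2))
occluded = rng.normal(2.5, 0.5, size=(500, 2))

def fit_gaussian(z):
    """Fit one Gaussian component (mean, covariance) to the samples."""
    return z.mean(axis=0), np.cov(z, rowvar=False)

def log_density(z, mu, cov):
    """Multivariate normal log-density, evaluated row-wise."""
    inv, logdet = np.linalg.inv(cov), np.log(np.linalg.det(cov))
    d = z - mu
    maha = np.einsum('ij,jk,ik->i', d, inv, d)
    return -0.5 * (maha + logdet + z.shape[1] * np.log(2 * np.pi))

g_valid = fit_gaussian(valid)
g_occ = fit_gaussian(occluded)

pixels = rng.normal(2.5, 0.5, size=(100, 2))   # unseen occluded pixels
mask = log_density(pixels, *g_occ) > log_density(pixels, *g_valid)
```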

  19. [Methods for the estimation of the renal function].

    PubMed

    Fontseré Baldellou, Néstor; Bonal I Bastons, Jordi; Romero González, Ramón

    2007-10-13

Chronic kidney disease is among the pathologies with the greatest incidence and prevalence in today's health systems. The ambulatory application of different methods that allow suitable detection, monitoring and stratification of renal function is of crucial importance. Because of the imprecision obtained with serum creatinine alone, a set of predictive equations for estimating the glomerular filtration rate has been developed. Nevertheless, it is essential for the physician to know their limitations: in situations of normal renal function and hyperfiltration, in certain associated pathologies, and in extreme situations of nutritional status and age. In these cases, the application of isotopic techniques for the calculation of renal function is more advisable.
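As an illustration of the kind of creatinine-based predictive equation the abstract refers to, here is a sketch of one widely used formula, the abbreviated 4-variable (IDMS-traceable) MDRD equation; it is shown only as an example of the family, not as the specific equations the authors review:

```python
def gfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    """Abbreviated 4-variable MDRD estimate of the glomerular filtration
    rate (mL/min/1.73 m^2) from serum creatinine (mg/dL)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr
```

Note how the estimate falls with rising creatinine and with age, which is exactly where the abstract's caveats (hyperfiltration, extremes of age and nutrition) apply.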

  20. Strengths and Limitations of Period Estimation Methods for Circadian Data

    PubMed Central

    Troup, Eilidh; Halliday, Karen J.; Millar, Andrew J.

    2014-01-01

A key step in the analysis of circadian data is to make an accurate estimate of the underlying period. There are many different techniques and algorithms for determining period, all with different assumptions and with differing levels of complexity. Choosing which algorithm, which implementation and which measures of accuracy to use presents many pitfalls, especially for the non-expert. We have developed the BioDare system, an online service allowing data sharing (including public dissemination), data processing and analysis. Circadian experiments are the main focus of BioDare; hence, period analysis is a major feature of the system. Six methods have been incorporated into BioDare: Enright and Lomb-Scargle periodograms, FFT-NLLS, mFourfit, MESA and Spectrum Resampling. Here we review those six techniques, explain the principles behind each algorithm and evaluate their performance. In order to quantify the methods' accuracy, we examine the algorithms against artificial mathematical test signals and model-generated mRNA data. Our re-implementation of each method in Java allows meaningful comparisons of the computational complexity and computing time associated with each algorithm. Finally, we provide guidelines on which algorithms are most appropriate for which data types, and recommendations on experimental design to extract optimal data for analysis. PMID:24809473
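One of the six methods, the Lomb-Scargle periodogram, is readily available in SciPy and handles the irregular sampling common in biological time series. A minimal sketch on a synthetic noisy 24-hour rhythm (illustrative data, not BioDare's Java implementation):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 120, 200))        # irregular sampling times (h)
true_period = 24.0
y = np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.3, t.size)

periods = np.linspace(18, 32, 500)           # candidate circadian periods
omega = 2 * np.pi / periods                  # angular frequencies
power = lombscargle(t, y - y.mean(), omega)  # mean-centred input
est_period = periods[np.argmax(power)]
```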

  1. A new rapid method for rockfall energies and distances estimation

    NASA Astrophysics Data System (ADS)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric

    2016-04-01

Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope, and they are generally based on a probabilistic rockfall modelling approach that allows the uncertainties associated with the rockfall phenomenon to be taken into account. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios, performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and its distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed finding empirical laws that relate impact energies

  2. A Method for Estimation of Death Tolls in Disastrous Earthquake

    NASA Astrophysics Data System (ADS)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

Fatality tolls caused by disastrous earthquakes are among the most important items of earthquake damage and losses. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, this not only makes emergency programs and disaster management more effective but also supplies critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, types and usage habits of buildings, distribution of population and socio-economic situations, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is currently the greatest in the world. Moreover, complete seismic data are easily obtained from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake. Therefore, it becomes possible to estimate earthquake death tolls in Taiwan based on this preliminary information. Firstly, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give a PGA Index for each seismic station, according to the mainshock data of the Chi-Chi earthquake. 
To supply the distribution of iso-seismic intensity contours in all districts, and to resolve the problem of districts containing no seismic station, we apply the Kriging interpolation method and GIS software to the PGA Index and the geographical coordinates of the individual seismic stations. The population density depends on
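Kriging interpolates a value at an unobserved location as a weighted sum of station values, with weights obtained from a spatial covariance model. The sketch below implements ordinary kriging with an exponential covariance on entirely hypothetical station coordinates and PGA-index values (the station layout, covariance range and decay law are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical station coordinates (km) and PGA-index values decaying
# away from an assumed epicentre at (50, 50)
stations = rng.uniform(0, 100, size=(30, 2))
pga = 100.0 * np.exp(-np.linalg.norm(stations - 50.0, axis=1) / 40.0)

def cov(h, sill=1.0, range_km=30.0):
    """Exponential covariance model for the spatial correlation."""
    return sill * np.exp(-h / range_km)

def krige(p, obs_xy, obs_v):
    """Ordinary kriging of one target point from the station values."""
    n = len(obs_v)
    d = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with Lagrange row
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.linalg.norm(obs_xy - p, axis=1))
    w = np.linalg.solve(A, rhs)[:n]      # weights sum to one
    return w @ obs_v

grid_value = krige(np.array([50.0, 50.0]), stations, pga)
```

Evaluating `krige` over a regular grid would yield the iso-intensity contours; note that with no nugget effect, kriging reproduces the observed value exactly at each station.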

  3. The Impact of Student Ability and Method for Varying the Position of Correct Answers in Classroom Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Joseph, Dane Christian

    2010-01-01

Multiple-choice item-writing guideline research is in its infancy. Haladyna (2004) calls for a science of item-writing guideline research, and this study responds to that call by examining the impact of student ability and method for varying the location of correct answers in classroom multiple-choice…

  4. The Mnemonic Keyword Method: The Effects of Bidirectional Retrieval Training and of Ability to Image on Foreign Language Vocabulary Recall

    ERIC Educational Resources Information Center

    Wyra, Mirella; Lawson, Michael J.; Hungi, Njora

    2007-01-01

    The mnemonic keyword method is an effective technique for vocabulary acquisition. This study examines the effects on recall of word-meaning pairs of (a) training in use of the keyword procedure at the time of retrieval; and (b) the influence of the self-rated ability to image. The performance of students trained in bidirectional retrieval using…

  5. The effect of deoxyribonucleic acid extraction methods from lymphoid tissue on the purity, content, and amplifying ability

    PubMed Central

    Ayatollahi, Hossein; Sadeghian, Mohammad Hadi; Keramati, Mohammad Reza; Ayatollahi, Ali; Shajiei, Arezoo; Sheikhi, Maryam; Bakhshi, Samane

    2016-01-01

Background: Nowadays, definitive diagnosis of numerous diseases is based on genetic and molecular findings; therefore, preparation of the fundamental materials for these evaluations is necessary. Deoxyribonucleic acid (DNA) is the primary material for molecular pathology and genetic analysis, and better results require purer DNA. Furthermore, a higher concentration of extracted DNA yields better results and higher amplifying ability in subsequent steps. We aimed to compare five DNA extraction methods with respect to DNA purity, concentration, and amplifying ability. Materials and Methods: Lymphoid tissue DNA was extracted from formalin-fixed, paraffin-embedded (FFPE) tissue by five different methods: phenol-chloroform as the reference method, a DNA isolation kit (QIAamp DNA FFPE Tissue Kit, Qiagen, Germany), proteinase K extraction, xylol extraction, and heat alkaline plus mineral oil extraction as the authors' innovative method. Finally, polymerase chain reaction (PCR) and real-time PCR were used to compare the methods with regard to DNA purity and concentration. Results: Among the five methods, the highest mean DNA purity was obtained with the heat alkaline method, and the highest mean DNA concentration with heat alkaline plus mineral oil. Furthermore, the best result in quantitative PCR was obtained with the proteinase K method, which had the lowest average cycle threshold among the extraction methods. Conclusion: Our innovative method for DNA extraction (heat alkaline plus mineral oil) achieved high DNA purity and concentration. PMID:27630381

  6. The effect of deoxyribonucleic acid extraction methods from lymphoid tissue on the purity, content, and amplifying ability

    PubMed Central

    Ayatollahi, Hossein; Sadeghian, Mohammad Hadi; Keramati, Mohammad Reza; Ayatollahi, Ali; Shajiei, Arezoo; Sheikhi, Maryam; Bakhshi, Samane

    2016-01-01

Background: Nowadays, definitive diagnosis of numerous diseases is based on genetic and molecular findings; therefore, preparation of the fundamental materials for these evaluations is necessary. Deoxyribonucleic acid (DNA) is the primary material for molecular pathology and genetic analysis, and better results require purer DNA. Furthermore, a higher concentration of extracted DNA yields better results and higher amplifying ability in subsequent steps. We aimed to compare five DNA extraction methods with respect to DNA purity, concentration, and amplifying ability. Materials and Methods: Lymphoid tissue DNA was extracted from formalin-fixed, paraffin-embedded (FFPE) tissue by five different methods: phenol-chloroform as the reference method, a DNA isolation kit (QIAamp DNA FFPE Tissue Kit, Qiagen, Germany), proteinase K extraction, xylol extraction, and heat alkaline plus mineral oil extraction as the authors' innovative method. Finally, polymerase chain reaction (PCR) and real-time PCR were used to compare the methods with regard to DNA purity and concentration. Results: Among the five methods, the highest mean DNA purity was obtained with the heat alkaline method, and the highest mean DNA concentration with heat alkaline plus mineral oil. Furthermore, the best result in quantitative PCR was obtained with the proteinase K method, which had the lowest average cycle threshold among the extraction methods. Conclusion: Our innovative method for DNA extraction (heat alkaline plus mineral oil) achieved high DNA purity and concentration.

  7. Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation.

    PubMed

    Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Baca, Arnold; Markovic, Goran

    2016-01-01

    The study aimed to evaluate the test-retest reliability of a newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m s-1; SQ: 2.92±1.0 vs. 2.93±1.0 m s-1; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70-0.88), and the coefficients of variation were low (CV = 5.3-5.4%). Finally, all three 356-SST variables identify, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men's soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations.

  8. Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation

    PubMed Central

    Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Baca, Arnold; Markovic, Goran

    2016-01-01

    The study aimed to evaluate the test–retest reliability of a newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m s-1; SQ: 2.92±1.0 vs. 2.93±1.0 m s-1; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70–0.88), and the coefficients of variation were low (CV = 5.3–5.4%). Finally, all three 356-SST variables identify, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men’s soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations. PMID:26812247

  9. Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation.

    PubMed

    Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Baca, Arnold; Markovic, Goran

    2016-01-01

    The study aimed to evaluate the test-retest reliability of a newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m s-1; SQ: 2.92±1.0 vs. 2.93±1.0 m s-1; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70-0.88), and the coefficients of variation were low (CV = 5.3-5.4%). Finally, all three 356-SST variables identify, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men's soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations. PMID:26812247

  10. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    PubMed Central

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

In exoskeletal robots, quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated on 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074

  11. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

In exoskeletal robots, quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated on 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions.
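The separation described in the abstracts above amounts to subtracting the modeled limb dynamics from the sensor reading. A minimal single-joint sketch (limb parameters are illustrative, not the paper's identified values; for one revolute joint the Coriolis/centrifugal terms vanish):

```python
import numpy as np

# single-joint (e.g., knee) limb model; in practice the user-specific
# parameters would be identified experimentally -- these are illustrative
m, l, I = 3.0, 0.25, 0.25  # limb mass (kg), CoM distance (m), inertia (kg m^2)
g = 9.81                   # gravitational acceleration (m/s^2)

def muscular_torque(tau_sensor, q, dq, ddq):
    """Subtract the modeled limb dynamics from the joint-torque-sensor
    reading to recover the user's active muscular torque.
    (dq is unused: Coriolis/centrifugal terms vanish for a single joint.)"""
    inertial = I * ddq
    gravity = m * g * l * np.sin(q)
    return tau_sensor - inertial - gravity

# a relaxed limb held statically at 30 degrees: the sensor reads only the
# gravitational load, so the estimated muscular torque should be ~zero
q0 = np.deg2rad(30.0)
tau_static = m * g * l * np.sin(q0)
```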

  12. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.

    2001-01-01

Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  13. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

Adjoint-based mesh adaptive methods are capable of directing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Superconvergent functional and error estimates for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.

  14. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
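The relationship exploited by distance methods can be illustrated in the simplest (Poisson) case: for a completely random pattern of intensity λ, π times the squared point-to-nearest-plant distance is exponential with mean 1/λ, giving the classic estimate λ̂ = n / (π Σ dᵢ²). This is a sketch of that baseline estimator on simulated data, not the paper's order-statistics, truncation-adjusted estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
true_density = 0.5                       # plants per unit area
# simulate a completely random (Poisson) plant pattern on a 20 x 20 window
n_plants = rng.poisson(true_density * 400.0)
plants = rng.uniform(0, 20, size=(n_plants, 2))

# squared distance from each random sample point to its nearest plant
pts = rng.uniform(2, 18, size=(200, 2))  # buffered to limit edge effects
d2 = ((pts[:, None, :] - plants[None, :, :]) ** 2).sum(-1).min(axis=1)

# under the Poisson assumption, pi * d^2 ~ Exponential(mean 1/density),
# so the maximum-likelihood density estimate is:
density_hat = len(d2) / (np.pi * d2.sum())
```

For aggregated or regular patterns this Poisson estimator is biased, which is precisely the motivation for the paper's nonparametric approach.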

  15. Ellipsoidal Guaranteed Estimation Method for Satellite Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Lee, J.; Ovseevich, A.

    2012-01-01

The article presents a new guaranteed approach to determining a small area of deviations around an Earth-orbiting satellite's nominal Keplerian orbit position, caused by a set of acting external disturbing forces and initial conditions. Only very restricted information is assumed about the disturbances: maximum values, with no assumptions about the law of their probability density distribution. The reachable area of satellite deviations is approximated by a state-vector ellipsoid that can include the satellite position and velocity as vector components. Mathematical equations that allow one to find the ellipsoid are developed on the basis of the linear Euler-Hill equations of satellite orbital motion. The approach can be applied to various problems of satellite collision avoidance with other satellites or space debris, as well as to establishing potentially safe space traffic control norms. In particular, the CSA is considering it for planning collision avoidance manoeuvres of the Earth observation satellite family RADARSAT, SCISAT and newly developed satellites. The general approach of ellipsoidal estimation was originally developed by the Russian academician F. L. Chernousko; the problem considered in the article was studied by his followers, some of whom participated in the development of the method together with its founder.

  16. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)
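Unconditional (joint) maximum likelihood for the Rasch model can be sketched as alternating Newton updates on person abilities and item difficulties. The data and update scheme below are a generic UML illustration on simulated responses, not Mellenbergh and Vijn's specific formulation; abilities are clipped because perfect and zero scores have no finite UML estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 10
theta = rng.normal(0, 1, n_persons)          # true abilities
b = np.linspace(-1.5, 1.5, n_items)          # true item difficulties

# simulate Rasch responses: P(X=1) = logistic(theta - b)
p_true = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.uniform(size=p_true.shape) < p_true).astype(float)

# joint (unconditional) ML: alternate Newton steps on both parameter sets
th, bh = np.zeros(n_persons), np.zeros(n_items)
for _ in range(50):
    p = 1 / (1 + np.exp(-(th[:, None] - bh[None, :])))
    resid, info = x - p, p * (1 - p)
    th = np.clip(th + resid.sum(1) / info.sum(1), -6, 6)  # ability step
    bh -= resid.sum(0) / info.sum(0)                      # difficulty step
    bh -= bh.mean()                          # fix the scale origin
```

With 500 simulated persons the recovered difficulties track the generating values closely, though JML difficulty estimates are known to be slightly biased for a fixed test length.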

  17. [Opinionating in cases referring to estimation of ability to participate in legal proceedings and estimation of ability to serve a sentence of imprisonment or restriction of freedom in the material of Department of Forensic Medicine in Białystok in the years 2005-2009].

    PubMed

    Ptaszyńska-Sarosiek, Iwona; Niemcunowicz-Janica, Anna; Filimoniuk, Marcin; Okłota, Magdalena; Wardaszka, Zofia; Szeremeta, Michał; Sackiewicz, Adam

    2010-01-01

In recent years in Poland, the number of medicolegal opinions issued concerning the health status of defendants or convicts, with regard to their ability to participate in legal proceedings (taking part in a trial) and to serve a sentence of imprisonment or restriction of freedom (doing free social labor), has been increasing. In the years 2005-2009, 115 opinions about defendants, convicts, one witness and one sufferer were issued in our department. Of this number, 37 opinions concerned the ability to serve a sentence of imprisonment, 22 the ability to serve a penalty of restricted liberty or the possibility of an alternative way of serving a sentence of imprisonment, and 56 the ability to take part in legal proceedings. In 8 cases, the experts assessed whether the health status allowed a defendant to be detained awaiting trial and held in custody pending inquiry. The age, sex, place of residence and diseases of the persons opined upon, as well as the judicial organ that commissioned the opinion, were taken into consideration in the analysis. Eighteen opinions were issued on the basis of court files and medical documentation only, and 97 on the basis of court files, documentation and medical examination. In 52 cases, the opinions were issued by specialists in forensic medicine alone, while in 63 instances the participation of experts in other medical specialties was necessary. Most often, the opinions of cardiologists were sought.

  18. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    With increasing immigration, the development of new methods of age estimation has become an urgent issue, since the age of subjects who lack valid identity documents must be estimated accurately. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist, none with specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to quantify the interobserver error. Results showed that the interobserver error varies between 43 and 57% for the right lower third molar (M48) and between 23 and 49% for the left lower third molar (M38). A chi-square test did not show significant differences according to the side of the tooth or the type of professional figure. The results show that Olze's method is not easy to apply by inadequately trained personnel, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.
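    The interobserver error reported above is, in essence, the fraction of cases in which two observers assign different developmental stages to the same tooth. A minimal sketch (the function name and stage labels are hypothetical, not from the paper):

```python
def interobserver_error(stages_a, stages_b):
    """Fraction of cases where two observers assign different stages
    to the same set of radiographs."""
    assert len(stages_a) == len(stages_b)
    disagreements = sum(a != b for a, b in zip(stages_a, stages_b))
    return disagreements / len(stages_a)
```

    A chance-corrected statistic such as Cohen's kappa would be the usual next step when stage prevalences are unbalanced.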

  19. Can the gradient method improve our ability to predict soil respiration?

    NASA Astrophysics Data System (ADS)

    Phillips, Claire; Nickerson, Nicholas; Risk, Dave

    2015-04-01

    Soil surface flux measurements integrate respiration across steep vertical gradients of soil texture, moisture, temperature, and carbon substrates. Although there are benefits to integrating complex soil processes in a single surface measure, e.g. for constructing soil carbon budgets, one serious drawback of studying only surface respiration is the difficulty of generating predictive relationships from environmental drivers. For example, the relationship between depth-integrated soil respiration and temperature measured at a single discrete depth (apparent temperature sensitivity) can bear little resemblance to the temperature sensitivity of soil respiration within soil layers (actual temperature sensitivity). Here we present several examples of how the inferred environmental sensitivity of soil respiration can be improved by observing CO2 flux profiles rather than surface fluxes alone. We present a theoretical approach for estimating the temperature sensitivity of soil respiration in situ, called the weighted heat flux approach, which avoids much of the hysteresis produced by typical respiration-temperature comparisons. The weighted heat flux approach gives more accurate estimates of within-soil temperature sensitivity, and is arguably the most theoretically robust analytical temperature model available. We also show how soil drying influences the effectiveness of the weighted heat flux approach, as well as the relative activity of discrete soil layers and specific soil organisms, such as mycorrhizal fungi. The additional information provided by within-soil flux profiles can improve the fidelity of both probabilistic and mechanistic soil respiration models.
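    The gradient method named in the title infers fluxes from within-soil CO2 concentration profiles via Fick's first law. A minimal two-depth sketch, with hypothetical variable names, units as noted, and a depth-positive-downward sign convention (the paper's own formulation may include layered diffusivities and production terms):

```python
def gradient_flux(c_upper, c_lower, z_upper, z_lower, d_s):
    """Upward CO2 flux between two measurement depths from Fick's first law.
    Concentrations in mol m^-3, depths in m (positive downward),
    d_s = effective soil gas diffusivity in m^2 s^-1.
    Returns flux in mol m^-2 s^-1, positive toward the surface."""
    dc_dz = (c_lower - c_upper) / (z_lower - z_upper)
    return d_s * dc_dz  # concentration rising with depth drives an upward flux
```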

  20. Novel method of channel estimation for WCDMA downlink

    NASA Astrophysics Data System (ADS)

    Sheng, Bin; You, XiaoHu

    2001-10-01

    A novel channel estimation scheme is proposed in this paper for the WCDMA downlink, where a pilot channel is transmitted simultaneously with a data traffic channel. The proposed scheme exploits channel information in both the pilot and data traffic channels by combining the channel estimates obtained from the two. Computer simulations demonstrate that the performance of the Rake receiver is noticeably improved.

  1. Software Effort Estimation Accuracy: A Comparative Study of Estimations Based on Software Sizing and Development Methods

    ERIC Educational Resources Information Center

    Lafferty, Mark T.

    2010-01-01

    The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…

  2. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  3. Issues and advances in research methods on video games and cognitive abilities.

    PubMed

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game trainings, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and lack of genre differentiation have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of this data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into what cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process. PMID:26483717

  6. Analogical Reasoning and Ability Level: An Examination of R. J. Sternberg's Componential Method.

    ERIC Educational Resources Information Center

    McConaghy, J.; Kirby, N. H.

    1987-01-01

    Four experiments examined the extent to which the componential method of analogical reasoning, developed by R. J. Sternberg, could be used to investigate the cognitive processes of subjects with both above- and below-average intelligence. (Author/LMO)

  7. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation of terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merits of various inverse modelling strategies (MCMC, EnKF, 4DVAR) for estimating model parameters and initial carbon stocks for DALEC and for quantifying the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods, we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain the data importance and information content of the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
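    As a toy stand-in for the model-data fusion described above (not the actual DALEC model or its adjoint-based 4DVAR machinery), one can fit the turnover rate of a one-pool carbon box model by minimizing the model-observation misfit; here a brute-force parameter scan replaces the gradient-based optimization:

```python
import numpy as np

def run_model(c0, k, u, n_steps):
    """One-pool carbon box model, forward Euler: dC/dt ~ u - k*C,
    with constant input u and turnover rate k."""
    c = np.empty(n_steps + 1)
    c[0] = c0
    for t in range(n_steps):
        c[t + 1] = c[t] + u - k * c[t]
    return c

def fit_turnover_rate(obs, c0, u, candidates):
    """Variational-style fit: choose the turnover rate k minimizing the
    squared model-observation misfit (cost function of a 4DVAR problem,
    minimized here by exhaustive search)."""
    costs = [np.sum((run_model(c0, k, u, len(obs) - 1) - obs) ** 2)
             for k in candidates]
    return candidates[int(np.argmin(costs))]
```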

  8. Analytic Method to Estimate Particle Acceleration in Flux Ropes

    NASA Technical Reports Server (NTRS)

    Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.

    2015-01-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
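    The closing estimate — a 2-5x energy gain per island, hence only 3-7 islands for two orders of magnitude — follows from compounding multiplicative gains; a one-line check (function name is ours, not the paper's):

```python
import math

def islands_needed(gain_per_island, target_gain=100.0):
    """Number of contracting islands a particle must transit for a total
    energy gain of `target_gain`, if each island multiplies its energy
    by `gain_per_island` (solve gain**n >= target for integer n)."""
    return math.ceil(math.log(target_gain) / math.log(gain_per_island))
```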

  9. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

    In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the west coast of the UK. The amplitudes of the tidal constituents at these two stations are not the same. Moreover, unlike the Workington record, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations at Bandar Abbas and Workington can provide a reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum produced by the LS-HE method. According to the results obtained, the spectra of these two tidal records contain the expected constituents with maximum amplitudes over this time span, together with some new frequencies beyond the list of known constituents. In addition, in terms of the frequencies with maximum amplitude, the power spectra derived from the two aforementioned methods agree. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
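    A key property of LS-HE highlighted above is that, unlike the Fourier transform, it applies directly to non-equispaced records such as the Bandar Abbas series. A minimal sketch of the idea — fit a cosine/sine pair at each candidate frequency by least squares and report the recovered amplitude; the sample times and data below are synthetic, not tidal observations:

```python
import numpy as np

def lshe_amplitudes(t, y, freqs):
    """For each candidate angular frequency w, fit
    y ~ a*cos(w t) + b*sin(w t) by least squares and return the
    recovered amplitude sqrt(a^2 + b^2). Works on unequally spaced t."""
    y = y - y.mean()
    amps = []
    for w in freqs:
        A = np.column_stack([np.cos(w * t), np.sin(w * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        amps.append(np.hypot(*coef))
    return np.array(amps)
```

    In a full LS-HE analysis the candidate set would be the known tidal constituents, and detected frequencies are tested for statistical significance rather than simply ranked by amplitude.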

  10. Preservice Early Childhood Teachers' Learning of Science in a Methods Course: Examining the Predictive Ability of an Intentional Learning Model

    NASA Astrophysics Data System (ADS)

    Saçkes, Mesut; Trundle, Kathy Cabe

    2014-06-01

    This study investigated the predictive ability of an intentional learning model in the change of preservice early childhood teachers' conceptual understanding of lunar phases. Fifty-two preservice early childhood teachers who were enrolled in an early childhood science methods course participated in the study. Results indicated that the use of metacognitive strategies facilitated preservice early childhood teachers' use of deep-level cognitive strategies, which in turn promoted conceptual change. Also, preservice early childhood teachers with high motivational beliefs were more likely to use cognitive and metacognitive strategies. Thus, they were more likely to engage in conceptual change. The results provided evidence that the hypothesized model of intentional learning has a high predictive ability in explaining the change in preservice early childhood teachers' conceptual understandings from the pre to post-interviews. Implications for designing a science methods course for preservice early childhood teachers are provided.

  11. [Using Lamendin and Meindl-Lovejoy methods for age at death estimation of the unknown person].

    PubMed

    Bednarek, Jarosław; Engelgardt, Piotr; Bloch-Bogusławska, Elzbieta; Sliwka, Karol

    2002-01-01

    The paper presents a precise description of two methods used for age estimation on the basis of single-rooted teeth and cranial suture obliteration. Using these methods, the age at death of an unknown person was estimated. A comparison of the estimated age with the chronological age obtained after identification showed the high usefulness of both methods.
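    For context, the Lamendin single-rooted tooth method named in the title is usually stated as the regression A = 0.18 P + 0.42 T + 25.53, where P and T are the periodontosis and root-transparency heights expressed as percentages of root height. A sketch under that assumption — the coefficients are quoted from the commonly cited Lamendin et al. (1992) formula, not taken from this paper:

```python
def lamendin_age(periodontosis_mm, transparency_mm, root_height_mm):
    """Lamendin-style age-at-death estimate from a single-rooted tooth:
    A = 0.18*P + 0.42*T + 25.53, with P and T as percentages of root height."""
    p = periodontosis_mm * 100.0 / root_height_mm
    t = transparency_mm * 100.0 / root_height_mm
    return 0.18 * p + 0.42 * t + 25.53
```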

  12. Comparison of the Ability of Different Clinical Treatment Scores to Estimate Prognosis in High-Risk Early Breast Cancer Patients: A Hellenic Cooperative Oncology Group Study

    PubMed Central

    Pliarchopoulou, Kyriaki; Wirtz, Ralph M.; Alexopoulou, Zoi; Zagouri, Flora; Veltrup, Elke; Timotheadou, Eleni; Gogas, Helen; Koutras, Angelos; Lazaridis, Georgios; Christodoulou, Christos; Pentheroudakis, George; Laskarakis, Apostolos; Arapantoni-Dadioti, Petroula; Batistatou, Anna; Sotiropoulou, Maria; Aravantinos, Gerasimos; Papakostas, Pavlos; Kosmidis, Paris; Pectasides, Dimitrios; Fountzilas, George

    2016-01-01

    Background-Aim Early breast cancer is a heterogeneous disease, and, therefore, prognostic tools have been developed to evaluate the risk for distant recurrence. In the present study, we sought to develop a risk for recurrence score (RRS) based on mRNA expression of three proliferation markers in high-risk early breast cancer patients and evaluate its ability to predict risk for relapse and death. In addition the Adjuvant! Online score (AOS) was also determined for each patient, providing a 10-year estimate of relapse and mortality risk. We then evaluated whether RRS or AOS might possibly improve the prognostic information of the clinical treatment score (CTS), a model derived from clinicopathological variables. Methods A total of 1,681 patients, enrolled in two prospective phase III trials, were treated with anthracycline-based adjuvant chemotherapy. Sufficient RNA was extracted from 875 samples followed by multiplex quantitative reverse transcription-polymerase chain reaction for assessing RACGAP1, TOP2A and Ki67 mRNA expression. The CTS, slightly modified to fit our cohort, integrated the prognostic information from age, nodal status, tumor size, histological grade and treatment. Patients were also classified to breast cancer subtypes defined by immunohistochemistry. Likelihood ratio (LR) tests and concordance indices were used to estimate the relative increase in the amount of information provided when either RRS or AOS is added to CTS. Results The optimal RRS, in terms of disease-free survival (DFS) and overall survival (OS), was based on the co-expression of two of the three evaluated genes (RACGAP1 and TOP2A). CTS was prognostic for DFS (p<0.001), while CTS, AOS and RRS were all prognostic for OS (p<0.001, p<0.001 and p = 0.036, respectively). The use of AOS in addition to CTS added prognostic information regarding DFS (LR-Δχ2 8.7, p = 0.003), however the use of RRS in addition to CTS did not. For estimating OS, the use of either AOS or RRS in addition to

  13. Dynamic State Estimation Utilizing High Performance Computing Methods

    SciTech Connect

    Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw

    2009-03-18

    The state estimation tools which are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper presents an overview of the Kalman filtering process and then focuses on the implementation of the prediction component on multiple processors.
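    The prediction component discussed above is the standard Kalman filter time update; a minimal linear sketch (the matrices below are illustrative, not a power system model):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Kalman filter prediction step: propagate the state estimate x and
    its covariance P one step through linear dynamics x' = F x, adding
    process-noise covariance Q."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```

    The parallelization discussed in the paper would target the matrix products in this step (and the corresponding measurement update), which dominate the cost for large state dimensions.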

  14. Variability in Reading Ability Gains as a Function of Computer-Assisted Instruction Method of Presentation

    ERIC Educational Resources Information Center

    Johnson, Erin Phinney; Perry, Justin; Shamir, Haya

    2010-01-01

    This study examines the effects on early reading skills of three different methods of presenting material with computer-assisted instruction (CAI): (1) learner-controlled picture menu, which allows the student to choose activities, (2) linear sequencer, which progresses the students through lessons at a pre-specified pace, and (3) mastery-based…

  15. Knowledge, Skills, and Abilities for Entry-Level Business Analytics Positions: A Multi-Method Study

    ERIC Educational Resources Information Center

    Cegielski, Casey G.; Jones-Farmer, L. Allison

    2016-01-01

    It is impossible to deny the significant impact from the emergence of big data and business analytics on the fields of Information Technology, Quantitative Methods, and the Decision Sciences. Both industry and academia seek to hire talent in these areas with the hope of developing organizational competencies. This article describes a multi-method…

  16. Methods and Measures: Confirmatory Factor Analysis and Multidimensional Scaling for Construct Validation of Cognitive Abilities

    ERIC Educational Resources Information Center

    Tucker-Drob, Elliot M.; Salthouse, Timothy A.

    2009-01-01

    Although factor analysis is the most commonly-used method for examining the structure of cognitive variable interrelations, multidimensional scaling (MDS) can provide visual representations highlighting the continuous nature of interrelations among variables. Using data (N = 8,813; ages 17-97 years) aggregated across 38 separate studies, MDS was…

  17. Ability, Demography, Learning Style, and Personality Trait Correlates of Student Preference for Assessment Method

    ERIC Educational Resources Information Center

    Furnham, Adrian; Christopher, Andrew; Garwood, Jeanette; Martin, Neil G.

    2008-01-01

    More than 400 students from four universities in America and Britain completed measures of learning style preference, general knowledge (as a proxy for intelligence), and preference for examination method. Learning style was consistently associated with preferences: surface learners preferred multiple choice and group work options, and viewed…

  18. Effects of Changes in the Examinees' Ability Distribution on the Exposure Control Methods in CAT.

    ERIC Educational Resources Information Center

    Chang, Shun-Wen; Twu, Bor-Yaun

    To satisfy the security requirements of computerized adaptive tests (CATs), efforts have been made to control the exposure rates of optimal items directly by incorporating statistical methods into the item selection procedure. Since differences are likely to occur between the exposure control parameter derivation stage and the operational CAT…

  19. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method

    PubMed Central

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. Twenty-two healthy volunteers were injected with 17–19 mCi of 99mTc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) Conventional correction (referred to as the Gates' method), (2) Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using the conventional method, Buijs method, BgdA subtraction, and BgdB subtraction methods was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D during exercise. The mean estimated myocardial uptake of 99mTc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scans. PMID:26955568
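    The conjugate view method referenced above is conventionally the geometric mean of background-corrected anterior and posterior counts, with attenuation and sensitivity corrections. A sketch of that textbook form — the paper's exact correction factors may differ, and all parameter names are ours:

```python
import math

def conjugate_view_activity(counts_ant, counts_post, transmission, sensitivity):
    """Conjugate view quantification:
    A = sqrt(I_a * I_p / T) / C,
    where I_a, I_p are background-corrected anterior/posterior counts,
    T = exp(-mu*L) is the patient transmission factor through thickness L,
    and C is the camera sensitivity (counts per unit activity)."""
    return math.sqrt(counts_ant * counts_post / transmission) / sensitivity
```

    The four background-correction methods compared in the study differ in how the background counts subtracted from I_a and I_p are defined, before this geometric-mean step is applied.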

  2. A robust and efficient method for estimating enzyme complex abundance and metabolic flux from expression data.

    PubMed

    Barker, Brandon E; Sadagopan, Narayanan; Wang, Yiping; Smallbone, Kieran; Myers, Christopher R; Xi, Hongwei; Locasale, Jason W; Gu, Zhenglong

    2015-12-01

    A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is to increase predictive capability and improve our understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model, and does not provide an objective function for the estimation of fluxes. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. In order to use such a method, accurate measures of enzyme complex abundance are needed, so we first present an algorithm that addresses quantification of complex abundance. Our extensions to prior techniques include the capability to work with large models and significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation, the ability to handle large enzyme complex rules that may incorporate multiple isoforms, and either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS, and can be downloaded from: https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available. FALCON requires use of the COBRA Toolbox, also implemented in MATLAB.
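    FALCON's objective is least absolute deviation (LAD) rather than least squares; the robustness this buys can be seen in the simplest possible case, where the LAD-optimal constant fit is the median and is insensitive to outliers. This is illustrative only — FALCON itself solves a constrained convex program over a metabolic network, not a one-dimensional fit:

```python
def lad_fit_constant(values):
    """For the constant model v ~ c, the least absolute deviation
    objective sum |v_i - c| is minimized by the median of the values —
    the robustness property that motivates LAD objectives."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
```

    Compare with the least-squares optimum (the mean), which an outlier drags arbitrarily far: for [1, 2, 3, 100] the mean is 26.5 but the LAD fit stays at 2.5.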

  3. An Investigation of Methods for Improving Estimation of Test Score Distributions.

    ERIC Educational Resources Information Center

    Hanson, Bradley A.

    Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…

  4. The effects of cognitive style, method of instruction, and visual ability on learning

    NASA Astrophysics Data System (ADS)

    Lynch, Mark D.

    The relationships between cognitive style, method of instruction, and visual skill on learning chemical kinetics were investigated. Participants enrolled in a general chemistry course were classified on each of three factors: cognitive style (field dependent, field neutral, or field independent), method of instruction (computer lesson, teaching assistant, or neither), and visual skill (high visual skill, moderate visual skill, or low visual skill). Participants who were classified as field independent scored significantly higher than those classified as field dependent on the kinetics portions of the hour and final exams. Also, participants who worked with the computer lesson scored significantly higher than those who did not work with either the computer lesson or the teaching assistant on the kinetics portions of the hour and final exams. In addition, participants who were classified with high visual skill scored significantly higher than those classified with low visual skill on the kinetics portions of the hour and final exams. No significant interaction effects were found for cognitive style and method of instruction. However, a trend was discovered in that participants who were classified as field dependent or field independent and worked with the computer based lesson seemed to score higher than those classified as field dependent or field independent and worked with the teaching assistant. Also, those classified as field neutral and worked with the teaching assistant seemed to score higher than those who were classified as field neutral and worked with the computer lesson. Finally, no significant difference was found for cognitive style and the percentage of time spent in the simulated environment component of the computer lesson. However, a trend was found in that participants classified as field independent seemed to spend a greater percentage of time in the simulated environment than participants classified as field dependent.

  5. Iterative methods for distributed parameter estimation in parabolic PDE

    SciTech Connect

    Vogel, C.R.; Wade, J.G.

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problem is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the `forward problem` is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  6. PHREATOPHYTE WATER USE ESTIMATED BY EDDY-CORRELATION METHODS.

    USGS Publications Warehouse

    Weaver, H.L.; Weeks, E.P.; Campbell, G.S.; Stannard, D.I.; Tanner, B.D.

    1986-01-01

    Water-use was estimated for three phreatophyte communities: a saltcedar community and an alkali-Sacaton grass community in New Mexico, and a greasewood rabbit-brush-saltgrass community in Colorado. These water-use estimates were calculated from eddy-correlation measurements using three different analyses, since the direct eddy-correlation measurements did not satisfy a surface energy balance. The analysis that seems to be most accurate indicated the saltcedar community used from 58 to 87 cm (23 to 34 in. ) of water each year. The other two communities used about two-thirds this quantity.

  7. Fast, moment-based estimation methods for delay network tomography

    SciTech Connect

    Lawrence, Earl Christophre; Michailidis, George; Nair, Vijayan N

    2008-01-01

    Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.
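
    To see the flavor of a moment-based scheme, consider the smallest tree: one shared link feeding two receivers. Because per-link delays are modeled as mutually independent, the covariance of the two end-to-end delays identifies the shared link's delay variance, with no binning of the delay distribution required. A NumPy sketch with simulated exponential delays (the distributions and parameters are invented, and this is only the simplest special case of the paper's estimators):

```python
import numpy as np

def shared_link_variance(y1, y2):
    """Moment estimator: for end-to-end delays y1 = x0 + x1 and
    y2 = x0 + x2 with mutually independent link delays, the sample
    covariance of y1 and y2 estimates Var(x0), the shared link."""
    return np.cov(y1, y2)[0, 1]

# Simulated probes over invented exponential link-delay distributions.
rng = np.random.default_rng(1)
n = 200_000
x0 = rng.exponential(2.0, n)       # shared link: Var = 2^2 = 4
y1 = x0 + rng.exponential(1.0, n)  # path to receiver 1
y2 = x0 + rng.exponential(3.0, n)  # path to receiver 2
var_hat = shared_link_variance(y1, y2)  # close to 4
```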

  8. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-01

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and estimating the stocks and end-of-life flows of products. Previous publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
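
    The simplification proposed here, holding the distribution's shape parameter constant, reduces lifespan estimation to a one-parameter fit. As a minimal sketch of that idea, assuming a Weibull lifespan distribution fitted to a sample of retirement ages (the shape value and data below are placeholders, not the paper's estimates, which are fitted to age profiles of in-use cars):

```python
import math

def weibull_scale_mle(lifespans, shape):
    """MLE of the Weibull scale when the shape parameter is held fixed:
    scale = (mean of x^shape) ** (1/shape)."""
    k = shape
    return (sum(x ** k for x in lifespans) / len(lifespans)) ** (1.0 / k)

def mean_lifespan(scale, shape):
    """Mean of a Weibull(shape, scale): scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# Placeholder data: observed car lifespans (years), assumed fixed shape.
ages = [8.0, 11.0, 13.0, 15.0, 19.0]
scale = weibull_scale_mle(ages, shape=2.5)
avg = mean_lifespan(scale, shape=2.5)
```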

  9. A High Throughput Method for Estimating Mouth-Level Intake of Mainstream Cigarette Smoke

    PubMed Central

    Yan, Xizheng; Zhang, Liqin; Hearn, Bryan A.; Valentín-Blasini, Liza; Polzin, Gregory M.; Watson, Clifford H.

    2016-01-01

    Introduction We developed a high throughput method for estimating smokers’ mainstream smoke intake on a per-cigarette basis by analyzing discarded cigarette butts. This new method utilizes ultraviolet/visible (UV-Vis) spectrophotometric analysis of isopropanol-soluble smoke particulate matter extracted from discarded cigarette filters. Methods When measured under a wide range of smoking conditions for a given brand variant, smoking machine delivery of nicotine, benzene, polycyclic aromatic hydrocarbons, and tobacco-specific nitrosamines can be related to the overall filter extract absorbance at 360 nm. Once this relationship has been established, UV-Vis analysis of a discarded cigarette filter butt gives a quantitative measure of a smoker’s exposure to these analytes. Results The measured mainstream smoke constituents correlated closely (correlation coefficients from 0.9303 to 0.9941) with the filter extract absorbance. These high correlations held over a wide range of smoking conditions for 2R4F research cigarettes as well as popular domestic cigarette brands sold in the United States. Conclusions This low cost, high throughput method is suitable for high volume analyses (hundreds of samples per day) because UV-Vis spectrophotometry, rather than mass spectrometry, is used for the cigarette filter butt analysis. This method provides a stable and noninvasive means for estimating mouth-level delivery of many mainstream smoke constituents. The ability to gauge the mouth-level intake of harmful chemicals and total mainstream smoke for cigarette smokers in a natural setting on a cigarette-by-cigarette basis can provide insights on factors contributing to morbidity and mortality from cigarette smoking, as well as insights on strategies related to smoking cessation. PMID:25649054
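
    The calibration step described, relating machine-measured delivery to filter-extract absorbance at 360 nm, is in essence a per-brand regression; once fitted, a discarded butt's absorbance converts to an estimated mouth-level delivery. A minimal sketch with invented numbers (ordinary least squares; the actual method establishes the relationship per brand variant across smoking regimens):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented calibration data: extract absorbance at 360 nm vs.
# machine-smoked nicotine delivery (mg/cigarette).
absorbance = [0.10, 0.25, 0.40, 0.55, 0.70]
nicotine = [0.31, 0.78, 1.22, 1.70, 2.15]
slope, intercept = fit_line(absorbance, nicotine)

def estimate_delivery(a):
    """Mouth-level delivery estimated from a butt's absorbance."""
    return slope * a + intercept
```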

  10. COMPARISON OF METHODS FOR ESTIMATING GROUND-WATER PUMPAGE FOR IRRIGATION.

    USGS Publications Warehouse

    Frenzel, Steven A.

    1985-01-01

    Ground-water pumpage for irrigation was measured at 32 sites on the eastern Snake River Plain in southern Idaho during 1983. Pumpage at these sites also was estimated by three commonly used methods, and pumpage estimates were compared to measured values to determine the accuracy of each estimate. Statistical comparisons of estimated and metered pumpage using an F-test showed that only estimates made using the instantaneous discharge method were not significantly different (α = 0.01) from metered values. Pumpage estimates made using the power consumption method reflect variability in pumping efficiency among sites. Pumpage estimates made using the crop-consumptive use method reflect variability in water-management practices. Pumpage estimates made using the instantaneous discharge method reflect variability in discharges at each site during the irrigation season.
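
    Of the three methods, the instantaneous discharge method is the simplest to state: commonly, a one-time discharge measurement is multiplied by the pump's total operating time for the season. A sketch under that reading (the conversion uses 1 ft³/s for 1 hour = 3,600 ft³ and 1 acre-ft = 43,560 ft³; this is an illustration, not the report's procedure verbatim):

```python
def pumpage_acre_feet(discharge_cfs, hours_operated):
    """Instantaneous-discharge estimate of seasonal pumpage:
    measured discharge (ft^3/s) times hours of pump operation,
    converted to acre-feet (1 acre-ft = 43,560 ft^3)."""
    return discharge_cfs * hours_operated * 3600.0 / 43560.0

# e.g. a well measured at 2.5 ft^3/s that ran 1,200 hours
season_total = pumpage_acre_feet(2.5, 1200.0)
```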

  11. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  12. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    NASA Astrophysics Data System (ADS)

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
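
    The fast first-order schemes mentioned here (ISTA/FISTA-type iterations) rely on the ℓ1/ℓ2 mixed-norm having a closed-form proximal operator: group soft-thresholding, which shrinks whole source time courses and zeroes the weak ones. A minimal NumPy sketch of that single ingredient (not the authors' implementation):

```python
import numpy as np

def prox_l21(X, alpha):
    """Proximal operator of alpha * sum over rows of ||row||_2
    (group soft-thresholding). Rows are sources, columns are time
    points: each row shrinks as a unit, and rows with norm below
    alpha are zeroed, which is what promotes spatially focal sources."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - alpha / np.maximum(norms, 1e-12))
    return X * scale

X = np.array([[3.0, 4.0],    # row norm 5   -> shrunk, kept
              [0.5, 0.5]])   # row norm < 2 -> zeroed
Y = prox_l21(X, alpha=2.0)
```

    Inside ISTA/FISTA this operator is applied after each gradient step on the data-fit term; its closed form is what makes the ℓ1/ℓ2 prior nearly as cheap per iteration as plain MNE.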

  13. A Practical Method of Policy Analysis by Estimating Effect Size

    ERIC Educational Resources Information Center

    Phelps, James L.

    2011-01-01

    The previous articles on class size and other productivity research paint a complex and confusing picture of the relationship between policy variables and student achievement. Missing is a conceptual scheme capable of combining the seemingly unrelated research and dissimilar estimates of effect size into a unified structure for policy analysis and…

  14. Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations

    ERIC Educational Resources Information Center

    Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.

    2016-01-01

    Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…

  15. Assessment of in silico methods to estimate aquatic species sensitivity

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...

  16. Estimation method for national methane emission from solid waste landfills

    NASA Astrophysics Data System (ADS)

    Kumar, Sunil; Gaikwad, S. A.; Shekdar, A. V.; Kshirsagar, P. S.; Singh, R. N.

    In keeping with global efforts to inventory methane emissions, municipal solid waste (MSW) landfills are recognised as one of the major sources of anthropogenic methane. In India, most solid waste is disposed of by landfilling in low-lying areas in and around urban centres, resulting in the generation of large quantities of biogas containing a sizeable proportion of methane. After a critical review of the literature on methodologies for estimating methane emissions, the default methodology of the 1996 IPCC guidelines was used for estimation. However, because the default methodology assumes that all potential methane is emitted in the year of waste deposition, a triangular model for landfill biogas is proposed and the results are compared. The proposed triangular model for methane emissions from landfills is more realistic and can equally well be used for estimation on a global basis. Methane emissions from MSW landfills were estimated for the years 1980 to 1999 and could be used in computing national inventories of methane emission.
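
    The triangular model's essential move is to spread each deposition year's methane potential over subsequent years rather than booking it all in the year of deposition. A schematic sketch of that bookkeeping; the rise/fall lengths and potentials below are placeholders, not calibrated values:

```python
def triangular_weights(rise, fall):
    """Yearly release fractions: linear ramp up over `rise` years to a
    peak, then linear decline over `fall` years; normalized to sum to 1."""
    shape = [(t + 1) / rise for t in range(rise)]
    shape += [1.0 - (t + 1) / fall for t in range(fall)]
    total = sum(shape)
    return [s / total for s in shape]

def yearly_emissions(potential_by_year, weights):
    """Convolve each deposition year's methane potential with the
    release profile instead of emitting it all in the deposition year."""
    out = [0.0] * (len(potential_by_year) + len(weights) - 1)
    for y, p in enumerate(potential_by_year):
        for k, w in enumerate(weights):
            out[y + k] += p * w
    return out

# Placeholder potentials (Gg CH4) for waste landfilled in successive years.
emissions = yearly_emissions([120.0, 130.0, 140.0], triangular_weights(3, 5))
```

    Because the weights sum to one, the total emitted over the horizon equals the total potential; only its timing changes relative to the IPCC default.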

  17. Methods to explain genomic estimates of breeding value

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic markers allow animal breeders to locate, estimate, and trace inheritance of many unknown genes that affect quantitative traits. Traditional models use pedigree data to compute expected proportions of genes identical by descent (assumed the same for all traits). Newer genomic models use thous...

  18. Application of two methods of calculation of solvation descriptor L to estimate C5-C7 alkenes retention.

    PubMed

    Jirkal, Štěpán; Ševčík, Jiří G K

    2015-07-01

    The solvation descriptor L for 59 isomers of all C5-C7 alkenes was calculated using two methods based on additive contributions of particular fragments in the molecule: the method of Havelec and Ševčík and the method of Platts and Butina. These descriptors were used to estimate the gas chromatography retention of alkenes on squalane and polydimethylsiloxane stationary phases. The retention was described better by the Platts-Butina method. Modification of the Havelec-Ševčík method by omitting the contribution for interaction of the cis isomers led to a substantial improvement in the estimation ability of the model. The modified Havelec-Ševčík method was found to be preferable for estimation of the descriptor L compared to the Platts-Butina method. A more comprehensive description of the retention of alkenes was achieved by inclusion of an additional descriptor E. This model with the descriptors L and E yielded better estimation for alkenes compared to the model with a single descriptor.
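
    Both calculation methods compared here are additive fragment (group-contribution) schemes, so an estimate of L is a base value plus a sum of per-fragment contributions. A schematic sketch with invented fragment names and values (NOT the published Havelec-Ševčík or Platts-Butina contributions):

```python
def estimate_descriptor(fragment_counts, contributions, base=0.0):
    """Additive group-contribution estimate: base value plus each
    fragment's contribution times its occurrence count."""
    return base + sum(contributions[f] * n for f, n in fragment_counts.items())

# Invented contributions for a hexene-like sketch molecule.
contribs = {"CH3": 0.50, "CH2": 0.49, "CH=CH": 0.80}
L_est = estimate_descriptor({"CH3": 2, "CH2": 2, "CH=CH": 1}, contribs)
```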

  19. Preparation of mesoporous silica thin films by photocalcination method and their adsorption abilities for various proteins.

    PubMed

    Kato, Katsuya; Nakamura, Hitomi; Yamauchi, Yoshihiro; Nakanishi, Kazuma; Tomita, Masahiro

    2014-07-01

    Mesoporous silica (MPS) thin film biosensor platforms were established. MPS thin films were prepared from tetraethoxysilane (TEOS) via sol-gel and spin-coating methods using a poly-(ethylene oxide)-block-poly-(propylene oxide)-block-poly-(ethylene oxide) triblock polymer, such as P123 ((EO)20(PO)70(EO)20) or F127 ((EO)106(PO)70(EO)106), as the structure-directing agent. The MPS thin film prepared using P123 as the mesoporous template and treated via vacuum ultraviolet (VUV) irradiation to remove the triblock copolymer had a more uniform pore array than that of the corresponding film prepared via thermal treatment. Protein adsorption and enzyme-linked immunosorbent assay (ELISA) on the synthesized MPS thin films were also investigated. VUV-irradiated MPS thin films adsorbed a smaller quantity of protein A than the thermally treated films; however, the human immunoglobulin G (IgG) binding efficiency was higher on the former. In addition, protein A-IgG specific binding on MPS thin films was achieved without using a blocking reagent; i.e., nonspecific adsorption was inhibited by the uniform pore arrays of the films. Furthermore, VUV-irradiated MPS thin films exhibited high sensitivity for ELISA testing, and cytochrome c adsorbed on the MPS thin films exhibited high catalytic activity and recyclability. These results suggest that MPS thin films are attractive platforms for the development of novel biosensors. PMID:24857463

  20. A method to confer Protein L binding ability to any antibody fragment.

    PubMed

    Lakhrif, Zineb; Pugnière, Martine; Henriquet, Corinne; di Tommaso, Anne; Dimier-Poisson, Isabelle; Billiald, Philippe; Juste, Matthieu O; Aubrey, Nicolas

    2016-01-01

    Recombinant antibody single-chain variable fragments (scFv) are difficult to purify homogeneously from a protein complex mixture. The most effective, specific and fastest method of purification is affinity chromatography on a Protein L (PpL) matrix. This protein is a multi-domain bacterial surface protein that is able to interact with conformational patterns on kappa light chains. It mainly recognizes amino acid residues located at the VL FR1 and some residues in the variable and constant (CL) domain. Not all kappa chains are recognized, however, and the lack of CL can reduce the interaction. From an scFv composed of IGKV10-94 according to IMGT®, it is possible, with several mutations, to transfer the motif from the IGKV12-46 naturally recognized by the PpL, and, with the single mutation T8P, to confer PpL recognition with a higher affinity. A second mutation S24R greatly improves the affinity, in particular by modifying the dissociation rate (kd). The equilibrium dissociation constant (KD) was measured at 7.2 × 10⁻¹¹ M by surface plasmon resonance. It was possible to confer PpL recognition to all kappa chains. This protein interaction can be modulated according to the characteristics of scFv (e.g., stability) and their use with conjugated PpL. This work could be extrapolated to recombinant monoclonal antibodies, and offers an alternative for protein A purification and detection. PMID:26683650

  1. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    Wang, Tianyou; Brennan, Robert L.

    2009-01-01

    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  2. Etalon-photometric method for estimation of tissues density at x-ray images

    NASA Astrophysics Data System (ADS)

    Buldakov, Nicolay S.; Buldakova, Tatyana I.; Suyatinov, Sergey I.

    2016-04-01

    The etalon-photometric method for quantitative estimation of the physical density of pathological entities is considered. The method consists in using an etalon (reference standard) during the registration and estimation of the photometric characteristics of objects. An algorithm for estimating physical density from X-ray images is offered.

  3. Testing the ability of a proposed geotechnical based method to evaluate the liquefaction potential analysis subjected to earthquake vibrations

    NASA Astrophysics Data System (ADS)

    Abbaszadeh Shahri, A.; Behzadafshar, K.; Esfandiyari, B.; Rajablou, R.

    2010-12-01

    During earthquakes, a number of earth dams have suffered severe damage or major displacements as a result of liquefaction; modeling with computer codes can therefore provide a reliable tool for predicting the response of a dam foundation to earthquakes. Such models can be used in the design of new dams or in safety assessments of existing ones. In this paper, on the basis of field and laboratory tests and by combining several software packages, a seismic geotechnical analysis procedure is proposed and verified by comparison with computer model tests and field and laboratory experience. Verification and validation of the analyses rely on the capabilities of the applied computer codes. Using the Silakhor earthquake (2006, Ms 6.1), and in order to check the efficiency of the proposed framework, the procedure is applied to the Korzan earth dam in Hamedan Province, Iran, to analyze and estimate liquefaction and the safety factor. The critical point of this study is the design and development by the authors of a computer code named “Abbas Converter”, with a graphical user interface, which acts as a logical connector that computes and models the soil profiles; the results confirm the ability of the generated code to evaluate soil behavior under earthquake excitations. The code also facilitates such studies more than previous approaches and overcomes the problems they encountered.

  4. Pain from the life cycle perspective: Evaluation and Measurement through psychophysical methods of category estimation and magnitude estimation 1

    PubMed Central

    Sousa, Fátima Aparecida Emm Faleiros; da Silva, Talita de Cássia Raminelli; Siqueira, Hilze Benigno de Oliveira Moura; Saltareli, Simone; Gomez, Rodrigo Ramon Falconi; Hortense, Priscilla

    2016-01-01

    Abstract Objective: to describe acute and chronic pain from the perspective of the life cycle. Methods: 861 people in pain participated. The Multidimensional Pain Evaluation Scale (MPES) was used. Results: in the category estimation method, the highest-ranked descriptor of chronic pain for children/adolescents was "Annoying" and for adults "Uncomfortable"; the highest descriptor of acute pain for children/adolescents was "Complicated" and for adults "Unbearable". In the magnitude estimation method, the highest descriptor of chronic pain was "Desperate" and of acute pain "Terrible". Conclusions: the MPES is a reliable scale that can be applied during different stages of development. PMID:27556875

  5. Quantitative estimation of poikilocytosis by the coherent optical method

    NASA Astrophysics Data System (ADS)

    Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.

    2000-05-01

    Investigation of the necessity and the required reliability of poikilocytosis determination in hematology has shown that existing techniques suffer from serious shortcomings. To determine the deviation of erythrocyte shape from the normal (rounded) form in blood smears, it is expedient to use an integrative estimate. An algorithm is suggested that is based on the correlation between erythrocyte morphological parameters and properties of the spatial-frequency spectrum of the blood smear. During analytical and experimental research, an integrative form parameter (IFP) was proposed that characterizes an increase of more than 5% in the relative concentration of cells with changed form, together with the predominating type of poikilocytes. An algorithm for statistically reliable estimation of the IFP on standard stained blood smears has been developed. To provide quantitative characterization of the morphological features of cells, a form vector has been proposed, and its validity for differentiating poikilocytes was shown.

  6. Evaluation of a method estimating real-time individual lysine requirements in two lines of growing-finishing pigs.

    PubMed

    Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J

    2015-04-01

    The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G:F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be
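
    A factorial requirement estimate of the kind described sums a maintenance term scaled to metabolic body weight and a growth term: the lysine deposited in body protein divided by the efficiency of lysine utilization. A schematic sketch only; every coefficient below is an illustrative placeholder, not a value from the study:

```python
def sid_lys_requirement(bw_kg, protein_dep_g_d,
                        k_maint=0.036, lys_frac=0.07, efficiency=0.72):
    """Factorial sketch of a daily SID lysine requirement (g/day):
    maintenance scaled to metabolic weight (BW^0.75) plus lysine
    deposited in body protein divided by utilization efficiency.
    All coefficients are illustrative placeholders."""
    maintenance = k_maint * bw_kg ** 0.75
    growth = protein_dep_g_d * lys_frac / efficiency
    return maintenance + growth

# e.g. a 50 kg pig depositing 150 g protein/day
req = sid_lys_requirement(bw_kg=50.0, protein_dep_g_d=150.0)
```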

  7. A Simple Echocardiographic Method To Estimate Pulmonary Vascular Resistance

    PubMed Central

    Opotowsky, Alexander R.; Clair, Mathieu; Afilalo, Jonathan; Landzberg, Michael J.; Waxman, Aaron B.; Moko, Lilamarie; Maron, Bradley; Vaidya, Anjali; Forfia, Paul R.

    2015-01-01

    Pulmonary hypertension comprises heterogeneous diagnoses with distinct hemodynamic pathophysiology. Identifying elevated pulmonary vascular resistance (PVR) is critical for appropriate treatment. We reviewed data for patients seen at referral PH clinics who underwent echocardiography and right heart catheterization within 1 year. We derived equations to estimate PVR based on the ratio of Doppler-estimated pulmonary artery (PA) systolic pressure (PASP) to the right ventricular outflow tract velocity-time integral (RVOT VTI). We validated these equations in a separate sample and compared them to a published model based on the ratio of transtricuspid flow velocity to RVOT VTI (Model 1, Abbas et al. 2003). The derived models were:

        Model 2: PVR = 1.2 × (PASP / RVOT VTI)

        Model 3: PVR = (PASP / RVOT VTI) + 3, if a mid-systolic notch is present

    The cohort included 217 patients with mean PA pressure = 45.3 ± 11.9 mmHg, PVR = 7.3 ± 5.0 WU, and PA wedge pressure = 14.8 ± 8.1 mmHg; just over one-third (35.5%) had PA wedge pressure > 15 mmHg, and 82.0% had PVR > 3 WU. Model 1 systematically underestimated PVR, especially at high PVR. The derived models showed no systematic bias. Model 3 correlated best with catheterization PVR (r = 0.80 vs. 0.73 and 0.77 for Models 1 and 2, respectively). Model 3 had superior discriminatory power for PVR > 3 WU (AUC = 0.946) and PVR > 5 WU (AUC = 0.924), though all models discriminated well. A Model 3 estimate of PVR > 3 was 98.3% sensitive and 61.1% specific for catheterization PVR > 3 WU (PPV = 93%; NPV = 88%). In conclusion, we present an equation to estimate PVR, using the ratio of Doppler PASP to RVOT VTI plus a constant designating the presence of RVOT VTI mid-systolic notching, which provides superior agreement with PVR across a wide range of values. PMID:23735649
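
    Written out, the two derived formulas are one-liners (PASP in mmHg, RVOT VTI in cm, PVR in Wood units). A direct sketch, not clinical software; the abstract does not spell out the no-notch case of Model 3, so it is assumed here to add nothing:

```python
def pvr_model2(pasp_mmhg, rvot_vti_cm):
    """Model 2: PVR (Wood units) = 1.2 * PASP / RVOT VTI."""
    return 1.2 * pasp_mmhg / rvot_vti_cm

def pvr_model3(pasp_mmhg, rvot_vti_cm, notch_present):
    """Model 3: PVR = PASP / RVOT VTI, plus 3 WU when mid-systolic
    notching of the RVOT Doppler envelope is present (assumed +0
    when absent, which the abstract does not state explicitly)."""
    return pasp_mmhg / rvot_vti_cm + (3.0 if notch_present else 0.0)
```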

  8. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations for estimating drought streamflow probabilities are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected between January 1, 1900 and May 16, 2012 were compiled and analyzed; station records spanned a minimum of 10 and a maximum of 112 years. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
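
    Each tabulated model has the standard logistic form, mapping a winter streamflow statistic to the probability that summer flow falls below a drought threshold. A sketch with hypothetical coefficients (the report publishes fitted values per basin, month, and threshold):

```python
import math

def drought_probability(winter_flow, b0, b1):
    """Logistic model: P(summer flow below a drought threshold) as a
    function of a winter streamflow statistic. Coefficients b0, b1
    are hypothetical; fitted values vary by basin and month."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * winter_flow)))

# Higher winter flow -> lower drought probability (negative slope).
p_low_flow = drought_probability(5.0, 2.0, -0.5)
p_high_flow = drought_probability(20.0, 2.0, -0.5)
```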

  9. Comparison of Two Parametric Methods to Estimate Pesticide Mass Loads in California's Central Valley

    USGS Publications Warehouse

    Saleh, D.K.; Lorenz, D.L.; Domagalski, J.L.

    2011-01-01

    Mass loadings were calculated for four pesticides in two watersheds with different land uses in the Central Valley, California, by using two parametric models: (1) the Seasonal Wave model (SeaWave), in which a pulse signal is used to describe the annual cycle of pesticide occurrence in a stream, and (2) the Sine Wave model, in which first-order Fourier series sine and cosine terms are used to simulate seasonal mass loading patterns. The models were applied to data collected during water years 1997 through 2005. The pesticides modeled were carbaryl, diazinon, metolachlor, and molinate. Results from the two models show that the ability to capture seasonal variations in pesticide concentrations was affected by pesticide use patterns and the methods by which pesticides are transported to streams. Estimated seasonal loads compared well with results from previous studies for both models. Loads estimated by the two models did not differ significantly from each other, with the exceptions of carbaryl and molinate during the precipitation season, where loads were affected by application patterns and rainfall. However, in watersheds with variable and intermittent pesticide applications, the SeaWave model is more suitable for use on the basis of its robust capability of describing seasonal variation of pesticide concentrations. © 2010 American Water Resources Association. This article is a US Government work and is in the public domain in the USA.
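The Sine Wave model's first-order Fourier series terms can be illustrated with an ordinary-least-squares fit to synthetic data; the setup below is a hedged sketch of that functional form, not the authors' exact formulation:

```python
import numpy as np

def sine_wave_design(decimal_time):
    """Design matrix with first-order Fourier terms for an annual cycle:
    intercept, sin(2*pi*t), cos(2*pi*t), with t in decimal years."""
    t = np.asarray(decimal_time)
    return np.column_stack([np.ones_like(t),
                            np.sin(2 * np.pi * t),
                            np.cos(2 * np.pi * t)])

# Fit to synthetic (time, log-load) pairs spanning three years
t = np.linspace(0, 3, 120)
y = 1.0 + 0.8 * np.sin(2 * np.pi * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
coef, *_ = np.linalg.lstsq(sine_wave_design(t), y, rcond=None)
print(coef)  # ~[1.0, 0.8, 0.0]
```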

  11. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
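Equation-error estimation reduces to linear least squares once the states and state derivatives are treated as measured quantities. The toy rolling-moment example below (all parameter values and the model structure are illustrative, not from the F-16 simulation) shows the basic setup:

```python
import numpy as np

# Synthetic "flight data": pdot = Lp * p + Lda * da + noise
rng = np.random.default_rng(1)
n = 500
p = rng.uniform(-1, 1, n)          # roll rate
da = rng.uniform(-0.2, 0.2, n)     # aileron deflection
Lp_true, Lda_true = -2.0, 30.0
pdot = Lp_true * p + Lda_true * da + 0.05 * rng.normal(size=n)

# Equation error: regress the measured derivative on the regressors
X = np.column_stack([p, da])
theta, *_ = np.linalg.lstsq(X, pdot, rcond=None)
print(theta)  # close to [-2.0, 30.0]
```

In practice the dependent variable is a numerically differentiated noisy time series, and noise on the regressors biases such estimates, which is among the practical issues the paper examines.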

  12. Full 3-D transverse oscillations: a method for tissue motion estimation.

    PubMed

    Salles, Sebastien; Liebgott, Hervé; Garcia, Damien; Vray, Didier

    2015-08-01

    We present a new method to estimate 4-D (3-D + time) tissue motion. The method combines 3-D phase-based motion estimation with an unconventional beamforming strategy. The beamforming technique allows us to obtain full 3-D RF volumes with axial, lateral, and elevation modulations. Based on these images, we propose a method to estimate 3-D motion that uses phase images instead of amplitude images. First, volumes featuring 3-D oscillations are created using only a single apodization function, and then the 3-D displacement between two consecutive volumes is estimated simultaneously by applying this 3-D estimation. The validity of the method is investigated through simulations and phantom experiments. The results are compared with those obtained with two other conventional estimation methods: block matching and optical flow. The results show that the proposed method outperforms the conventional methods, especially in the transverse directions.

  13. An Estimation Method of Waiting Time for Health Service at Hospital by Using a Portable RFID and Robust Estimation

    NASA Astrophysics Data System (ADS)

    Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki

    Patients who receive a health service from a doctor have to wait a long time at many hospitals. According to patient questionnaires, long waiting time is the worst factor in patients' dissatisfaction with hospital service. The present paper describes a method of estimating the waiting time for each patient without an electronic medical chart system. The method applies a portable RFID system to data acquisition and robust estimation of the probability distribution of health service and examination times by the doctor, for highly accurate waiting time estimation. We carried out data acquisition during health services at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used as a data acquisition system in various fields such as marketing services, entertainment, or human behavior measurement.

  14. A Five-Parameter Wind Field Estimation Method Based on Spherical Upwind Lidar Measurements

    NASA Astrophysics Data System (ADS)

    Kapp, S.; Kühn, M.

    2014-12-01

    Turbine-mounted scanning lidar systems of the focussed continuous-wave type are taken into consideration to sense approaching wind fields. The quality of wind information depends on the lidar technology itself but also substantially on the scanning technique and reconstruction algorithm. In this paper a five-parameter wind field model comprising mean wind speed, vertical and horizontal linear shear, and homogeneous direction angles is introduced. A corresponding parameter estimation method is developed based on the assumption of upwind lidar measurements scanned over spherical segments. As a main advantage of this method, all parameters relevant to wind turbine control can be provided. Moreover, the ability to distinguish between shear and skew potentially increases the quality of the resulting feedforward pitch angles when compared to three-parameter methods. It is shown that at least three measurements, each in turn from two independent directions, are necessary for the application of the algorithm, whereas simpler measurements, each taken from only one direction, are not sufficient.

  15. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio

    PubMed Central

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficiently the sampling methods commonly used in biodiversity surveys estimate the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population’s sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex-ratio-related patterns. PMID:27441554

  16. Simple and robust baseline estimation method for multichannel SAR-GMTI systems

    NASA Astrophysics Data System (ADS)

    Chen, Zhao-Yan; Wang, Tong; Ma, Nan

    2016-07-01

    In this paper, the authors propose an approach for estimating the effective baseline for the ground moving target indication (GMTI) mode of synthetic aperture radar (SAR) that differs from previous work. The authors show that the new method leads to a simpler and more robust baseline estimate. The method employs a baseline search operation in which the degree of coherence (DOC) serves as a metric to judge whether the optimum baseline estimate has been obtained. The rationale behind this method is that the more accurate the baseline estimate, the higher the coherence of the two channels after co-registering with the estimated baseline value. The merits of the proposed method are twofold: it is simple to design and robust to Doppler centroid estimation error. The effectiveness of the method is demonstrated with real SAR data.

  17. Researching children's individual empathic abilities in the context of their daily lives: the importance of mixed methods

    PubMed Central

    Roerig, Simone; van Wesel, Floryt; Evers, Sandra J. T. M.; Krabbendam, Lydia

    2015-01-01

    In social neuroscience, empathy is often approached as an individual ability, whereas researchers in anthropology focus on empathy as a dialectic process between agents. In this perspective paper, we argue that to further elucidate the mechanisms underlying the development of empathy, social neuroscience research should draw on insights and methods from anthropology. First, we discuss neuropsychological studies that investigate empathy in inter-relational contexts. Second, we highlight differences between the social neuroscience and anthropological conceptualizations of empathy. Third, we introduce a new study design based on a mixed method approach, and present initial results from one classroom that was part of a larger study and included 28 children (m = 13, f = 15). Participants (aged 9–11) were administered behavioral tasks and a social network questionnaire; in addition an observational study was also conducted over a period of 3 months. Initial results showed how children's expressions of their empathic abilities were influenced by situational cues in classroom processes. This effect was further explained by children's positions within classroom networks. Our results emphasize the value of interdisciplinary research in the study of empathy. PMID:26283901

  18. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the

  19. Numerical method for estimating the size of chaotic regions of phase space

    SciTech Connect

    Henyey, F.S.; Pomphrey, N.

    1987-10-01

    A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs. (LSP)
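In sketch form, the estimator is a return-time-weighted sum over the irregular cells of the surface of section; the inputs below are hypothetical, and the discretization into cells is an assumption of this illustration:

```python
def irregular_volume(cell_areas, return_times):
    """Estimate the irregular phase-space volume as the sum of irregular
    surface-of-section cell areas, each weighted by the average return
    time of trajectories through that cell."""
    return sum(a * t for a, t in zip(cell_areas, return_times))

# Two irregular cells with areas 0.1 and 0.2 and mean return times 2 and 3
vol = irregular_volume([0.1, 0.2], [2.0, 3.0])
```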

  20. Semi-quantitative method to estimate levels of Campylobacter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor and material intensive requiring either serial dilutions or MPN ...

  1. A history-based method to estimate animal preference.

    PubMed

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213

  3. Method of estimating pulse response using an impedance spectrum

    DOEpatents

    Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G

    2014-10-21

    Electrochemical Impedance Spectrum data are used to predict pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave with a period greater than or equal to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series are selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time domain estimate of the voltage using the Fourier series analysis.
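The procedure as summarized can be sketched with NumPy: take the Fourier coefficients of a current square wave, multiply each by the impedance at the corresponding harmonic, and resum to a time-domain voltage estimate. The purely resistive impedance spectrum below is a synthetic stand-in for measured data:

```python
import numpy as np

def pulse_voltage_response(i_amp, period, z_of_f, n_harmonics, t):
    """Voltage response to a current square wave of amplitude i_amp and
    the given period, via Fourier coefficients multiplied by the complex
    impedance Z(f). z_of_f is a callable returning Z at frequency f (Hz)."""
    v = np.zeros_like(t, dtype=float)
    for k in range(1, n_harmonics + 1, 2):   # square wave: odd harmonics only
        f_k = k / period
        c_k = 4 * i_amp / (np.pi * k)        # square-wave Fourier amplitude
        z_k = z_of_f(f_k)
        v += np.abs(z_k) * c_k * np.sin(2 * np.pi * f_k * t + np.angle(z_k))
    return v

# Resistive check: response should be R * I (a 0.1 V square wave)
z = lambda f: 0.05 + 0j                      # 50 mOhm, frequency-independent
t = np.linspace(0, 10, 1000)
v = pulse_voltage_response(2.0, 10.0, z, 99, t)
```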

  4. Self- and other-estimates of multiple abilities in Britain and Turkey: a cross-cultural comparison of subjective ratings of intelligence.

    PubMed

    Furnham, Adrian; Arteche, Adriane; Chamorro-Premuzic, Tomas; Keser, Askin; Swami, Viren

    2009-12-01

    This study is part of a programmatic research effort into the determinants of self-assessed abilities. It examined cross-cultural differences in beliefs about intelligence and self- and other-estimated intelligence in two countries at extreme ends of the European continent. In all, 172 British and 272 Turkish students completed a three-part questionnaire where they estimated their parents', partners' and own multiple intelligences (Gardner (10) and Sternberg (3)). They also completed a measure of the 'big five' personality scales and rated six questions about intelligence. The British sample had more experience with IQ tests than the Turks. The majority of participants in both groups did not believe in sex differences in intelligence but did think there were race differences. They also believed that intelligence was primarily inherited. Participants rated their social and emotional intelligence highly (around one standard deviation above the norm). Results suggested that there were more cultural than sex differences in all the ratings, with various interactions mainly due to the British sample differentiating more between the sexes than the Turks. Males rated their overall, verbal, logical, spatial, creative and practical intelligence higher than females. Turks rated their musical, body-kinesthetic, interpersonal and intrapersonal intelligence as well as existential, naturalistic, emotional, creative, and practical intelligence higher than the British. There was evidence of participants rating their fathers' intelligence on most factors higher than their mothers'. Factor analysis of the ten Gardner intelligences yield two clear factors: cognitive and social intelligence. The first factor was impacted by sex but not culture; it was the other way round for the second factor. Regressions showed that five factors predicted overall estimates: sex (male), age (older), test experience (has done tests), extraversion (strong) and openness (strong). 
Results are discussed in

  6. Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.

    PubMed

    Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul

    2015-01-01

    Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the root mean square error to standard deviation ratio (RSR) methods were applied. The error evaluation results show that E1, E2, E3, and E4 sites were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration. PMID:25399119
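Method E1 (proton acidity plus acidity from total metal concentrations) is commonly written in CaCO3-equivalent form. The sketch below assumes the standard equivalence factors for Fe(II), Mn(II), and Al(III); the exact constants are not quoted in the abstract:

```python
def acidity_e1(pH, fe_mg_l, mn_mg_l, al_mg_l):
    """Calculated acidity (mg/L as CaCO3): proton acidity plus acidity
    from total Fe, Mn, and Al concentrations (mg/L). Assumes Fe as
    Fe(II) (2 eq/mol), Mn(II) (2 eq/mol), Al(III) (3 eq/mol), with
    atomic weights 55.85, 54.94, and 26.98 g/mol."""
    return 50.0 * (1000.0 * 10 ** (-pH)
                   + 2 * fe_mg_l / 55.85
                   + 2 * mn_mg_l / 54.94
                   + 3 * al_mg_l / 26.98)

print(round(acidity_e1(3.0, 20.0, 5.0, 10.0), 1))  # 150.5
```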

  8. Performance of different detrending methods in turbulent flux estimation

    NASA Astrophysics Data System (ADS)

    Donateo, Antonio; Cava, Daniela; Contini, Daniele

    2015-04-01

    The eddy covariance is the most direct, efficient and reliable method to measure the turbulent flux of a scalar (Baldocchi, 2003). Required conditions for high-quality eddy covariance measurements are, amongst others, stationarity of the measured data and fully developed turbulence. The simplest method for obtaining the fluctuating components for covariance calculation according to Reynolds averaging rules under ideal stationary conditions is the so-called mean removal method. However, steady-state conditions rarely exist in the atmosphere, because of the diurnal cycle, changes in meteorological conditions, or sensor drift. All these phenomena produce trends or low-frequency changes superimposed on the turbulent signal. Different methods for trend removal have been proposed in the literature; however, a general agreement on how to separate low-frequency perturbations from turbulence has not yet been reached. The most commonly applied methods are linear detrending (Gash and Culf, 1996) and the high-pass filter, namely the moving average (Moncrieff et al., 2004). Moreover, Vickers and Mahrt (2003) proposed a multiresolution decomposition method in order to select an appropriate time scale for mean removal as a function of atmospheric stability conditions. The present work investigates the performance of these different detrending methods in removing the low-frequency contribution to the turbulent flux calculation, including also a spectral filter by a Fourier decomposition of the time series. The different methods have been applied to the calculation of turbulent fluxes for different scalars (temperature, ultrafine particle number concentration, carbon dioxide and water vapour concentration). A comparison of the detrending methods will also be performed at different measurement sites, namely an urban site, a suburban area, and a remote area in Antarctica. 
Moreover the performance of the moving average in detrending time series has been analyzed as a function of the
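The two most commonly applied choices named above, linear detrending and a moving-average high-pass filter, can be sketched as follows (the window length in the example is an arbitrary assumption, not a recommended averaging scale):

```python
import numpy as np

def linear_detrend(x):
    """Remove a least-squares linear trend (cf. Gash and Culf, 1996)."""
    t = np.arange(x.size)
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def moving_average_detrend(x, window):
    """High-pass filter: subtract a centered moving average
    (cf. Moncrieff et al., 2004)."""
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")
    return x - trend

# Synthetic series: linear trend plus unit-variance "turbulence"
x = 0.1 * np.arange(100) + np.random.default_rng(0).normal(size=100)
fluct = linear_detrend(x)
```

The fluctuating components from either function would then enter the covariance with vertical velocity fluctuations to give the flux.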

  9. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  10. Comparative evaluation of two quantitative precipitation estimation methods in Korea

    NASA Astrophysics Data System (ADS)

    Ko, H.; Nam, K.; Jung, H.

    2013-12-01

    The spatial distribution and intensity of rainfall are necessary inputs for hydrological models, particularly grid-based distributed models. Weather radar offers much higher spatial resolution (1 km × 1 km) than rain gauges (~13 km), although radar measures rainfall indirectly while rain gauges observe it directly. Radar also provides areal, gridded rainfall information, whereas rain gauges provide point data. Therefore, radar rainfall data can be useful as input to hydrological models. In this study, we compared two QPE schemes for producing radar rainfall for hydrological use: 1) spatial adjustment and 2) real-time Z-R relationship adjustment (hereafter RAR; Radar-AWS Rain rate). We computed and analyzed statistics such as ME (mean error), RMSE (root mean square error), and correlation using cross-validation (here, the leave-one-out method).
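The leave-one-out evaluation described can be sketched as follows; the simple mean-bias adjustment used as the stand-in estimator is an assumption of this illustration, not the RAR or spatial-adjustment scheme itself:

```python
import numpy as np

def loocv_stats(radar, gauge):
    """Leave-one-out evaluation of radar QPE against rain gauges: drop
    each gauge in turn, adjust the radar estimate at that site using the
    remaining gauges (here, a simple mean-bias correction), and
    accumulate the errors into ME, RMSE, and correlation."""
    radar, gauge = np.asarray(radar, float), np.asarray(gauge, float)
    errors = []
    for i in range(gauge.size):
        mask = np.arange(gauge.size) != i
        bias = (gauge[mask] - radar[mask]).mean()  # stand-in adjustment
        errors.append((radar[i] + bias) - gauge[i])
    errors = np.array(errors)
    me = errors.mean()                       # mean error
    rmse = np.sqrt((errors ** 2).mean())     # root mean square error
    corr = np.corrcoef(radar, gauge)[0, 1]   # correlation
    return me, rmse, corr

me, rmse, corr = loocv_stats([1.0, 2.0, 3.0, 4.0], [1.2, 2.1, 3.3, 4.0])
```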

  11. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we introduce the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful tool for predicting the compressive strength of carbon nanotubes.

  12. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  13. A robust and efficient method for estimating enzyme complex abundance and metabolic flux from expression data.

    PubMed

    Barker, Brandon E; Sadagopan, Narayanan; Wang, Yiping; Smallbone, Kieran; Myers, Christopher R; Xi, Hongwei; Locasale, Jason W; Gu, Zhenglong

    2015-12-01

    A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is to increase predictive capability and improve our understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model, and does not provide an objective function for the estimation of fluxes. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. In order to use such a method, accurate measures of enzyme complex abundance are needed, so we first present an algorithm that addresses quantification of complex abundance. Our extensions to prior techniques include the capability to work with large models and significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation, the ability to handle large enzyme complex rules that may incorporate multiple isoforms, and either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS, and can be downloaded from: https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available. FALCON requires use of the COBRA Toolbox, also implemented in MATLAB. PMID:26381164

  14. A method for estimating both the solubility parameters and molar volumes of liquids

    NASA Technical Reports Server (NTRS)

    Fedors, R. F.

    1974-01-01

    Development of an indirect method of estimating the solubility parameter of high molecular weight polymers. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.

  15. Effects of Vertical Scaling Methods on Linear Growth Estimation

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Zhao, Yu

    2012-01-01

    Vertical scaling is necessary to facilitate comparison of scores from test forms of different difficulty levels. It is widely used to enable the tracking of student growth in academic performance over time. Most previous studies on vertical scaling methods assume relatively long tests and large samples. Little is known about their performance when…

  16. Fourier methods for estimating power system stability limits

    SciTech Connect

    Marceau, R.J.; Galiana, F.D. (Dept. of Electrical Engineering); Mailhot, R.; Denomme, F.; McGillis, D.T.

    1994-05-01

    This paper shows how the use of new generation tools such as a generalized shell for dynamic security analysis can help improve the understanding of fundamental power systems behavior. Using the ELISA prototype shell as a laboratory tool, it is shown that the signal energy of the network impulse response acts as a barometer to define the relative severity of a contingency with respect to some parameter, for instance power generation or power transfer. In addition, for a given contingency, as the parameter is varied and a network approaches instability, signal energy increases smoothly and predictably towards an asymptote which defines the network's stability limit: this, in turn, permits comparison of the severity of different contingencies. Using a Fourier transform approach, it is shown that this behavior can be explained in terms of the effect of increasing power on the damping component of a power system's dominant poles. A simple function is derived which estimates network stability limits with surprising accuracy from two or three simulations, provided that at least one of these is within 5% of the limit. These results hold notwithstanding the presence of many active, nonlinear voltage-support elements (i.e. generators, synchronous condensers, SVCs, static excitation systems, etc.) in the network.
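The growth of impulse-response signal energy toward an asymptote as a network approaches instability can be checked on a toy single-mode system. The sketch below (an illustration of the general idea, not the ELISA shell or a power-system model) integrates the damped response y(t) = exp(-σt)·sin(ωt) numerically and compares it to the closed-form energy, which blows up like 1/(4σ) as the damping σ of the dominant pole goes to zero.

```python
import numpy as np

def impulse_energy(sigma, omega=2 * np.pi, t_end=200.0, dt=1e-3):
    """Numerical signal energy of the impulse response y(t) = exp(-sigma*t) * sin(omega*t)."""
    t = np.arange(0.0, t_end, dt)
    y = np.exp(-sigma * t) * np.sin(omega * t)
    return float(np.sum(y ** 2) * dt)  # Riemann-sum approximation of integral of y^2

def analytic_energy(sigma, omega=2 * np.pi):
    """Closed form: E = 1/(4*sigma) - sigma / (4*(sigma^2 + omega^2))."""
    return 1.0 / (4 * sigma) - sigma / (4 * (sigma ** 2 + omega ** 2))

# As damping sigma decreases (dominant pole approaching the imaginary axis),
# the signal energy grows smoothly toward the 1/(4*sigma) asymptote.
for sigma in (0.5, 0.1, 0.02):
    print(sigma, impulse_energy(sigma), analytic_energy(sigma))
```

This is the mechanism the abstract describes: reading the energy growth along a parameter sweep lets one extrapolate where the damping vanishes, i.e. the stability limit.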

  17. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
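To make the near-singularity point concrete, here is a minimal sketch (synthetic data, not from the paper) of one of the biased estimators discussed, ridge regression: when two regressors are nearly collinear the normal equations are ill-conditioned, individual OLS coefficients become unstable, yet the estimable function beta1 + beta2 is recovered well, and the ridge constraint stabilizes the full parameter vector.

```python
import numpy as np

rng = np.random.default_rng(7)

# Nearly collinear design: x2 is x1 plus tiny noise, so X'X is close to
# singular and the OLS normal equations are ill-conditioned.
n = 100
x1 = rng.standard_normal(n)
x2 = x1 + 1e-4 * rng.standard_normal(n)
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

XtX = X.T @ X
ols = np.linalg.solve(XtX, X.T @ y)                      # ordinary least squares
ridge = np.linalg.solve(XtX + 0.1 * np.eye(2), X.T @ y)  # biased (ridge) estimator

print("OLS  :", ols)    # individual coefficients are unstable
print("Ridge:", ridge)  # constrained, shrunk toward a stable solution
print("sum OLS:", ols.sum(), " sum ridge:", ridge.sum())  # estimable function beta1+beta2
```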

  18. Comparing the estimation methods of stable distributions with respect to robustness properties

    NASA Astrophysics Data System (ADS)

    Celik, Nuri; Erden, Samet; Sarikaya, M. Zeki

    2016-04-01

    In statistical applications, some data sets may exhibit features such as high skewness, high kurtosis, and heavy tails that are incompatible with the normality assumption, especially in finance and engineering. For this reason, modeling such data sets with α-stable distributions is a reasonable approach. The stable distributions have four parameters. In the literature, estimation methods have been studied for estimating these unknown model parameters. In this study, we give brief information about these proposed estimation methods and compare the estimators with respect to their robustness properties in a comprehensive simulation study, since the robustness of an estimator is an important criterion for appropriate modeling.
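As a small illustration of why robustness matters for heavy-tailed data (this is a generic demonstration, not the specific stable-parameter estimators compared in the record), note that the Cauchy distribution is the α-stable case with α = 1: its sample mean never converges, while the sample median remains a stable estimate of the location parameter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Cauchy = alpha-stable with alpha = 1 (heavy tails, no finite mean);
# standard normal for comparison.
cauchy = rng.standard_cauchy(10000)
normal = rng.standard_normal(10000)

# The median is a robust location estimator; the mean is not under heavy tails.
print("Cauchy  mean:", np.mean(cauchy), " median:", np.median(cauchy))
print("Normal  mean:", np.mean(normal), " median:", np.median(normal))
```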

  19. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, Jan G. P. W.; Camps-Valls, Gustau; Moreno, José

    2015-10-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC), collected at the agricultural site of Barrax (Spain), was used to evaluate different retrieval methods on their ability to estimate leaf area index (LAI). With regard to parametric methods, all possible band combinations for several two-band and three-band index formulations and a linear regression fitting function have been evaluated. From a set of over ten thousand indices evaluated, the best performing one was an optimized three-band combination according to (ρ560 - ρ1610 - ρ2190)/(ρ560 + ρ1610 + ρ2190), with a 10-fold cross-validation R²_CV of 0.82 (RMSE_CV: 0.62). This family of methods excels for its fast processing speed, e.g., 0.05 s to calibrate and validate the regression function, and 3.8 s to map a simulated S2 image. With regard to non-parametric methods, 11 machine learning regression algorithms (MLRAs) have been evaluated. This methodological family has the advantage of making use of the full optical spectrum as well as flexible, nonlinear fitting. In particular, kernel-based MLRAs led to excellent results, with variational heteroscedastic (VH) Gaussian Processes regression (GPR) as the best performing method, with an R²_CV of 0.90 (RMSE_CV: 0.44). Additionally, the model is trained and validated relatively fast (1.70 s) and the processed image (taking 73.88 s) includes associated uncertainty estimates. More challenging is the inversion of a PROSAIL-based radiative transfer model (RTM). After the generation of a look-up table (LUT), a multitude of cost functions and regularization options were evaluated. The best performing cost function is Pearson's χ-square. It led to an R² of 0.74 (RMSE: 0.80) against the validation dataset. While its validation went fast
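The parametric branch of this comparison reduces to a very simple recipe: compute a normalized band index, then fit a linear regression against the field-measured LAI. The sketch below reproduces that recipe with the three-band index quoted in the record, on entirely synthetic reflectances (the SPARC band-LAI relationships assumed here are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic canopy reflectances (rho) at 560, 1610, 2190 nm for 50 plots,
# loosely mimicking green reflectance rising and SWIR falling with LAI.
lai = rng.uniform(0.5, 6.0, 50)                          # "field-measured" LAI
rho560  = 0.10 + 0.02 * lai + rng.normal(0, 0.004, 50)
rho1610 = 0.30 - 0.03 * lai + rng.normal(0, 0.004, 50)
rho2190 = 0.20 - 0.02 * lai + rng.normal(0, 0.004, 50)

# Optimized three-band index from the abstract
index = (rho560 - rho1610 - rho2190) / (rho560 + rho1610 + rho2190)

# Linear regression fitting function: LAI ~ index
slope, intercept = np.polyfit(index, lai, 1)
pred = slope * index + intercept
r2 = 1 - np.sum((lai - pred) ** 2) / np.sum((lai - np.mean(lai)) ** 2)
rmse = np.sqrt(np.mean((lai - pred) ** 2))
print(f"R2={r2:.2f}  RMSE={rmse:.2f}")
```

The calibrate-and-predict step is a single `polyfit`, which is why this family of methods is orders of magnitude faster than machine-learning regression or LUT-based RTM inversion.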

  20. Feasible methods to estimate disease based price indexes.

    PubMed

    Bradley, Ralph

    2013-05-01

    There is a consensus that statistical agencies should report medical data by disease rather than by service. This study computes price indexes that are necessary to deflate nominal disease expenditures and to decompose their growth into price, treated prevalence and output per patient growth. Unlike previous studies, it uses methods that can be implemented by the Bureau of Labor Statistics (BLS). For the calendar years 2005-2010, I find that these feasible disease based indexes are approximately 1% lower on an annual basis than indexes computed by current methods at BLS. This gives evidence that traditional medical price indexes have not accounted for the more efficient use of medical inputs in treating most diseases.

  1. A TRMM Rainfall Estimation Method Applicable to Land Areas

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Oki, R.; Weinman, J. A.

    1998-01-01

    Utilizing multi-spectral, dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer measurements, we have developed in this study a method to retrieve average rain rate, R(sub f(sub R)), in a mesoscale grid box of 2deg x 3deg over land. The key parameter of this method is the fractional rain area, f(sub R), in that grid box, which is determined with the help of a threshold on the 85 GHz scattering depression 0 deduced from the SSM/I data. In order to demonstrate the usefulness of this method, nine-months of R(sub f(sub R))are retrieved from SSM/I data over three grid boxes in the Northeastern United States. These retrievals are then compared with the corresponding ground-truth-average rain rate, R(sub g), deduced from 15-minute rain gauges. Based on nine months of rain rate retrievals over three grid boxes, we find that R(sub f(sub R)can explain about 64 % of the variance contained in R(sub g). A similar evaluation of the grid-box-average rain rates R(sub GSCAT) and R(sub SRL), given by the NASA/GSCAT and NOAA/SRL rain retrieval algorithms, is performed. This evaluation reveals that R(sub GSCAT) and R(sub SRL) can explain only about 42 % of the variance contained in R(sub g). In our method, a threshold on the 85 GHz scattering depression is used primarily to determine the fractional rain area in a mesoscale grid box. Quantitative information pertaining to the 85 GHz scattering depression in the grid box is disregarded. In the NASA/GSCAT and NOAA/SRL methods on the other hand, this quantitative information is included. Based on the performance of all three methods, we infer that the magnitude of the scattering depression is a poor indicator of rain rate. Furthermore, from maps based on the observations made by SSM/I on land and ocean we find that there is a significant redundancy in the information content of the SSM/I multi-spectral observations. This leads us to infer that observations of SSM/I at 19 and 37 GHz add only marginal information to that

  2. A new gaze estimation method considering external light.

    PubMed

    Lee, Jong Man; Lee, Hyeon Chang; Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Cho, Chul Woo; Park, Kang Ryoung; Kim, Hyun-Cheol; Cha, Jihun

    2015-01-01

    Gaze tracking systems usually utilize near-infrared (NIR) lights and NIR cameras, and the performance of such systems is mainly affected by external light sources that include NIR components. This is ascribed to the production of additional (imposter) corneal specular reflection (SR) caused by the external light, which makes it difficult to discriminate between the correct SR as caused by the NIR illuminator of the gaze tracking system and the imposter SR. To overcome this problem, a new method is proposed for determining the correct SR in the presence of external light based on the relationship between the corneal SR and the pupil movable area with the relative position of the pupil and the corneal SR. The experimental results showed that the proposed method makes the gaze tracking system robust to the existence of external light. PMID:25769050

  3. Data-Driven Method to Estimate Nonlinear Chemical Equivalence

    PubMed Central

    Mayo, Michael; Collier, Zachary A.; Winton, Corey; Chappell, Mark A

    2015-01-01

    There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of “equivalency factors,” which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or “biphasic,” responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are “parallel,” which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach. PMID:26158701
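The core idea, mapping a concentration of one chemical through its full concentration-response curve and inverting the other chemical's curve, can be sketched with sigmoid (Hill) curves. The parameters below are hypothetical, and the Hill form is just one of the response shapes the paper treats (it also handles biphasic curves), but the sketch shows why the resulting equivalency is manifestly nonlinear.

```python
import numpy as np

def hill(c, ec50, n):
    """Sigmoid concentration-response: fraction responding at concentration c."""
    return c ** n / (ec50 ** n + c ** n)

def hill_inverse(r, ec50, n):
    """Concentration producing response fraction r (0 < r < 1)."""
    return ec50 * (r / (1 - r)) ** (1.0 / n)

# Hypothetical curve parameters for chemicals A and B
ec50_a, n_a = 2.0, 1.0
ec50_b, n_b = 5.0, 2.0

def equivalent_conc_b(c_a):
    """Concentration of B producing the same response as concentration c_a of A."""
    return hill_inverse(hill(c_a, ec50_a, n_a), ec50_b, n_b)

# The ratio equivalent/c_a is not constant: no single linear equivalency
# factor reproduces this mapping, except when the curves are "parallel".
for c_a in (0.1, 1.0, 2.0, 10.0):
    print(c_a, equivalent_conc_b(c_a))
```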

  4. Comparison of ready biodegradation estimation methods for fragrance materials.

    PubMed

    Boethling, Robert

    2014-11-01

    Biodegradability is fundamental to the assessment of environmental exposure and risk from organic chemicals. Predictive models can be used to pursue both regulatory and chemical design (green chemistry) objectives, which are most effectively met when models are easy to use and available free of charge. The objective of this work was to evaluate no-cost estimation programs with respect to prediction of ready biodegradability. Fragrance materials, which are structurally diverse and have significant exposure potential, were used for this purpose. Using a database of 222 fragrance compounds with measured ready biodegradability, 10 models were compared on the basis of overall accuracy, sensitivity, specificity, and Matthews correlation coefficient (MCC), a measure of quality for binary classification. The 10 models were VEGA© Non-Interactive Client, START (Toxtree©), Biowin©1-6, and two models based on inductive machine learning. Applicability domain (AD) was also considered. Overall accuracy was ca. 70% and varied little over all models, but sensitivity, specificity and MCC showed wider variation. Based on MCC, the best models for fragrance compounds were Biowin6, VEGA and Biowin3. VEGA performance was slightly better for the <50% of the compounds it identified as having "high reliability" predictions (AD index >0.8). However, removing compounds with one and only one quaternary carbon yielded similar improvement in predictivity for VEGA, START, and Biowin3/6, with a smaller penalty in reduced coverage. Of the nine compounds for which the eight models (VEGA, START, Biowin1-6) all disagreed with the measured value, measured analog data were available for seven, and all supported the predicted value. VEGA, Biowin3 and Biowin6 are judged suitable for ready biodegradability screening of fragrance compounds.
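The evaluation metrics used here are standard binary-classification quantities computed from a confusion matrix. As a quick reference (the counts below are illustrative, not the paper's results), accuracy, sensitivity, specificity, and the Matthews correlation coefficient can be computed as:

```python
import math

def binary_classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and Matthews correlation coefficient (MCC)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Hypothetical confusion matrix for a ready-biodegradability screen on
# 222 compounds (counts illustrative only).
acc, sens, spec, mcc = binary_classification_metrics(tp=90, tn=65, fp=35, fn=32)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f} MCC={mcc:.2f}")
```

MCC is the headline metric in this comparison because, unlike overall accuracy, it balances all four confusion-matrix cells and is robust to class imbalance.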

  5. Systematic variational method for statistical nonlinear state and parameter estimation.

    PubMed

    Ye, Jingxin; Rey, Daniel; Kadakia, Nirag; Eldridge, Michael; Morone, Uriel I; Rozdeba, Paul; Abarbanel, Henry D I; Quinn, John C

    2015-11-01

    In statistical data assimilation one evaluates the conditional expected values, conditioned on measurements, of interesting quantities on the path of a model through observation and prediction windows. This often requires working with very high dimensional integrals in the discrete time descriptions of the observations and model dynamics, which become functional integrals in the continuous-time limit. Two familiar methods for performing these integrals include (1) Monte Carlo calculations and (2) variational approximations using the method of Laplace plus perturbative corrections to the dominant contributions. We attend here to aspects of the Laplace approximation and develop an annealing method for locating the variational path satisfying the Euler-Lagrange equations that comprises the major contribution to the integrals. This begins with the identification of the minimum action path starting with a situation where the model dynamics is totally unresolved in state space, and the consistent minimum of the variational problem is known. We then proceed to slowly increase the model resolution, seeking to remain in the basin of the minimum action path, until a path that gives the dominant contribution to the integral is identified. After a discussion of some general issues, we give examples of the assimilation process for some simple, instructive models from the geophysical literature. Then we explore a slightly richer model of the same type with two distinct time scales. This is followed by a model characterizing the biophysics of individual neurons. PMID:26651756

  6. A comparative study of Interaural Time Delay estimation methods.

    PubMed

    Katz, Brian F G; Noisternig, Markus

    2014-06-01

    The Interaural Time Delay (ITD) is an important binaural cue for sound source localization. Calculations of ITD values are obtained either from measured time domain Head-Related Impulse Responses (HRIRs) or from their frequency transform Head-Related Transfer Functions (HRTFs). Numerous methods exist in current literature, based on a variety of definitions and assumptions of the nature of the ITD as an acoustic cue. This work presents a thorough comparative study of the degree of variability between some of the most common methods for calculating the ITD from measured data. Thirty-two different calculations or variations are compared for positions on the horizontal plane for the HRTF measured on both a KEMAR mannequin and a rigid sphere. Specifically, the spatial variations of the methods are investigated. Included is a discussion of the primary potential causes of these differences, such as the existence of multiple peaks in the HRIR of the contra-lateral ear for azimuths near the inter-aural axis due to multipath propagation and head/pinnae shadowing. PMID:24907816
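One of the common ITD definitions compared in studies like this is the lag of the maximum of the interaural cross-correlation of the two HRIRs. The sketch below (synthetic impulse responses, not KEMAR data) builds a contralateral HRIR as a delayed copy of the ipsilateral one and recovers the delay by cross-correlation.

```python
import numpy as np

fs = 48000  # sample rate (Hz)

# Synthetic HRIR pair: the contralateral ear receives the same impulse
# response delayed by a known amount (0.5 ms -> 24 samples at 48 kHz).
rng = np.random.default_rng(3)
hrir = rng.standard_normal(128) * np.exp(-np.arange(128) / 20.0)  # decaying IR
true_delay = 24
left = np.concatenate([hrir, np.zeros(64)])
right = np.concatenate([np.zeros(true_delay), hrir, np.zeros(64 - true_delay)])

# Cross-correlation ITD estimate: lag of the cross-correlation maximum
xcorr = np.correlate(right, left, mode="full")
lag = np.argmax(xcorr) - (len(left) - 1)
itd_us = lag / fs * 1e6
print(f"estimated ITD = {lag} samples = {itd_us:.1f} microseconds")
```

With real measured HRIRs the picture is messier, which is the paper's point: multipath and head/pinna shadowing can produce multiple peaks in the contralateral HRIR near the interaural axis, so different ITD definitions diverge there.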

  7. Method to estimate water storage capacity of capillary barriers - Discussion

    SciTech Connect

    Gee, Glendon W.; Ward, Anderson L.; Meyer, Philip D.

    1998-11-01

    This is a brief comment on a previously published paper. The paper by Stormont and Morris [JGGE 124(4):297-302] provides an interesting approach to computing the water storage capacity of capillary barriers used as landfill covers. They correctly show that available water storage capacity can be increased by up to a factor of two for a silt loam soil when it is used in a capillary barrier as compared to existing as a deep soil profile. For this very reason such a capillary barrier, utilizing silt loam soil, was constructed and successfully tested at the U.S. Department of Energy's Hanford Site in southeastern Washington State. Silt loam soil provides optimal water storage for capillary barriers and ensures minimal drainage. Fewer benefits are obtained when capillary barriers utilize more sandy soils. We would endorse a limited application of the method of Stormont and Morris. We suggest that there will be large uncertainties in field capacity, wilting point, and water retention characteristics, and only when these uncertainties are accounted for can such a method be used to provide sound engineering judgement for cover design. A recommended procedure for using this method would include actual field measurements of the soil hydraulic properties of the cover materials.

  8. Real-Time Parameter Estimation Method Applied to a MIMO Process and its Comparison with an Offline Identification Method

    SciTech Connect

    Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk

    2009-01-12

    An experiment based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time domain input/output data was utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed to be known. Continuous-time system transfer function matrix parameters were estimated in real-time by the least-squares method. Simulation results of experimentally determined system transfer function matrix compare very well with the experimental results. For comparison and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.

  9. Ability of combined Near-Infrared Spectroscopy-Intravascular Ultrasound (NIRS-IVUS) imaging to detect lipid core plaques and estimate cap thickness in human autopsy coronary arteries

    NASA Astrophysics Data System (ADS)

    Grainger, S. J.; Su, J. L.; Greiner, C. A.; Saybolt, M. D.; Wilensky, R. L.; Raichlen, J. S.; Madden, S. P.; Muller, J. E.

    2016-03-01

    The ability to determine plaque cap thickness during catheterization is thought to be of clinical importance for plaque vulnerability assessment. While methods to compositionally assess cap integrity are in development, a method utilizing currently available tools to measure cap thickness is highly desirable. NIRS-IVUS is a commercially available dual imaging method in current clinical use that may provide cap thickness information to the skilled reader; however, this is as yet unproven. Ten autopsy hearts (n=15 arterial segments) were scanned with the multimodality NIRS-IVUS catheter (TVC Imaging System, Infraredx, Inc.) to identify lipid core plaques (LCPs). Skilled readers made predictions of cap thickness over regions of chemogram LCP, using NIRS-IVUS. Artery segments were perfusion fixed and cut into 2 mm serial blocks. Thin sections stained with Movat's pentachrome were analyzed for cap thickness at LCP regions. Block level predictions were compared to histology, as classified by a blinded pathologist. Within 15 arterial segments, 117 chemogram blocks were found by NIRS to contain LCP. Utilizing NIRSIVUS, chemogram blocks were divided into 4 categories: thin capped fibroatheromas (TCFA), thick capped fibroatheromas (ThCFA), pathological intimal thickening (PIT)/lipid pool (no defined cap), and calcified/unable to determine cap thickness. Sensitivities/specificities for thin cap fibroatheromas, thick cap fibroatheromas, and PIT/lipid pools were 0.54/0.99, 0.68/0.88, and 0.80/0.97, respectively. The overall accuracy rate was 70.1% (including 22 blocks unable to predict, p = 0.075). In the absence of calcium, NIRS-IVUS imaging provided predictions of cap thickness over LCP with moderate accuracy. The ability of this multimodality imaging method to identify vulnerable coronary plaques requires further assessment in both larger autopsy studies, and clinical studies in patients undergoing NIRS-IVUS imaging.

  10. The development and discussion of computerized visual perception assessment tool for Chinese characters structures - Concurrent estimation of the overall ability and the domain ability in item response theory approach.

    PubMed

    Wu, Huey-Min; Lin, Chin-Kai; Yang, Yu-Mao; Kuo, Bor-Chen

    2014-11-12

    Visual perception is the fundamental skill required for a child to recognize words, and to read and write. No visual perception assessment tool based on Chinese characters had been developed for preschool children in Taiwan. The purposes of this study were to develop a computerized visual perception assessment tool for Chinese character structures and to explore its psychometric characteristics. This study adopted purposive sampling. The study evaluated 551 kindergarten-age children (293 boys, 258 girls) ranging from 46 to 81 months of age. The test instrument used in this study consisted of three subtests and 58 items, including tests of basic strokes, single-component characters, and compound characters. Based on the results of model fit analysis, the higher-order item response theory was used to estimate the performance in visual perception, basic strokes, single-component characters, and compound characters simultaneously. Analyses of variance were used to detect significant differences between age groups and between gender groups. The difficulty of identifying items in the visual perception test ranged from -2 to 1. The visual perception ability of 4- to 6-year-old children ranged from -1.66 to 2.19. Gender did not have significant effects on performance. However, there were significant differences among the different age groups. The performance of 6-year-olds was better than that of 5-year-olds, which was better than that of 4-year-olds. This study obtained detailed diagnostic scores by using a higher-order item response theory model to understand the visual perception of basic strokes, single-component characters, and compound characters. Further statistical analysis showed that, for basic strokes and compound characters, girls performed better than did boys; there also were differences within each age group. For single-component characters, there was no difference in performance between boys and girls.
However, again the performance of 6-year-olds was better than

  11. EXPERIMENTAL METHODS TO ESTIMATE ACCUMULATED SOLIDS IN NUCLEAR WASTE TANKS

    SciTech Connect

    Duignan, M.; Steeper, T.; Steimke, J.

    2012-12-10

    devices and techniques were very effective to estimate the movement, location, and concentrations of the solids representing plutonium and are expected to perform well at a larger scale. The operation of the techniques and their measurement accuracies will be discussed as well as the overall results of the accumulated solids test.

  12. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles

    PubMed Central

    Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state of the art MTPA strategy highly depends on the accuracy of machine parameters, thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on flux estimator with a modified low pass filter is presented. Moreover, by taking into account the non-ideal characteristic of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment. PMID:26114557

  15. RELICA: a method for estimating the reliability of independent components.

    PubMed

    Artoni, Fiorenzo; Menicucci, Danilo; Delorme, Arnaud; Makeig, Scott; Micera, Silvestro

    2014-12-01

    Independent Component Analysis (ICA) is a widely applied data-driven method for parsing brain and non-brain EEG source signals, mixed by volume conduction to the scalp electrodes, into a set of maximally temporally and often functionally independent components (ICs). Many ICs may be identified with a precise physiological or non-physiological origin. However, this process is hindered by partial instability in ICA results that can arise from noise in the data. Here we propose RELICA (RELiable ICA), a novel method to characterize IC reliability within subjects. RELICA first computes IC "dipolarity," a measure of physiological plausibility, plus a measure of IC consistency across multiple decompositions of bootstrap versions of the input data. RELICA then uses these two measures to visualize and cluster the separated ICs, providing a within-subject measure of IC reliability that does not involve checking for its occurrence across subjects. We demonstrate the use of RELICA on EEG data recorded from 14 subjects performing a working memory experiment and show that many brain and ocular artifact ICs are correctly classified as "stable" (highly repeatable across decompositions of bootstrapped versions of the input data). Many stable ICs appear to originate in the brain, while other stable ICs account for identifiable non-brain processes such as line noise. RELICA might be used with any linear blind source separation algorithm to reduce the risk of basing conclusions on unstable or physiologically un-interpretable component processes. PMID:25234117

  16. Automated methods for estimation of sperm flagellar bending parameters.

    PubMed

    Brokaw, C J

    1984-01-01

    Parameters to describe flagellar bending patterns can be obtained by a microcomputer procedure that uses a set of parameters to synthesize model bending patterns, compares the model bending patterns with digitized and filtered data from flagellar photographs, and uses the Simplex method to vary the parameters until a solution with minimum root mean square differences between the model and the data is found. Parameters for Chlamydomonas bending patterns have been obtained from comparison of shear angle curves for the model and the data. To avoid the determination of the orientation of the basal end of the flagellum, which is required for calculation of shear angles, parameters for sperm flagella have been obtained by comparison of curves of curvature as a function of length for the model and for the data. A constant curvature model, modified from that originally used for Chlamydomonas flagella, has been used for obtaining parameters from sperm flagella, but the methods can be applied using other models for synthesizing the model bending patterns.
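
    The fit-by-synthesis loop described above can be sketched with SciPy's Nelder-Mead (Simplex) minimizer. The sinusoidal curvature model, the parameter values, and the noise level below are illustrative assumptions, not Brokaw's actual bending-pattern model.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "digitized" curvature-vs-length data from an assumed sinusoidal wave
s = np.linspace(0, 1, 200)                       # normalized arc length
def model(p, s):                                 # p = (amplitude, wavenumber, phase)
    amp, k, phase = p
    return amp * np.sin(2 * np.pi * k * s + phase)

rng = np.random.default_rng(0)
data = model((8.0, 2.0, 0.5), s) + 0.1 * rng.normal(size=s.size)

# Simplex search minimizing the RMS difference between model and data
def rms(p):
    return np.sqrt(np.mean((model(p, s) - data) ** 2))

fit = minimize(rms, x0=(6.0, 1.9, 0.3), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9, "maxiter": 5000})
print(np.round(fit.x, 2))    # recovers roughly (8.0, 2.0, 0.5)
```
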

  17. System and Method for Outlier Detection via Estimating Clusters

    NASA Technical Reports Server (NTRS)

    Iverson, David J. (Inventor)

    2016-01-01

    An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out-of-family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined by analyzing the degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy-to-interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
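
    A minimal sketch of the approach, assuming k-means cluster centres as the "model" of nominal data (the patent abstract does not specify the clustering algorithm): the deviation score is the distance to the nearest nominal cluster centre, and each parameter's share of that squared distance gives the relative-contribution measure.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, size=(500, 3))   # stand-in for nominal sensor data

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(nominal)

def deviation(x):
    """Deviation score: distance from x to the nearest nominal cluster centre,
    plus each parameter's relative contribution to that deviation."""
    i = np.argmin(np.linalg.norm(km.cluster_centers_ - x, axis=1))
    contrib = (x - km.cluster_centers_[i]) ** 2
    return np.sqrt(contrib.sum()), contrib / contrib.sum()

score_nominal, _ = deviation(np.zeros(3))
score_fault, contrib = deviation(np.array([0.0, 6.0, 0.0]))
print(score_fault > score_nominal, contrib.argmax())   # fault flagged; parameter 1 dominates
```
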

  18. Method and system for non-linear motion estimation

    NASA Technical Reports Server (NTRS)

    Lu, Ligang (Inventor)

    2011-01-01

    A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, determining a position of the fourth pixel in a fourth image based upon the third motion vector.
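
    As one concrete instance of a non-linear model, a constant-acceleration (quadratic) extrapolation from the two motion vectors can be sketched as follows; the quadratic form and the pixel positions are illustrative assumptions, not the patented method itself.

```python
import numpy as np

# Pixel positions of a tracked point in three consecutive frames
p1, p2, p3 = np.array([10.0, 5.0]), np.array([14.0, 8.0]), np.array([20.0, 13.0])

v1 = p2 - p1                 # first motion vector
v2 = p3 - p2                 # second motion vector
a = v2 - v1                  # acceleration under a quadratic (non-linear) model

# Extrapolate the fourth-frame position: the next motion vector is v2 + a
p4 = p3 + v2 + a
print(p4)                    # [28. 20.]
```
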

  19. Statistical classification methods for estimating ancestry using morphoscopic traits.

    PubMed

    Hefner, Joseph T; Ousley, Stephen D

    2014-07-01

    Ancestry assessments using cranial morphoscopic traits currently rely on subjective trait lists and observer experience rather than empirical support. The trait list approach, which is untested, unverified, and in many respects unrefined, is relied upon because of tradition and subjective experience. Our objective was to examine the utility of frequently cited morphoscopic traits and to explore eleven appropriate and novel methods for classifying an unknown cranium into one of several reference groups. Based on these results, artificial neural networks (aNNs), OSSA, support vector machines, and random forest models showed mean classification accuracies of at least 85%. The aNNs had the highest overall classification rate (87.8%), and random forests show the smallest difference between the highest (90.4%) and lowest (76.5%) classification accuracies. The results of this research demonstrate that morphoscopic traits can be successfully used to assess ancestry without relying only on the experience of the observer.
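
    The classification setup can be sketched with scikit-learn's random forest on synthetic ordinal trait scores; the trait distributions below are fabricated stand-ins for the real morphoscopic data, so only the workflow, not the reported accuracies, carries over.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
group = rng.integers(0, 3, n)                      # three reference groups
# Six ordinal "trait scores" whose ranges shift with group membership
traits = rng.integers(0, 3, size=(n, 6)) + group[:, None]

# Cross-validated classification accuracy, as in the paper's model comparison
acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      traits, group, cv=5).mean()
print(round(acc, 2))
```
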

  20. Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas

    EIA Publications

    2003-01-01

    This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.

  1. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub tip ratio, etc. The development and justification of the method selected, and the various methods of analysis are discussed.

  2. Techniques and methods for estimating abundance of larval and metamorphosed sea lampreys in Great Lakes tributaries, 1995 to 2001

    USGS Publications Warehouse

    Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.

    2003-01-01

    Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.

  3. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    NASA Astrophysics Data System (ADS)

    Morimoto, Emi; Namerikawa, Susumu

    The most notable recent trend in bidding and pricing behavior is the increasing number of bids just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bids, it is therefore the difference between the low-price investigation criterion price and the execution price. In practice, bidders' strategies and behavior have been constrained by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains in general use, so two standard estimation methods coexist in Japan. In this study, we statistically analyzed bid information on civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis shows that bidding and pricing behavior is related to the estimation method used: the two standard methods produce different numbers of bidders (the bid/no-bid decision) and different distributions of bid prices (the markup decision). The comparison of bid price distributions shows that, for large public works estimated by the unit-price method, bids concentrate near the low-price investigation criteria more than under the accumulated method. The number of bidders for works estimated by the unit-price method also tends to increase significantly, suggesting that the estimation method is one of the factors construction companies weigh when deciding whether to participate in a bidding.

  4. A new TDOA estimation method in Three-satellite interference localisation

    NASA Astrophysics Data System (ADS)

    Dou, Huijing; Lei, Qian; Li, Wenxue; Xing, Qingqing

    2015-05-01

    Time difference of arrival (TDOA) parameter estimation is the key to three-satellite interference localisation. Therefore, in order to improve the accuracy of three-satellite interference location, the TDOA parameter must be estimated accurately and effectively. Based on a study of the wavelet-transform correlation TDOA estimation algorithm, and combining the correlation and Hilbert subtraction methods, we put forward a high-precision TDOA estimation method for three-satellite interference location. The proposed algorithm exploits the fact that the zero crossing of the Hilbert transform coincides with the peak of the correlation function: subtracting the absolute value of the Hilbert transform from the wavelet-transform correlation function sharpens the peak and improves TDOA estimation precision, so that positioning is more accurate and effective.
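
    The peak-sharpening step, correlation minus the absolute value of its Hilbert transform, can be sketched directly with SciPy (plain cross-correlation here, without the paper's wavelet-transform stage; the signals and delay are synthetic):

```python
import numpy as np
from scipy.signal import hilbert, correlate

rng = np.random.default_rng(0)
delay = 37                                   # true TDOA, in samples
s = rng.normal(size=5000)
x = s + 0.1 * rng.normal(size=s.size)                    # receiver 1
y = np.roll(s, delay) + 0.1 * rng.normal(size=s.size)    # receiver 2, delayed

r = correlate(y, x, mode="full")
lags = np.arange(-len(x) + 1, len(x))

# Sharpen the peak: the Hilbert transform of r is ~zero at the correlation
# maximum, so subtracting its absolute value suppresses sidelobes only
sharp = r - np.abs(np.imag(hilbert(r)))
print(lags[np.argmax(sharp)])   # 37
```
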

  5. A comparison of methods to estimate photosynthetic light absorption in leaves with contrasting morphology.

    PubMed

    Olascoaga, Beñat; Mac Arthur, Alasdair; Atherton, Jon; Porcar-Castell, Albert

    2016-03-01

    Accurate temporal and spatial measurements of leaf optical traits (i.e., absorption, reflectance and transmittance) are paramount to photosynthetic studies. These optical traits are also needed to couple radiative transfer and physiological models to facilitate the interpretation of optical data. However, estimating leaf optical traits in leaves with complex morphologies remains a challenge. Leaf optical traits can be measured using integrating spheres, either by placing the leaf sample in one of the measuring ports (External Method) or by placing the sample inside the sphere (Internal Method). However, in leaves with complex morphology (e.g., needles), the External Method presents limitations associated with gaps between the leaves, and the Internal Method presents uncertainties related to the estimation of total leaf area. We introduce a modified version of the Internal Method, which bypasses the effect of gaps and the need to estimate total leaf area, by painting the leaves black and measuring them before and after painting. We assess and compare the new method with the External Method using a broadleaf and two conifer species. Both methods yielded similar leaf absorption estimates for the broadleaf, but absorption estimates were higher with the External Method for the conifer species. Factors explaining the differences between methods, their trade-offs and their advantages and limitations are also discussed. We suggest that the new method can be used to estimate leaf absorption in any type of leaf independently of its morphology, and be used to study further the impact of gap fraction in the External Method.

  6. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
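
    The stage-by-stage idea, using the known time series of each component to estimate that component's parameters separately, can be sketched on the Rössler system. For brevity this sketch substitutes per-equation least squares for the paper's evolutionary search; the stage structure (one component's parameters at a time) is the same.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.2, 0.2, 5.7                      # true Rössler parameters
def rossler(t, u):
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(rossler, (0, 50), [1.0, 1.0, 1.0], max_step=0.01, dense_output=True)
t = np.linspace(1, 49, 8000)
x, y, z = sol.sol(t)
dy = np.gradient(y, t)                       # "observed" derivatives from the series
dz = np.gradient(z, t)

# Stage 1: the y equation alone, dy = x + a*y, gives a
a_hat = np.linalg.lstsq(y[:, None], dy - x, rcond=None)[0][0]

# Stage 2: the z equation alone, dz = b + z*x - c*z, gives b and c
A = np.c_[np.ones_like(z), -z]
b_hat, c_hat = np.linalg.lstsq(A, dz - z * x, rcond=None)[0]
print(np.round([a_hat, b_hat, c_hat], 2))
```
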

  7. Multivariate drought frequency estimation using copula method in Southwest China

    NASA Astrophysics Data System (ADS)

    Hao, Cui; Zhang, Jiahua; Yao, Fengmei

    2015-12-01

    Drought over Southwest China occurs frequently and has an obvious seasonal characteristic. Proper management of regional droughts requires knowledge of the expected frequency or probability of specific climate information. This study utilized k-means classification and copulas to demonstrate the regional drought occurrence probability and return period based on trivariate drought properties, i.e., drought duration, severity, and peak. A drought event in this study was defined when 3-month Standardized Precipitation Evapotranspiration Index (SPEI) was less than -0.99 according to the regional climate characteristic. Then, the next step was to classify the region into six clusters by k-means method based on annual and seasonal precipitation and temperature and to establish marginal probabilistic distributions for each drought property in each sub-region. Several copula types were selected to test the best fit distribution, and Student t copula was recognized as the best one to integrate drought duration, severity, and peak. The results indicated that a proper classification was important for a regional drought frequency analysis, and copulas were useful tools in exploring the associations of the correlated drought variables and analyzing drought frequency. Student t copula was a robust and proper function for drought joint probability and return period analysis, which is important for analyzing and predicting the regional drought risks.
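
    The joint return-period calculation can be sketched with a closed-form Archimedean copula. A Gumbel copula is used here purely because it has a one-line CDF, whereas the study selected a Student t copula, and the marginal probabilities and drought inter-arrival time below are illustrative.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF C(u, v); theta >= 1 controls upper-tail dependence."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Marginal non-exceedance probabilities of drought duration and severity
# (u, v would come from fitted marginal distributions of the SPEI-defined events)
u, v, theta = 0.9, 0.95, 2.0
mu = 1.5                                     # mean drought inter-arrival time, years

T_or  = mu / (1 - gumbel_copula(u, v, theta))          # duration OR severity exceeded
T_and = mu / (1 - u - v + gumbel_copula(u, v, theta))  # both exceeded
print(round(T_or, 1), round(T_and, 1))       # the "and" event is rarer, so T_and > T_or
```
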

  8. Variable methods to estimate the ionospheric horizontal gradient

    NASA Astrophysics Data System (ADS)

    Nagarajoo, Karthigesu

    2016-06-01

    DGPS, or differential Global Positioning System, is a system in which the range error at a reference station (after eliminating errors due to its clock, hardware delay, and multipath) is removed from the range measurement at a user viewing the same satellite, presuming that the satellite paths to the reference station and to the user experience common errors due to the ionosphere, clock errors, etc. Under this assumption, the error due to ionospheric refraction is taken to be the same for the two closely spaced paths (such as a baseline of 10 km between the reference station and the user, as used in the simulations throughout this paper unless otherwise stated), and the presence of an ionospheric horizontal gradient is ignored. If a user's path is exposed to a drastically large ionospheric gradient, the large difference in ionospheric delays between the reference station and the user can result in significant position error for the user. Several examples of extremely large ionospheric gradients that could cause significant user errors have been observed. The ionospheric horizontal gradient could instead be obtained from the gradient of the total electron content (TEC) observed from a number of received GPS satellites at one or more reference stations, or from empirical models updated with real-time data. To investigate the former, the dual-frequency method has been used in this work to obtain both south-north and east-west gradients using four receiving stations separated in those directions. In addition, observation data from Navy Ionospheric Monitoring System (NIMS) receivers and the TEC contour map from the Rutherford Appleton Laboratory (RAL), UK, have been used to define the magnitude and direction of the gradient.
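
    The dual-frequency TEC estimate behind such gradient measurements can be sketched from the standard geometry-free pseudorange combination; the pseudorange values and the 100 km station separation below are illustrative.

```python
F1, F2 = 1575.42e6, 1227.60e6               # GPS L1/L2 carrier frequencies, Hz

def slant_tec(p1, p2):
    """Slant TEC (TECU) from dual-frequency pseudoranges, via the standard
    relation TEC = f1^2 f2^2 / (40.3 (f1^2 - f2^2)) * (P2 - P1)."""
    k = F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))   # converts metres to el/m^2
    return k * (p2 - p1) / 1e16                    # 1 TECU = 1e16 el/m^2

# Horizontal gradient from two stations separated east-west by 100 km
tec_a = slant_tec(2.0e7, 2.0e7 + 2.10)      # pseudoranges in metres (illustrative)
tec_b = slant_tec(2.0e7, 2.0e7 + 2.31)
gradient = (tec_b - tec_a) / 100.0          # TECU per km
print(round(gradient, 3))
```
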

  9. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
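
    The water-balance residual described in (i) can be sketched in a few lines; the precipitation, discharge, and basin-area numbers are illustrative.

```python
def basin_et(precip_mm, discharge_m3, area_km2, storage_change_mm=0.0):
    """Residual annual basin ET (mm) from the water balance ET = P - Q - dS,
    with net storage change assumed negligible on the annual scale."""
    q_mm = discharge_m3 / (area_km2 * 1e6) * 1e3   # discharge volume -> depth over basin
    return precip_mm - q_mm - storage_change_mm

# 820 mm/yr precipitation, 3.1 km^3/yr discharge, 10,000 km^2 basin
print(basin_et(820.0, 3.1e9, 10000.0))   # 510.0 mm/yr
```

    Such a residual estimate is what the review describes as the benchmark against which remotely sensed ET models are validated.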

  10. Simultaneous Effects of Allowed Time, Teaching Method, Ability, and Student Assessment of Treatment on Achievement in a High School Biology Course (ISIS).

    ERIC Educational Resources Information Center

    Burkman, Ernest; And Others

    1982-01-01

    Examined effects of teaching method (self-directed, group-directed, teacher-directed), academic ability, student assessment of treatment, and allowed time on achievement in three Individualized Science Instructional System (ISIS) biology minicourses. Results, among others, indicated that individualized instruction favored high-ability students and…

  11. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia

    PubMed Central

    Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data.
We anticipate that the low-tech field requirements will make this method

  12. The effect of physical parameters of inertial stabilization platform on disturbance rejection ability and its improvement method

    NASA Astrophysics Data System (ADS)

    Mao, Yao; Deng, Chao; Gan, Xun; Tian, Jing

    2015-10-01

    The development of space optical communication requires arcsecond or higher tracking precision of the ATP (Acquisition, Tracking and Pointing) system under base disturbance. An ATP system supported by a stabilized reference beam, provided by an inertial stabilization platform with high precision and high bandwidth, can effectively restrain the influence of base angular disturbance on the line of sight. To obtain better disturbance rejection, this paper analyzes the influence of the transfer characteristics and physical parameters of the stabilization platform on disturbance stabilization performance. The results show that the stabilization characteristic of an inertial stabilization platform equals the product of the rejection characteristic of the control loop and the disturbance transfer characteristic of the platform, so either improving the isolation characteristics of the platform or extending the control bandwidth yields better rejection. Because the control bandwidth of the LOS stabilization platform is limited by factors such as the mechanical characteristics of the platform and the bandwidth and noise of the sensor, high-frequency disturbance cannot be effectively rejected by the control loop; its rejection depends mainly on the isolation characteristics of the platform itself. This paper puts forward three methods of improving those isolation characteristics: 1) changing the mechanical structure, such as reducing the elastic coefficient or increasing the moment of inertia of the platform; 2) changing the electrical structure of the platform, such as increasing resistance or adding a current loop; and 3) adding a passive vibration isolator between the inertial stabilization platform and the base. Experimental results show that adding a current loop or a passive vibration isolator can effectively reject high-frequency disturbances.

  13. Quantitative Estimation of Trace Chemicals in Industrial Effluents with the Sticklet Transform Method

    SciTech Connect

    Mehta, N C; Scharlemann, E T; Stevens, C G

    2001-04-02

    Application of a novel transform operator, the sticklet transform, to the quantitative estimation of trace chemicals in industrial effluent plumes is reported. The sticklet transform is a superset of the well-known derivative operator and the Haar wavelet, and is characterized by independently adjustable lobe width and separation. Computer simulations demonstrate that it can produce accurate and robust concentration estimates of multiple chemical species in industrial effluent plumes in the presence of strong clutter background, interferent chemicals, and random noise. In this paper we address the application of the sticklet transform to estimating chemical concentrations in effluent plumes in the presence of atmospheric transmission effects. We show that this transform retains the ability to yield accurate estimates using on-plume/off-plume measurements that represent atmospheric differentials up to 10% of the full atmospheric attenuation.
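
    Lacking the paper's exact operator definition, the sketch below assumes a two-lobed Haar-like kernel with independently adjustable lobe width and separation, applied to a synthetic spectrum with a slowly varying clutter background; it illustrates only the near-linear scaling of the matched response with concentration, not the paper's full retrieval.

```python
import numpy as np

def sticklet(width, sep, n=41):
    """Two opposed rectangular lobes with independently adjustable lobe width
    and separation (an assumed form based on the abstract's description)."""
    k = np.zeros(n)
    c = n // 2
    k[c - sep // 2 - width : c - sep // 2] = -1.0
    k[c + sep // 2 : c + sep // 2 + width] = 1.0
    return k / width

# Synthetic spectrum: slowly varying clutter plus an edge-like chemical feature
x = np.linspace(0, 1, 400)
clutter = 2.0 + 0.5 * x
feature = np.where(x > 0.5, 1.0, 0.0)

responses = []
for conc in (0.1, 0.2, 0.4):
    spec = clutter + conc * feature
    resp = np.correlate(spec, sticklet(8, 4), mode="valid")
    responses.append(resp.max())
print(np.round(responses, 3))   # peak response grows ~linearly with concentration
```

    Because the kernel's lobes sum to zero, the constant part of the clutter cancels exactly and the slope contributes only a small fixed offset, which is why the response differences track the concentration differences.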

  14. Applicability of Demirjian's four methods and Willems method for age estimation in a sample of Turkish children.

    PubMed

    Akkaya, Nursel; Yilanci, Hümeyra Özge; Göksülük, Dinçer

    2015-09-01

    The aim of this study was to evaluate the applicability of five dental methods, including Demirjian's original, revised, four teeth, and alternate four teeth methods and the Willems method, for age estimation in a sample of Turkish children. Panoramic radiographs of 799 children (412 females, 387 males) aged between 2.20 and 15.99 years were examined by two observers. A repeated measures ANOVA was performed to compare dental methods among gender and age groups. All five methods overestimated the chronological age on average. Among these, the Willems method was found to be the most accurate, showing 0.07 and 0.15 years overestimation for males and females, respectively. It was followed by Demirjian's four teeth methods, revised and original methods. According to the results, the Willems method can be recommended for dental age estimation of Turkish children in forensic applications.

  15. Assessment of a rapid method for quantitative reach-scale estimates of deposited fine sediment in rivers

    NASA Astrophysics Data System (ADS)

    Duerdoth, C. P.; Arnold, A.; Murphy, J. F.; Naden, P. S.; Scarlett, P.; Collins, A. L.; Sear, D. A.; Jones, J. I.

    2015-02-01

    Despite increasing concerns about the negative effects that increased loads of fine-grained sediment are having on freshwaters, the need is clear for a rapid and cost-effective methodology that gives precise estimates of deposited sediment across all river types and that is relevant to morphological and ecological impact. To date few attempts have been made to assess the precision of techniques used to assemble data on fine sediment storage in river channels. Accordingly, we present an investigation into the sources of uncertainty associated with estimates of deposited fine-grained sediment in rivers using a sediment resuspension technique, an approach that provides an instantaneous measure of deposited fine sediment (surface and subsurface) in terms of quantity and quality. We investigated how variation associated with river type, spatial patchiness within rivers, sampling, and individual operators influenced estimates of deposited fine sediment using this approach and compared the precision with that of visual estimates of river bed composition - a commonly applied technique in rapid river surveys. We have used this information to develop an effective methodology for producing reach-scale estimates with known confidence intervals. By using a spatially-focussed sampling strategy that captured areas of visually high and low deposition of fine-grained sediment, the dominant aspects of small-scale spatial variability were controlled and a more precise instantaneous estimate of deposited fine sediment derived. The majority of the remaining within-site variance was attributable to spatial and sampling variability at the smallest (patch) scale. The method performed as well as visual estimates of percentage of the river bed comprising fines in its ability to discriminate between rivers but, unlike visual estimates, was not affected by operator bias. 
Confidence intervals for reach-scale measures of deposited fine-grained sediment were derived for the technique, and these

  16. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
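
    Two of the estimators discussed, DerSimonian-Laird and the Paule-Mandel generalised-Q approach, are short enough to sketch directly; the effect sizes and within-study variances below are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

y = np.array([0.38, 0.10, 0.55, 0.26, 0.70])   # illustrative study effect sizes
v = np.array([0.04, 0.03, 0.06, 0.02, 0.09])   # within-study variances
k = len(y)

def dersimonian_laird(y, v):
    """DL: moment estimator from Cochran's Q, truncated at zero."""
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    return max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))

def paule_mandel(y, v):
    """PM: choose tau2 so the generalised Q statistic equals its expectation k-1."""
    def gen_q(tau2):
        w = 1 / (v + tau2)
        return np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2) - (k - 1)
    if gen_q(0.0) <= 0:            # Q already below expectation -> tau2 = 0
        return 0.0
    return brentq(gen_q, 0.0, 10.0)

tau2_dl = dersimonian_laird(y, v)
tau2_pm = paule_mandel(y, v)
print(tau2_dl, tau2_pm)
```
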

  17. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.

  18. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
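
    The core of IRDM is fitting a straight line to the logarithmic decay envelope of a band-filtered impulse response and converting the decay rate to a loss factor via eta = DR / (27.3 f). A minimal single-band sketch; the mode frequency and damping value are invented for illustration, and the synthetic signal is already the decay envelope (in practice one would band-pass filter and take the Hilbert envelope):

```python
import numpy as np

def loss_factor_irdm(response, fs, f_center):
    """Estimate the damping loss factor from an impulse-response decay."""
    env_db = 20 * np.log10(np.abs(response) + 1e-12)   # decay envelope in dB
    t = np.arange(len(response)) / fs
    # Fit a line to the early part of the decay (before the noise floor).
    n = len(t) // 2
    slope, _ = np.polyfit(t[:n], env_db[:n], 1)        # dB/s (negative)
    decay_rate = -slope
    return decay_rate / (27.3 * f_center)              # eta = DR / (27.3 f)

# Synthetic single-mode decay envelope: eta = 0.02 at 500 Hz.
fs, f0, eta = 10_000, 500.0, 0.02
t = np.arange(0, 1.0, 1 / fs)
x = np.exp(-np.pi * f0 * eta * t)    # amplitude decays as exp(-pi*f*eta*t)
print(loss_factor_irdm(x, fs, f0))   # recovers ~0.02
```

    In a real measurement the fit range would be chosen per band, which is exactly the sensitivity the paper examines.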

  19. Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope's Random Error.

    PubMed

    Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia

    2015-01-01

    Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key step in analyzing the properties of alpha stable noise. Three widely used estimation methods - the quantile method, the empirical characteristic function (ECF) method and the logarithmic moment method - are analyzed and compared through Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the most precise method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data is fitted better by an alpha stable distribution than by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error.
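
    For the symmetric case, the ECF approach reduces to a regression: since |phi(t)| = exp(-(sigma|t|)^alpha), the quantity log(-log|phi(t)|) is linear in log|t| with slope alpha. A self-contained sketch (the Chambers-Mallows-Stuck sampler is standard; the t-grid and sample size are illustrative, and the quantile and logarithmic moment variants are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

def sas_samples(alpha, n, rng):
    """Chambers-Mallows-Stuck sampler, standard symmetric alpha-stable."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, n)
    w = rng.exponential(1.0, n)
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

def ecf_alpha(x, ts=(0.2, 0.5, 1.0)):
    """Estimate alpha by regressing log(-log |ECF|) on log t."""
    ts = np.asarray(ts)
    phi = np.array([np.abs(np.mean(np.exp(1j * t * x))) for t in ts])
    y = np.log(-np.log(phi))            # = alpha*log(t) + alpha*log(sigma)
    slope, _ = np.polyfit(np.log(ts), y, 1)
    return slope

x = sas_samples(1.5, 200_000, rng)
print(ecf_alpha(x))                     # close to the true alpha = 1.5
```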

  20. Method for Estimating Low-Frequency Return Current of DC Electric Railcar

    NASA Astrophysics Data System (ADS)

    Hatsukade, Satoru

    Estimation of the harmonic current of railcars is necessary for achieving compatibility between train signaling systems and railcar equipment. However, although several theoretical methods for estimating the harmonic current of railcars using switching functions exist, there is no theoretical method for estimating low-frequency currents at frequencies below the power converter's carrier frequency. This paper describes a method for estimating the spectrum (frequency and amplitude) of the low-frequency return current of DC electric railcars. First, relationships between the return current and characteristics of the DC electric railcars, such as mass and acceleration, are determined. Then, the analytical (not numerical) results for the low-frequency current are obtained from the time-current curve of a DC electric railcar by using Fourier series expansions. Finally, measurement results clearly show the effectiveness of the estimation method developed in this study.
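
    The mathematical step described above amounts to expanding the periodic time-current curve in a Fourier series. A numerical illustration with an invented trapezoidal current profile (the profile shape, cycle period, and current levels are hypothetical, not values from the paper):

```python
import numpy as np

# Hypothetical return-current profile over one repeating drive cycle:
# ramp up while accelerating, hold, then drop to zero while coasting.
fs, T = 1000.0, 10.0                       # sample rate [Hz], cycle period [s]
t = np.arange(0, T, 1 / fs)
i_t = np.interp(t, [0, 2, 5, 5.5, T], [0, 400, 400, 0, 0])  # amperes

# Fourier-series amplitudes of the periodic waveform (one-sided spectrum;
# amps[0] double-counts the DC term, which is irrelevant here).
n = len(t)
spec = np.fft.rfft(i_t) / n
amps = 2 * np.abs(spec)
freqs = np.fft.rfftfreq(n, 1 / fs)         # multiples of 1/T = 0.1 Hz

# Low-frequency content, well below any converter carrier, is read directly:
low = (freqs > 0) & (freqs < 5.0)
print(freqs[low][np.argmax(amps[low])], amps[low].max())
```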

  1. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    PubMed Central

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent moderated structural equation method, (d) a fully Bayesian approach, and (e) marginal maximum likelihood estimation. Of the 5 estimation methods, it was found that overall the methods based on maximum likelihood estimation and the Bayesian approach performed best in terms of bias, root-mean-square error, standard error ratios, power, and Type I error control, although key differences were observed. Similarities as well as disparities among methods are highlighted and general recommendations are articulated. As a point of comparison, all 5 approaches were used to fit a reparameterized version of the latent quadratic model to educational reading data. PMID:22429193

  2. Using Resampling To Estimate the Precision of an Empirical Standard-Setting Method.

    ERIC Educational Resources Information Center

    Muijtjens, Arno M. M.; Kramer, Anneke W. M.; Kaufman, David M.; Van der Vleuten, Cees P. M.

    2003-01-01

    Developed a method to estimate the cutscore precisions for empirical standard-setting methods by using resampling. Illustrated the method with two actual datasets consisting of 86 Dutch medical residents and 155 Canadian medical students taking objective structured clinical examinations. Results show the applicability of the method. (SLD)
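
    The resampling idea can be illustrated with a borderline-group cutscore: resample examinees with replacement, recompute the cutscore each time, and take the standard deviation of the bootstrap replicates as the precision estimate. All data below are simulated placeholders, not the Dutch or Canadian datasets:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated OSCE data: percentage scores plus a global pass/borderline/fail
# judgement per examinee (borderline-group standard setting).
scores = rng.normal(65, 10, 150)
rating = rng.choice(["fail", "borderline", "pass"], 150, p=[0.2, 0.3, 0.5])

def cutscore(scores, rating):
    """Borderline-group cutscore: mean score of 'borderline' examinees."""
    return scores[rating == "borderline"].mean()

# Nonparametric bootstrap: resample examinees with replacement and
# recompute the cutscore; its spread estimates the cutscore precision.
n = len(scores)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(cutscore(scores[idx], rating[idx]))
se = float(np.std(boot, ddof=1))
print(round(cutscore(scores, rating), 2), round(se, 2))
```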

  3. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
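
    In the nonparametric bootstrap, examinees are resampled directly from the observed score distributions; a parametric bootstrap would instead draw from fitted (e.g., smoothed) distributions. A toy nonparametric version for a single score point, using simulated score data rather than the Iowa Tests:

```python
import numpy as np

rng = np.random.default_rng(1)

def equate(x, scores_x, scores_y):
    """Equipercentile equating: map score x on form X to the form-Y score
    with the same percentile rank."""
    p = np.mean(scores_x <= x)
    return np.quantile(scores_y, p)

# Hypothetical test forms; form Y is slightly harder (shifted down).
sx = rng.normal(50, 10, 1000)
sy = rng.normal(47, 10, 1000)

# Nonparametric bootstrap standard error of the equated score at x = 55.
boot = [equate(55, rng.choice(sx, 1000), rng.choice(sy, 1000))
        for _ in range(500)]
print(np.mean(boot), np.std(boot, ddof=1))   # equated score and its SE
```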

  4. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both joint and marginal density estimation. In doing so, we do not assume any form of the marginal or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problem. We compare our method to the kernel density method.

  5. Feasibility study of a novel method for real-time aerodynamic coefficient estimation

    NASA Astrophysics Data System (ADS)

    Gurbacki, Phillip M.

    In this work, a feasibility study of a novel technique for the real-time identification of uncertain nonlinear aircraft aerodynamic coefficients has been conducted. The major objective of this paper is to investigate the feasibility of a system for parameter identification in a real-time flight environment. This system should be able to calculate aerodynamic coefficients and derivative information from typical pilot inputs while ensuring robust, stable, and rapid convergence. The parameter estimator investigated is based upon the nonlinear sliding mode control scheme; one of the main advantages of the sliding mode estimator is its ability to guarantee stable and robust convergence. Stable convergence is ensured by choosing a sliding surface and function that satisfy the Lyapunov stability criteria. After a proper sliding surface has been chosen, the nonlinear equations of motion for an F-16 aircraft are substituted into the sliding surface, yielding an estimator capable of identifying a single aircraft parameter. Multiple sliding surfaces are then developed, one for each of the flight parameters to be identified. Sliding surfaces and parameter estimators have been developed and simulated for the pitching moment, lift force, and drag force coefficients of the F-16 aircraft. Comparing the estimated coefficients with the reference coefficients shows rapid and stable convergence for a variety of pilot inputs. Starting with simple doublet and sine-wave commands, followed by more complicated continuous pilot inputs, the estimated aerodynamic coefficients have been shown to match the actual coefficients with a high degree of accuracy. This estimator is also shown to be superior to model-reference or adaptive estimators: it can handle positive and negative estimated parameters and control inputs while guaranteeing Lyapunov stability during convergence. Accurately estimating these aerodynamic parameters in real-time during a flight is essential
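
    As a much-simplified stand-in for the sliding-mode design, the sketch below estimates a single unknown parameter of a first-order plant with a Lyapunov-based gradient adaptation law; the plant, gains, and input signal are invented and far simpler than the F-16 equations of motion:

```python
import numpy as np

# Toy first-order plant x' = -x + theta*u with unknown effectiveness theta.
# A Lyapunov-based gradient law (a simpler cousin of the sliding-mode
# estimator) adapts theta_hat; all gains and signals here are invented.
dt, T, theta, gamma = 0.001, 30.0, 2.0, 5.0
x = x_hat = theta_hat = 0.0
t = 0.0
for _ in range(int(T / dt)):
    u = np.sin(t) + 0.5 * np.sin(3 * t)          # persistently exciting input
    e = x - x_hat                                # observer error
    x += dt * (-x + theta * u)                   # plant (Euler step)
    x_hat += dt * (-x_hat + theta_hat * u + 2.0 * e)   # observer
    theta_hat += dt * gamma * e * u              # adaptation law
    t += dt
print(theta_hat)                                 # approaches theta = 2.0
```

    The Lyapunov function V = e^2/2 + (theta - theta_hat)^2/(2*gamma) has V' = -3e^2 <= 0 here, which is the same style of stability argument the paper applies to its sliding surfaces.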

  6. A Simple Joint Estimation Method of Residual Frequency Offset and Sampling Frequency Offset for DVB Systems

    NASA Astrophysics Data System (ADS)

    Kwon, Ki-Won; Cho, Yongsoo

    This letter presents a simple joint estimation method for the residual frequency offset (RFO) and the sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.

  7. A method for estimating and removing streaking artifacts in quantitative susceptibility mapping.

    PubMed

    Li, Wei; Wang, Nian; Yu, Fang; Han, Hui; Cao, Wei; Romero, Rebecca; Tantiwongkosi, Bundhit; Duong, Timothy Q; Liu, Chunlei

    2015-03-01

    Quantitative susceptibility mapping (QSM) is a novel MRI method for quantifying tissue magnetic property. In the brain, it reflects the molecular composition and microstructure of the local tissue. However, susceptibility maps reconstructed from single-orientation data still suffer from streaking artifacts which obscure structural details and small lesions. We propose and have developed a general method for estimating streaking artifacts and subtracting them from susceptibility maps. Specifically, this method uses a sparse linear equation and least-squares (LSQR)-algorithm-based method to derive an initial estimation of magnetic susceptibility, a fast quantitative susceptibility mapping method to estimate the susceptibility boundaries, and an iterative approach to estimate the susceptibility artifact from ill-conditioned k-space regions only. With a fixed set of parameters for the initial susceptibility estimation and subsequent streaking artifact estimation and removal, the method provides an unbiased estimate of tissue susceptibility with negligible streaking artifacts, as compared to multi-orientation QSM reconstruction. This method allows for improved delineation of white matter lesions in patients with multiple sclerosis and small structures of the human brain with excellent anatomical details. The proposed methodology can be extended to other existing QSM algorithms.
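
    The first stage above relies on LSQR, which solves a least-squares problem iteratively; capping the iteration count acts as regularisation for ill-conditioned systems. A toy 1-D analogue using SciPy's solver (the operator and "susceptibility" profile are invented stand-ins for the actual dipole inversion):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lsqr

# Toy ill-conditioned problem standing in for the dipole inversion:
# recover x from b = A x, where A is nearly singular in some directions.
n = 200
A = diags([1.0, 0.5, 0.5], [0, -1, 1], shape=(n, n)).tocsr()
x_true = np.zeros(n)
x_true[80:120] = 1.0                       # a block of "susceptibility"
b = A @ x_true + 1e-3 * np.random.default_rng(0).normal(size=n)

# lsqr iterates toward the least-squares solution; stopping early
# (iter_lim) regularises, loosely analogous to the initial QSM estimate.
x_est = lsqr(A, b, iter_lim=50)[0]
print(np.linalg.norm(A @ x_est - b), np.linalg.norm(b))
```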

  8. Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism

    PubMed Central

    Katuwal, Gajendra J.; Baum, Stefi A.; Cahill, Nathan D.; Dougherty, Chase C.; Evans, Eli; Evans, David W.; Moore, Gregory J.; Michael, Andrew M.

    2016-01-01

    Previous studies applying automatic preprocessing methods on Structural Magnetic Resonance Imaging (sMRI) report inconsistent neuroanatomical abnormalities in Autism Spectrum Disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). T1-weighted sMRIs of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset were processed with three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except TIVSPM vs. TIVFS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences was opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than the ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, sub-cortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the

  9. A new method for estimating the number of non-differentially expressed genes.

    PubMed

    Wu, J; Liu, C Y; Chen, W T; Ma, W Y; Ding, Y

    2016-01-01

    Control of the false discovery rate is a statistical method that is widely used when identifying differentially expressed genes in high-throughput sequencing assays. It is often calculated using an adaptive linear step-up procedure in which the number of non-differentially expressed genes should be estimated accurately. In this paper, we discuss the estimation of this parameter and point out defects in the original estimation method. We also propose a new estimation method and provide its error estimate. We compared the estimation results from the two methods in a simulation study by computing the mean, standard deviation, range, and root-mean-square error of each. The results revealed that there was little difference in the mean between the two methods, but the standard deviation, range, and root-mean-square error obtained using the new method were much smaller than those produced by the original method, which indicates that the new method is more accurate and robust. Furthermore, we used real microarray data to verify this conclusion. Finally, we provide a suggestion for analyzing differentially expressed genes using statistical methods. PMID:27051004
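
    A common estimator of this kind counts p-values above a threshold lambda, where mostly null genes remain, and rescales by 1/(1 - lambda). A sketch of that Storey-type estimator on simulated p-values (the paper's own refinement differs; lambda = 0.5 and the simulation settings are illustrative):

```python
import numpy as np

def m0_estimate(pvals, lam=0.5):
    """Estimate the number of non-differentially expressed genes (m0)
    from the p-values above lambda (Storey-type estimator)."""
    pvals = np.asarray(pvals)
    return np.sum(pvals > lam) / (1.0 - lam)

# Simulated experiment: 9000 null genes (uniform p-values) and
# 1000 differentially expressed genes (p-values piled near zero).
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(0, 1, 9000), rng.beta(0.5, 20, 1000)])
print(m0_estimate(p))        # close to the true m0 = 9000
```

    The adaptive linear step-up procedure then runs Benjamini-Hochberg with m replaced by this m0 estimate.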

  10. Methods for estimating monthly streamflow characteristics at ungaged sites in western Montana

    USGS Publications Warehouse

    Parrett, Charles; Cartier, Kenn D.

    1989-01-01

    Three methods were developed for estimating monthly streamflow characteristics in western Montana. The first method, based on multiple-regression equations, relates monthly streamflow characteristics to various basin and climatic variables. Standard errors range from 43 to 107%. The equations are generally not applicable to streams that receive or lose water as a result of geology or that have appreciable upstream storage or diversions. The second method, based on regression equations, relates monthly streamflow characteristics to channel width. Standard errors range from 41 to 111%. The equations are generally not applicable to streams with exposed bedrock, with braided or sand channels, or with recent alterations. The third method requires 12 once-monthly streamflow measurements at an ungaged site. These measurements are correlated with concurrent flows at a nearby gaged site, and the resulting relation is used to estimate the required monthly streamflow characteristic at the ungaged site. Standard errors range from 19 to 92%. Although generally substantially more reliable than the first or second method, this method may be unreliable if the measurement site and the gage site are not hydrologically similar. A procedure for weighting individual estimates, based on the variance and degree of independence of the individual estimating methods, was also developed. Standard errors range from 15 to 43% when all three methods are used. The weighted-average estimates from all three methods are generally substantially more reliable than any of the individual estimates. (USGS)
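
    If the three estimates were treated as independent, the variance-based part of the weighting could be sketched as inverse-variance averaging (the report also weights by degree of independence, which this simplification omits; the flow values and standard errors are invented):

```python
import numpy as np

def weighted_estimate(estimates, std_errors):
    """Combine independent estimates, weighting each by inverse variance."""
    est = np.asarray(estimates, float)
    w = 1.0 / np.asarray(std_errors, float) ** 2
    combined = np.sum(w * est) / np.sum(w)
    combined_se = np.sqrt(1.0 / np.sum(w))   # valid only if independent
    return combined, combined_se

# Hypothetical July mean flows (cfs) from the three methods:
# basin regression, channel width, and concurrent measurements.
est, se = weighted_estimate([120.0, 150.0, 135.0], [60.0, 70.0, 30.0])
print(round(est, 1), round(se, 1))   # pulled toward the most precise method
```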

  11. Methods to estimate the between-study variance and its uncertainty in meta-analysis.

    PubMed

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia

    2016-03-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has been long challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
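
    The Paule-Mandel estimator mentioned above chooses tau^2 so that the generalised Q statistic equals its expectation, k - 1. A compact sketch with invented study data:

```python
import numpy as np
from scipy.optimize import brentq

def paule_mandel(y, v, tol=1e-8):
    """Paule-Mandel estimate of the between-study variance tau^2:
    find tau2 such that the generalised Q statistic equals k - 1."""
    k = len(y)

    def q_gen(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2) - (k - 1)

    if q_gen(0.0) <= 0:          # heterogeneity no larger than chance
        return 0.0
    return brentq(q_gen, 0.0, 100.0 * np.var(y), xtol=tol)

# Hypothetical meta-analysis: study effects y, within-study variances v.
y = np.array([0.10, 0.30, 0.35, 0.65, 0.45, 0.15])
v = np.array([0.030, 0.020, 0.015, 0.040, 0.025, 0.020])
print(paule_mandel(y, v))
```

    The Q-profile confidence interval comes from the same statistic, profiled against chi-squared quantiles instead of its expectation.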

  12. Statistical Methods for Estimating the Uncertainty in the Best Basis Inventories

    SciTech Connect

    WILMARTH, S.R.

    2000-09-07

    This document describes the statistical methods used to determine sample-based uncertainty estimates for the Best Basis Inventory (BBI). For each waste phase, the equation for the inventory of an analyte in a tank is Inventory (kg or Ci) = Concentration x Density x Waste Volume. The total inventory is the sum of the inventories in the different waste phases. Using tank sample data, statistical methods are used to obtain estimates of the mean concentration of an analyte, the density of the waste, and their standard deviations. The volumes of waste in the different phases, and their standard deviations, are estimated based on other types of data. The three estimates are multiplied to obtain the inventory estimate. The standard deviations are combined to obtain a standard deviation of the inventory. The uncertainty estimate for the BBI is the approximate 95% confidence interval on the inventory.
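
    For a single phase, combining the standard deviations reduces to first-order propagation of relative variances through the product. A sketch with invented numbers (the actual BBI procedure may differ in detail):

```python
import numpy as np

def inventory_ci(conc, sd_c, dens, sd_d, vol, sd_v):
    """Inventory = concentration x density x volume, with an approximate
    95% confidence interval from first-order propagation of the relative
    variances (assumes the three factors are independent)."""
    inv = conc * dens * vol
    rel_var = (sd_c / conc) ** 2 + (sd_d / dens) ** 2 + (sd_v / vol) ** 2
    sd_inv = inv * np.sqrt(rel_var)
    return inv, (inv - 1.96 * sd_inv, inv + 1.96 * sd_inv)

# Hypothetical single-phase values: mean and SD for concentration,
# density, and waste volume.
inv, (lo, hi) = inventory_ci(1.2, 0.1, 1.4, 0.05, 2000.0, 100.0)
print(round(inv, 1), round(lo, 1), round(hi, 1))
```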

  13. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.

  14. A TRMM Microwave Radiometer Rain Rate Estimation Method with Convective and Stratiform Discrimination

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Weinman, J. A.; Dalu, G.

    1999-01-01

    cases is on average about 15%. Taking advantage of this ability of our retrieval method, one could derive the latent heat input into the atmosphere over the 760 km wide swath of the TMI radiometer in the tropics.

  15. Estimating the equilibrium formation temperature by a curve-fitting method and its problems

    SciTech Connect

    Kenso Takai; Masami Hyodo; Shinji Takasugi

    1994-01-20

    Determination of the true formation temperature from measured bottom-hole temperatures is important for geothermal reservoir evaluation after completion of well drilling. For estimation of the equilibrium formation temperature, we studied a non-linear least-squares fitting method adapting the Middleton Model (Chiba et al., 1988). It was pointed out that this method is applicable as a simple and relatively reliable method for estimation of the equilibrium formation temperature after drilling. As a next step, we are studying the estimation of the equilibrium formation temperature from bottom-hole temperature data measured by MWD (measurement-while-drilling) systems. In this study, we have evaluated the applicability of the non-linear least-squares curve-fitting method and of the numerical simulator (GEOTEMP2) for estimating the equilibrium formation temperature while drilling.
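
    The non-linear least-squares step can be illustrated by fitting a simple exponential build-up toward the equilibrium temperature (this toy model and the shut-in data are invented; the Middleton model itself has a different functional form):

```python
import numpy as np
from scipy.optimize import curve_fit

def buildup(t, t_eq, dt0, tau):
    """Exponential temperature recovery toward equilibrium after
    circulation stops (a simplification of the build-up behaviour)."""
    return t_eq - dt0 * np.exp(-t / tau)

# Hypothetical bottom-hole temperatures (deg C) at shut-in times (hours).
t_obs = np.array([2.0, 4.0, 8.0, 12.0, 18.0, 24.0])
T_obs = np.array([151.0, 163.0, 178.0, 185.0, 190.0, 192.5])

popt, _ = curve_fit(buildup, t_obs, T_obs, p0=(200.0, 60.0, 8.0))
print(round(popt[0], 1))   # estimated equilibrium formation temperature
```

    The extrapolation beyond the last measurement is exactly where the paper's "problems" arise: short shut-in records constrain t_eq weakly.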

  16. New Method for Estimation of Aeolian Sand Transport Rate Using Ceramic Sand Flux Sensor (UD-101)

    PubMed Central

    Udo, Keiko

    2009-01-01

    In this study, a new method for the estimation of aeolian sand transport rate was developed; the method employs a ceramic sand flux sensor (UD-101). UD-101 detects wind-blown sand impacting on its surface. The method was devised by considering the results of wind tunnel experiments that were performed using a vertical sediment trap and the UD-101. Field measurements to evaluate the estimation accuracy during the prevalence of unsteady winds were performed on a flat backshore. The results showed that aeolian sand transport rates estimated using the developed method were of the same order as those estimated using the existing method for high transport rates, i.e., for transport rates greater than 0.01 kg m⁻¹ s⁻¹. PMID:22291553

  17. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  18. Summary of methods for calculating dynamic lateral stability and response and for estimating aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Campbell, John P; Mckinney, Marion O

    1952-01-01

    A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
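
    The period and damping of the lateral motions follow from the eigenvalues of the linearised state matrix. A minimal sketch using an illustrative system in modal form (real work would assemble the dimensional matrix from the estimated stability derivatives):

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative lateral system in modal form: a Dutch-roll-like oscillatory
# pair (-0.3 +/- 2.0j), roll subsidence (-8.4), and a slow spiral mode
# (-0.01). The numbers are invented for demonstration.
A = block_diag(np.array([[-0.3, 2.0], [-2.0, -0.3]]), -8.4, -0.01)

eig = np.linalg.eigvals(A)
osc = eig[np.abs(eig.imag) > 1e-9][0]       # one of the complex pair
period = 2 * np.pi / abs(osc.imag)          # period of the oscillation [s]
t_half = np.log(2) / -osc.real              # time to damp to half amplitude
print(round(period, 2), round(t_half, 2))
```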

  19. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays

    PubMed Central

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301
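
    The search-free flavour of such methods can be illustrated with a single-source, single-array simplification: for a half-wavelength ULA, the averaged phase progression between adjacent elements gives the angle directly, with no eigendecomposition or spectral peak search. This is not the AMM algorithm itself, just a baseline sketch with invented scenario parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# One narrowband source at 25 deg on an 8-element half-wavelength ULA.
theta = np.deg2rad(25.0)
m = np.arange(8)
snapshots = 200
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
x = np.outer(np.exp(1j * np.pi * m * np.sin(theta)), s)
x += 0.1 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))

# Average the phase progression between adjacent elements: the mean of
# x[m+1] * conj(x[m]) has phase pi * sin(theta).
z = np.mean(x[1:] * np.conj(x[:-1]))
theta_hat = np.arcsin(np.angle(z) / np.pi)
print(np.rad2deg(theta_hat))                # close to 25 deg
```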


  1. Estimation of design sea ice thickness with maximum entropy distribution by particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng

    2016-06-01

    The maximum entropy distribution, which encompasses various recognized theoretical distributions, is a good candidate for estimating the design thickness of sea ice. The method of moments and the empirical curve-fitting method are commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviations introduced by the simplifications made in other methods. We conducted a case study fitting the hindcasted thickness of the sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods for the maximum entropy distribution. All methods implemented in this study pass the K-S test at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve-fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend using the particle swarm optimization method for offshore structures mainly influenced by sea ice in winter, but the empirical curve-fitting method to reduce cost in the design of temporary and economic buildings.
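
    A minimal particle swarm optimiser is a few dozen lines. The sketch below fits a two-parameter Weibull distribution (a stand-in for the maximum entropy distribution, whose density is more involved) to simulated ice-thickness data by maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(7)

def pso_minimize(f, bounds, n_particles=30, iters=200):
    """Minimal particle swarm optimiser (global-best topology)."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Simulated ice thickness: Weibull with shape k = 2.0, scale lam = 1.5.
data = rng.weibull(2.0, 500) * 1.5

def negloglik(p):
    k, lam = p
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(data / lam)
                   - (data / lam) ** k)

k_hat, lam_hat = pso_minimize(negloglik, [(0.5, 5.0), (0.5, 5.0)])
print(round(k_hat, 2), round(lam_hat, 2))   # near (2.0, 1.5)
```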

  2. Handbook for cost estimating. A method for developing estimates of costs for generic actions for nuclear power plants

    SciTech Connect

    Ball, J.R.; Cohen, S.; Ziegler, E.Z.

    1984-10-01

    This document provides overall guidance to assist the NRC in preparing the types of cost estimates required by the Regulatory Analysis Guidelines and to assist in the assignment of priorities in resolving generic safety issues. The Handbook presents an overall cost model that allows the cost analyst to develop a chronological series of activities needed to implement a specific regulatory requirement throughout all applicable commercial LWR power plants and to identify the significant cost elements for each activity. References to available cost data are provided along with rules of thumb and cost factors to assist in evaluating each cost element. A suitable code-of-accounts data base is presented to assist in organizing and aggregating costs. Rudimentary cost analysis methods are described to allow the analyst to produce a constant-dollar, lifetime cost for the requirement. A step-by-step example cost estimate is included to demonstrate the overall use of the Handbook.

  3. IN-RESIDENCE, MULTIPLE ROUTE EXPOSURES TO CHLORPYRIFOS AND DIAZINON ESTIMATED BY INDIRECT METHOD MODELS

    EPA Science Inventory

    One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure dist...

  4. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  5. Bayesian and Frequentist Methods for Estimating Joint Uncertainty of Freundlich Adsorption Isotherm Fitting Parameters

    EPA Science Inventory

    In this paper, we present methods for estimating Freundlich isotherm fitting parameters (K and N) and their joint uncertainty, which have been implemented into the freeware software platforms R and WinBUGS. These estimates were determined by both Frequentist and Bayesian analyse...
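
    A Frequentist version of the fit is a short curve_fit call, with the parameter covariance matrix carrying the joint uncertainty; the batch-sorption data below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, k, n):
    """Freundlich isotherm: sorbed concentration q = K * C**N."""
    return k * c ** n

# Hypothetical data: aqueous concentration C (mg/L) vs sorbed q (mg/kg).
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q = np.array([6.1, 10.2, 16.5, 31.0, 50.5, 83.0])

popt, pcov = curve_fit(freundlich, c, q, p0=(10.0, 0.7))
perr = np.sqrt(np.diag(pcov))        # marginal standard errors of K and N
print(popt, perr)
```

    A Bayesian analysis would instead sample the joint posterior of K and N (e.g. in WinBUGS) rather than rely on the asymptotic covariance.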

  6. Validation tests of an improved kernel density estimation method for identifying disease clusters

    NASA Astrophysics Data System (ADS)

    Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra

    2012-07-01

    The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
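The adaptive-filter idea above can be illustrated with a short sketch (our own simplification, not the authors' implementation; all names are hypothetical): the filter around a map location grows until it covers a minimum population, which holds the standard error of the estimated rate roughly constant across the map.

```python
import math

def adaptive_rate(points, center, min_pop):
    """Smoothed disease rate at `center` using a spatially adaptive filter.

    `points` is a list of (x, y, population, cases) tuples; the filter
    expands outward until the covered population reaches `min_pop`, so
    sparsely populated areas get wider filters and comparable precision.
    """
    ranked = sorted(points,
                    key=lambda p: math.hypot(p[0] - center[0],
                                             p[1] - center[1]))
    pop = cases = 0
    for _, _, p, c in ranked:
        pop += p
        cases += c
        if pop >= min_pop:  # enough population for a stable rate estimate
            break
    return cases / pop if pop else float("nan")
```

Repeating this at every grid point yields a rate map whose precision is roughly uniform, at the cost of spatially varying resolution.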

  7. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    SciTech Connect

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  8. Joint estimation of TOA and DOA in IR-UWB system using a successive propagator method

    NASA Astrophysics Data System (ADS)

    Wang, Fangqiu; Zhang, Xiaofei; Wang, Chenghua; Zhou, Shengkui

    2015-10-01

    Impulse radio ultra-wideband (IR-UWB) ranging and positioning require accurate estimation of time-of-arrival (TOA) and direction-of-arrival (DOA). With a two-antenna receiver, both TOA and DOA can be estimated via the two-dimensional (2D) propagator method (PM), whose 2D spectral peak search, however, incurs high computational complexity. This paper proposes a successive PM algorithm for joint TOA and DOA estimation in IR-UWB systems that avoids the 2D spectral peak search. The proposed algorithm first obtains initial TOA estimates for the two antennas from the propagation matrix, then successively applies one-dimensional (1D) local searches to refine the TOA estimates, and finally obtains the DOA estimate from the difference between the TOAs at the two antennas. Requiring only 1D local searches, the algorithm avoids the high computational cost of the 2D-PM algorithm. Furthermore, it obtains automatically paired parameters and achieves better joint TOA and DOA estimation performance than the conventional PM algorithm, the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and the matrix pencil algorithm, while its estimation performance is very close to that of the 2D-PM algorithm. We also derive the mean square error of the TOA and DOA estimates of the proposed algorithm and the Cramer-Rao bound for TOA and DOA estimation. The simulation results verify the usefulness of the proposed algorithm.
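The last step of the pipeline, turning the inter-antenna TOA difference into a DOA, reduces to simple geometry for a two-element array. A minimal sketch (our own naming; the paper's successive 1D searches are not reproduced here):

```python
import math

C = 3e8  # assumed propagation speed, m/s

def doa_from_toa(toa1, toa2, spacing):
    """DOA (radians from broadside) from the TOA difference between two
    antennas separated by `spacing` metres: sin(theta) = c * dt / d."""
    s = C * (toa2 - toa1) / spacing
    s = max(-1.0, min(1.0, s))  # clamp against noise-induced overshoot
    return math.asin(s)
```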

  9. Estimating Small-area Populations by Age and Sex Using Spatial Interpolation and Statistical Inference Methods

    SciTech Connect

    Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra L; Bright, Eddie A; Coleman, Phil R

    2006-01-01

    The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.
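Of the three interpolation methods compared, areal weighting is the simplest: each source zone's count is allocated to the target area in proportion to the fraction of the zone's area that the target overlaps. A minimal sketch (names are ours):

```python
def areal_weighting(source_zones, overlaps):
    """Areal-weighting (AW) interpolation: each source zone contributes
    its count times the fraction of its area shared with the target.
    source_zones: zone id -> (count, zone_area);
    overlaps: zone id -> area shared with the target."""
    total = 0.0
    for zone, shared in overlaps.items():
        count, area = source_zones[zone]
        total += count * (shared / area)
    return total
```

AW assumes the count is spread uniformly within each source zone, which is exactly the assumption the kriging and pycnophylactic methods relax.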

  10. Linear least-squares method for unbiased estimation of T1 from SPGR signals.

    PubMed

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J; Pierpaoli, Carlo

    2008-08-01

    The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images acquired with different flip angles and/or repetition times (TRs). The function relating signal intensity to flip angle and TR is nonlinear; however, a linear form proposed 30 years ago is currently widely used. Here we show that this linear method provides T1 estimates that have similar precision but lower accuracy than those obtained with a nonlinear method. We also show that T1 estimated by the linear method is biased due to improper accounting for noise in the fitting. This bias can be significant for clinical SPGR images; for example, T1 estimated in brain tissue (800 ms < T1 < 1600 ms) can be overestimated by 10% to 20%. We propose a weighting scheme that correctly accounts for the noise contribution in the fitting procedure. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy of the estimated T1 from the widely used linear, the proposed weighted-uncertainty linear, and the nonlinear methods. We show that the linear method with weighted uncertainties reduces the bias of the linear method, providing T1 estimates comparable in precision and accuracy to those of the nonlinear method while significantly reducing computation time. PMID:18666108
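The linearization in question can be sketched as follows: plotting S/sin(alpha) against S/tan(alpha) turns the SPGR signal equation into a straight line whose slope is E1 = exp(-TR/T1). The unweighted least-squares version below is the widely used form the authors show to be biased in the presence of noise (naming is ours):

```python
import math

def t1_linear_fit(signals, flip_angles_deg, tr_ms):
    """Linearized SPGR fit: S/sin(a) = E1 * S/tan(a) + S0*(1 - E1),
    so an ordinary least-squares slope gives E1 = exp(-TR/T1) and
    T1 = -TR / ln(slope). Unweighted, hence biased under noise."""
    xs = [s / math.tan(math.radians(a))
          for s, a in zip(signals, flip_angles_deg)]
    ys = [s / math.sin(math.radians(a))
          for s, a in zip(signals, flip_angles_deg)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -tr_ms / math.log(slope)
```

On noise-free signals this recovers T1 exactly; the paper's point is that with noisy signals the unweighted fit needs the proposed weighting to stay unbiased.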

  11. A novel method for estimating the number of species within a region

    PubMed Central

    Shtilerman, Elad; Thompson, Colin J.; Stone, Lewi; Bode, Michael; Burgman, Mark

    2014-01-01

    Ecologists are often required to estimate the number of species in a region or designated area. A number of diversity indices are available for this purpose; they are based on sampling the area using quadrats or other means and estimating the total number of species from these samples. In this paper, a novel theory and method for estimating the number of species is developed. The theory involves the use of the Laplace method for approximating asymptotic integrals. The method is shown to be successful in tests on randomly simulated datasets. In addition, several real survey datasets are tested, including forests that contain a large number (tens to hundreds) of tree species and an aquatic system with a large number of fish species. The method is shown to give accurate results and in almost all cases is found to be superior to existing tools for estimating diversity. PMID:24500169

  12. A simple method to estimate threshold friction velocity of wind erosion in the field

    NASA Astrophysics Data System (ADS)

    Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Munson, Seth M.; Miller, Mark E.

    2010-05-01

    This study provides a fast and easy-to-apply method to estimate threshold friction velocity (TFV) of wind erosion in the field. Wind tunnel experiments and a variety of ground measurements including air gun, pocket penetrometer, torvane, and roughness chain were conducted in Moab, Utah and cross-validated in the Mojave Desert, California. Patterns between TFV and ground measurements were examined to identify the optimum method for estimating TFV. The results show that TFVs were best predicted using the air gun and penetrometer measurements in the Moab sites. This empirical method, however, systematically underestimated TFVs in the Mojave Desert sites. Further analysis showed that TFVs in the Mojave sites can be satisfactorily estimated with a correction for rock cover, which is presumably the main cause of the underestimation of TFVs. The proposed method may also be applied to estimate TFVs in environments where other non-erodible elements such as postharvest residuals are found.

  13. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
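The recursion can be written directly in its Kalman form, with the regression coefficients as a constant state and the regressor row as a time-varying observation matrix. A schematic pure-Python sketch (names are ours):

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step in Kalman-filter form: the state
    (the coefficient vector theta) is constant, and each observation is
    y = x . theta + noise with regressor row x.
    P is the coefficient covariance; lam is a forgetting factor."""
    n = len(theta)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                       # Kalman gain
    err = y - sum(x[i] * theta[i] for i in range(n))  # innovation
    theta = [theta[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)]
         for i in range(n)]
    return theta, P
```

Initializing P with a large diagonal plays the role of a diffuse prior; with lam = 1 the recursion reproduces ordinary least squares as observations accumulate.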

  14. Experimental parameter estimation method for nonlinear viscoelastic composite material models: an application on arterial tissue.

    PubMed

    Sunbuloglu, Emin; Bozdag, Ergun; Toprak, Tuncer; Islak, Civan

    2013-01-01

    This study aims to establish a method of experimental parameter estimation for a large-deforming, nonlinear viscoelastic, continuous fibre-reinforced composite material model. Specifically, arterial tissue was investigated during the experimental research and parameter estimation studies, owing to the medical, scientific and socio-economic importance of soft tissue research. Using analytical formulations for thick-walled cylindrical tubes under combined inflation/extension/torsion, in vitro experiments were carried out on fresh sheep arterial segments, and parameter estimation procedures were applied to the experimental data. Model restrictions were pointed out using the outcomes of parameter estimation, and directions for further study are discussed.

  15. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  16. A review and comparison of some commonly used methods of estimating petroleum resource availability

    SciTech Connect

    Herbert, J.H.

    1982-10-01

    The purpose of this pedagogical report is to elucidate the characteristics of the principal methods of estimating the petroleum resource base. Other purposes are to indicate the logical similarities and data requirements of these different methods. The report should serve as a guide for the application and interpretation of the different methods.

  17. Site Effects Estimation by a Transfer-Station Generalized Inversion Method

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbo; Yu, Xiangwei

    2016-04-01

    Site effect is one of the essential factors in characterizing strong ground motion as well as in earthquake engineering design. In this study, the generalized inversion technique (GIT) is applied to estimate site effects, and the GIT is modified to improve its analytical ability. The GIT needs a reference station as a standard. Ideally the reference station is located at a rock site, and its site effect is considered to be a constant. For the same earthquake, the record spectrum of a station of interest is divided by that of the reference station, eliminating the source term; thus the site effects and the attenuation can be acquired. In the GIT process, the amount of earthquake data available for analysis is limited to that recorded by the reference station, and the stations whose site effects can be estimated are restricted to those that recorded events in common with the reference station. To overcome this limitation, a modified GIT is put forward in this study, namely the transfer-station generalized inversion method (TSGI). Compared with the GIT, this modification enlarges the data set and increases the number of stations whose site effects can be analyzed, which makes the solution much more stable. To verify the GIT results, a non-reference method, the genetic algorithm (GA), is applied to estimate absolute site effects. On April 20, 2013, an earthquake of magnitude MS 7.0 occurred in the Lushan region, China, followed by several hundred aftershocks with ML<3.0. The purpose of this paper is to investigate the site effects and Q factor for this area based on aftershock strong-motion records from the China National Strong Motion Observation Network System. Our results show that when the TSGI is applied instead of the GIT, the total number of events used in the inversion increases from 31 to 54 and the total number of stations whose site effect can be estimated

  18. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.

    PubMed

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments.

  19. Method for estimating crack-extension resistance curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1980-01-01

    A method is presented for estimating the crack extension resistance curve (R curve) from residual strength (maximum load against initial crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information is used to estimate the failure loads of more complicated structures. Numerical differentiation of the residual strength data is required, and the problems that it may present are discussed.
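The numerical differentiation the method requires can be as simple as central differences on the residual strength data, though, as the abstract notes, differentiation amplifies measurement noise. An illustrative sketch (names are ours):

```python
def central_derivative(xs, ys):
    """Central-difference estimate of dy/dx at the interior points.
    Differentiating measured data amplifies noise, so smoothing the
    residual strength data first is usually advisable."""
    return [(ys[i + 1] - ys[i - 1]) / (xs[i + 1] - xs[i - 1])
            for i in range(1, len(xs) - 1)]
```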

  20. Monitoring Hawaiian waterbirds: evaluation of sampling methods to produce reliable estimates

    USGS Publications Warehouse

    Camp, Richard J.; Brinck, Kevin W.; Paxton, Eben H.; Leopold, Christina

    2014-01-01

    We conducted field trials to assess several different methods of estimating the abundance of four endangered Hawaiian waterbirds: the Hawaiian duck (Anas wyvilliana), Hawaiian coot (Fulica alai), Hawaiian common moorhen (Gallinula chloropus sandvicensis) and Hawaiian stilt (Himantopus mexicanus knudseni). At two sites on Oʽahu, James Campbell National Wildlife Refuge and Hamakua Marsh, we conducted field trials where both solitary and paired observers counted birds and recorded the distance to observed birds. We then compared the results of estimates using the existing simple count, distance estimates from both point- and line-transect surveys, paired observer count estimates, bounded count, and Overton estimators. Comparing recorded covariate values among simultaneous observations revealed inconsistency between observers. We showed that the variation among simple counts means the current direct count survey, even if interpreted as a proportional index of abundance, incorporates many sources of uncertainty that are not taken into account. Analysis revealed violation of model assumptions that allowed us to discount distance-based estimates as a viable estimation technique. Among the remaining methods, point counts by paired observers produced the most precise estimates while meeting model assumptions. We present an example sampling protocol using paired observer counts. Finally, we suggest further research that will improve abundance estimates of Hawaiian waterbirds.

  1. Design of a Direction-of-Arrival Estimation Method Used for an Automatic Bearing Tracking System

    PubMed Central

    Guo, Feng; Liu, Huawei; Huang, Jingchang; Zhang, Xin; Zu, Xingshui; Li, Baoqing; Yuan, Xiaobing

    2016-01-01

    In this paper, we introduce a sub-band direction-of-arrival (DOA) estimation method suitable for employment within an automatic bearing tracking system. Inspired by the magnitude-squared coherence (MSC), we extend the MSC to the sub-band and propose the sub-band magnitude-squared coherence (SMSC) to measure the coherence between the frequency sub-bands of wideband signals. Then, we design a sub-band DOA estimation method which chooses a sub-band from the wideband signals by SMSC for the bearing tracking system. The simulations demonstrate that the sub-band method has a good tradeoff between the wideband methods and narrowband methods in terms of the estimation accuracy, spatial resolution, and computational cost. The proposed method was also tested in the field environment with the bearing tracking system, which also showed a good performance. PMID:27455267

  3. Parameter estimation of copula functions using an optimization-based method

    NASA Astrophysics Data System (ADS)

    Abdi, Amin; Hassanzadeh, Yousef; Talatahari, Siamak; Fakheri-Fard, Ahmad; Mirabbasi, Rasoul

    2016-02-01

    Application of copulas can be useful for accurate multivariate frequency analysis of hydrological phenomena. Many copula functions exist, and several methods have been proposed for estimating their parameters. Since copula functions are mathematically complicated, estimating their parameters is a demanding task. In the present study, an optimization-based method (OBM) is proposed to obtain the parameters of copulas. The usefulness of the proposed method is illustrated on drought events. For this purpose, three commonly used copulas of the Archimedean family, namely the Clayton, Frank, and Gumbel copulas, are used to construct the joint probability distribution of drought characteristics at 60 gauging sites located in East-Azarbaijan province, Iran. The performance of the OBM was compared with two conventional methods, namely the method of moments and inference function for margins. The results illustrate the superiority of the OBM over the other considered methods for estimating the copula parameters.
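An optimization-based fit of this kind can be sketched for the Clayton copula: choose the parameter theta that minimizes the squared distance between the empirical copula of the rank-transformed data and the parametric copula. A coarse grid search stands in for a real optimizer, the data are assumed tie-free, and all names are ours:

```python
def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def fit_clayton(pairs, thetas=None):
    """Pick theta minimizing the squared distance between the empirical
    copula of the rank-transformed data and the Clayton copula."""
    n = len(pairs)
    # pseudo-observations from ranks (assumes no tied values)
    rx = {v: r for r, v in enumerate(sorted(x for x, _ in pairs), 1)}
    ry = {v: r for r, v in enumerate(sorted(y for _, y in pairs), 1)}
    uv = [(rx[x] / (n + 1), ry[y] / (n + 1)) for x, y in pairs]
    emp = [sum(1 for a, b in uv if a <= u and b <= v) / n for u, v in uv]
    thetas = thetas or [0.1 * k for k in range(1, 100)]
    def loss(t):
        return sum((clayton_cdf(u, v, t) - c) ** 2
                   for (u, v), c in zip(uv, emp))
    return min(thetas, key=loss)
```

Strongly dependent data drive the fitted theta toward large values, as expected for the Clayton family.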

  4. Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo

    2016-04-01

    Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. The arterial resistance was obtained for each subject from the arterial flow, a necessary step to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, the method appears to be computationally efficient for on-line, time-domain estimation of these parameters.

  5. Methods, approaches and data sources for estimating stocks of irregular migrants.

    PubMed

    Jandl, Michael

    2011-01-01

    This paper presents a comprehensive review of available methods for sizing irregular migrant populations as a particular group in the study of hidden populations. Based on the existing body of literature on the subject, a generic classification scheme is developed that divides existing estimation procedures into subcategories like “approaches”, “methods” and “estimation techniques”. For each of these categories, basic principles, methodical strengths and weaknesses, as well as practical problems, are identified and discussed with the use of existing examples. Special emphasis is placed on data requirements, data shortcomings and possible estimation biases. In addition, based on the empirical classification and quality assessment of country-specific estimates developed in the CLANDESTINO research project, the potential and requirements for replicating best practice models in other countries are explored. Finally, a number of conclusions on the appropriate design of estimation projects are offered.

  6. One-level prediction-A numerical method for estimating undiscovered metal endowment

    USGS Publications Warehouse

    McCammon, R.B.; Kork, J.O.

    1992-01-01

    One-level prediction has been developed as a numerical method for estimating undiscovered metal endowment within large areas. The method is based on a presumed relationship between a numerical measure of geologic favorability and the spatial distribution of metal endowment. Metal endowment within an unexplored area for which the favorability measure is greater than a favorability threshold level is estimated to be proportional to the area of that unexplored portion. The constant of proportionality is the ratio of the discovered endowment found within a suitably chosen control region, which has been explored, to the area of that explored region. In addition to the estimate of undiscovered endowment, a measure of the error of the estimate is also calculated. One-level prediction has been used to estimate the undiscovered uranium endowment in the San Juan basin, New Mexico, U.S.A. A subroutine to perform the necessary calculations is included. © 1992 Oxford University Press.
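The estimator itself is a one-line proportionality, which a sketch makes concrete (names are ours):

```python
def one_level_prediction(control_endowment, control_area, favorable_area):
    """Estimated undiscovered endowment of a favourable, unexplored area:
    its area times the endowment density (endowment / area) of an
    explored control region."""
    return (control_endowment / control_area) * favorable_area
```

For example, a control region with 600 t of discovered endowment over 300 km2 gives a density of 2 t/km2, so 150 km2 of favourable unexplored ground is estimated to hold 300 t.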

  7. A test of three methods for estimating stature from immature skeletal remains using long bone lengths.

    PubMed

    Cardoso, Hugo F V

    2009-01-01

    In this study, the accuracy of three methods for estimating the stature of children from long bone lengths was investigated. The sample consists of nine identified immature skeletons (seven males and two females) of known cadaver length, aged between 1 and 14 years. Results show that stature (cadaver length) is consistently underestimated by all three methods (from a minimum of 2.9 cm to a maximum of 19.3 cm). The femur/stature ratio provided the least accurate estimates of stature, and predictions were not significantly improved by the other two methods. Differences between true and estimated stature were also greatest when using the lengths of lower limb bones. Because the children in the study sample grew under less than optimal environmental conditions, compared with the children from whom the methods were developed, they are stunted and have proportionally shorter legs. This suggests that stature estimation methods are not universally applicable and that environmental differences within a population (e.g., socioeconomic status differences) or differing levels of modernization and social and economic development between nations are an important source of variation in the stature and body proportions of children. The fallibility of stature estimation methods, when they do not account for such variation, can be somewhat minimized if stature is estimated from the lengths of upper limb bones.

  8. Single Tracking Location Methods Suppress Speckle Noise in Shear Wave Velocity Estimation

    PubMed Central

    Elegbe, Etana C.; McAleavey, Stephen A.

    2014-01-01

    In ultrasound-based elastography methods, the estimation of shear wave velocity typically involves the tracking of speckle motion due to an applied force. The errors in the estimates of tissue displacement, and thus shear wave velocity, are generally attributed to electronic noise and decorrelation due to physical processes. We present our preliminary findings on another source of error, namely, speckle-induced bias in phase estimation. We find that methods that involve tracking in a single location, as opposed to multiple locations, are less sensitive to this source of error since the measurement is differential in nature and cancels out speckle-induced phase errors. PMID:23493611

  9. An easy field method for estimating the abundance of culicid larval instars.

    PubMed

    Carron, Alexandre; Duchet, Claire; Gaven, Bruno; Lagneau, Christophe

    2003-12-01

    A new method is proposed for estimating larval abundance in the field that avoids manual counting of mosquito larvae. The method is based on visual comparison of the abundance in a standardized sampling tray (called an abacus) with 5 (abacus 5) or 10 (abacus 10) diagrammatically prepared abundance classes. Accuracy under laboratory and field conditions and individual bias have been evaluated; both abaci provide a reliable estimation of abundance under both conditions, and there is no individual bias, whether or not users are familiar with the method. The abaci could also be used for quick estimation of larval treatment effectiveness and for studies of population dynamics and spatial distribution.

  10. Combining Neural Networks with Existing Methods to Estimate 1 in 100-Year Flood Event Magnitudes

    NASA Astrophysics Data System (ADS)

    Newson, A.; See, L.

    2005-12-01

    Over the last fifteen years artificial neural networks (ANN) have been shown to be advantageous for the solution of many hydrological modelling problems. The use of ANNs for flood magnitude estimation in ungauged catchments, however, is a relatively new and under-researched area. In this paper ANNs are used to estimate the magnitude of the 100-year flood event (Q100) for a number of ungauged catchments. The data used in this study were provided by the Centre for Ecology and Hydrology's Flood Estimation Handbook (FEH), which contains information on catchments across the UK. Sixteen catchment descriptors for 719 catchments, split into training, validation and test sets, were used to train an ANN. The goodness-of-fit statistics on the test data set indicated good model performance, with an r-squared value of 0.8 and a coefficient of efficiency of 79 percent. Data for twelve ungauged catchments were then put through the trained ANN to produce estimates of Q100. Two other accepted methodologies were also employed: the FEH statistical method and the FSR (Flood Studies Report) design storm technique, both of which are used to produce flood frequency estimates. The advantage of developing an ANN model is that it provides a third figure to aid a hydrologist in making an accurate estimate. For six of the twelve catchments, there was a relatively low spread between estimates; in these instances, an estimate of Q100 could be made with a fair degree of certainty. Of the remaining six catchments, three had areas greater than 1000 km2, which means the FSR design storm estimate cannot be used. Armed with the ANN model and the FEH statistical method, the hydrologist still has two possible estimates to consider. For these three catchments, the estimates were also fairly similar, providing additional confidence to the estimation.
In summary, the findings of this study have shown that an accurate estimation of Q100 can be made using the catchment descriptors of

  11. A new radial strain and strain rate estimation method using autocorrelation for carotid artery

    NASA Astrophysics Data System (ADS)

    Ye, Jihui; Kim, Hoonmin; Park, Jongho; Yeo, Sunmi; Shim, Hwan; Lim, Hyungjoon; Yoo, Yangmo

    2014-03-01

    Atherosclerosis is a leading cause of cardiovascular disease. The early diagnosis of atherosclerosis is of clinical interest since it can prevent the adverse effects of atherosclerotic vascular diseases. In this paper, a new carotid artery radial strain estimation method based on autocorrelation is presented. In the proposed method, the strain is first estimated by the autocorrelation of two complex signals from consecutive frames. Then, the angular phase from the autocorrelation is converted to strain and strain rate, which are analyzed over time. In addition, a 2D strain image over a region of interest in a carotid artery can be displayed. To evaluate the feasibility of the proposed radial strain estimation method, radiofrequency (RF) data of 408 frames in the carotid artery of a volunteer were acquired by a commercial ultrasound system equipped with a research package (V10, Samsung Medison, Korea) using an L5-13IS linear array transducer. From the in vivo carotid artery data, the mean strain estimate was -0.1372 while its minimum and maximum values were -2.961 and 0.909, respectively. Moreover, the overall strain estimates are highly correlated with the reconstructed M-mode trace. Similar results were obtained from the estimation of the strain rate change over time. These results indicate that the proposed carotid artery radial strain estimation method is useful for assessing arterial wall stiffness noninvasively without increasing the computational complexity.
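The core autocorrelation step can be sketched as follows (an illustrative simplification of phase-based displacement and strain estimation, not the authors' implementation; names are ours): the phase of the lag-one autocorrelation between consecutive frames gives the axial displacement, and the gradient of displacement across two depth windows gives the strain.

```python
import cmath

def displacement(frame_a, frame_b, wavelength):
    """Axial displacement between frames from the phase of the complex
    (IQ) autocorrelation; the 4*pi reflects the pulse-echo round trip."""
    acc = sum(a.conjugate() * b for a, b in zip(frame_a, frame_b))
    return cmath.phase(acc) * wavelength / (4 * cmath.pi)

def radial_strain(win1_a, win1_b, win2_a, win2_b, dz, wavelength):
    """Strain between two depth windows separated by dz: the spatial
    gradient of the autocorrelation-derived displacements."""
    d1 = displacement(win1_a, win1_b, wavelength)
    d2 = displacement(win2_a, win2_b, wavelength)
    return (d2 - d1) / dz
```

Summing the complex products before taking the phase averages out noise within each window, which is what keeps the method computationally cheap.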

  12. Estimating the abundance of mouse populations of known size: promises and pitfalls of new methods

    USGS Publications Warehouse

    Conn, P.B.; Arthur, A.D.; Bailey, L.L.; Singleton, G.R.

    2006-01-01

    Knowledge of animal abundance is fundamental to many ecological studies. Frequently, researchers cannot determine true abundance, and so must estimate it using a method such as mark-recapture or distance sampling. Recent advances in abundance estimation allow one to model heterogeneity with individual covariates or mixture distributions and to derive multimodel abundance estimators that explicitly address uncertainty about which model parameterization best represents truth. Further, it is possible to borrow information on detection probability across several populations when data are sparse. While promising, these methods have not been evaluated using mark-recapture data from populations of known abundance, and thus far have largely been overlooked by ecologists. In this paper, we explored the utility of newly developed mark-recapture methods for estimating the abundance of 12 captive populations of wild house mice (Mus musculus). We found that mark-recapture methods employing individual covariates yielded satisfactory abundance estimates for most populations. In contrast, model sets with heterogeneity formulations consisting solely of mixture distributions did not perform well for several of the populations. We show through simulation that a higher number of trapping occasions would have been necessary to achieve good estimator performance in this case. Finally, we show that simultaneous analysis of data from low abundance populations can yield viable abundance estimates.
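For orientation, the simplest closed-population estimator (two occasions, no heterogeneity), far simpler than the covariate and mixture models evaluated in the paper, is Chapman's bias-corrected version of the Lincoln-Petersen estimator:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    n1 animals marked on occasion 1, n2 caught on occasion 2,
    m2 of which were recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

Individual heterogeneity in capture probability biases this estimator low, which is precisely what motivates the covariate and mixture formulations the paper evaluates.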

  13. Validation tests of an improved kernel density estimation method for identifying disease clusters

    SciTech Connect

    Cai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L

    2011-01-01

    The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method that include a spatial basis of support designed to give a constant standard error for the standardized mortality/morbidity rate; a stair-case weight method for weighting observations to reduce estimation bias; and a method for selecting parameters to control three measures of performance of the method: sensitivity, specificity and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
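    The constant-standard-error idea behind the spatial filter can be illustrated by growing each filter until it covers a fixed population base, so every smoothed rate rests on roughly the same denominator. This is a simplified sketch of that principle only, not the authors' staircase-weight version; the function name and the `min_pop` threshold are illustrative.

    ```python
    import numpy as np

    def spatially_adaptive_rate(cases, pop, coords, centers, min_pop=5000):
        """Smoothed disease rates with an adaptive circular filter.

        For each centre, include the nearest locations until the covered
        population reaches at least `min_pop`, then return the rate inside
        the filter. A constant population base keeps the standard error of
        each rate roughly comparable across the map.
        """
        rates = []
        for c in centers:
            d = np.linalg.norm(coords - c, axis=1)   # distances to centre
            order = np.argsort(d)                    # nearest first
            cum_pop = np.cumsum(pop[order])
            k = np.searchsorted(cum_pop, min_pop) + 1  # smallest prefix >= min_pop
            idx = order[:k]
            rates.append(cases[idx].sum() / pop[idx].sum())
        return np.array(rates)
    ```

    The staircase weighting in the paper refines this by down-weighting observations near the filter edge rather than including them fully, which reduces estimation bias.
    
    
    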

  14. An indirect transmission measurement-based spectrum estimation method for computed tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Kai; Schafer, Sebastian; Royalty, Kevin

    2015-01-01

    The characteristics of an x-ray spectrum can greatly influence imaging and related tasks. In practice, due to the pile-up effect of the detector, it’s difficult to directly measure the spectrum of a CT scanner using an energy resolved detector. An alternative solution is to estimate the spectrum using transmission measurements with a step phantom or another CT phantom. In this work, we present a new spectrum estimation method based on indirect transmission measurement and a model spectra mixture approach. The estimated x-ray spectrum was expressed as a weighted summation of a set of model spectra, which can significantly reduce the degrees of freedom of the spectrum estimation problem. Next, an estimated projection was calculated with the assumed spectrum. By iteratively updating the unknown weights, we minimized the difference between the estimated projection data and the raw projection data. The final spectrum was calculated with these calibrated weights and the model spectra. Both simulation and experimental data were used to evaluate the proposed method. In the simulation study, the estimated spectra were compared to the raw spectra which were used to generate the raw projection data. For the experimental study, the ground truth measurement of the raw x-ray spectrum was not available. Therefore, the estimated spectrum was compared against the spectra generated using the SpekCalc software with tube configurations provided by the scanner manufacturer. The results show the proposed method has the potential to accurately estimate x-ray spectra using the raw projection data. The difference between the mean energy of the raw spectra and the mean energy of the estimated spectra was less than 0.5 keV for both the simulation and experimental data. Further tests show the method was robust with respect to the model spectra generator.
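    Because the predicted transmission is linear in the unknown weights, the fitting step can be sketched with an ordinary least-squares solve. This is a simplified stand-in for the authors' iterative update: it uses plain least squares with a nonnegativity clip in place of a proper constrained solver, and the phantom attenuation and model spectra are assumed given.

    ```python
    import numpy as np

    def estimate_spectrum(model_spectra, mu, thicknesses, measured):
        """Estimate an x-ray spectrum as a weighted sum of model spectra
        from step-phantom transmission measurements.

        model_spectra: (n_models, n_energies), each row a normalised spectrum
        mu:            (n_energies,) attenuation of the phantom material
        thicknesses:   (n_steps,) path lengths through the phantom
        measured:      (n_steps,) normalised transmitted intensities
        """
        # Predicted transmission of each model spectrum at each step:
        # A[j, i] = sum_E S_i(E) * exp(-mu(E) * t_j)  -> linear in weights
        A = np.exp(-np.outer(thicknesses, mu)) @ model_spectra.T
        w, *_ = np.linalg.lstsq(A, measured, rcond=None)
        w = np.clip(w, 0.0, None)   # crude stand-in for a true NNLS solve
        w /= w.sum()
        return w, w @ model_spectra
    ```

    Expressing the spectrum in a small model-spectra basis is what keeps the problem well-posed: only a handful of weights are estimated rather than one value per energy bin.
    
    
    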

  15. The performance of different propensity score methods for estimating marginal hazard ratios.

    PubMed

    Austin, Peter C

    2013-07-20

    Propensity score methods are increasingly being used to reduce or minimize the effects of confounding when estimating the effects of treatments, exposures, or interventions when using observational or non-randomized data. Under the assumption of no unmeasured confounders, previous research has shown that propensity score methods allow for unbiased estimation of linear treatment effects (e.g., differences in means or proportions). However, in biomedical research, time-to-event outcomes occur frequently. There is a paucity of research into the performance of different propensity score methods for estimating the effect of treatment on time-to-event outcomes. Furthermore, propensity score methods allow for the estimation of marginal or population-average treatment effects. We conducted an extensive series of Monte Carlo simulations to examine the performance of propensity score matching (1:1 greedy nearest-neighbor matching within propensity score calipers), stratification on the propensity score, inverse probability of treatment weighting (IPTW) using the propensity score, and covariate adjustment using the propensity score to estimate marginal hazard ratios. We found that both propensity score matching and IPTW using the propensity score allow for the estimation of marginal hazard ratios with minimal bias. Of these two approaches, IPTW using the propensity score resulted in estimates with lower mean squared error when estimating the effect of treatment in the treated. Stratification on the propensity score and covariate adjustment using the propensity score result in biased estimation of both marginal and conditional hazard ratios. Applied researchers are encouraged to use propensity score matching and IPTW using the propensity score when estimating the relative effect of treatment on time-to-event outcomes.
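    The IPTW step itself is simple once propensity scores are in hand; the standard ATE and ATT weight formulas are shown below. The subsequent hazard-ratio estimate would come from a weighted Cox model, which is omitted here since it needs a survival library.

    ```python
    import numpy as np

    def iptw_weights(treated, ps, estimand="ATE"):
        """Inverse-probability-of-treatment weights from propensity scores.

        treated: 0/1 treatment indicator
        ps:      estimated P(treated | covariates)
        ATE weights: 1/ps for treated, 1/(1-ps) for controls.
        ATT weights: 1 for treated, ps/(1-ps) for controls
                     (reweights controls to resemble the treated).
        """
        treated = np.asarray(treated, dtype=float)
        ps = np.asarray(ps, dtype=float)
        if estimand == "ATE":
            return treated / ps + (1 - treated) / (1 - ps)
        if estimand == "ATT":
            return treated + (1 - treated) * ps / (1 - ps)
        raise ValueError("estimand must be 'ATE' or 'ATT'")
    ```

    The ATT variant corresponds to the "effect of treatment in the treated" for which the abstract reports IPTW's lower mean squared error.
    
    
    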

  16. Discriminatory ability of fractal and grey level co-occurrence matrix methods in structural analysis of hippocampus layers.

    PubMed

    Pantic, Igor; Dacic, Sanja; Brkic, Predrag; Lavrnja, Irena; Jovanovic, Tomislav; Pantic, Senka; Pekovic, Sanja

    2015-04-01

    Fractal and grey level co-occurrence matrix (GLCM) analysis represent two mathematical computer-assisted algorithms that are today thought to be able to accurately detect and quantify changes in tissue architecture during various physiological and pathological processes. However, despite their numerous applications in histology and pathology, their sensitivity, specificity and validity regarding evaluation of brain tissue remain unclear. In this article we present results indicating that certain parameters of fractal and GLCM analysis have high discriminatory ability in distinguishing two morphologically similar regions of the rat hippocampus: stratum lacunosum-moleculare and stratum radiatum. Fractal and GLCM algorithms were performed on a total of 240 thionine-stained hippocampus micrographs of 12 male Wistar albino rats. 120 digital micrographs represented stratum lacunosum-moleculare, and another 120 stratum radiatum. For each image, 7 parameters were calculated: fractal dimension, lacunarity, GLCM angular second moment, GLCM contrast, inverse difference moment, GLCM correlation, and GLCM variance. GLCM variance (VAR) resulted in the largest area under the receiver operating characteristic (ROC) curve of 0.96, demonstrating outstanding discriminatory power in the analysis of stratum lacunosum-moleculare (average VAR of 478.1 ± 179.8) and stratum radiatum (average VAR of 145.9 ± 59.2, p < 0.0001). For the criterion VAR ≤ 227.5, sensitivity and specificity were 90% and 86.7%, respectively. GLCM correlation as a parameter also produced a large area under the ROC curve of 0.95. Our results are in accordance with the findings of our previous study on fractal and textural analysis of brain white matter. The GLCM algorithm as an image analysis method has potentially high applicability in the structural analysis of brain tissue cytoarchitecture.
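    The GLCM variance feature that discriminated best can be sketched directly. This is a minimal single-offset GLCM and the Haralick variance formula; the quantisation level, pixel offset, and symmetrisation choice here are illustrative, not the study's exact settings.

    ```python
    import numpy as np

    def glcm(img, levels, dx=1, dy=0, symmetric=True):
        """Normalised grey-level co-occurrence matrix for one pixel offset."""
        img = np.asarray(img)
        h, w = img.shape
        P = np.zeros((levels, levels))
        for y in range(max(0, -dy), h - max(0, dy)):
            for x in range(max(0, -dx), w - max(0, dx)):
                P[img[y, x], img[y + dy, x + dx]] += 1
        if symmetric:
            P = P + P.T          # count each pair in both directions
        return P / P.sum()

    def glcm_variance(P):
        """Haralick variance: sum over (i,j) of (i - mu)^2 * P(i,j)."""
        i = np.arange(P.shape[0])[:, None]
        mu = np.sum(i * P)
        return np.sum((i - mu) ** 2 * P)
    ```

    Intuitively, a tissue layer with a wider spread of co-occurring grey levels produces a larger VAR, which is what separates the two hippocampal strata in the study.
    
    
    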

  17. Methods for estimation of covariance matrices and covariance components for the Hanford Waste Vitrification Plant Process

    SciTech Connect

    Bryan, M.F.; Piepel, G.F.; Simpson, D.B.

    1996-03-01

    The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to vitrify transuranic and high-level radioactive waste in borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating the uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimating variance components; those based on the statistical method of analysis of variance are discussed, as are the strengths and weaknesses of this approach.

  18. Modification of the method of parametric estimation of atmospheric distortion in MODTRAN model

    NASA Astrophysics Data System (ADS)

    Belov, A. M.

    2015-12-01

    The paper presents a modification of the method of parametric estimation of atmospheric distortion in the MODTRAN model, together with experimental research on the method. The experiments showed that the base method takes into account neither the physical meaning of the atmospheric spherical albedo parameter nor the presence of outliers in the source data, which decreases the overall accuracy of atmospheric correction. The proposed modification improves the accuracy of atmospheric correction compared with the base method. It consists of adding a nonnegativity constraint on the estimated value of the atmospheric spherical albedo and adding a preprocessing stage that adjusts the source data.

  19. Comparison of Two New Robust Parameter Estimation Methods for the Power Function Distribution.

    PubMed

    Shakeel, Muhammad; Haq, Muhammad Ahsan Ul; Hussain, Ijaz; Abdulhamid, Alaa Mohamd; Faisal, Muhammad

    2016-01-01

    Estimation of the parameters of any probability distribution is vital because imprecise and biased estimates can be misleading. In this study, we investigate a flexible power function distribution and introduce two new methods for estimating its parameters: probability weighted moments and generalized probability weighted moments. We compare their results with those of L-moments and trimmed L-moments through a simulation study and a real-data example, using performance measures such as mean square error and total deviation. We conclude that all methods perform well for large sample sizes (n > 30); however, the generalized probability weighted moment method performs better for small sample sizes. PMID:27500404
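    The probability-weighted-moment idea has a closed form for this distribution: with F(x) = (x/beta)^alpha on (0, beta), one gets E[X F(X)^r] = alpha*beta / (alpha*r + alpha + 1), so matching the first two sample PWMs yields the parameters directly. A minimal sketch using the standard unbiased sample PWMs (this plain PWM method, not the generalized variant the paper also proposes):

    ```python
    import numpy as np

    def pwm_power_function(x):
        """PWM estimates (alpha, beta) for the power function distribution
        F(x) = (x / beta) ** alpha, 0 < x < beta.

        Population PWMs: b0 = alpha*beta/(alpha+1), b1 = alpha*beta/(2*alpha+1),
        so t = b1/b0 = (alpha+1)/(2*alpha+1) and alpha = (1-t)/(2t-1).
        """
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = np.mean(x)                            # sample PWM of order 0
        b1 = np.mean((i - 1) / (n - 1) * x)        # unbiased sample PWM, order 1
        t = b1 / b0
        alpha = (1 - t) / (2 * t - 1)
        beta = b0 * (alpha + 1) / alpha            # invert b0 = alpha*beta/(alpha+1)
        return alpha, beta
    ```

    Because the estimates depend on the data only through linear combinations of order statistics, PWM shares the robustness to moderate outliers that motivates the L-moment comparisons in the study.
    
    
    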
