ERIC Educational Resources Information Center
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
Improving the accuracy of burn-surface estimation.
Nichter, L S; Williams, J; Bryant, C A; Edlich, R F
1985-09-01
A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Despite the ability of physicians at all levels of training to sketch TBSAB accurately, significant burn-size overestimation (p < 0.01) and large, potentially consequential interrater variability were noted. Direct benefits of a computerized system are many. These include the need for minimal user experience and the ability for wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements, hemodynamic parameters, and the ability to compare meaningfully the different treatment protocols.
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation and maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
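For orientation, a minimal numerical sketch of EAP ability estimation under a two-parameter logistic (2PL) model is shown below. It is not taken from the paper; the standard normal prior, quadrature grid, and item parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def eap_ability(responses, a, b, n_nodes=61):
    """EAP estimate of theta under a 2PL model with a standard normal prior.

    responses: 0/1 array of item scores; a, b: item discriminations and difficulties.
    Returns the posterior mean (EAP) and posterior standard deviation."""
    nodes = np.linspace(-4, 4, n_nodes)                       # quadrature points
    prior = norm.pdf(nodes)
    P = 1.0 / (1.0 + np.exp(-a[None, :] * (nodes[:, None] - b[None, :])))
    like = np.prod(np.where(responses[None, :] == 1, P, 1 - P), axis=1)
    post = like * prior
    post /= post.sum()
    theta_eap = np.sum(nodes * post)                          # posterior mean
    psd = np.sqrt(np.sum((nodes - theta_eap) ** 2 * post))    # posterior SD
    return theta_eap, psd

# Illustrative use with hypothetical item parameters
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.5, 1.0])
print(eap_ability(np.array([1, 1, 0, 0]), a, b))
```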
ERIC Educational Resources Information Center
Sahin, Alper; Ozbasi, Durmus
2017-01-01
Purpose: This study aims to reveal effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
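As background for the comparison above, the sketch below shows the Fisher-information side of such an item-selection rule for dichotomous 2PL items: at the current ability estimate, the unused item with the largest information is chosen. The LWI variant, the polytomous case, and the content-balancing constraints studied in the paper are not reproduced, and the parameters are illustrative.

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta: I = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def select_next_item(theta_hat, a, b, administered):
    """Return the index of the not-yet-administered item with maximum information."""
    info = item_information_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf            # exclude items already given
    return int(np.argmax(info))

a = np.array([0.8, 1.4, 1.0, 1.7])
b = np.array([-1.0, 0.2, 0.8, 1.5])
print(select_next_item(0.3, a, b, administered={1}))
```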
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
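One of the two bias-reduction ideas mentioned, weighted likelihood estimation (Warm, 1989), can be sketched for the 2PL by maximizing the log-likelihood plus half the log of the test information. The code below is a generic illustration under known item parameters and is not the paper's simulation design; the item parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def wle_2pl(responses, a, b):
    """Warm's weighted likelihood estimate of ability for the 2PL:
    maximize log-likelihood + 0.5 * log Fisher information."""
    def neg_penalized_loglik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        info = np.sum(a ** 2 * p * (1 - p))
        return -(loglik + 0.5 * np.log(info))
    return minimize_scalar(neg_penalized_loglik, bounds=(-4, 4), method="bounded").x

a = np.array([1.0, 1.2, 0.7, 1.5])
b = np.array([-0.8, 0.0, 0.6, 1.2])
print(wle_2pl(np.array([1, 1, 1, 0]), a, b))
```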
Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G
2014-07-02
Diallel crossing methods provide information regarding the performance of genitors between themselves and their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limited. One option regarding the number of parents involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the method of linear mixed models to estimate, for variable resistance to foliar fungal diseases, components of general and specific combining ability in a circulant table with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in the classification of genitors regarding their combining abilities relative to the complete diallels. The number of crosses (s) assigned to each genitor in the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.
The Effect of Schooling and Ability on Achievement Test Scores. NBER Working Paper Series.
ERIC Educational Resources Information Center
Hansen, Karsten; Heckman, James J.; Mullen, Kathleen J.
This study developed two methods for estimating the effect of schooling on achievement test scores that control for the endogeneity of schooling by postulating that both schooling and test scores are generated by a common unobserved latent ability. The methods were applied to data on schooling and test scores. Estimates from the two methods are in…
ERIC Educational Resources Information Center
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
ERIC Educational Resources Information Center
de la Torre, Jimmy; Patz, Richard J.
2005-01-01
This article proposes a practical method that capitalizes on the availability of information from multiple tests measuring correlated abilities given in a single test administration. By simultaneously estimating different abilities with the use of a hierarchical Bayesian framework, more precise estimates for each ability dimension are obtained.…
Polania, Jose; Poschenrieder, Charlotte; Rao, Idupulapati; Beebe, Stephen
2016-09-01
Common bean (Phaseolus vulgaris L.) is the most important food legume, cultivated by small farmers and is usually exposed to unfavorable conditions with minimum use of inputs. Drought and low soil fertility, especially phosphorus and nitrogen (N) deficiencies, are major limitations to bean yield in smallholder systems. Beans can derive part of their required N from the atmosphere through symbiotic nitrogen fixation (SNF). Drought stress severely limits SNF ability of plants. The main objectives of this study were to: (i) test and validate the use of 15N natural abundance in grain to quantify phenotypic differences in SNF ability for its implementation in breeding programs of common bean with bush growth habit aiming to improve SNF, and (ii) quantify phenotypic differences in SNF under drought to identify superior genotypes that could serve as parents. Field studies were conducted at CIAT-Palmira, Colombia using a set of 36 bean genotypes belonging to the Middle American gene pool for evaluation in two seasons with two levels of water supply (irrigated and drought stress). We used the 15N natural abundance method to compare SNF ability estimated from shoot tissue sampled at mid-pod filling growth stage vs. grain tissue sampled at harvest. Our results showed positive and significant correlation between nitrogen derived from the atmosphere (%Ndfa) estimated using shoot tissue at mid-pod filling and %Ndfa estimated using grain tissue at harvest. Both methods showed phenotypic variability in SNF ability under both drought and irrigated conditions and a significant reduction in SNF ability was observed under drought stress. We suggest that the method of estimating Ndfa using grain tissue (Ndfa-G) could be applied in bean breeding programs to improve SNF ability. Using this method of Ndfa-G, we identified four bean lines (RCB 593, SEA 15, NCB 226 and BFS 29) that combine greater SNF ability with greater grain yield under drought stress and these could serve as potential parents to further improve SNF ability of common bean.
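The underlying calculation for the 15N natural abundance method is conventionally %Ndfa = 100 × (δ15N of the reference plant − δ15N of the legume) / (δ15N of the reference plant − B), where B is the δ15N of the legume when fully dependent on fixation. The sketch below applies that formula; all numeric inputs are hypothetical and are not values from this study.

```python
def percent_ndfa(delta15n_reference, delta15n_legume, b_value):
    """Percent nitrogen derived from the atmosphere (%Ndfa) by 15N natural abundance.

    delta15n_reference: delta-15N of a non-fixing reference plant
    delta15n_legume:    delta-15N of the legume tissue (shoot or grain)
    b_value:            delta-15N of the legume when fully reliant on fixation
    """
    return 100.0 * (delta15n_reference - delta15n_legume) / (delta15n_reference - b_value)

# Hypothetical example values (not from the study)
print(percent_ndfa(delta15n_reference=6.5, delta15n_legume=2.1, b_value=-1.5))
```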
ERIC Educational Resources Information Center
Tsutakawa, Robert K.
This paper presents a method for estimating certain characteristics of test items which are designed to measure ability, or knowledge, in a particular area. Under the assumption that ability parameters are sampled from a normal distribution, the EM algorithm is used to derive maximum likelihood estimates of item parameters of the two-parameter…
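A compressed sketch of the kind of marginal maximum likelihood EM scheme described (a normal ability distribution integrated out by quadrature, item parameters updated in the M-step) is given below for the two-parameter logistic model. It follows the generic Bock-Aitkin logic rather than the specifics of this paper, and all settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def mml_em_2pl(U, n_nodes=21, n_iter=50):
    """Marginal ML for 2PL item parameters via an EM scheme with a N(0,1) ability prior.
    U: (n_persons, n_items) binary response matrix."""
    n, J = U.shape
    nodes = np.linspace(-4, 4, n_nodes)
    weights = norm.pdf(nodes)
    weights /= weights.sum()
    a, b = np.ones(J), np.zeros(J)
    for _ in range(n_iter):
        # E-step: posterior weight of each quadrature node for each examinee
        P = np.clip(1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b))), 1e-9, 1 - 1e-9)
        logL = U @ np.log(P).T + (1 - U) @ np.log(1 - P).T          # (n, n_nodes)
        post = np.exp(logL - logL.max(axis=1, keepdims=True)) * weights
        post /= post.sum(axis=1, keepdims=True)
        nk = post.sum(axis=0)                                        # expected examinees per node
        rk = post.T @ U                                              # expected correct responses per node and item
        # M-step: maximize the expected complete-data log-likelihood item by item
        for j in range(J):
            def neg_ell(par, j=j):
                aj, bj = par
                Pj = np.clip(1.0 / (1.0 + np.exp(-aj * (nodes - bj))), 1e-9, 1 - 1e-9)
                return -np.sum(rk[:, j] * np.log(Pj) + (nk - rk[:, j]) * np.log(1 - Pj))
            a[j], b[j] = minimize(neg_ell, x0=[a[j], b[j]], method="Nelder-Mead").x
    return a, b
```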
Shared-environmental contributions to high cognitive ability.
Kirkpatrick, Robert M; McGue, Matt; Iacono, William G
2009-07-01
Using a combined sample of adolescent twins, biological siblings, and adoptive siblings, we estimated and compared the differential shared-environmentality for high cognitive ability and the shared-environmental variance for the full range of ability during adolescence. Estimates obtained via multiple methods were in the neighborhood of 0.20, and suggest a modest effect of the shared environment on both high and full-range ability. We then examined the association of ability with three measures of the family environment in a subsample of adoptive siblings: parental occupational status, parental education, and disruptive life events. Only parental education showed significant (albeit modest) association with ability in both the biological and adoptive samples. We discuss these results in terms of the need for cognitive-development research to combine genetically sensitive designs and modern statistical methods with broad, thorough environmental measurement.
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process.
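To make the class concrete, the sketch below implements one robust estimator of this general type for the 2PL: the likelihood score contribution of each item is downweighted by a Huber-type weight applied to the residual a_i(theta - b_i), and the weighted score equation is solved numerically. The weight function, tuning constant, and residual definition here are illustrative choices, not necessarily those examined in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def huber_weight(r, k=1.345):
    """Huber-type weight: 1 inside [-k, k], k/|r| outside."""
    r = np.abs(r)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

def robust_theta_2pl(responses, a, b):
    """Robust ability estimate: root of the weighted score equation
    sum_i w(r_i(theta)) * a_i * (u_i - P_i(theta)) = 0."""
    def weighted_score(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        w = huber_weight(a * (theta - b))
        return np.sum(w * a * (responses - p))
    return brentq(weighted_score, -6, 6)

a = np.array([1.1, 0.9, 1.4, 1.0, 1.2])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
print(robust_theta_2pl(np.array([1, 1, 0, 1, 0]), a, b))
```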
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Individual snag detection using neighborhood attribute filtered airborne lidar data
Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Michael J. Olsen
2015-01-01
The ability to estimate and monitor standing dead trees (snags) has been difficult due to their irregular and sparse distribution, often requiring intensive sampling methods to obtain statistically significant estimates. This study presents a new method for estimating and monitoring snags using neighborhood attribute filtered airborne discrete-return lidar data. The...
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
NASA Astrophysics Data System (ADS)
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan
2017-07-01
We estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV’s ability to meet estimated city electricity consumption varies widely—from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city’s estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo
2010-02-01
In this study, we described the characteristics of five different biological age (BA) estimation algorithms, including (i) multiple linear regression, (ii) principal component analysis, and somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the differences between each algorithm's estimates and chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than deterioration that depends strongly on age. Experiments were conducted on 200 Korean male participants using a BA estimation system designed to be non-invasive, simple to operate, and based on human functional measures. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, were performed. The empirical data confirmed that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates.
Schwabe, Inga; Boomsma, Dorret I; van den Berg, Stéphanie M
2017-12-01
Genotype by environment interaction in behavioral traits may be assessed by estimating the proportion of variance that is explained by genetic and environmental influences conditional on a measured moderating variable, such as a known environmental exposure. Behavioral traits of interest are often measured by questionnaires and analyzed as sum scores on the items. However, statistical results on genotype by environment interaction based on sum scores can be biased due to the properties of a scale. This article presents a method that makes it possible to analyze the actually observed (phenotypic) item data rather than a sum score by simultaneously estimating the genetic model and an item response theory (IRT) model. In the proposed model, the estimation of genotype by environment interaction is based on an alternative parametrization that is uniquely identified and therefore to be preferred over standard parametrizations. A simulation study shows good performance of our method compared to analyzing sum scores in terms of bias. Next, we analyzed data of 2,110 12-year-old Dutch twin pairs on mathematical ability. Genetic models were evaluated and genetic and environmental variance components estimated as a function of a family's socio-economic status (SES). Results suggested that common environmental influences are less important in creating individual differences in mathematical ability in families with a high SES than in creating individual differences in mathematical ability in twin pairs with a low or average SES.
Haptic control with environment force estimation for telesurgery.
Bhattacharjee, Tapomayukh; Son, Hyoung Il; Lee, Doo Yong
2008-01-01
Success of telesurgical operations depends on better position tracking ability of the slave device. Improved position tracking of the slave device can lead to safer and less strenuous telesurgical operations. The two-channel force-position control architecture is widely used for better position tracking ability. This architecture requires force sensors for direct force feedback. Force sensors may not be a good choice in the telesurgical environment because of the inherent noise, and limitation in the deployable place and space. Hence, environment force estimation is developed using the concept of the robot function parameter matrix and a recursive least squares method. Simulation results show efficacy of the proposed method. The slave device successfully tracks the position of the master device, and the estimation error quickly becomes negligible.
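As a generic illustration of the estimation machinery mentioned (not the authors' exact robot function parameter matrix), the sketch below shows a recursive least squares update with a forgetting factor, which tracks unknown environment parameters from a regressor vector and a measured force at each time step. The stiffness/damping regressor and all numbers are hypothetical.

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS with exponential forgetting: theta_hat tracks parameters in
    f = phi . theta + noise, where phi is a regressor vector (e.g. position, velocity)."""
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3       # large initial covariance = weak prior
        self.lam = forgetting

    def update(self, phi, f_measured):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain vector
        err = f_measured - phi @ self.theta                    # prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Hypothetical stiffness/damping environment: f = K*x + B*v
rls = RecursiveLeastSquares(2)
for x, v, f in [(0.01, 0.1, 1.2), (0.02, 0.12, 2.1), (0.03, 0.1, 3.0)]:
    est = rls.update([x, v], f)
print(est)
```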
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies.
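The indirect estimate referred to here is simply the cross-validation predictive ability divided by the square root of an estimated heritability. A minimal sketch is below; the specific heritability estimators and corrections that distinguish Methods 1 to 4 and 7 in the paper are not reproduced, and the data are simulated.

```python
import numpy as np

def predictive_ability(y_observed, y_predicted):
    """Correlation between cross-validation predictions and observed phenotypes."""
    return np.corrcoef(y_observed, y_predicted)[0, 1]

def indirect_predictive_accuracy(y_observed, y_predicted, heritability):
    """Indirect estimate of predictive accuracy: predictive ability / sqrt(h^2)."""
    return predictive_ability(y_observed, y_predicted) / np.sqrt(heritability)

# Hypothetical example: simulated phenotypes, predictions, and an assumed heritability of 0.5
rng = np.random.default_rng(1)
y = rng.normal(size=200)
y_hat = 0.45 * y + rng.normal(scale=0.9, size=200)
print(indirect_predictive_accuracy(y, y_hat, heritability=0.5))
```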
A comparison of sap flux-based evapotranspiration estimates with catchment-scale water balance
Chelcy R. Ford; Robert M. Hubbard; Brian D. Kloeppel; James M. Vose
2007-01-01
Many researchers are using sap flux to estimate tree-level transpiration, and to scale to stand- and catchment-level transpiration; yet studies evaluating the comparability of sap flux-based estimates of transpiration (Et) with alternative methods for estimating Et at this spatial scale are rare. Our ability to...
Application of the Combination Approach for Estimating Evapotranspiration in Puerto Rico
NASA Technical Reports Server (NTRS)
Harmsen, Eric; Luvall, Jeffrey; Gonzalez, Jorge
2005-01-01
The ability to estimate short-term fluxes of water vapor from the land surface is important for validating latent heat flux estimates from high resolution remote sensing techniques. A new, relatively inexpensive method is presented for estimating the ground-based values of the surface latent heat flux or evapotranspiration.
Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E
2017-12-01
Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
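For reference, the three Hill numbers compared above (richness, Shannon diversity, and Simpson diversity) can be computed from a vector of species abundances as below; this is standard usage, not code from the study.

```python
import numpy as np

def hill_numbers(abundances):
    """Hill numbers of order 0, 1, 2 from raw species abundances:
    q=0 species richness, q=1 exp(Shannon entropy), q=2 inverse Simpson concentration."""
    counts = np.asarray(abundances, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    richness = len(p)                                      # q = 0
    shannon_diversity = np.exp(-np.sum(p * np.log(p)))     # q = 1
    simpson_diversity = 1.0 / np.sum(p ** 2)               # q = 2
    return richness, shannon_diversity, simpson_diversity

# Hypothetical quadrat counts for six species
print(hill_numbers([12, 5, 3, 1, 1, 0]))
```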
Reich, Christian G; Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest or HOI. For each HOI, multiple definitions are used in the literature, and some of them are validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products. Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies. A set of alternative definitions for three HOI were defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury and acute myocardial infarction. The definitions varied by the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). Three different performance metrics were used: (i) the Area Under the Receiver Operating Characteristic (ROC) curve (AUC) as a measure of a method's ability to distinguish between positive and negative test cases, (ii) a measure of bias based on the distribution of observed effect estimates for the negative test pairs, where the true relative risk can be assumed to be one (no effect), and (iii) the Minimal Detectable Relative Risk (MDRR) as a measure of whether there is sufficient power to generate effect estimates. In the three outcomes studied, different definitions of outcomes show comparable ability to differentiate true from false control cases (AUC) and a similar bias estimation. However, broader definitions generating larger outcome cohorts allowed more drugs to be studied with sufficient statistical power. Broader definitions are preferred since they allow studying drugs with lower prevalence than the more precise or narrow definitions while showing comparable performance characteristics in differentiation of signal vs. no signal as well as effect size estimation.
2013-01-01
Background Older adults cannot safely step over an obstacle unless they correctly estimate that their physical ability is sufficient for a successful step-over action. Thus, incorrect estimation (overestimation) of ability to step over an obstacle could result in severe accidents such as falls in older adults. We investigated whether older adults tended to overestimate step-over ability compared with young adults and whether such overestimation in stepping over obstacles was associated with falls. Methods Three groups of adults, young-old (age, 60–74 years; n, 343), old-old (age, >74 years; n, 151), and young (age, 18–35 years; n, 71), performed our original step-over test (SOT). In the SOT, participants observed a horizontal bar at a 7-m distance and estimated the maximum height (EH) that they could step over. After estimation, they performed real SOT trials to measure the actual maximum height (AH). We also identified participants who had experienced falls in the 1-year period before the study. Results Thirty-nine young-old adults (11.4%) and 49 old-old adults (32.5%) failed to step over the bar at EH (overestimation), whereas all young adults succeeded (underestimation). There was a significant negative correlation between actual performance (AH) and self-estimation error (difference between EH and AH) in the older adults, indicating that older adults with lower AH (SOT ability) tended to overestimate actual ability (EH > AH) and vice versa. Furthermore, the percentage of participants who overestimated SOT ability in the fallers (28%) was almost double that in the non-fallers (16%), with the fallers showing significantly lower SOT ability than the non-fallers. Conclusions Older adults appeared to be unaware of age-related physical decline and tended to overestimate step-over ability. Both age-related decline in step-over ability, and more importantly, overestimation or decreased underestimation of this ability may raise the potential risk of falls.
NASA Astrophysics Data System (ADS)
Dmitrienko, S. G.; Popov, S. A.; Chumichkina, Yu. A.; Zolotov, Yu. A.
2011-03-01
New sorbents, polymers with molecular imprints of 2,4-dichlorophenoxyacetic acid (2,4-D), were prepared on the basis of acrylamide. The sorbents were synthesized by thermal polymerization methods with and without the use of ultrasound, photopolymerization, and suspension polymerization. The specific surface area of the products was estimated and their sorption properties were studied. Polymers with molecular imprints prepared by thermal polymerization with the use of ultrasound and by suspension polymerization showed the best ability to repeatedly bind 2,4-D. The selectivity of the polymers was estimated using structurally related compounds as examples. It was shown that the method of synthesis decisively influenced not only the ability of the sorbents to repeatedly bind 2,4-D but also their selectivity.
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion.
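For context, the classical point-estimate versions of these quantities are shown below: the D-value at temperature T scaled from a reference D-value via the z-value, and F0 (equivalent minutes at 121.1 °C) accumulated over a temperature profile. The Bayesian treatment in the paper replaces such point estimates with posterior distributions; the numbers here are hypothetical.

```python
import numpy as np

def d_value_at_temperature(d_ref, t_ref, z, temperature):
    """D-value at a given temperature from a reference D-value via the z-value:
    D_T = D_ref * 10 ** ((T_ref - T) / z)."""
    return d_ref * 10 ** ((t_ref - temperature) / z)

def f0(temperatures_c, dt_minutes, z=10.0, t_ref=121.1):
    """Accumulated lethality F0 (equivalent minutes at 121.1 C) for a temperature
    profile sampled every dt_minutes: F0 = sum 10**((T - T_ref)/z) * dt."""
    temperatures_c = np.asarray(temperatures_c, dtype=float)
    return np.sum(10 ** ((temperatures_c - t_ref) / z)) * dt_minutes

# Hypothetical cycle: heat-up, 15 min hold near 121.5 C, cool-down, sampled once per minute
profile = [100, 110, 118] + [121.5] * 15 + [115, 105]
print(f0(profile, dt_minutes=1.0))
print(d_value_at_temperature(d_ref=1.5, t_ref=121.1, z=10.0, temperature=118.0))
```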
Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.
Arora, Naveen Kumar; Verma, Maya
2017-12-01
In this study, siderophore production by various bacteria amongst the plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were taken to estimate their siderophore-producing ability by the standard method (chrome azurol sulphonate assay) as well as a 96-well microplate method. Production of siderophore was estimated in percent siderophore units by both methods. Data obtained by the two methods correlated positively with each other, supporting the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in a single run, saving time and chemicals, making the procedure much less tedious, and costing less than the method currently in use. The modified microtiter plate method as proposed here makes it far easier to screen the plant-growth-promoting character of plant-associated bacteria.
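The quantity reported by both assays is conventionally the percent siderophore unit computed from chrome azurol S (CAS) absorbances as 100 × (A_reference − A_sample) / A_reference. A minimal helper is sketched below with hypothetical readings, not data from the study.

```python
def percent_siderophore_unit(a_reference, a_sample):
    """Percent siderophore units from chrome azurol S (CAS) absorbance readings.

    a_reference: absorbance of CAS reagent with uninoculated medium (blank)
    a_sample:    absorbance of CAS reagent mixed with culture supernatant
    """
    return 100.0 * (a_reference - a_sample) / a_reference

# Hypothetical 96-well readings at 630 nm
print(percent_siderophore_unit(a_reference=0.82, a_sample=0.31))
```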
Generalized shrunken type-GM estimator and its application
NASA Astrophysics Data System (ADS)
Ma, C. Z.; Du, Y. L.
2014-03-01
The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, Generalized Shrunken Type-GM estimators, together with methods for computing them, is established by combining the GM estimator with biased estimators such as the ridge, principal components, and Liu estimators. A numerical example shows that the most attractive advantage of these new estimators is that they not only overcome multicollinearity in the coefficient matrix and the effect of outliers but also control the influence of leverage points.
Double-survey estimates of bald eagle populations in Oregon
Anthony, R.G.; Garrett, Monte G.; Isaacs, F.B.
1999-01-01
The literature on abundance of birds of prey is almost devoid of population estimates with statistical rigor. Therefore, we surveyed bald eagle (Haliaeetus leucocephalus) populations on the Crooked and lower Columbia rivers of Oregon and used the double-survey method to estimate populations and sighting probabilities for different survey methods (aerial, boat, vehicle) and bald eagle ages (adults vs. subadults). Sighting probabilities were consistently 20%. The results revealed variable and negative bias (percent relative bias = -9 to -70%) of direct counts and emphasized the importance of estimating populations where some measure of precision and ability to conduct inference tests are available. We recommend use of the double-survey method to estimate abundance of bald eagle populations and other raptors in open habitats.
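A minimal form of a two-observer double-survey estimator is sketched below: with n1 and n2 eagles detected by the two platforms and m detected by both, Lincoln-Petersen logic yields sighting probabilities and an abundance estimate. The stratification by survey method and age class used in the study is omitted, and the counts are hypothetical.

```python
def double_survey_estimate(n1, n2, m):
    """Two-observer (double-survey) abundance estimate.

    n1, n2: animals detected by observer/platform 1 and 2
    m:      animals detected by both
    Returns (N_hat, p1_hat, p2_hat) using the Chapman-corrected Lincoln-Petersen form."""
    p1 = m / n2          # probability platform 1 sights an animal known to be present
    p2 = m / n1
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    return n_hat, p1, p2

# Hypothetical counts from paired aerial and boat surveys
print(double_survey_estimate(n1=46, n2=52, m=38))
```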
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
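A generic version of the underlying idea (not the authors' exact estimator, and without their singular-value fallback) is sketched below: if successive solution changes contract roughly geometrically with estimated rate rho, the remaining error is approximately the last change times rho / (1 - rho), and iteration stops when that estimate falls below a tolerance.

```python
import numpy as np

def estimated_error(delta_prev, delta_curr):
    """Estimate the remaining solution error from two successive update norms,
    assuming approximately geometric convergence with rate rho = d_k / d_{k-1}."""
    rho = delta_curr / delta_prev
    if rho >= 1.0:            # stagnating or noisy changes: no reliable geometric estimate
        return np.inf
    return delta_curr * rho / (1.0 - rho)

def iterate_until(error_tol, update_step, x0, max_iter=1000):
    """Run a fixed-point style iteration until the estimated error drops below error_tol."""
    x = x0
    delta_prev = None
    for _ in range(max_iter):
        x_new = update_step(x)
        delta = np.linalg.norm(x_new - x)
        if delta_prev is not None and estimated_error(delta_prev, delta) < error_tol:
            return x_new
        delta_prev, x = delta, x_new
    return x

# Toy contraction: x <- 0.5*x + 1 converges to 2
print(iterate_until(1e-8, lambda x: 0.5 * x + 1.0, np.array([0.0])))
```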
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM.
ERIC Educational Resources Information Center
Haworth, Claire M. A.; Plomin, Robert
2010-01-01
Objective: To consider recent findings from quantitative genetic research in the context of molecular genetic research, especially genome-wide association studies. We focus on findings that go beyond merely estimating heritability. We use learning abilities and disabilities as examples. Method: Recent twin research in the area of learning…
David A. Tallmon; Dave Gregovich; Robin S. Waples; C. Scott Baker; Jennifer Jackson; Barbara L. Taylor; Eric Archer; Karen K. Martien; Fred W. Allendorf; Michael K. Schwartz
2010-01-01
The utility of microsatellite markers for inferring population size and trend has not been rigorously examined, even though these markers are commonly used to monitor the demography of natural populations. We assessed the ability of a linkage disequilibrium estimator of effective population size (Ne) and a simple capture-recapture estimator of abundance (N) to quantify...
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
On the Estimation of Standard Errors in Cognitive Diagnosis Models
ERIC Educational Resources Information Center
Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim
2018-01-01
Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…
Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide
2016-01-01
The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation considering the sensitivity, or the ability to estimate the true positive cases, and the specificity, or the ability to estimate the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females.
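The Youden-index step used to reset the decision criterion can be sketched as below: over candidate dental-age cut-offs, choose the one maximizing sensitivity plus specificity minus 1 for classifying subjects as at or above 14 years. The ages in the example are hypothetical, not the study data.

```python
import numpy as np

def best_youden_cutoff(dental_age, over_14_true):
    """Pick the dental-age cut-off maximizing Youden's J = sensitivity + specificity - 1
    for deciding whether chronological age is >= 14 years."""
    dental_age = np.asarray(dental_age, dtype=float)
    over_14_true = np.asarray(over_14_true, dtype=bool)
    best = (None, -np.inf)
    for cut in np.unique(dental_age):
        predicted = dental_age >= cut
        sens = np.mean(predicted[over_14_true])          # true positives / positives
        spec = np.mean(~predicted[~over_14_true])        # true negatives / negatives
        j = sens + spec - 1
        if j > best[1]:
            best = (cut, j)
    return best

# Hypothetical sample: estimated dental ages and true >=14 status
ages = [12.1, 12.8, 13.1, 13.4, 13.9, 14.2, 14.6, 15.0]
truth = [0, 0, 0, 1, 1, 1, 1, 1]
print(best_youden_cutoff(ages, truth))
```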
Guide for evaluating sweetgum sites
W. M. Broadfoot; Roger M. Krinard
1959-01-01
Studies at the Stoneville Research Center have established three practical methods of estimating the ability of Midsouth soils to grow sweetgum (Liquidambar styraciflua). The methods were developed from data collected from 104 sweetgum plots in the area mapped on the opposite page.
Estimating Premorbid Cognitive Abilities in Low-Educated Populations
Apolinario, Daniel; Brucki, Sonia Maria Dozzi; Ferretti, Renata Eloah de Lucena; Farfel, José Marcelo; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Jacob-Filho, Wilson
2013-01-01
Objective To develop an informant-based instrument that would provide a valid estimate of premorbid cognitive abilities in low-educated populations. Methods A questionnaire was drafted by focusing on the premorbid period with a 10-year time frame. The initial pool of items was submitted to classical test theory and a factorial analysis. The resulting instrument, named the Premorbid Cognitive Abilities Scale (PCAS), is composed of questions addressing educational attainment, major lifetime occupation, reading abilities, reading habits, writing abilities, calculation abilities, use of widely available technology, and the ability to search for specific information. The validation sample was composed of 132 older Brazilian adults from the following three demographically matched groups: normal cognitive aging (n = 72), mild cognitive impairment (n = 33), and mild dementia (n = 27). The scores of a reading test and a neuropsychological battery were adopted as construct criteria. Post-mortem inter-informant reliability was tested in a sub-study with two relatives from each deceased individual. Results All items presented good discriminative power, with corrected item-total correlation varying from 0.35 to 0.74. The summed score of the instrument presented high correlation coefficients with global cognitive function (r = 0.73) and reading skills (r = 0.82). Cronbach's alpha was 0.90, showing optimal internal consistency without redundancy. The scores did not decrease across the progressive levels of cognitive impairment, suggesting that the goal of evaluating the premorbid state was achieved. The intraclass correlation coefficient was 0.96, indicating excellent inter-informant reliability. Conclusion The instrument developed in this study has shown good properties and can be used as a valid estimate of premorbid cognitive abilities in low-educated populations. The applicability of the PCAS, both as an estimate of premorbid intelligence and cognitive reserve, is discussed.
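The two scale statistics reported above (corrected item-total correlation and Cronbach's alpha) can be computed from an item-score matrix as sketched below; the data here are simulated, not the PCAS validation sample.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the total score excluding that item."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Simulated 8-item scale driven by one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
scores = np.column_stack([latent + rng.normal(scale=1.0, size=300) for _ in range(8)])
print(round(cronbach_alpha(scores), 2), corrected_item_total(scores).round(2))
```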
ERIC Educational Resources Information Center
Samejima, Fumiko
A method is proposed that increases the accuracies of estimation of the operating characteristics of discrete item responses, especially when the true operating characteristic is represented by a steep curve, and also at the lower and upper ends of the ability distribution where the estimation tends to be inaccurate because of the smaller number…
ERIC Educational Resources Information Center
Ozyurt, Hacer; Ozyurt, Ozcan; Baki, Adnan
2012-01-01
Assessment is one of the methods used for evaluation of the learning outputs. Nowadays, use of adaptive assessment systems estimating ability level and abilities of the students is becoming widespread instead of traditional assessment systems. Adaptive assessment system evaluates students not only according to their marks that they take in test…
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows minimum mortality temperatures in different cities to be compared and their associations with climate to be investigated while properly accounting for estimation uncertainty.
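The estimator can be sketched roughly as follows: fit the temperature-mortality curve, draw coefficient vectors from a multivariate normal centered on the fitted coefficients with the fitted covariance, locate the minimizing temperature for each draw, and summarize the draws. The sketch below uses a simple quadratic basis and a Poisson GLM on simulated data as stand-ins for the spline model and real series in the paper.

```python
import numpy as np
import statsmodels.api as sm

# Simulated U-shaped temperature-mortality relationship with a true minimum near 21 C
rng = np.random.default_rng(42)
temp = rng.uniform(0, 35, size=1500)
deaths = rng.poisson(np.exp(3.0 + 0.002 * (temp - 21.0) ** 2))

# Quadratic basis as a stand-in for the spline basis used in the paper
X = np.column_stack([np.ones_like(temp), temp, temp ** 2])
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()

grid = np.linspace(temp.min(), temp.max(), 351)
Xg = np.column_stack([np.ones_like(grid), grid, grid ** 2])

# Approximate parametric bootstrap: resample coefficients, re-locate the curve minimum
draws = rng.multivariate_normal(fit.params, np.asarray(fit.cov_params()), size=2000)
mmt_samples = grid[np.argmin(Xg @ draws.T, axis=0)]

mmt_hat = grid[np.argmin(Xg @ fit.params)]
ci_low, ci_high = np.percentile(mmt_samples, [2.5, 97.5])
print(mmt_hat, (ci_low, ci_high), mmt_samples.std())    # point estimate, 95% CI, SE
```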
Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.
García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G
2017-08-01
The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that the use of Gaussian processes entails, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economics and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
Comparison of MRI-based estimates of articular cartilage contact area in the tibiofemoral joint.
Henderson, Christopher E; Higginson, Jill S; Barrance, Peter J
2011-01-01
Knee osteoarthritis (OA) detrimentally impacts the lives of millions of older Americans through pain and decreased functional ability. Unfortunately, the pathomechanics and associated deviations from joint homeostasis that OA patients experience are not well understood. Alterations in mechanical stress in the knee joint may play an essential role in OA; however, existing literature in this area is limited. The purpose of this study was to evaluate the ability of an existing magnetic resonance imaging (MRI)-based modeling method to estimate articular cartilage contact area in vivo. Imaging data of both knees were collected on a single subject with no history of knee pathology at three knee flexion angles. Intra-observer reliability and sensitivity studies were also performed to determine the role of operator-influenced elements of the data processing on the results. The method's articular cartilage contact area estimates were compared with existing contact area estimates in the literature. The method demonstrated an intra-observer reliability of 0.95 when assessed using Pearson's correlation coefficient and was found to be most sensitive to changes in the cartilage tracings on the peripheries of the compartment. The articular cartilage contact area estimates at full extension were similar to those reported in the literature. The relationships between tibiofemoral articular cartilage contact area and knee flexion were also qualitatively and quantitatively similar to those previously reported. The MRI-based knee modeling method was found to have high intra-observer reliability, sensitivity to peripheral articular cartilage tracings, and agreeability with previous investigations when using data from a single healthy adult. Future studies will implement this modeling method to investigate the role that mechanical stress may play in progression of knee OA through estimation of articular cartilage contact area.
Kjeldsen, Henrik D.; Kaiser, Marcus; Whittington, Miles A.
2015-01-01
Background Brain function is dependent upon the concerted, dynamical interactions between a great many neurons distributed over many cortical subregions. Current methods of quantifying such interactions are limited by consideration only of single direct or indirect measures of a subsample of all neuronal population activity. New method Here we present a new derivation of the electromagnetic analogy to near-field acoustic holography allowing high-resolution, vectored estimates of interactions between sources of electromagnetic activity that significantly improves this situation. In vitro voltage potential recordings were used to estimate pseudo-electromagnetic energy flow vector fields, current and energy source densities and energy dissipation in reconstruction planes at depth into the neural tissue parallel to the recording plane of the microelectrode array. Results The properties of the reconstructed near-field estimate allowed both the utilization of super-resolution techniques to increase the imaging resolution beyond that of the microelectrode array, and facilitated a novel approach to estimating causal relationships between activity in neocortical subregions. Comparison with existing methods The holographic nature of the reconstruction method allowed significantly better estimation of the fine spatiotemporal detail of neuronal population activity, compared with interpolation alone, beyond the spatial resolution of the electrode arrays used. Pseudo-energy flow vector mapping was possible with high temporal precision, allowing a near-realtime estimate of causal interaction dynamics. Conclusions Basic near-field electromagnetic holography provides a powerful means to increase spatial resolution from electrode array data with careful choice of spatial filters and distance to reconstruction plane. More detailed approaches may provide the ability to volumetrically reconstruct activity patterns on neuronal tissue, but the ability to extract vectored data with the method presented already permits the study of dynamic causal interactions without bias from any prior assumptions on anatomical connectivity.
ERIC Educational Resources Information Center
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
Some Empirical Evidence for Latent Trait Model Selection.
ERIC Educational Resources Information Center
Hutten, Leah R.
The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…
Tracking of electrochemical impedance of batteries
NASA Astrophysics Data System (ADS)
Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.
2016-04-01
This paper presents an evolutionary battery impedance estimation method, which can be easily embedded in vehicles or nomad devices. The proposed method not only allows an accurate frequency impedance estimation, but also a tracking of its temporal evolution contrary to classical electrochemical impedance spectroscopy methods. Taking into account constraints of cost and complexity, we propose to use the existing electronics of current control to perform a frequency evolutionary estimation of the electrochemical impedance. The developed method uses a simple wideband input signal, and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single parameter, managing a trade-off between tracking and estimation performance. This normalized parameter allows to correctly adapt the behavior of the proposed estimator to the variations of the impedance. The advantage of the proposed method is twofold: the method is easy to embed into a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator, and on a real Lithium ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results, and estimation performance as accurate as the usual laboratory approaches.
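The core of such an estimator can be sketched as a recursively averaged ratio of voltage and current spectra over successive signal blocks, with a single forgetting parameter controlling the trade-off between tracking and estimation variance. The snippet below is a generic illustration under those assumptions, not the authors' implementation, and the resistive "battery" is a toy example.

```python
import numpy as np

class RecursiveImpedanceEstimator:
    """Track Z(f) = V(f)/I(f) by exponentially averaging cross- and auto-spectra
    of successive blocks; 'alpha' trades tracking speed against variance."""
    def __init__(self, block_size, alpha=0.1):
        self.alpha = alpha
        self.s_vi = np.zeros(block_size // 2 + 1, dtype=complex)   # averaged V * conj(I)
        self.s_ii = np.zeros(block_size // 2 + 1)                  # averaged |I|^2

    def update(self, v_block, i_block):
        V = np.fft.rfft(v_block)
        I = np.fft.rfft(i_block)
        self.s_vi = (1 - self.alpha) * self.s_vi + self.alpha * V * np.conj(I)
        self.s_ii = (1 - self.alpha) * self.s_ii + self.alpha * np.abs(I) ** 2
        return self.s_vi / np.maximum(self.s_ii, 1e-12)            # current impedance estimate

# Toy example: purely resistive 50 milliohm cell excited by a wideband current
rng = np.random.default_rng(0)
est = RecursiveImpedanceEstimator(block_size=256)
for _ in range(100):
    i = rng.normal(size=256)
    v = 0.05 * i + 1e-4 * rng.normal(size=256)
    Z = est.update(v, i)
print(np.round(np.abs(Z[1:5]), 4))     # should be close to 0.05 ohm across frequencies
```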
Equating Scores from Adaptive to Linear Tests
ERIC Educational Resources Information Center
van der Linden, Wim J.
2006-01-01
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy.
Zhang, Ying; Alonzo, Todd A
2016-11-01
In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can cause bias in estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods to construct ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U-statistics theory, we develop asymptotic properties for the estimator. A Jackknife estimator of variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
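A minimal sketch of an inverse-probability-weighted VUS estimate under verification bias, assuming each subject has a known or modeled verification probability; the variable names are illustrative, and the U-statistic asymptotics and Jackknife variance estimator from the study are omitted.

```python
import numpy as np

def ipw_vus(scores, true_class, verified, verify_prob):
    """Inverse-probability-weighted estimate of the volume under the ROC
    surface, P(X_1 < X_2 < X_3), for ordered disease classes 1 < 2 < 3.

    scores:      continuous test results for all subjects
    true_class:  1, 2 or 3 for verified subjects (arbitrary otherwise)
    verified:    boolean indicator of disease-status verification
    verify_prob: estimated probability of verification for each subject
    """
    scores, true_class = np.asarray(scores), np.asarray(true_class)
    verified = np.asarray(verified, dtype=bool)
    w = verified / np.asarray(verify_prob, dtype=float)   # IPW weights (0 if unverified)
    idx = [np.where(verified & (true_class == k))[0] for k in (1, 2, 3)]
    num = den = 0.0
    for i in idx[0]:
        for j in idx[1]:
            for k in idx[2]:
                wt = w[i] * w[j] * w[k]
                den += wt
                num += wt * float(scores[i] < scores[j] < scores[k])
    return num / den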
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objective of the current study was to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be used in precision feeding systems without adjustments. However, the method's ability to accommodate large genetic differences in feed intake and protein deposition patterns needs to be studied further.
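In outline, a factorial requirement estimate of this kind sums a maintenance term and a growth term. The sketch below shows the structure only; the coefficients (maintenance need per unit metabolic weight, lysine content of body protein, and utilization efficiency) are illustrative placeholders, not the values used in the study.

```python
def sid_lys_requirement(bw_kg, protein_deposition_g_d,
                        maint_per_kg_bw075=0.04,   # g SID Lys / kg BW^0.75 (placeholder)
                        lys_in_body_protein=0.07,  # g Lys per g body protein (placeholder)
                        efficiency=0.72):          # marginal efficiency of SID Lys use (placeholder)
    """Factorial estimate of a daily SID lysine requirement (g/day):
    maintenance plus the lysine retained in protein deposition divided by
    the efficiency with which dietary SID lysine is used."""
    maintenance = maint_per_kg_bw075 * bw_kg ** 0.75
    growth = protein_deposition_g_d * lys_in_body_protein / efficiency
    return maintenance + growth

# e.g. a 60 kg pig depositing 150 g protein/day (illustrative numbers only)
requirement = sid_lys_requirement(60, 150)
```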
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
The online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs; thus, the deviation of the estimated θ̂s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of the θ̂s that may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates and that MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
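For context, a minimal Newton-Raphson routine for the MLE θ̂ that Method A treats as the true ability, written for the 2PL model with known item parameters; the Lord's bias-correction iteration added by MLE-LBCI is not reproduced here.

```python
import numpy as np

def theta_mle_2pl(responses, a, b, theta0=0.0, n_iter=20):
    """Maximum likelihood ability estimate under the 2PL model.

    responses: 0/1 item scores; a, b: known discrimination/difficulty.
    Returns the MLE theta-hat whose deviation from the true theta is what
    bias-correction methods such as MLE-LBCI try to reduce. (For all-correct
    or all-incorrect response patterns the MLE is unbounded.)
    """
    u, a, b = map(np.asarray, (responses, a, b))
    theta = theta0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        grad = np.sum(a * (u - p))              # score function
        info = np.sum(a ** 2 * p * (1.0 - p))   # Fisher information
        step = grad / info
        theta += step
        if abs(step) < 1e-6:
            break
    return theta
```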
ERIC Educational Resources Information Center
Besser, Jana; Zekveld, Adriana A.; Kramer, Sophia E.; Ronnberg, Jerker; Festen, Joost M.
2012-01-01
Purpose: In this research, the authors aimed to increase the analogy between Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) and Speech Reception Threshold (SRT; Plomp & Mimpen, 1979) and to examine the TRT's value in estimating cognitive abilities that are important for speech comprehension in noise. Method: The…
ERIC Educational Resources Information Center
Cliffordson, Christina; Gustafsson, Jan-Eric
2008-01-01
The effects of age and schooling on different aspects of intellectual performance, taking track of study into account, are investigated. The analyses were based on military enlistment test scores, obtained by 48,269 males, measuring Fluid ability (Gf), Crystallized intelligence (Gc), and General visualization (Gv) ability. A regression method,…
Optimal filtering and Bayesian detection for friction-based diagnostics in machines.
Ray, L R; Townsend, J R; Ramasubramanian, A
2001-01-01
Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
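As a rough sketch of the observer idea (not the authors' nonlinear filter), a linear Kalman filter on a simple DC-motor model can treat friction torque as a slowly varying disturbance state estimated from shaft-angle measurements and the known motor voltage; all motor parameters below are placeholders.

```python
import numpy as np

# Placeholder motor parameters (J: inertia, kt: torque const., ke: back-EMF, R: resistance)
J, kt, ke, R, dt = 1e-3, 0.05, 0.05, 1.0, 1e-3

# State x = [angle, speed, friction torque]; friction is modeled as a random walk.
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0 - dt * kt * ke / (R * J), -dt / J],
              [0.0, 0.0, 1.0]])
B = np.array([0.0, dt * kt / (R * J), 0.0])   # input: motor voltage
H = np.array([[1.0, 0.0, 0.0]])               # only shaft angle is measured
Q = np.diag([1e-9, 1e-7, 1e-4])               # let the friction state wander
Rm = np.array([[1e-6]])                       # encoder noise variance

def kalman_friction(angles, voltages):
    """Return the friction-torque estimate at each sample."""
    x, P = np.zeros(3), np.eye(3) * 1e-3
    out = []
    for z, v in zip(angles, voltages):
        x = F @ x + B * v                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + Rm                   # update with the angle measurement
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(3) - K @ H) @ P
        out.append(x[2])
    return np.array(out)
```

The estimated friction torque can then be screened for statistical abnormalities or compared against a friction process model, as described above.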
Transient Stability Output Margin Estimation Based on Energy Function Method
NASA Astrophysics Data System (ADS)
Miwa, Natsuki; Tanaka, Kazuyuki
In this paper, a new method of estimating the critical generation margin (CGM) in power systems is proposed from the viewpoint of transient stability diagnosis. The proposed method can directly compute the stability limit output for a given contingency based on the transient energy function (TEF) method. Since CGM can be obtained directly from the limit output using estimated P-θ curves and is easy to understand, it is more useful than the conventional critical clearing time (CCT) of the energy function method. The proposed method can also return a negative CGM, which indicates instability under the present load profile; a negative CGM can be directly utilized as a generator output restriction. The accuracy and fast solution ability of the proposed method are verified by applying it to a simple 3-machine model and the IEEJ EAST 10-machine standard model. Furthermore, the application of CGM to severity ranking of transient stability over many contingency cases is discussed.
NASA Astrophysics Data System (ADS)
Korolenko, E. A.; Korolik, E. V.; Korolik, A. K.; Kirkovskii, V. V.
2007-07-01
We present results from an investigation of the binding ability of the main transport proteins (albumin, lipoproteins, and α-1-acid glycoprotein) of blood plasma from patients at different stages of liver cirrhosis by the fluorescent probe method. We used the hydrophobic fluorescent probes anionic 8-anilinonaphthalene-1-sulfonate, which interacts in blood plasma mainly with albumin; cationic Quinaldine red, which interacts with α-1-acid glycoprotein; and neutral Nile red, which redistributes between lipoproteins and albumin in whole blood plasma. We show that the binding ability of albumin and α-1-acid glycoprotein to negatively charged and positively charged hydrophobic metabolites, respectively, increases in the compensation stage of liver cirrhosis. As the pathology process deepens and transitions into the decompensation stage, the transport abilities of albumin and α-1-acid glycoprotein decrease whereas the binding ability of lipoproteins remains high.
ERIC Educational Resources Information Center
Austin, Peter C.
2012-01-01
Researchers are increasingly using observational or nonrandomized data to estimate causal treatment effects. Essential to the production of high-quality evidence is the ability to reduce or minimize the confounding that frequently occurs in observational studies. When using the potential outcome framework to define causal treatment effects, one…
ERIC Educational Resources Information Center
Matthews-Lopez, Joy L.; Hombo, Catherine M.
The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…
Methods of Measurement the Quality Metrics in a Printing System
NASA Astrophysics Data System (ADS)
Varepo, L. G.; Brazhnikov, A. Yu; Nagornova, I. V.; Novoselskaya, O. A.
2018-04-01
One of the main criteria for choosing an ink as a component of a printing system is the scumming ability of the ink. The realization of an algorithm for estimating the quality metrics in a printing system is shown. Histograms of the ink rate for various printing systems are presented. A quantitative estimate of the emulsification stability of offset inks is given.
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2017-01-01
This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
ERIC Educational Resources Information Center
Malcolm, Peter
2013-01-01
The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…
NASA Astrophysics Data System (ADS)
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2010-01-01
The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed-trace method, a recursive least-squares technique. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, with their neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP and KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also reflects the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
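A simplified sketch of recursive least-squares estimation of the control gains from inclination data, assuming the ankle torque signal is available (e.g., reconstructed from the pendulum dynamics); a constant forgetting factor is used here in place of the fixed-trace variant mentioned above.

```python
import numpy as np

def rls_gains(angle, angle_rate, torque, lam=0.99):
    """Recursive least squares for torque ~ KP*angle + KD*angle_rate.

    lam: forgetting factor (<1 lets the estimates track slow changes);
    the paper's fixed-trace method instead keeps trace(P) constant.
    Returns the time history of [KP, KD] estimates.
    """
    theta = np.zeros(2)          # [KP, KD]
    P = np.eye(2) * 1e3          # large initial covariance
    history = []
    for phi, y in zip(np.column_stack([angle, angle_rate]), torque):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (y - phi @ theta)    # update the estimate
        P = (P - np.outer(k, phi @ P)) / lam     # update the covariance
        history.append(theta.copy())
    return np.array(history)
```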
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry
2015-01-01
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Finding and estimating chemical property data for environmental assessment.
Boethling, Robert S; Howard, Philip H; Meylan, William M
2004-10-01
The ability to predict the behavior of a chemical substance in a biological or environmental system largely depends on knowledge of the physicochemical properties and reactivity of that substance. We focus here on properties, with the objective of providing practical guidance for finding measured values and using estimation methods when necessary. Because currently available computer software often makes it more convenient to estimate than to retrieve measured values, we try to discourage irrational exuberance for these tools by including comprehensive lists of Internet and hard-copy data resources. Guidance for assessors is presented in the form of a process to obtain data that includes establishment of chemical identity, identification of data sources, assessment of accuracy and reliability, substructure searching for analogs when experimental data are unavailable, and estimation from chemical structure. Regarding property estimation, we cover estimation from close structural analogs in addition to broadly applicable methods requiring only the chemical structure. For the latter, we list and briefly discuss the most widely used methods. Concluding thoughts are offered concerning appropriate directions for future work on estimation methods, again with an emphasis on practical applications.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
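A compact sketch of the scheme described above: propose vehicle parameters, model road load with a standard road-load equation, and accept or reject using the probability ratio to the current state. The priors, step sizes, noise level and drive-cycle data are all illustrative assumptions.

```python
import numpy as np

RHO, G = 1.2, 9.81  # air density (kg/m^3), gravitational acceleration (m/s^2)

def road_load(params, v, a):
    """Road load force for parameters (mass, CdA, Crr) at speed v and acceleration a."""
    mass, cda, crr = params
    return mass * a + mass * G * crr + 0.5 * RHO * cda * v ** 2

def log_prob(params, v, a, measured_load, sigma=200.0):
    """Gaussian log-likelihood of the measured load given the modeled road load."""
    if np.any(np.asarray(params) <= 0):
        return -np.inf
    resid = measured_load - road_load(params, v, a)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(v, a, measured_load, start=(15000.0, 6.0, 0.007),
               step=(200.0, 0.1, 0.0003), n_samples=20000):
    """Random-walk Metropolis over (mass, CdA, Crr); the chain history gives a
    distribution of plausible parameter sets rather than a single value."""
    rng = np.random.default_rng(0)
    x = np.array(start)
    lp = log_prob(x, v, a, measured_load)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_prob(prop, v, a, measured_load)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with the probability ratio
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```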
A Comparison of Four Methods of IRT Subscoring
ERIC Educational Resources Information Center
de la Torre, Jimmy; Song, Hao; Hong, Yuan
2011-01-01
Lack of sufficient reliability is the primary impediment for generating and reporting subtest scores. Several current methods of subscore estimation do so either by incorporating the correlational structure among the subtest abilities or by using the examinee's performance on the overall test. This article conducted a systematic comparison of four…
ERIC Educational Resources Information Center
Ho, Tsung-Han
2010-01-01
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
Nichols, J.D.
2004-01-01
The EURING meetings and the scientists who have attended them have contributed substantially to the growth of knowledge in the field of estimating parameters of animal populations. The contributions of David R. Anderson to process modeling, parameter estimation and decision analysis are briefly reviewed. Metrics are considered for assessing individual contributions to a field of inquiry, and it is concluded that Anderson’s contributions have been substantial. Important characteristics of Anderson and his career are the ability to identify and focus on important topics, the premium placed on dissemination of new methods to prospective users, the ability to assemble teams of complementary researchers, and the innovation and vision that characterized so much of his work. The paper concludes with a list of interesting current research topics for consideration by EURING participants.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
NASA Astrophysics Data System (ADS)
El Sharif, H.; Teegavarapu, R. S.
2012-12-01
Spatial interpolation methods used for estimation of missing precipitation data at a site seldom check for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study evaluates the efficacy of a fuzzy-logic methodology for infilling missing historical daily precipitation data in preserving site and regional statistics. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of this newly proposed method in comparison to traditional data infilling techniques. Several error and performance measures will be used to evaluate the methods and trade-offs in accuracy of estimation and preservation of site and regional statistics.
Murrell, Ebony G.; Juliano, Steven A.
2012-01-01
Resource competition theory predicts that R*, the equilibrium resource amount yielding zero growth of a consumer population, should predict species' competitive abilities for that resource. This concept has been supported for unicellular organisms, but has not been well-tested for metazoans, probably due to the difficulty of raising experimental populations to equilibrium and measuring population growth rates for species with long or complex life cycles. We developed an index (Rindex) of R* based on demography of one insect cohort, growing from egg to adult in a non-equilibrium setting, and tested whether Rindex yielded accurate predictions of competitive abilities using mosquitoes as a model system. We estimated finite rate of increase (λ′) from demographic data for cohorts of three mosquito species raised with different detritus amounts, and estimated each species' Rindex using nonlinear regressions of λ′ vs. initial detritus amount. All three species' Rindex differed significantly, and accurately predicted competitive hierarchy of the species determined in simultaneous pairwise competition experiments. Our Rindex could provide estimates and rigorous statistical comparisons of competitive ability for organisms for which typical chemostat methods and equilibrium population conditions are impractical. PMID:22970128
Stochastic Gabor reflectivity and acoustic impedance inversion
NASA Astrophysics Data System (ADS)
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
To delineate subsurface lithology and estimate petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal in order to obtain a reflection series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are unavoidable; these are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods attempt to estimate the reflectivity series, their estimates will not be correct because of these incorrect assumptions, although they may still be useful. Converting those reflection series to AI and merging them with the low-frequency initial model can help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI using a bias obtained from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and allowed wavelet properties to be estimated from different windows. Working with different time windows made it possible to create a time-variant kernel matrix, which was used to remove matrix effects from the seismic data. The result was a reflection series that does not follow the stationary assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the time cost of the seismic inversion is negligible relative to general Gabor inversion in the frequency domain. Also, the bias obtained from well logs helps the method estimate reliable AI. To assess the effect of random noise on deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio equal to 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set, even when using a large number of regularization parameters. Also, despite the low signal level, stochastic Gabor inversion not only estimates the wavelet's properties correctly but also, because of the bias from well logs, produces an inversion result very close to the real AI. Comparing the deterministic and the introduced inversion results on a real data set shows that the low-resolution results of deterministic inversion, especially in the deeper parts of seismic sections, create significant reliability problems for seismic prospects; this pitfall is completely resolved by stochastic Gabor inversion. The AI estimated using Gabor inversion in the time domain is much better and obtained faster than with general Gabor inversion in the frequency domain; this is due to the extra number of windows required to analyze the time-frequency information and also the temporal increment between windows. In contrast, stochastic Gabor inversion can estimate trustworthy physical properties close to the real characteristics. Applied to a real data set, the method made it possible to detect the direction of volcanic intrusion and to delineate the lithology distribution along the fan.
Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes, owing to the improved frequency content and zero phasing of the final inversion volume.
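The reflectivity-to-impedance conversion referred to above follows the standard recursion from reflection coefficients to acoustic impedance. The sketch below assumes a starting impedance taken from a well log (the "bias") and is not the full stochastic Gabor workflow.

```python
import numpy as np

def reflectivity_to_ai(reflectivity, ai_start):
    """Convert a reflection-coefficient series to acoustic impedance.

    Uses AI[k+1] = AI[k] * (1 + r[k]) / (1 - r[k]), seeded with an impedance
    value (e.g., from a well log) to restore the low-frequency trend that the
    reflectivity series alone cannot provide.
    """
    ai = [float(ai_start)]
    for r in np.asarray(reflectivity, dtype=float):
        ai.append(ai[-1] * (1.0 + r) / (1.0 - r))
    return np.array(ai)
```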
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel
2018-06-19
In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
Processing Satellite Data for Slant Total Electron Content Measurements
NASA Technical Reports Server (NTRS)
Stephens, Philip John (Inventor); Komjathy, Attila (Inventor); Wilson, Brian D. (Inventor); Mannucci, Anthony J. (Inventor)
2016-01-01
A method, system, and apparatus provide the ability to estimate ionospheric observables using space-borne observations. Space-borne global positioning system (GPS) data of ionospheric delay are obtained from a satellite. The space-borne GPS data are combined with ground-based GPS observations. The combination is utilized in a model to estimate a global three-dimensional (3D) electron density field.
Sinkewicz, Marilyn; Garfinkel, Irwin
2009-05-01
We present new estimates of unwed fathers' ability to pay child support. Prior research relied on surveys that drastically undercounted nonresident unwed fathers and provided no link to their children who lived in separate households. To overcome these limitations, previous research assumed assortative mating and that each mother partnered with one father who was actually eligible to pay support and had no other child support obligations. Because the Fragile Families and Child Wellbeing Study contains data on couples, multiple-partner fertility, and a rich array of other previously unmeasured characteristics of fathers, it is uniquely suited to address the limitations of previous research. We also use an improved method of dealing with missing data. Our findings suggest that previous research overestimated the aggregate ability of unwed nonresident fathers to pay child support by 33% to 60%.
Low Reynolds number wind tunnel measurements - The importance of being earnest
NASA Technical Reports Server (NTRS)
Mueller, Thomas J.; Batill, Stephen M.; Brendel, Michael; Perry, Mark L.; Bloch, Diane R.
1986-01-01
A method for obtaining two-dimensional aerodynamic force coefficients at low Reynolds numbers using a three-component external platform balance is presented. Regardless of method, however, the importance of understanding the possible influence of the test facility and instrumentation on the final results cannot be overstated. There is an uncertainty in the ability of the facility to simulate a two-dimensional flow environment due to the confinement effect of the wind tunnel and the method used to mount the airfoil. Additionally, the ability of the instrumentation to accurately measure forces and pressures has an associated uncertainty. This paper focuses on efforts taken to understand the errors introduced by the techniques and apparatus used at the University of Notre Dame, and the importance of making an earnest estimate of the uncertainty. Although quantitative estimates of facility-induced errors are difficult to obtain, the uncertainty in measured results can be handled in a straightforward manner and provide the experimentalist, and others, with a basis to evaluate experimental results.
Combining computer adaptive testing technology with cognitively diagnostic assessment.
McGlohen, Meghan; Chang, Hua-Hua
2008-08-01
A major advantage of computerized adaptive testing (CAT) is that it allows the test to home in on an examinee's ability level in an interactive manner. The aim of the new area of cognitive diagnosis is to provide information about specific content areas in which an examinee needs help. The goal of this study was to combine the benefit of specific feedback from cognitively diagnostic assessment with the advantages of CAT. In this study, three approaches to combining these were investigated: (1) item selection based on the traditional ability level estimate (theta), (2) item selection based on the attribute mastery feedback provided by cognitively diagnostic assessment (alpha), and (3) item selection based on both the traditional ability level estimate (theta) and the attribute mastery feedback provided by cognitively diagnostic assessment (alpha). The results from these three approaches were compared for theta estimation accuracy, attribute mastery estimation accuracy, and item exposure control. The theta- and alpha-based condition outperformed the alpha-based condition regarding theta estimation, attribute mastery pattern estimation, and item exposure control. Both the theta-based condition and the theta- and alpha-based condition performed similarly with regard to theta estimation, attribute mastery estimation, and item exposure control, but the theta- and alpha-based condition has an additional advantage in that it uses the shadow test method, which allows the administrator to incorporate additional constraints in the item selection process, such as content balancing, item type constraints, and so forth, and also to select items on the basis of both the current theta and alpha estimates, which can be built on top of existing 3PL testing programs.
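A minimal illustration of the theta-based item selection step (maximum Fisher information under the 3PL model) that the study combines with attribute-mastery information; item parameters are assumed known, and the exposure-control and shadow-test constraint machinery is omitted.

```python
import numpy as np

def fisher_info_3pl(theta, a, b, c):
    """Item information under the 3PL model at ability level theta."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))
    q = 1.0 - p
    return a ** 2 * (q / p) * ((p - c) / (1.0 - c)) ** 2

def select_next_item(theta_hat, a, b, c, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    info = fisher_info_3pl(theta_hat, np.asarray(a), np.asarray(b), np.asarray(c))
    info[list(administered)] = -np.inf
    return int(np.argmax(info))
```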
Revised motion estimation algorithm for PROPELLER MRI.
Pipe, James G; Gibbs, Wende N; Li, Zhiqiang; Karis, John P; Schar, Michael; Zwart, Nicholas R
2014-08-01
To introduce a new algorithm for estimating data shifts (used for both rotation and translation estimates) for motion-corrected PROPELLER MRI. The method estimates shifts for all blades jointly, emphasizing blade-pair correlations that are both strong and more robust to noise. The heads of three volunteers were scanned using a PROPELLER acquisition while they exhibited various amounts of motion. All data were reconstructed twice, using motion estimates from the original and new algorithm. Two radiologists independently and blindly compared 216 image pairs from these scans, ranking the left image as substantially better or worse than, slightly better or worse than, or equivalent to the right image. In the aggregate of 432 scores, the new method was judged substantially better than the old method 11 times, and was never judged substantially worse. The new algorithm compared favorably with the old in its ability to estimate bulk motion in a limited study of volunteer motion. A larger study of patients is planned for future work. Copyright © 2013 Wiley Periodicals, Inc.
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity-and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes-suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
Advancement of Latent Trait Theory.
1988-02-01
if I am the principal investigator, I find it practically impossible to include and systematize all the important findings and implications within a...methods are described in [1.2]. Two important features of the principal investigator's approach are the following. (1) It does not assume any specific...were described in the preceding chapter; the maximum likelihood estimate θ̂ of ability θ, and also the estimate τ̂ of the transformed ability τ, play important roles
Bayesian Analysis of Longitudinal Data Using Growth Curve Models
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.
2007-01-01
Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…
Selecting a sampling method to aid in vegetation management decisions in loblolly pine plantations
David R. Weise; Glenn R. Glover
1993-01-01
Objective methods to evaluate hardwood competition in young loblolly pine (Pinus taeda L.) plantations are not widely used in the southeastern United States. The ability of common sampling rules to accurately estimate hardwood rootstock attributes at low sampling intensities and across varying rootstock spatial distributions is unknown. Fixed area plot...
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Whitbeck, M.; Grace, J.B.
2006-01-01
The estimation of aboveground biomass is important in the management of natural resources. Direct measurements by clipping, drying, and weighing of herbaceous vegetation are time-consuming and costly. Therefore, non-destructive methods for efficiently and accurately estimating biomass are of interest. We compared two non-destructive methods, visual obstruction and light penetration, for estimating aboveground biomass in marshes of the upper Texas, USA coast. Visual obstruction was estimated using the Robel pole method, which primarily measures the density and height of the canopy. Light penetration through the canopy was measured using a Decagon light wand, with readings taken above the vegetation and at the ground surface. Clip plots were also taken to provide direct estimates of total aboveground biomass. Regression relationships between estimated and clipped biomass were significant using both methods. However, the light penetration method was much more strongly correlated with clipped biomass under these conditions (R² value of 0.65 compared to 0.35 for the visual obstruction approach). The primary difference between the two methods in this situation was the ability of the light-penetration method to account for variations in plant litter. These results indicate that light-penetration measurements may be better for estimating biomass in marshes when plant litter is an important component. We advise that, in all cases, investigators should calibrate their methods against clip plots to evaluate applicability to their situation. © 2006, The Society of Wetland Scientists.
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
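A small sketch of the iterative KNN idea: impute missing entries from the K most similar genes, then repeat, letting previously imputed values inform later neighbour searches. The distance weighting and convergence criteria of IKNNimpute are simplified here.

```python
import numpy as np

def iknn_impute(data, k=10, n_iter=3):
    """Iterative KNN imputation of a genes x samples matrix containing NaNs.

    Each round fills every missing entry with the distance-weighted mean of
    that column over the k most similar rows, reusing values imputed in
    previous rounds when computing the similarities.
    """
    X = np.array(data, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)               # crude initial fill
    X[missing] = np.take(col_means, np.where(missing)[1])
    for _ in range(n_iter):
        for i in np.unique(np.where(missing)[0]):
            d = np.sqrt(np.sum((X - X[i]) ** 2, axis=1))
            d[i] = np.inf
            nn = np.argsort(d)[:k]                   # k nearest rows
            w = 1.0 / (d[nn] + 1e-12)
            est = (w[:, None] * X[nn]).sum(axis=0) / w.sum()
            X[i, missing[i]] = est[missing[i]]
    return X
```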
Jenkinson, Toni-Marie; Muncer, Steven; Wheeler, Miranda; Brechin, Don; Evans, Stephen
2018-06-01
Neuropsychological assessment requires accurate estimation of an individual's premorbid cognitive abilities. Oral word reading tests, such as the test of premorbid functioning (TOPF), and demographic variables, such as age, sex, and level of education, provide a reasonable indication of premorbid intelligence, but their ability to predict other related cognitive abilities is less well understood. This study aimed to develop regression equations, based on the TOPF and demographic variables, to predict scores on tests of verbal fluency and naming ability. A sample of 119 healthy adults provided demographic information and were tested using the TOPF, FAS, animal naming test (ANT), and graded naming test (GNT). Multiple regression analyses, using the TOPF and demographics as predictor variables, were used to estimate verbal fluency and naming ability test scores. Change scores and cases of significant impairment were calculated for two clinical samples with diagnosed neurological conditions (TBI and meningioma) using the method in Knight, McMahon, Green, and Skeaff (). Demographic variables provided a significant contribution to the prediction of all verbal fluency and naming ability test scores; however, adding TOPF score to the equation considerably improved prediction beyond that afforded by demographic variables alone. The percentage of variance accounted for by demographic variables and/or TOPF score was 19 per cent (FAS), 28 per cent (ANT), and 41 per cent (GNT). Change scores revealed significant differences in performance in the clinical groups, particularly the TBI group. Demographic variables, particularly education level, and scores on the TOPF should be taken into consideration when interpreting performance on tests of verbal fluency and naming ability. © 2017 The British Psychological Society.
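In outline, the prediction equations amount to ordinary multiple regression of each cognitive score on demographics and TOPF. The sketch below uses plain least squares with hypothetical predictor columns and is not the study's fitted equation.

```python
import numpy as np

def fit_prediction_equation(X, y):
    """Least-squares coefficients (with intercept) for predicting a cognitive
    test score (e.g., FAS, ANT or GNT) from predictors such as age, sex,
    years of education and TOPF score (columns of X)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict(beta, x_new):
    """Predicted (premorbid) score for a new case; the observed score minus
    this prediction gives the change score used to flag impairment."""
    return beta[0] + np.dot(beta[1:], x_new)
```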
Evaluation of methods for freeway operational analysis.
DOT National Transportation Integrated Search
2001-10-01
The ability to estimate accurately the operational performance of roadway segments has become increasingly critical as we move from a period of new construction into one of operations, maintenance, and, in some cases, reconstruction. In addition to m...
Ionospheric Slant Total Electron Content Analysis Using Global Positioning System Based Estimation
NASA Technical Reports Server (NTRS)
Komjathy, Attila (Inventor); Mannucci, Anthony J. (Inventor); Sparks, Lawrence C. (Inventor)
2017-01-01
A method, system, apparatus, and computer program product provide the ability to analyze ionospheric slant total electron content (TEC) using global navigation satellite systems (GNSS)-based estimation. Slant TEC is estimated for a given set of raypath geometries by fitting historical GNSS data to a specified delay model. The accuracy of the specified delay model is estimated by computing delay estimate residuals and plotting a behavior of the delay estimate residuals. An ionospheric threat model is computed based on the specified delay model. Ionospheric grid delays (IGDs) and grid ionospheric vertical errors (GIVEs) are computed based on the ionospheric threat model.
Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist
NASA Astrophysics Data System (ADS)
Tummala, Sudhakar; Dam, Erik B.
2010-03-01
Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.
NASA Astrophysics Data System (ADS)
Gillam, Thomas P. S.; Lester, Christopher G.
2014-11-01
We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects; for example jets misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have used a heuristic "matrix method" for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake rate estimates and limits with better qualities, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach.
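For reference, the two-component matrix method mentioned above inverts loose/tight event counts using real and fake efficiencies. A hedged single-bin sketch (no uncertainty treatment) is shown below.

```python
def matrix_method_fakes(n_loose, n_tight, eff_real, eff_fake):
    """Estimate the fake-object contribution to the tight selection.

    n_loose:  events passing the loose selection (tight events included)
    n_tight:  events passing the tight selection
    eff_real: probability that a real object in the loose sample passes tight
    eff_fake: probability that a fake object in the loose sample passes tight
    """
    if eff_real <= eff_fake:
        raise ValueError("requires eff_real > eff_fake")
    n_fake_loose = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake_loose   # expected fakes among the tight events
```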
NASA Astrophysics Data System (ADS)
Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.
2018-04-01
The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.
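For a linear forward model, the OEM retrieval reduces to the standard optimal-estimation update. The numpy sketch below (with made-up matrix names) shows the form of the solution and its posterior covariance, from which the random part of the uncertainty budget follows; real lidar retrievals iterate this step with a nonlinear forward model.

```python
import numpy as np

def oem_linear(y, K, x_a, S_a, S_e):
    """Optimal estimation retrieval for a linear forward model y = K x + noise.

    x_a, S_a: a priori state and covariance; S_e: measurement covariance.
    Returns the retrieved state and its posterior covariance.
    """
    gain = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
    x_hat = x_a + gain @ (y - K @ x_a)
    S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    return x_hat, S_hat
```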
Tumor response estimation in radar-based microwave breast cancer detection.
Kurrant, Douglas J; Fear, Elise C; Westwick, David T
2008-12-01
Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.
Effects of Technology on Experienced Job Characteristics and Job Satisfaction.
1980-07-01
[Fragment of the report's ability-rating instrument: items covering sensory discrimination (smell, taste, touch), memory for names, and the abilities to estimate speed and quality, grouped under sensory and cognitive factors.]
Satellite estimation of incident photosynthetically active radiation using ultraviolet reflectance
NASA Technical Reports Server (NTRS)
Eck, Thomas F.; Dye, Dennis G.
1991-01-01
A new satellite remote sensing method for estimating the amount of photosynthetically active radiation (PAR, 400-700 nm) incident at the earth's surface is described and tested. Potential incident PAR for clear sky conditions is computed from an existing spectral model. A major advantage of the UV approach over existing visible band approaches to estimating insolation is the improved ability to discriminate clouds from high-albedo background surfaces. UV spectral reflectance data from the Total Ozone Mapping Spectrometer (TOMS) were used to test the approach for three climatically distinct, midlatitude locations. Estimates of monthly total incident PAR from the satellite technique differed from values computed from ground-based pyranometer measurements by less than 6 percent. This UV remote sensing method can be applied to estimate PAR insolation over ocean and land surfaces which are free of ice and snow.
Inertial sensor-based methods in walking speed estimation: a systematic review.
Yang, Shuozhi; Li, Qingguo
2012-01-01
Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention for the past two decades, and the trend is continuing due to the improvement of performance and decrease in cost of the miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was done in the following electronic search engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.
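One widely cited family of accelerometer-based approaches (not specific to any single paper in the review) estimates step length from the vertical excursion of the centre of mass via an inverted-pendulum model and multiplies by cadence. The sketch below assumes a trunk-mounted vertical acceleration signal and a known leg length; the drift removal and event detection are deliberately crude.

```python
import numpy as np
from scipy.signal import detrend, find_peaks

def walking_speed(acc_vertical, fs, leg_length):
    """Rough walking-speed estimate from trunk vertical acceleration (m/s^2).

    Double-integrates acceleration to get the vertical excursion h of the
    centre of mass, converts it to a step length with the inverted-pendulum
    relation 2*sqrt(2*l*h - h**2), and multiplies by the step frequency.
    """
    acc = detrend(np.asarray(acc_vertical, dtype=float))
    vel = detrend(np.cumsum(acc) / fs)          # crude drift removal
    pos = detrend(np.cumsum(vel) / fs)
    h = (np.percentile(pos, 95) - np.percentile(pos, 5)) / 2.0
    step_length = 2.0 * np.sqrt(max(2.0 * leg_length * h - h ** 2, 0.0))
    peaks, _ = find_peaks(acc, distance=int(0.3 * fs))   # roughly one peak per step
    step_freq = len(peaks) / (len(acc) / fs)
    return step_length * step_freq
```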
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
2016-01-01
Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient the sampling methods commonly used in biodiversity surveys are at estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex-ratio-related patterns.
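A toy version of the virtual-ecologist experiment: simulate a population with a known sex ratio, let detectability differ between the sexes, sample over several days, and compare the estimated ratio with the truth. The population size, detection probabilities and effort below are arbitrary illustrative values.

```python
import numpy as np

def simulate_sex_ratio(n_pop=1000, true_male_prop=0.5,
                       p_detect_male=0.6, p_detect_female=0.4,
                       n_days=10, seed=0):
    """Return (true, estimated) proportion of males under unequal detectability."""
    rng = np.random.default_rng(seed)
    is_male = rng.random(n_pop) < true_male_prop
    seen = np.zeros(n_pop, dtype=bool)
    for _ in range(n_days):
        p = np.where(is_male, p_detect_male, p_detect_female)
        seen |= rng.random(n_pop) < p          # individuals are recognizable, as in mark-recapture
    est = is_male[seen].mean() if seen.any() else np.nan
    return is_male.mean(), est

true_ratio, est_ratio = simulate_sex_ratio()
# With unequal detectability the estimate is biased towards males,
# but increasing n_days shrinks the bias, consistent with the findings above.
```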
Day-Williams, Aaron G.; McLay, Kirsten; Drury, Eleanor; Edkins, Sarah; Coffey, Alison J.; Palotie, Aarno; Zeggini, Eleftheria
2011-01-01
Pooled sequencing can be a cost-effective approach to disease variant discovery, but its applicability in association studies remains unclear. We compare sequence enrichment methods coupled to next-generation sequencing in non-indexed pools of 1, 2, 10, 20 and 50 individuals and assess their ability to discover variants and to estimate their allele frequencies. We find that pooled resequencing is most usefully applied as a variant discovery tool due to limitations in estimating allele frequency with high enough accuracy for association studies, and that in-solution hybrid-capture performs best among the enrichment methods examined regardless of pool size. PMID:22069447
ERIC Educational Resources Information Center
Zhang, Jinming; Lu, Ting
2007-01-01
In practical applications of item response theory (IRT), item parameters are usually estimated first from a calibration sample. After treating these estimates as fixed and known, ability parameters are then estimated. However, the statistical inferences based on the estimated abilities can be misleading if the uncertainty of the item parameter…
Kjeldsen, Henrik D; Kaiser, Marcus; Whittington, Miles A
2015-09-30
Brain function is dependent upon the concerted, dynamical interactions between a great many neurons distributed over many cortical subregions. Current methods of quantifying such interactions are limited by consideration only of single direct or indirect measures of a subsample of all neuronal population activity. Here we present a new derivation of the electromagnetic analogy to near-field acoustic holography allowing high-resolution, vectored estimates of interactions between sources of electromagnetic activity that significantly improves this situation. In vitro voltage potential recordings were used to estimate pseudo-electromagnetic energy flow vector fields, current and energy source densities and energy dissipation in reconstruction planes at depth into the neural tissue parallel to the recording plane of the microelectrode array. The properties of the reconstructed near-field estimate allowed both the utilization of super-resolution techniques to increase the imaging resolution beyond that of the microelectrode array, and facilitated a novel approach to estimating causal relationships between activity in neocortical subregions. The holographic nature of the reconstruction method allowed significantly better estimation of the fine spatiotemporal detail of neuronal population activity, compared with interpolation alone, beyond the spatial resolution of the electrode arrays used. Pseudo-energy flow vector mapping was possible with high temporal precision, allowing a near-realtime estimate of causal interaction dynamics. Basic near-field electromagnetic holography provides a powerful means to increase spatial resolution from electrode array data with careful choice of spatial filters and distance to reconstruction plane. More detailed approaches may provide the ability to volumetrically reconstruct activity patterns on neuronal tissue, but the ability to extract vectored data with the method presented already permits the study of dynamic causal interactions without bias from any prior assumptions on anatomical connectivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
Bad data detection in two stage estimation using phasor measurements
NASA Astrophysics Data System (ADS)
Tarali, Aditya
The ability of the Phasor Measurement Unit (PMU) to directly measure the system state has led to a steady increase in the use of PMUs in the past decade. However, in spite of their high accuracy and ability to measure the states directly, PMUs cannot completely replace the conventional measurement units due to high cost. Hence it is necessary for modern estimators to use both conventional and phasor measurements together. This thesis presents an alternative method to incorporate the new PMU measurements into the existing state estimator in a systematic manner such that no major modification is necessary to the existing algorithm. It is also shown that if PMUs are placed appropriately, the phasor measurements can be used with this model to detect and identify the bad data associated with critical measurements, which cannot be detected by the conventional state estimation algorithm. The developed model is tested on the IEEE 14-, 30-, and 118-bus systems under various conditions.
Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.
2003-01-01
Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.
Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting
2018-03-18
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response beamformers (MVDR), this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets.
ERIC Educational Resources Information Center
Arcus, Peter; Heady, Earl O.
The purpose of this study is to estimate the manpower requirements for the nation and for 144 regions, the types of skills and work abilities required by agriculture in the next 15 years, and the types and amounts of education needed. The quantitative analysis is being made by methods appropriate to the phases of the study--(1) interrelations among…
Inferences about landbird abundance from count data: recent advances and future directions
Nichols, J.D.; Thomas, L.; Conn, P.B.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
We summarize results of a November 2006 workshop dealing with recent research on the estimation of landbird abundance from count data. Our conceptual framework includes a decomposition of the probability of detecting a bird potentially exposed to sampling efforts into four separate probabilities. Primary inference methods are described and include distance sampling, multiple observers, time of detection, and repeated counts. The detection parameters estimated by these different approaches differ, leading to different interpretations of resulting estimates of density and abundance. Simultaneous use of combinations of these different inference approaches can not only lead to increased precision but also provides the ability to decompose components of the detection process. Recent efforts to test the efficacy of these different approaches using natural systems and a new bird radio test system provide sobering conclusions about the ability of observers to detect and localize birds in auditory surveys. Recent research is reported on efforts to deal with such potential sources of error as bird misclassification, measurement error, and density gradients. Methods for inference about spatial and temporal variation in avian abundance are outlined. Discussion topics include opinions about the need to estimate detection probability when drawing inference about avian abundance, methodological recommendations based on the current state of knowledge and suggestions for future research.
Evaluation of multiple tracer methods to estimate low groundwater flow velocities.
Reimus, Paul W; Arnold, Bill W
2017-04-01
Four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or "shut-in" periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and also when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered to be the most robust and direct method, so it was used to provide a "ground truth" velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. The advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them are discussed. Published by Elsevier B.V.
Urschler, Martin; Grassegger, Sabine; Štern, Darko
2015-01-01
Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives for established methods involving ionising radiation. Automatic, software-based methods additionally promise improved estimation objectivity. The aim of this study was to investigate how informative automatically selected image features are regarding their ability to discriminate age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images are used to evaluate age estimation performance, consisting of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age and fusion of individual bone age predictions to a final age estimate. Quantitative results of the software-based method show an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) to chronological age, as determined by a cross-validation experiment. Qualitatively, it is demonstrated how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. Feasibility of automatic age estimation based on MRI data is shown and selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.
Sheppard, David P; Pirogovsky-Turk, Eva; Woods, Steven Paul; Holden, Heather M; Nicoll, Diane R; Filoteo, J Vincent; Corey-Bloom, Jody; Gilbert, Paul E
2017-01-01
One important limitation of prior studies examining functional decline in Huntington's disease (HD) has been the reliance on self-reported measures of ability. Since report-based methods can be biased by lack of insight, depression, and cognitive impairment, contrasting self-reported ability with measures that assess capacity may lead to a more comprehensive estimation of real-world functioning. The present study examined self-reported ability to perform instrumental activities of daily living (iADLs) and performance-based financial management capacity in 20 patients diagnosed with mild-moderate Huntington's disease (HD) and 20 demographically similar healthy adults. HD patients reported significantly greater declines in their ability to manage finances. On the capacity measure of financial management, HD patients performed significantly below healthy adults. Additionally, in the HD group there were no significant correlations between self-reported ability and capacity measures of financial management. HD patients endorsed declines in global iADL ability and exhibited deficits in functional capacity when performing a financial management task. Capacity measures may aid in assessing the extent to which HD patients accurately estimate real-world iADL performance, and the present findings suggest that such measures of capacity may be related to the cognitive, but not motor or affective, symptoms of HD.
PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins
Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude
2015-01-01
Predicting protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently computational druggability prediction models are attached to one unique pocket estimation method despite pocket estimation uncertainties. In this paper, we propose ‘PockDrug-Server’ to predict pocket druggability, efficient on both (i) estimated pockets guided by the ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, thus efficient using apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be carried out from one or a set of apo/holo proteins using different pocket estimation methods proposed by our web server or from any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. PMID:25956651
Gitifar, Vahid; Eslamloueyan, Reza; Sarshar, Mohammad
2013-11-01
In this study, pretreatment of sugarcane bagasse and subsequent enzymatic hydrolysis is investigated using two categories of pretreatment methods: dilute acid (DA) pretreatment and a combined DA-ozonolysis (DAO) method. Both methods are carried out at different solid ratios, sulfuric acid concentrations, autoclave residence times, bagasse moisture contents, and ozonolysis times. The results show that the DAO pretreatment can significantly increase the production of glucose compared to the DA method. Applying the k-fold cross-validation method, two optimal artificial neural networks (ANNs) are trained to estimate glucose concentrations for the DA and DAO pretreatment methods. Comparing the modeling results with experimental data indicates that the proposed ANNs have good estimation abilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
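The k-fold cross-validation workflow described in this abstract can be sketched as follows; this is an illustrative example, not the authors' code, and the feature set, data values, and candidate network sizes are hypothetical (scikit-learn's MLPRegressor stands in for the ANNs used in the study).

```python
# Hedged sketch: k-fold selection of an ANN for glucose estimation on synthetic data.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 5))   # e.g., solid ratio, acid conc., time, moisture, ozonolysis time
y = X @ np.array([2.0, 5.0, 1.0, -1.5, 3.0]) + rng.normal(0, 0.2, 120)  # surrogate glucose yield

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for hidden in [(5,), (10,), (10, 5)]:                 # candidate architectures
    errs = []
    for train, test in kf.split(X):
        model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
        model.fit(X[train], y[train])
        errs.append(mean_squared_error(y[test], model.predict(X[test])))
    print(hidden, np.mean(errs))                      # pick the architecture with the lowest mean CV error
```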
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the refractivity-from-clutter (RFC) technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the parameter dynamic adaptive adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulation data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
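For orientation, a generic standard cuckoo search looks like the sketch below; this is not the adaptive DACS-CO variant with Rechenberg's 1/5 criterion and crossover described above, and the toy objective merely stands in for the misfit between simulated and observed radar clutter. All parameter values are illustrative.

```python
# Minimal standard cuckoo-search sketch: Lévy-flight steps around the best nest plus
# random abandonment of a fraction pa of nests each generation.
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_steps(size, beta=1.5):
    # Mantegna's algorithm for Lévy-distributed step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, lo, hi, n_nests=25, n_iter=200, pa=0.25, alpha=0.01):
    dim = len(lo)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.apply_along_axis(objective, 1, nests)
    for _ in range(n_iter):
        best = nests[fit.argmin()]
        new = np.clip(nests + alpha * levy_steps((n_nests, dim)) * (nests - best), lo, hi)
        new_fit = np.apply_along_axis(objective, 1, new)
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        abandon = rng.random(n_nests) < pa            # abandon some nests, rebuild at random
        if abandon.any():
            nests[abandon] = rng.uniform(lo, hi, (int(abandon.sum()), dim))
            fit[abandon] = np.apply_along_axis(objective, 1, nests[abandon])
    i = fit.argmin()
    return nests[i], fit[i]

# toy placeholder objective
best, val = cuckoo_search(lambda x: float(np.sum((x - 1.0) ** 2)),
                          lo=np.zeros(3), hi=2.0 * np.ones(3))
print(best, val)
```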
Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L
2013-08-01
Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
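A hedged sketch of the reweighting idea mentioned above (the imputation variant is analogous): fit a model for the probability of belonging to the validation sample given the covariates shared by both samples, then fit the outcome model in the validation sample, which has the extra confounders, using inverse-probability-of-selection weights. The variable names and data below are synthetic, not from the study.

```python
# Illustrative reweighting sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
shared = rng.normal(size=(n, 2))                       # covariates available in the full sample
extra = rng.normal(size=n)                             # confounder measured only on validation
in_valid = rng.random(n) < 1 / (1 + np.exp(-shared[:, 0]))          # non-random selection
exposure = (rng.random(n) < 1 / (1 + np.exp(-0.5 * extra))).astype(int)
outcome = (rng.random(n) < 1 / (1 + np.exp(-(0.3 * exposure + 0.8 * extra)))).astype(int)

# 1) selection model on shared covariates
sel = LogisticRegression().fit(shared, in_valid)
w = 1.0 / sel.predict_proba(shared[in_valid])[:, 1]    # inverse-probability-of-selection weights

# 2) weighted outcome model in the validation sample with the richer confounder set
X_v = np.column_stack([exposure[in_valid], shared[in_valid], extra[in_valid]])
out = LogisticRegression().fit(X_v, outcome[in_valid], sample_weight=w)
print("adjusted exposure coefficient:", out.coef_[0][0])
```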
Search for gravitational waves from LIGO-Virgo science run and data interpretation
NASA Astrophysics Data System (ADS)
Biswas, Rahul
Search for gravitational wave events was performed on data jointly taken during LIGO's fifth science run (S5) and Virgo's first science mn (VSR1). The data taken during this period was broken down into five separate months. I shall report the analysis performed on one of these months. Apart from the search, I shall describe the work related to estimation of rate based on the loudest event in the search. I shall demonstrate methods used in construction of rate intervals at 90% confidence level and combination of rates from multiple experiments of similar duration. To have confidence in our detection, accurate estimation of false alarm probability (F.A.P.) associated with the event candidate is required. Current false alarm estimation techniques limit our ability to measure the F.A.P. to about 1 in 100. I shall describe a method that significantly improves this estimate using information from multiple detectors. Besides accurate knowledge of F.A.P., detection is also dependent on our ability to distinguish real signals to those from noise. Several tests exist which use the quality of the signal to differentiate between real and noise signal. The chi-square test is one such computationally expensive test applied in our search; we shall understand the dependence of the chi-square parameter on the signal to noise ratio (SNR) for a given signal, which will help us to model the chi-square parameter based on SNR. The two detectors at Hanford, WA, H1(4km) and H2(2km), share the same vacuum system and hence their noise is correlated. Our present method of background estimation cannot capture this correlation and often underestimates the background when only H1 and H2 are operating. I shall describe a novel method of time reversed filtering to correctly estimate the background.
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. This method shows great potential to be applied in reality, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
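The core of the workflow described above can be sketched in a few lines: smooth dQ/dV with a Gaussian filter, locate a feature of interest (here, the main IC peak), and map its position to capacity with a linear regression. The charge curve, peak positions, and capacities below are made up for illustration.

```python
# Hedged sketch of an incremental-capacity (IC) SoH pipeline on synthetic data.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import linregress

def foi_position(voltage, capacity, sigma=5):
    dQdV = np.gradient(capacity, voltage)            # raw IC curve
    smooth = gaussian_filter1d(dQdV, sigma)          # Gaussian smoothing to suppress noise
    return voltage[np.argmax(smooth)]                # voltage of the main IC peak (one FOI)

# synthetic charge curve with an IC peak near 3.7 V
v = np.linspace(3.4, 4.1, 500)
q = np.cumsum(1.0 + 4.0 * np.exp(-((v - 3.7) / 0.03) ** 2)) * (v[1] - v[0])
print("FOI position (V):", foi_position(v, q))

# linear SoH model: regress capacity on FOI positions collected over ageing (made-up values)
peak_v = np.array([3.720, 3.712, 3.705, 3.694, 3.683])
capacity = np.array([20.0, 19.4, 19.1, 18.5, 18.0])   # Ah
slope, intercept, r, *_ = linregress(peak_v, capacity)
print("capacity = %.1f * peak_V + %.1f (r = %.3f)" % (slope, intercept, r))
```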
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating the probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Yasunaga, Masashi; Ogawa, Susumu; Suzuki, Hiroyuki; Imanaka, Kuniyasu
2017-07-01
Older adults tend to overestimate their step-over ability. However, it is unclear as to whether this is caused by inaccurate self-estimation of physical ability or inaccurate perception of height. We, therefore, measured both visual height perception ability and self-estimation of step-over ability among young and older adults. Forty-seven older and 16 young adults performed a height perception test (HPT) and a step-over test (SOT). Participants visually judged the height of vertical bars from distances of 7 and 1 m away in the HPT, then self-estimated and, subsequently, actually performed a step-over action in the SOT. The results showed no significant difference between young and older adults in visual height perception. In the SOT, young adults tended to underestimate their step-over ability, whereas older adults either overestimated their abilities or underestimated them to a lesser extent than did the young adults. Moreover, visual height perception was not correlated with the self-estimation of step-over ability in both young and older adults. These results suggest that the self-overestimation of step-over ability which appeared in some healthy older adults may not be caused by the nature of visual height perception, but by other factor(s), such as the likely age-related nature of self-estimation of physical ability, per se.
Usual Dietary Intakes: Food Intakes, U.S. Population, 2001-04
The NCI Method provides the capability to estimate the distribution of usual food intakes in the U.S. population to greatly enhance the ability to monitor diets relative to recommendations and to assess the scope of dietary deficiencies and excesses.
Selected Intakes as Ratios of Energy Intake, U.S. Population, 2001-04
The NCI Method provides the capability to estimate the distribution of usual food intakes in the US population to greatly enhance the ability to monitor diets relative to recommendations and to assess the scope of dietary deficiencies and excesses.
Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William
2016-01-01
Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to their high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and k-nearest neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
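One common way to set up the kriging half of such a blend is to krige the gauge-minus-satellite residuals and add the kriged residual field back to the satellite estimate. The sketch below does exactly that with an exponential semivariogram; the variogram parameters, coordinates, and residual values are illustrative, not fitted values from the study.

```python
# Compact ordinary-kriging sketch for blending gauge residuals onto a grid.
import numpy as np

def ordinary_krige(xy_obs, resid, xy_grid, sill=1.0, range_par=50.0, nugget=0.05):
    def gamma(h):                                    # exponential semivariogram
        return nugget + sill * (1.0 - np.exp(-h / range_par))
    n = len(xy_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.linalg.norm(xy_obs[:, None] - xy_obs[None], axis=-1))
    np.fill_diagonal(A[:n, :n], 0.0)                 # gamma(0) = 0
    A[-1, -1] = 0.0
    est = np.empty(len(xy_grid))
    for i, g in enumerate(xy_grid):
        b = np.append(gamma(np.linalg.norm(xy_obs - g, axis=-1)), 1.0)
        w = np.linalg.solve(A, b)[:n]                # kriging weights
        est[i] = w @ resid
    return est

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [12.0, 15.0]])  # km, illustrative
resid = np.array([1.2, -0.4, 0.8, 0.1])              # gauge minus satellite, mm
grid = np.array([[5.0, 5.0], [20.0, 20.0]])
print(ordinary_krige(gauges, resid, grid))           # add these corrections to the satellite field
```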
ERIC Educational Resources Information Center
Samejima, Fumiko; Changas, Paul S.
The methods and approaches for estimating the operating characteristics of the discrete item responses without assuming any mathematical form have been developed and expanded. It has been made possible that, even if the test information function of a given test is not constant for the interval of ability of interest, it is used as the Old Test.…
Spatio-temporal models of mental processes from fMRI.
Janoos, Firdaus; Machiraju, Raghu; Singh, Shantanu; Morocz, Istvan Ákos
2011-07-15
Understanding the highly complex, spatially distributed and temporally organized phenomena entailed by mental processes using functional MRI is an important research problem in cognitive and clinical neuroscience. Conventional analysis methods focus on the spatial dimension of the data discarding the information about brain function contained in the temporal dimension. This paper presents a fully spatio-temporal multivariate analysis method using a state-space model (SSM) for brain function that yields not only spatial maps of activity but also its temporal structure along with spatially varying estimates of the hemodynamic response. Efficient algorithms for estimating the parameters along with quantitative validations are given. A novel low-dimensional feature-space for representing the data, based on a formal definition of functional similarity, is derived. Quantitative validation of the model and the estimation algorithms is provided with a simulation study. Using a real fMRI study for mental arithmetic, the ability of this neurophysiologically inspired model to represent the spatio-temporal information corresponding to mental processes is demonstrated. Moreover, by comparing the models across multiple subjects, natural patterns in mental processes organized according to different mental abilities are revealed. Copyright © 2011 Elsevier Inc. All rights reserved.
Galinsky, Vitaly L; Martinez, Antigona; Paulus, Martin P; Frank, Lawrence R
2018-04-13
In this letter, we present a new method for integration of sensor-based multifrequency bands of electroencephalography and magnetoencephalography data sets into a voxel-based structural-temporal magnetic resonance imaging analysis by utilizing the general joint estimation using entropy regularization (JESTER) framework. This allows enhancement of the spatial-temporal localization of brain function and the ability to relate it to morphological features and structural connectivity. This method has broad implications for both basic neuroscience research and clinical neuroscience focused on identifying disease-relevant biomarkers by enhancing the spatial-temporal resolution of the estimates derived from current neuroimaging modalities, thereby providing a better picture of the normal human brain in basic neuroimaging experiments and variations associated with disease states.
Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding
NASA Astrophysics Data System (ADS)
Oh, Kwan-Jung; Oh, Byung Tae
2015-04-01
We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.
Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.
Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth
2018-03-26
When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
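The post-hoc psychometric-function fitting discussed above can be illustrated with a logistic function that includes a lapse-rate parameter, so that occasional attentional lapses inflate the lapse estimate rather than the threshold. The stimulus levels, trial counts, and guess rate below are hypothetical, not data from the study.

```python
# Hedged sketch: maximum-likelihood fit of a logistic psychometric function with a lapse rate.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, n_correct, n_total, guess=0.5):
    thresh, slope, lapse = params
    p = guess + (1 - guess - lapse) / (1 + np.exp(-slope * (x - thresh)))
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -np.sum(n_correct * np.log(p) + (n_total - n_correct) * np.log(1 - p))

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])               # stimulus levels (illustrative)
n_total = np.full(5, 40)
n_correct = np.array([22, 26, 31, 36, 38])            # note the misses at the easiest level
fit = minimize(neg_log_lik, x0=[2.0, 1.0, 0.02], args=(x, n_correct, n_total),
               bounds=[(0.1, 10), (0.1, 10), (0.0, 0.2)])
print("threshold, slope, lapse:", fit.x)
```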
Blind estimation of reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.
2003-11-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
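The decay-rate estimator described above can be sketched for a single free-decay segment as follows: the segment is modeled as y[n] = a**n * w[n] with w Gaussian white noise, (a, sigma) are found by maximum likelihood, and RT60 follows from the per-sample decay a. The sampling rate, segment length, and true RT below are synthetic, and the order-statistics filtering over accumulated estimates is omitted.

```python
# Hedged sketch: ML estimate of an exponentially damped Gaussian noise model and RT60.
import numpy as np
from scipy.optimize import minimize

fs = 8000.0
n = np.arange(4000)
a_true = np.exp(-3 * np.log(10) / (fs * 0.4))        # corresponds to a 0.4 s RT60
y = a_true**n * np.random.default_rng(1).normal(0, 0.1, n.size)

def nll(params):
    a, sigma = params
    var = (sigma * a**n) ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + y**2 / var)

fit = minimize(nll, x0=[0.999, 0.05], bounds=[(0.99, 0.999999), (1e-4, 1.0)])
a_hat, _ = fit.x
rt60 = -3 * np.log(10) / (fs * np.log(a_hat))        # seconds
print("estimated RT60 (s):", rt60)
```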
Online estimation of room reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.
2003-04-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
NASA Astrophysics Data System (ADS)
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-12-01
This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC) that represent simplified IRT. Data were gathered on 2392 science and engineering freshmen, from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information, and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by the classical analysis methods of tests. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
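For reference, the three-parameter logistic (3PL) model used above gives the probability that an examinee of ability theta answers item i correctly from the item's discrimination, difficulty, and guessing parameters. The item values below are illustrative, not TUV calibration results.

```python
# Minimal 3PL item response function.
import numpy as np

def p_correct(theta, a, b, c, D=1.7):
    """3PL model; a = discrimination, b = difficulty, c = guessing, D = usual scaling constant."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

# Illustrative item: moderate discrimination, average difficulty, 25% guessing floor
print(p_correct(theta=np.array([-2.0, 0.0, 2.0]), a=1.2, b=0.0, c=0.25))
```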
Probabilistic liver atlas construction.
Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E
2017-01-13
Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability to be covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration in the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael
2014-01-01
Objective: Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting: We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results: Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion: Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
Performance of Trajectory Models with Wind Uncertainty
NASA Technical Reports Server (NTRS)
Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.
2009-01-01
Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof of concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed ensemble-forecast accuracy relationship. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members) allowing identification of regional variations in uncertainty.
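The time-lagged ensemble idea reduces to computing, at each grid point, the spread across forecasts that are valid at the same time but were issued at successive earlier model cycles. The sketch below shows that computation on synthetic wind fields; array shapes and values are illustrative.

```python
# Hedged sketch: per-grid-point spread of a time-lagged forecast ensemble.
import numpy as np

def time_lagged_spread(forecasts):
    """forecasts: array (n_lags, ny, nx) of wind-component forecasts valid at one time."""
    return forecasts.std(axis=0)                      # ensemble spread as the uncertainty proxy

rng = np.random.default_rng(0)
members = rng.normal(10.0, 1.5, size=(4, 50, 60))     # 4 lagged cycles, synthetic wind fields
sigma_wind = time_lagged_spread(members)
print("mean wind uncertainty estimate:", sigma_wind.mean())
```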
Noninvasive estimation of assist pressure for direct mechanical ventricular actuation
NASA Astrophysics Data System (ADS)
An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing
2018-02-01
Direct mechanical ventricular actuation is effective to reestablish the ventricular function with non-blood contact. Due to the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire the accurate value of assist pressure acting on the heart surface. To avoid myocardial trauma induced by invasive sensors, the noninvasive estimation method is developed and the experimental device is designed to measure the sample data for fitting the estimation models. By examining the goodness of fit numerically and graphically, the polynomial model presents the best behavior among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, the simplified lumped parameter model is utilized to calculate the pre-support and the post-support left ventricular pressure. Furthermore, by adjusting the driving pressure beyond the range of the sample data, the assist pressure is estimated with the similar waveform and the post-support left ventricular pressure approaches the value of the adult healthy heart, indicating the good generalization ability of the noninvasive estimation method.
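The polynomial estimation model selected above amounts to fitting assist pressure as a polynomial function of driving pressure on the sample data and then evaluating the fit for new driving pressures. The pressure values and polynomial degree below are illustrative only.

```python
# Hedged sketch: polynomial fit of assist pressure versus driving pressure.
import numpy as np

driving = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # mmHg, illustrative sample data
assist = np.array([12.0, 27.0, 44.0, 58.0, 70.0])     # mmHg, illustrative sample data
coeffs = np.polyfit(driving, assist, deg=2)           # quadratic estimation model
estimate = np.polyval(coeffs, 90.0)                   # noninvasive estimate at 90 mmHg drive
print(coeffs, estimate)
```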
Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar
Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...
2016-10-18
Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity to target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.
Howard, Steven J; Woodcock, Stuart; Ehrich, John; Bokosmaty, Sahar
2017-03-01
A fundamental aim of standardized educational assessment is to achieve reliable discrimination between students differing in the knowledge, skills and abilities assessed. However, questions of the purity with which these tests index students' genuine abilities have arisen. Specifically, literacy and numeracy assessments may also engage unintentionally assessed capacities. The current study investigated the extent to which domain-general processes - working memory (WM) and non-verbal reasoning - contribute to students' standardized test performance and the pathway(s) through which they exert this influence. Participants were 91 Grade 2 students recruited from five regional and metropolitan primary schools in Australia. Participants completed measures of WM and non-verbal reasoning, as well as literacy and numeracy subtests of a national standardized educational assessment. Path analysis of Rasch-derived ability estimates and residuals with domain-general cognitive abilities indicated: (1) a consistent indirect pathway from WM to literacy and numeracy ability, through non-verbal reasoning; (2) direct paths from phonological WM and literacy ability to numeracy ability estimates; and (3) a direct path from WM to spelling test residuals. Results suggest that the constitution of this nationwide standardized assessment confounded non-targeted abilities with those that were the target of assessment. This appears to extend beyond the effect of WM on learning more generally, to the demands of different assessment types and methods. This has implications for students' abilities to demonstrate genuine competency in assessed areas and the educational supports and provisions they are provided on the basis of these results. © 2016 The British Psychological Society.
Zhou, Xiang
2017-12-01
Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while capable of producing estimates that can be almost as accurate as if both quantities are computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while it is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
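For context, the classical Haseman-Elston regression that this framework generalizes can be sketched as follows: regress the products of standardized phenotypes for all pairs of individuals on their genetic relatedness, and take the slope as the SNP heritability estimate. This is not the MQS estimator itself, and the genotypes, effect sizes, and heritability below are simulated.

```python
# Hedged sketch of Haseman-Elston (cross-product) regression on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 1000
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)
Z = (G - G.mean(0)) / G.std(0)                        # standardized genotypes
K = Z @ Z.T / p                                       # genetic relatedness matrix (GRM)
beta = rng.normal(0, np.sqrt(0.5 / p), p)             # true SNP heritability near 0.5
y = Z @ beta + rng.normal(0, np.sqrt(0.5), n)
y = (y - y.mean()) / y.std()

iu = np.triu_indices(n, k=1)                          # off-diagonal pairs only
yy = np.outer(y, y)
h2_hat = (K[iu] @ yy[iu]) / (K[iu] @ K[iu])           # HE slope (no intercept, standardized y)
print("HE estimate of SNP heritability:", h2_hat)
```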
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
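As a generic illustration of the ingredients involved (a weighted least-squares estimate, its residuals, and a covariance built from them), the sketch below maps the post-fit residual scatter back into state space through the WLS gain. This is only a hedged, generic construction, not the paper's exact derivation; the measurement model and noise level are synthetic.

```python
# Hedged sketch: WLS state estimate plus a residual-based empirical covariance.
import numpy as np

def wls_with_empirical_cov(H, y, W):
    """H: measurement matrix, y: observations, W: measurement weight matrix."""
    A = H.T @ W @ H
    x_hat = np.linalg.solve(A, H.T @ W @ y)           # weighted least-squares estimate
    r = y - H @ x_hat                                 # post-fit residuals
    G = np.linalg.solve(A, H.T @ W)                   # gain: x_hat = G @ y
    P_emp = G @ np.outer(r, r) @ G.T                  # residual scatter mapped to state space
    return x_hat, P_emp

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = H @ x_true + rng.normal(0, 0.1, 20)
W = np.eye(20) / 0.1**2
x_hat, P = wls_with_empirical_cov(H, y, W)
print(x_hat, np.diag(P))
```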
Estimating the value of non-use benefits from small changes in the provision of ecosystem services.
Dutton, Adam; Edwards-Jones, Gareth; Macdonald, David W
2010-12-01
The unit of trade in ecosystem services is usually the use of a proportion of the parcels of land associated with a given service. Valuing small changes in the provision of an ecosystem service presents obstacles, particularly when the service provides non-use benefits, as is the case with conservation of most plants and animals. Quantifying non-use values requires stated-preference valuations. Stated-preference valuations can provide estimates of the public's willingness to pay for a broad conservation goal. Nevertheless, stated-preference valuations can be expensive and do not produce consistent measures for varying levels of provision of a service. Additionally, the unit of trade, land use, is not always linearly related to the level of ecosystem services the land might provide. To overcome these obstacles, we developed a method to estimate the value of a marginal change in the provision of a non-use ecosystem service--in this case conservation of plants or animals associated with a given land-cover type. Our method serves as a tool for calculating transferable valuations of small changes in the provision of ecosystem services relative to the existing provision. Valuation is achieved through stated-preference investigations, calculation of a unit value for a parcel of land, and the weighting of this parcel by its ability to provide the desired ecosystem service and its effect on the ability of the surrounding land parcels to provide the desired service. We used the water vole (Arvicola terrestris) as a case study to illustrate the method. The average present value of a meter of water vole habitat was estimated at UK £ 12, but the marginal value of a meter (based on our methods) could range between £ 0 and £ 40 or more. © 2010 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.
2015-09-01
Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
Hwang, Beomsoo; Jeon, Doyoung
2015-04-09
In exoskeletal robots, quantifying the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated on 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions.
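A minimal single-link sketch of the separation described above: the active muscular torque is what remains of the joint-torque-sensor signal after the passive dynamic torques are subtracted. The limb is modelled here as one rigid segment rotating about the joint; the inertia, mass, and centre-of-mass distance are user-identified parameters (assumptions, not values from the paper), and a multi-link model would also include Coriolis terms.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def muscular_torque(tau_measured, theta, dt, I, m, lc):
    """Estimate the active muscular torque at one joint.

    tau_measured : joint torque sensor signal (N*m), 1-D array
    theta        : joint angle (rad), measured from the vertical, 1-D array
    dt           : sampling interval (s)
    I            : segment moment of inertia about the joint (kg*m^2)
    m, lc        : segment mass (kg) and distance to its centre of mass (m)
    """
    theta_ddot = np.gradient(np.gradient(theta, dt), dt)   # angular acceleration
    tau_inertial = I * theta_ddot                           # inertial torque
    tau_gravity = m * G * lc * np.sin(theta)                # gravitational torque
    # Active muscular torque = measured torque minus the passive dynamics.
    return tau_measured - tau_inertial - tau_gravity
```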
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would, however, yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
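A minimal sketch of Richardson extrapolation and the grid convergence index (GCI) for three systematically refined grids with a constant refinement ratio r. These are the standard textbook formulas, not code from the paper; the safety factor Fs = 1.25 (commonly used for three-grid studies) and the example numbers are illustrative assumptions.

```python
import math

def richardson_gci(f_fine, f_medium, f_coarse, r=2.0, Fs=1.25):
    """Return (observed order p, extrapolated value, GCI of the fine grid)."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)   # Richardson estimate
    e21 = abs((f_medium - f_fine) / f_fine)                 # relative fine-grid error
    gci_fine = Fs * e21 / (r**p - 1.0)                      # uncertainty estimate
    return p, f_exact, gci_fine

# Example: a time-averaged quantity computed on three grids (hypothetical numbers)
print(richardson_gci(0.512, 0.520, 0.540))
```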
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly gaining popularity because it shifts complexity from the encoder to the decoder while, at least in theory, incurring no loss in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation is completed before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better trade-off between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance, but with significantly lower complexity, compared with the sampling method. PMID:23750314
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors in calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for estimating calibrator uncertainty for simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed the process to be optimized while keeping the added uncertainty small. The patient-result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows calibrator uncertainty to be estimated for the optimization of various value-assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
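A minimal Monte Carlo sketch of propagating uncertainty through a two-step value-transfer chain (reference material to master lot to product calibrator). The relative uncertainties, replicate counts, and the two-step structure are illustrative assumptions, not the parameters of the prealbumin assay studied in the paper; only the 3.7% reference-material uncertainty is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_assigned_value(n_trials=100_000,
                            ref_value=1.0, ref_rel_u=0.037,   # CRM relative std. uncertainty
                            cv_transfer=0.02,                 # per-replicate measurement CV (assumed)
                            n_reps_step1=6, n_reps_step2=6):
    ref = rng.normal(ref_value, ref_rel_u * ref_value, n_trials)
    # Step 1: assign the master-lot value as the mean of replicate measurements
    master = rng.normal(ref[:, None], cv_transfer * ref[:, None],
                        (n_trials, n_reps_step1)).mean(axis=1)
    # Step 2: transfer from the master lot to the product calibrator
    product = rng.normal(master[:, None], cv_transfer * master[:, None],
                         (n_trials, n_reps_step2)).mean(axis=1)
    return product

assigned = simulate_assigned_value()
total_rel_u = assigned.std() / assigned.mean()
added_rel_u = np.sqrt(max(total_rel_u**2 - 0.037**2, 0.0))  # component added by value transfer
print(f"total {100*total_rel_u:.2f}%, added by value transfer {100*added_rel_u:.2f}%")
```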
Schalk, Stefan G; Demi, Libertario; Bouhouch, Nabil; Kuenen, Maarten P J; Postema, Arnoud W; de la Rosette, Jean J M C H; Wijkstra, Hessel; Tjalkens, Tjalling J; Mischi, Massimo
2017-03-01
The role of angiogenesis in cancer growth has stimulated research aimed at noninvasive cancer detection by blood perfusion imaging. Recently, contrast ultrasound dispersion imaging was proposed as an alternative method for angiogenesis imaging. After the intravenous injection of an ultrasound-contrast-agent bolus, dispersion can be indirectly estimated from the local similarity between neighboring time-intensity curves (TICs) measured by ultrasound imaging. Up until now, only linear similarity measures have been investigated. Motivated by the promising results of this approach in prostate cancer (PCa), we developed a novel dispersion estimation method based on mutual information, thus including nonlinear similarity, to further improve its ability to localize PCa. First, a simulation study was performed to establish the theoretical link between dispersion and mutual information. Next, the method's ability to localize PCa was validated in vivo in 23 patients (58 datasets) referred for radical prostatectomy by comparison with histology. A monotonic relationship between dispersion and mutual information was demonstrated. The in vivo study resulted in a receiver operating characteristic (ROC) curve area equal to 0.77, which was superior (p = 0.21-0.24) to that obtained by linear similarity measures (0.74-0.75) and (p < 0.05) to that by conventional perfusion parameters (≤0.70). Mutual information between neighboring time-intensity curves can be used to indirectly estimate contrast dispersion and can lead to more accurate PCa localization. An improved PCa localization method can possibly lead to better grading and staging of tumors, and support focal-treatment guidance. Moreover, future employment of the method in other types of angiogenic cancer can be considered.
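A minimal sketch of estimating mutual information between two neighbouring time-intensity curves with a joint-histogram (plug-in) estimator. The bin count and any preprocessing of the TICs are assumptions; the estimator used in the paper may apply different binning or bias correction.

```python
import numpy as np

def mutual_information(tic_a, tic_b, bins=16):
    """Plug-in MI estimate (in nats) between two equally long 1-D signals."""
    joint, _, _ = np.histogram2d(tic_a, tic_b, bins=bins)
    pxy = joint / joint.sum()                 # joint probability of intensity pairs
    px = pxy.sum(axis=1, keepdims=True)       # marginal of the first TIC
    py = pxy.sum(axis=0, keepdims=True)       # marginal of the second TIC
    nz = pxy > 0                              # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a dispersion map, this value would be computed between each pixel's TIC and those of its spatial neighbours, with high similarity interpreted as low local dispersion.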
An Analysis of the Differences among Log Scaling Methods and Actual Log Volume
R. Edward Thomas; Neal D. Bennett
2017-01-01
Log rules estimate the volume of green lumber that can be expected to result from the sawing of a log. As such, this ability to reliably predict lumber recovery forms the foundation of log sales and purchase. The more efficient a sawmill, the less the scaling methods reflect the actual volume recovery and the greater the overrun factor. Using high-resolution scanned...
Smart Fluids in Hydrology: Use of Non-Newtonian Fluids for Pore Structure Characterization
NASA Astrophysics Data System (ADS)
Abou Najm, M. R.; Atallah, N. M.; Selker, J. S.; Roques, C.; Stewart, R. D.; Rupp, D. E.; Saad, G.; El-Fadel, M.
2015-12-01
Classic porous media characterization relies on typical infiltration experiments with Newtonian fluids (i.e., water) to estimate hydraulic conductivity. However, such experiments are generally not able to discern important characteristics such as pore size distribution or pore structure. We show that introducing non-Newtonian fluids provides additional unique flow signatures that can be used for improved pore structure characterization while still representing the functional hydraulic behavior of real porous media. We present a new method for experimentally estimating the pore structure of porous media using a combination of Newtonian and non-Newtonian fluids. The proposed method transforms results of N infiltration experiments using water and N-1 non-Newtonian solutions into a system of equations that yields N representative radii (Ri) and their corresponding percent contribution to flow (wi). This method allows for estimating the soil retention curve using only saturated experiments. Experimental and numerical validation comparing the functional flow behavior of different soils to their modeled flow with N representative radii revealed the ability of the proposed method to represent the water retention and infiltration behavior of real soils. The experimental results showed the ability of such fluids to outsmart Newtonian fluids and infer pore size distribution and unsaturated behavior using simple saturated experiments. Specifically, we demonstrate using synthetic porous media that the use of different non-Newtonian fluids enables the definition of the radii and corresponding percent contribution to flow of multiple representative pores, thus improving the ability of pore-scale models to mimic the functional behavior of real porous media in terms of flow and porosity. The results advance the knowledge towards conceptualizing the complexity of porous media and can potentially impact applications in fields like irrigation efficiencies, vadose zone hydrology, soil-root-plant continuum, carbon sequestration into geologic formations, soil remediation, petroleum reservoir engineering, oil exploration and groundwater modeling.
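A schematic least-squares sketch of the capillary-bundle idea behind the method: each representative pore is treated as a cylindrical tube, and the discharges of the test fluids (Newtonian water plus power-law non-Newtonian solutions) are matched by adjusting the unknown radii and tube counts. The rheological model, the added porosity (area) constraint, the starting values, and the use of scipy's least_squares are illustrative assumptions; the paper's actual system of equations differs in detail.

```python
import numpy as np
from scipy.optimize import least_squares

def tube_discharge(R, grad_p, n, m):
    """Poiseuille discharge of a power-law fluid (tau = m * gamma_dot**n) through
    one tube of radius R; n = 1 with m = viscosity recovers the Newtonian case."""
    return (np.pi * n / (3.0 * n + 1.0)) * (grad_p / (2.0 * m)) ** (1.0 / n) \
           * R ** ((3.0 * n + 1.0) / n)

def invert_pore_structure(q_measured, fluids, grad_p, pore_area, n_pores=2):
    """Fit n_pores representative radii and tube counts to the measured discharges.

    q_measured : total discharge observed for each fluid
    fluids     : list of (n, m) rheological parameters, one pair per fluid
    """
    def residuals(x):
        radii, counts = np.exp(x[:n_pores]), np.exp(x[n_pores:])   # keep both positive
        q_model = [np.sum(counts * tube_discharge(radii, grad_p, n, m)) for n, m in fluids]
        res = (np.asarray(q_model) - q_measured) / q_measured      # relative discharge misfit
        area = np.sum(counts * np.pi * radii ** 2)                  # porosity (area) constraint
        return np.append(res, (area - pore_area) / pore_area)
    x0 = np.concatenate([np.log(np.full(n_pores, 1e-4)), np.log(np.full(n_pores, 1e6))])
    fit = least_squares(residuals, x0)
    return np.exp(fit.x[:n_pores]), np.exp(fit.x[n_pores:])
```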
2018-01-01
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accomplished via a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and a compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection of weak targets. PMID:29562642
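A minimal narrowband sketch of sparse (compressed-sensing style) DOA estimation on a uniform line array: the array snapshot is matched against a dictionary of steering vectors over a grid of candidate angles with a simple orthogonal matching pursuit. The coherent signal subspace focusing step that handles wideband signals in the paper is omitted, and the reconstruction algorithm here is a generic stand-in, not the one used by the authors.

```python
import numpy as np

def steering_matrix(angles_deg, n_sensors, spacing_wavelengths=0.5):
    """Plane-wave steering vectors for a uniform line array, one column per angle."""
    angles = np.deg2rad(angles_deg)
    k = np.arange(n_sensors)[:, None]
    return np.exp(-2j * np.pi * spacing_wavelengths * k * np.sin(angles)[None, :])

def omp_doa(snapshot, angles_deg, n_sources, n_sensors):
    """Return the grid angles of the n_sources strongest sparse components."""
    A = steering_matrix(angles_deg, n_sensors)
    residual, support = snapshot.astype(complex), []
    for _ in range(n_sources):
        corr = np.abs(A.conj().T @ residual)          # match residual against the dictionary
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, snapshot, rcond=None)
        residual = snapshot - As @ coef               # project out the selected atoms
    return np.asarray(angles_deg)[support]
```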
Pacini, Clare; Ajioka, James W; Micklem, Gos
2017-04-12
Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block-diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.
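A simple linear-shrinkage sketch in the same spirit: the sample covariance of a gene-expression matrix is shrunk toward a block-diagonal target defined by a known (or hypothesised) grouping of genes. This is plain fixed-weight shrinkage, not the paper's empirical Bayes estimator, which chooses the amount of shrinkage from the data; the shrinkage weight lam and the gene grouping below are illustrative assumptions.

```python
import numpy as np

def block_shrunk_covariance(X, gene_blocks, lam=0.5):
    """X: n_samples x n_genes expression matrix; gene_blocks: list of index arrays."""
    S = np.cov(X, rowvar=False)
    target = np.zeros_like(S)
    for block in gene_blocks:                     # keep only within-block covariances
        idx = np.ix_(block, block)
        target[idx] = S[idx]
    return lam * target + (1.0 - lam) * S

# Usage with a hypothetical grouping of 6 genes into two regulatory units, n = 10 samples
X = np.random.default_rng(1).normal(size=(10, 6))
cov = block_shrunk_covariance(X, gene_blocks=[np.arange(0, 3), np.arange(3, 6)])
```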
A Bayesian kriging approach for blending satellite and ground precipitation observations
Verdin, Andrew P.; Rajagopalan, Balaji; Kleiber, William; Funk, Christopher C.
2015-01-01
Drought and flood management practices require accurate estimates of precipitation. Gauge observations, however, are often sparse in regions with complicated terrain, clustered in valleys, and of poor quality. Consequently, the spatial extent of wet events is poorly represented. Satellite-derived precipitation data are an attractive alternative, though they tend to underestimate the magnitude of wet events due to their dependency on retrieval algorithms and the indirect relationship between satellite infrared observations and precipitation intensities. Here we offer a Bayesian kriging approach for blending precipitation gauge data and the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates for Central America, Colombia, and Venezuela. First, the gauge observations are modeled as a linear function of satellite-derived estimates and any number of other variables—for this research we include elevation. Prior distributions are defined for all model parameters and the posterior distributions are obtained simultaneously via Markov chain Monte Carlo sampling. The posterior distributions of these parameters are required for spatial estimation, and thus are obtained prior to implementing the spatial kriging model. This functional framework is applied to model parameters obtained by sampling from the posterior distributions, and the residuals of the linear model are subject to a spatial kriging model. Consequently, the posterior distributions and uncertainties of the blended precipitation estimates are obtained. We demonstrate this method by applying it to pentadal and monthly total precipitation fields during 2009. The model's performance and its inherent ability to capture wet events are investigated. We show that this blending method significantly improves upon the satellite-derived estimates and is also competitive in its ability to represent wet events. This procedure also provides a means to estimate a full conditional distribution of the “true” observed precipitation value at each grid cell.
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
Do Self-Efficacy and Ability Self-Estimate Scores Reflect Distinct Facets of Ability Judgments?
ERIC Educational Resources Information Center
Hansen, Jo-Ida C.; Bubany, Shawn T.
2008-01-01
Vocational psychology has generated a number of concepts and assessment instruments considered to reflect ability self-concept (i.e., one's view of one's own abilities) relevant to career development. These concepts and measures often are categorized as either self efficacy beliefs or self-estimated (i.e., self-rated, self-evaluated) abilities.…
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin
2016-01-01
Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475
Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin
2016-10-01
Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
Career Interests and Self-Estimated Abilities of Young Adults with Disabilities
ERIC Educational Resources Information Center
Turner, Sherri; Unkefer, Lesley Craig; Cichy, Bryan Ervin; Peper, Christine; Juang, Ju-Ping
2011-01-01
The purpose of this study was to ascertain vocational interests and self-estimated work-relevant abilities of young adults with disabilities. Results showed that young adults with both low incidence and high incidence disabilities have a wide range of interests and self-estimated work-relevant abilities that are comparable to those in the general…
A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates
ERIC Educational Resources Information Center
Kim, Seonghoon
2012-01-01
Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
Evaluation of multiple tracer methods to estimate low groundwater flow velocities
Reimus, Paul W.; Arnold, Bill W.
2017-02-20
Here, four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or “shut-in” periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and also when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered to be the most robust and direct method, so it was used to provide a “ground truth” velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. We discuss the advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them.
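A minimal sketch of the "ground truth" natural-gradient calculation: the seepage velocity is taken as the well separation divided by the travel time of the tracer centre of mass, assuming a conservative, non-sorbing tracer. The breakthrough-curve numbers and the 30 m separation below are placeholders, not data from the Yucca Mountain site.

```python
import numpy as np

def seepage_velocity(times_days, concentrations, well_separation_m):
    """Estimate groundwater velocity from a breakthrough curve at a downgradient well."""
    t = np.asarray(times_days, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    t_mean = np.trapz(t * c, t) / np.trapz(c, t)    # temporal centre of mass of the tracer
    return well_separation_m / t_mean                # metres per day

print(seepage_velocity(times_days=[0, 30, 60, 90, 120, 150],
                       concentrations=[0, 0.1, 0.6, 1.0, 0.5, 0.1],
                       well_separation_m=30.0))
```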
An Application of the Rasch Model to Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Wisniewski, Dennis R.
Three questions concerning the Binary Search Method (BSM) of computerized adaptive testing were studied: (1) whether it provided a reliable and valid estimation of examinee ability; (2) its effect on examinee attitudes toward computerized adaptive testing and conventional paper-and-pencil testing; and (3) the relationship between item response…
Decomposing University Grades: A Longitudinal Study of Students and Their Instructors
ERIC Educational Resources Information Center
Beenstock, Michael; Feldman, Dan
2018-01-01
First-degree course grades for a cohort of social science students are matched to their instructors, and are statistically decomposed into departmental, course, instructor, and student components. Student ability is measured alternatively by university acceptance scores, or by fixed effects estimated using panel data methods. After controlling for…
Formats for Assessing Students' Self-Assessment Abilities.
ERIC Educational Resources Information Center
Miller, Maurice; Turner, Tamrah
The paper examines some self-assessment techniques used with handicapped students and discusses the advantages and disadvantages of these techniques. The use of self-rating scales is reviewed, and questionable results are cited. Another method, in which students view an item and estimate whether they can perform it before attempting it…
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
Gold nanoparticles synthesis and biological activity estimation in vitro and in vivo.
Rieznichenko, L S; Dybkova, S M; Gruzina, T G; Ulberg, Z R; Todor, I N; Lukyanova, N Yu; Shpyleva, S I; Chekhun, V F
2012-01-01
The aim of the work was the synthesis of gold nanoparticles (GNP) of different sizes and the estimation of their biological activity in vitro and in vivo. Aqueous dispersions of gold nanoparticles of different sizes were synthesized by the Davis method and characterized by laser correlation spectroscopy and transmission electron microscopy. The interaction of GNP with tumor cells was visualized by confocal microscopy. Enzyme activity was determined by standard biochemical methods. GNP distribution and content in organs and tissues were determined by atomic absorption spectrometry; genotoxic influence was estimated by the comet assay. A size-dependent accumulation of GNP in cultured U937 tumor cells and their ability to modulate the Na(+),K(+)-ATPase activity of the U937 cell membrane were revealed in vitro. Using an in vivo model of Guerin carcinoma, it was shown that GNP possess high affinity for tumor cells. Our results indicate the promise of the synthesized aqueous GNP dispersions for cancer diagnostics and treatment. It is necessary to take into account the size-dependent biosafety of nanoparticles.
Wasim, Fatima; Mahmood, Tariq; Ayub, Khurshid
2016-07-28
Density functional theory (DFT) calculations have been performed to study the response of polypyrrole towards nitrate ions in gas and aqueous phases. First, an accurate estimate of interaction energies is obtained by methods calibrated against the gold standard CCSD(T) method. Then, a number of low cost DFT methods are also evaluated for their ability to accurately estimate the binding energies of polymer-nitrate complexes. The low cost methods evaluated here include dispersion corrected potential (DCP), Grimme's D3 correction, counterpoise correction of the B3LYP method, and Minnesota functionals (M05-2X). The interaction energies calculated using the counterpoise (CP) correction and DCP methods at the B3LYP level are in better agreement with the interaction energies calculated using the calibrated methods. The interaction energies of an infinite polymer (polypyrrole) with nitrate ions are calculated by a variety of low cost methods in order to find the associated errors. The electronic and spectroscopic properties of polypyrrole oligomers nPy (where n = 1-9) and nPy-NO3(-) complexes are calculated, and then extrapolated for an infinite polymer through a second degree polynomial fit. Charge analysis, frontier molecular orbital (FMO) analysis and density of state studies also reveal the sensing ability of polypyrrole towards nitrate ions. Interaction energies, charge analysis and density of states analyses illustrate that the response of polypyrrole towards nitrate ions is considerably reduced in the aqueous medium (compared to the gas phase).
Optical Estimation of Depth and Current in an Ebb Tidal Delta Environment
NASA Astrophysics Data System (ADS)
Holman, R. A.; Stanley, J.
2012-12-01
A key limitation to our ability to make nearshore environmental predictions is the difficulty of obtaining up-to-date bathymetry measurements at a reasonable cost and frequency. Due to the high cost and complex logistics of in-situ methods, research into remote sensing approaches has been steady and has finally yielded fairly robust methods like the cBathy algorithm for optical Argus data that show good performance on simple barred beach profiles and near immunity to noise and signal problems. In May, 2012, data were collected in a more complex ebb tidal delta environment during the RIVET field experiment at New River Inlet, NC. The presence of strong reversing tidal currents led to significant errors in cBathy depths that were phase-locked to the tide. In this paper we will test methods for the robust estimation of both depths and vector currents in a tidal delta domain. In contrast to previous Fourier methods, wavenumber estimation in cBathy can be done on small enough scales to resolve interesting nearshore features.
Unbiased estimates of galaxy scaling relations from photometric redshift surveys
NASA Astrophysics Data System (ADS)
Rossi, Graziano; Sheth, Ravi K.
2008-06-01
Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
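A minimal sketch of the classical 1/Vmax weighting that the paper's first method generalizes, shown here for a simple luminosity function: each galaxy is weighted by the inverse of the maximum volume within which it would still pass the survey's apparent-magnitude limit. A Euclidean volume and the plain distance modulus are used purely for illustration; a real analysis uses cosmological distances, k-corrections, and, in the paper's case, the generalization to scaling relations such as size-luminosity.

```python
import numpy as np

def vmax_luminosity_function(abs_mag, m_limit, bins, sky_fraction=1.0):
    """abs_mag: absolute magnitudes of the galaxies; m_limit: survey apparent-magnitude limit."""
    d_max_pc = 10 ** ((m_limit - abs_mag + 5.0) / 5.0)           # distance modulus inverted
    d_max = d_max_pc / 1e6                                        # parsec -> Mpc
    v_max = sky_fraction * (4.0 / 3.0) * np.pi * d_max ** 3       # max volume per galaxy
    phi, edges = np.histogram(abs_mag, bins=bins, weights=1.0 / v_max)
    return phi / np.diff(edges), edges                            # number density per mag bin
```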
Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments
Sun, Tongyang; Duan, Lihong; Wang, Yulong
2017-01-01
Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of the human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analysis of the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles for each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and the estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
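A minimal sketch of one standard way to track a single joint angle from body-worn inertial sensors: gyroscope integration corrected by the accelerometer-derived inclination through a complementary filter. The paper's full method reconstructs all lower-limb DOFs with a multi-segment body model; the filter constant alpha below is an illustrative assumption.

```python
import numpy as np

def complementary_joint_angle(gyro_rate, acc_angle, dt, alpha=0.98):
    """gyro_rate: angular rate (rad/s); acc_angle: inclination from the accelerometer (rad)."""
    angle = np.zeros_like(acc_angle)
    angle[0] = acc_angle[0]
    for k in range(1, len(acc_angle)):
        # Trust the integrated gyro on short time scales, the accelerometer on long ones.
        angle[k] = alpha * (angle[k - 1] + gyro_rate[k] * dt) + (1.0 - alpha) * acc_angle[k]
    return angle
```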
Estimation of body mass index from the metrics of the first metatarsal
NASA Astrophysics Data System (ADS)
Dunn, Tyler E.
Estimation of the biological profile from as many skeletal elements as possible is a necessity in both forensic and bioarchaeological contexts; this includes non-standard aspects of the biological profile, such as body mass index (BMI). BMI is a measure that allows for understanding of the composition of an individual and is traditionally divided into four groups: underweight, normal weight, overweight, and obese. BMI estimation incorporates both estimation of stature and body mass. The estimation of stature from skeletal elements is commonly included into the standard biological profile but the estimation of body mass needs to be further statistically validated to be consistently included. The bones of the foot, specifically the first metatarsal, may have the ability to estimate BMI given an allometric relationship to stature and the mechanical relationship to body mass. There are two commonly used methods for stature estimation, the anatomical method and the regression method. The anatomical method takes into account all of the skeletal elements that contribute to stature while the regression method relies on the allometric relationship between a skeletal element and living stature. A correlation between the metrics of the first metatarsal and living stature has been observed, and proposed as a method for valid stature estimation from the boney foot (Byers et al., 1989). Body mass estimation from skeletal elements relies on two theoretical frameworks: the morphometric and the mechanical approaches. The morphometric approach relies on the size relationship of the individual to body mass; the basic relationship between volume, density, and weight allows for body mass estimation. The body is thought of as a cylinder, and in order to understand the volume of this cylinder the diameter is needed. A commonly used proxy for this in the human body is skeletal bi-iliac breadth from rearticulated pelvic girdle. The mechanical method of body mass estimation relies on the ideas of biomechanical bone remodeling; the elements of the skeleton that are under higher forces, including weight, will remodel to minimize stress. A commonly used metric for the mechanical method of body mass estimation is the diameter of the head of the femur. The foot experiences nearly the entire weight force of the individual at any point in the gait cycle and is subject to the biomechanical remodeling that this force would induce. Therefore, the application of the mechanical framework for body mass estimation could stand true for the elements of the foot. The morphometric and mechanical approaches have been validated against one another on a large, geographically disparate population (Auerbach and Ruff, 2004), but have yet to be validated on a sample of known body mass. DeGroote and Humphrey (2011) test the ability of the first metatarsal to estimate femoral head diameter, body mass, and femoral length. The estimated femoral head diameter from the first metatarsal is used to estimate body mass via the morphometric approach and the femoral length is used to estimate living stature. The authors find that body mass and stature estimation methods from more commonly used skeletal elements compared well with the methods developed from the first metatarsal. This study examines 388 `White' individuals from the William M. 
Bass donated skeletal collection to test the reliability of body mass estimates from femoral head diameter and bi-iliac breadth, stature estimates from maximum femoral length, and body mass and stature estimates from the metrics of the first metatarsal. This sample included individuals from all four of the BMI classes. This study finds that all of the skeletal indicators compare well with one another; there is no statistical difference between the stature estimates from the first metatarsal and from the maximum length of the femur, and there is no statistical difference among the three body mass estimation methods. When compared to the forensic estimates of stature, neither of the tested methods showed a statistical difference. Conversely, when the body mass estimates are compared to forensic body mass, there was a statistical difference, and when investigated further, most of the difference in the body mass estimates lay in the extremes of body mass (the underweight and obese categories). These findings indicate that the estimation of stature from both the maximum femoral length and the metrics of the metatarsal is accurate. Furthermore, the estimation of body mass is accurate when the individual is in the middle range of the BMI spectrum, while these methods are inaccurate for outlying individuals. These findings have implications for the application of stature and body mass estimation in the fields of bioarchaeology, forensic anthropology, and paleoanthropology.
A level set method for multiple sclerosis lesion segmentation.
Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming
2018-06-01
In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, normal tissue (including GM and WM), CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
Improving the Discipline of Cost Estimation and Analysis
NASA Technical Reports Server (NTRS)
Piland, William M.; Pine, David J.; Wilson, Delano M.
2000-01-01
The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. Industry has done the best job of maintaining the related capability, with improvements in estimation methods and appropriate priority given to the hiring and training of qualified analysts. Some parts of Government, and the National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was affecting confidence in the planning of many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office, located at the Langley Research Center, was given this responsibility.
Chen, Te; Chen, Long; Xu, Xing; Cai, Yingfeng; Jiang, Haobin; Sun, Xiaoqiang
2018-04-20
Exact estimation of longitudinal force and sideslip angle is important for the lateral stability and path-following control of four-wheel independently driven electric vehicles. This paper presents an effective method for longitudinal force and sideslip angle estimation by observer iteration and information fusion for four-wheel independent-drive electric vehicles. The electric driving wheel model is introduced into the vehicle modeling process and used for longitudinal force estimation, the longitudinal force reconstruction equation is obtained via model decoupling, a Luenberger observer and a high-order sliding-mode observer are combined for the longitudinal force observer design, and a Kalman filter is applied to restrain the influence of noise. Based on the estimated longitudinal force, an estimation strategy is then proposed using observer iteration and information fusion, in which the Luenberger observer is applied to achieve the a priori estimation using fewer sensor measurements, the extended Kalman filter is used for the a posteriori estimation with higher accuracy, and a fuzzy weight controller is used to enhance the adaptive ability of the observer system. Simulations and experiments are carried out, and the effectiveness of the proposed estimation method is verified.
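A minimal sketch of the electric-driving-wheel idea behind longitudinal force estimation: the wheel rotational dynamics J*domega/dt = T_motor - r*Fx are augmented with the unknown tyre force Fx as an extra, slowly varying state, which a discrete Luenberger observer then estimates from motor torque and measured wheel speed. The wheel parameters, observer gains, and Euler discretisation are illustrative assumptions; the paper combines this building block with a high-order sliding-mode observer, Kalman filtering, and fuzzy-weighted fusion.

```python
import numpy as np

def estimate_longitudinal_force(torque, omega_meas, dt, J=1.2, r=0.3,
                                gains=(20.0, -400.0)):
    """Return per-sample estimates of the longitudinal tyre force Fx.

    torque     : motor torque applied at the wheel (N*m), 1-D array
    omega_meas : measured wheel angular speed (rad/s), 1-D array
    J, r       : wheel inertia (kg*m^2) and effective rolling radius (m), assumed values
    gains      : Luenberger output-injection gains (l1, l2); l2 < 0 for a stable observer
    """
    l1, l2 = gains
    omega_hat, fx_hat = omega_meas[0], 0.0
    fx_trace = np.zeros_like(np.asarray(omega_meas, dtype=float))
    for k in range(len(omega_meas)):
        err = omega_meas[k] - omega_hat                       # output injection term
        omega_hat += dt * ((torque[k] - r * fx_hat) / J + l1 * err)
        fx_hat += dt * (l2 * err)                             # Fx modelled as slowly varying
        fx_trace[k] = fx_hat
    return fx_trace
```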
Kang, Le; Chen, Weijie; Petrick, Nicholas A.; Gallas, Brandon D.
2014-01-01
The area under the receiver operating characteristic (ROC) curve (AUC) is often used as a summary index of diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is a right-censored survival time, the C index, motivated as an extension of the AUC, was proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for the statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimates and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. PMID:25399736
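A minimal sketch of Harrell's C index for right-censored data, plus a paired jackknife z-test comparing the C indices of two risk scores measured on the same subjects. The paper derives an analytic (U-statistic based) variance estimator rather than resampling; the leave-one-out jackknife here is only a simple stand-in for the variance of the difference.

```python
import numpy as np

def c_index(risk, time, event):
    """Concordance between a risk score and right-censored survival times (event=1: observed)."""
    conc = comp = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:   # pair (i, j) is comparable
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / comp

def compare_c_indices(risk_a, risk_b, time, event):
    """Return (C_a, C_b, z) using a paired leave-one-out jackknife for the variance."""
    n = len(time)
    diffs = np.array([
        c_index(np.delete(risk_a, i), np.delete(time, i), np.delete(event, i))
        - c_index(np.delete(risk_b, i), np.delete(time, i), np.delete(event, i))
        for i in range(n)
    ])
    var_diff = (n - 1) / n * np.sum((diffs - diffs.mean()) ** 2)
    ca, cb = c_index(risk_a, time, event), c_index(risk_b, time, event)
    return ca, cb, (ca - cb) / np.sqrt(var_diff)
```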
ERIC Educational Resources Information Center
Xu, Xueli; Jia, Yue
2011-01-01
Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…
Moura, Fernando Silva; Aya, Julio Cesar Ceballos; Fleury, Agenor Toledo; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez
2010-02-01
One of the objectives of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on the boundary. One of the methods used in dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model, and under these conditions poor tracking ability of the extended Kalman filter (EKF) is achieved. An analytically developed evolution model is not feasible at this moment. The paper investigates identifying the evolution model in parallel with the EKF and updating the evolution model periodically. The evolution model transition matrix is identified using the history of the estimated resistivity distribution obtained by a sensitivity-matrix-based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed by numerical simulations of a domain with time-varying resistivity and by experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected values for the tissues of a human chest. The EKF results suggest that the tracking ability is significantly improved with this approach.
The Scottish way - getting results in soil spectroscopy without spending money
NASA Astrophysics Data System (ADS)
Aitkenhead, Matt; Cameron, Clare; Gaskin, Graham; Choisy, Bastien; Coull, Malcolm; Black, Helaina
2016-04-01
Achieving soil characterisation using spectroscopy requires several things. These include soil data to develop or train a calibration model, a method of capturing spectra, the ability to actually develop a calibration model and also additional data to reinforce the model by introducing some form of stratification or site-specific information. Each of these steps requires investment in both time and money. Here we present an approach developed at the James Hutton Institute that achieves the end goal with minimal cost, by making as much use as possible of existing soil and environmental datasets for Scotland. The spectroscopy device that has been developed is PHYLIS (Prototype HYperspectral Low-cost Imaging System) that was constructed using inexpensive optical components, and uses a basic digital camera to produce visible-range spectra. The results show that for a large number of soil parameters, it is possible to estimate values either very well (RSQ > 0.9) (LOI, C, exchangeable H), well (RSQ > 0.75) (N, pH) or moderately (RSQ > 0.5) (Mg, Na, K, Fe, Al, sand, silt, clay). The methods used to achieve these results are described. A number of additional parameters were not well estimated (elemental concentrations), and we describe how work is ongoing to improve our ability to estimate these using similar technology and data.
Properties of fiber reinforced plastics under static and dynamic loadings
NASA Astrophysics Data System (ADS)
Kudinov, Vladimir V.; Korneeva, Natalia V.
2016-05-01
A method for investigating the impact toughness of anisotropic polymer composite materials (reinforced plastics) has been developed, using a CM model sample in the microplastic (micro plastic) configuration and an impact pendulum-type testing machine under static and dynamic loadings. The method is called "Break by Impact" (Impact Break, IB). The estimation of the impact resistance of CFRP by this method showed that, when the loading velocity increases by a factor of ~10^4, the largest changes occur in the impact toughness and deformation ability of the material.
Combined methods of tolerance increasing for embedded SRAM
NASA Astrophysics Data System (ADS)
Shchigorev, L. A.; Shagurin, I. I.
2016-10-01
The possibilities of combining different methods for increasing the fault tolerance of SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. The area penalties incurred by using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block in a 28 nm manufacturing process. An evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant systems-on-chip.
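A quick sketch of the bit-level bookkeeping behind combining such protection methods: for a single-error-correct, double-error-detect (SECDED) Hamming code, the number of check bits r must satisfy 2^r >= data_bits + r + 1, plus one overall parity bit, and spare rows add a further fractional cost. The 4K x 128 configuration matches the block mentioned in the abstract, but the 2% redundancy fraction is an illustrative assumption, and real area overhead also depends on decoder and repair logic not counted here.

```python
def secded_check_bits(data_bits):
    """Check bits for a SECDED Hamming code protecting data_bits of payload."""
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1  # +1 overall parity bit for double-error detection

def array_overhead(words=4096, data_bits=128, spare_row_fraction=0.02):
    """Relative bit-count penalty of SECDED plus spare rows for one memory block."""
    check = secded_check_bits(data_bits)                  # 9 bits for a 128-bit word
    coded_bits = words * (data_bits + check)
    total_bits = coded_bits * (1.0 + spare_row_fraction)  # add spare rows for repair
    return total_bits / (words * data_bits) - 1.0

print(f"SECDED bits per 128-bit word: {secded_check_bits(128)}")
print(f"approx. bit-area overhead: {100 * array_overhead():.1f}%")
```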
Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D
2016-06-15
The receiver operating characteristic (ROC) curve is a popular technique with applications, for example, in investigating the accuracy of a biomarker in delineating between disease and non-disease groups. A common measure of the accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks only at the area with certain specificities (i.e., true negative rates), and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with a plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement for both a single biomarker test and the comparison of two correlated biomarkers because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether inference is based on the AUC or the pAUC, we can reach different decisions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd.
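A minimal sketch of the nonparametric pAUC estimator the abstract describes: a two-sample U-statistic restricted to the high-specificity region, with the specificity cut-off replaced by the plug-in sample quantile of the non-diseased scores. The variance and empirical-likelihood machinery developed in the paper is not reproduced here, and the false-positive-rate bound of 0.2 is an illustrative default.

```python
import numpy as np

def partial_auc(cases, controls, max_fpr=0.2):
    """pAUC over false-positive rates in (0, max_fpr); larger scores indicate disease."""
    cases = np.asarray(cases, dtype=float)
    controls = np.asarray(controls, dtype=float)
    threshold = np.quantile(controls, 1.0 - max_fpr)      # plug-in specificity cut-off
    sel = controls[controls >= threshold]                  # controls inside the pAUC region
    wins = (cases[:, None] > sel[None, :]).sum() \
           + 0.5 * (cases[:, None] == sel[None, :]).sum()  # ties count half
    return wins / (len(cases) * len(controls))             # maximum possible value is max_fpr
```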
van Dijk, Joris D; Groothuis-Oudshoorn, Catharina G M; Marshall, Deborah A; IJzerman, Maarten J
2016-06-01
Previous studies have been inconclusive regarding the validity and reliability of preference elicitation methods. The aim of this study was to compare the metrics obtained from a discrete choice experiment (DCE) and profile-case best-worst scaling (BWS) with respect to hip replacement. We surveyed men aged 45 to 65 years from the general US population who were potentially eligible for hip replacement surgery. The survey included sociodemographic questions, eight DCE questions, and twelve BWS questions. Attributes were the probability of a first and second revision, pain relief, ability to participate in sports and perform daily activities, and length of hospital stay. Conditional logit analysis was used to estimate attribute weights, level preferences, and the maximum acceptable risk (MAR) for undergoing revision surgery in six hypothetical treatment scenarios with different attribute levels. A total of 429 (96%) respondents were included. Comparable attribute weights and level preferences were found for both BWS and DCE. Preferences were greatest for hip replacement surgery with high pain relief and the ability to participate in sports and perform daily activities. Although the estimated MARs for revision surgery followed the same trend, the MARs were systematically higher in five of the six scenarios using DCE. This study confirms previous findings that BWS and DCEs are comparable in estimating attribute weights and level preferences. However, the risk tolerance threshold based on the estimation of MAR differs between these methods, possibly leading to inconsistency in comparing treatment scenarios. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background Recently, a large number of methods for the analysis of microarray data have been proposed but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA concentration; low concentrations resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
A CRITICAL ASSESSMENT OF BIODOSIMETRY METHODS FOR LARGE-SCALE INCIDENTS
Swartz, Harold M.; Flood, Ann Barry; Gougelet, Robert M.; Rea, Michael E.; Nicolalde, Roberto J.; Williams, Benjamin B.
2014-01-01
Recognition is growing regarding the possibility that terrorism or large-scale accidents could result in potential radiation exposure of hundreds of thousands of people and that the present guidelines for evaluation after such an event are seriously deficient. Therefore, there is a great and urgent need for after-the-fact biodosimetric methods to estimate radiation dose. To accomplish this goal, the dose estimates must be at the individual level, timely, accurate, and plausibly obtained in large-scale disasters. This paper evaluates current biodosimetry methods, focusing on their strengths and weaknesses in estimating human radiation exposure in large-scale disasters at three stages. First, the authors evaluate biodosimetry's ability to determine which individuals did not receive a significant exposure so they can be removed from the acute response system. Second, biodosimetry's capacity to classify those initially assessed as needing further evaluation into treatment-level categories is assessed. Third, biodosimetry's ability to guide treatment, both short- and long-term, is reviewed. The authors compare biodosimetric methods that are based on physical vs. biological parameters and evaluate the features of current dosimeters (capacity, speed and ease of getting information, and accuracy) to determine which are most useful in meeting patients' needs at each of the different stages. Results indicate that the biodosimetry methods differ in their applicability to the three different stages, and that combining physical and biological techniques may sometimes be most effective. In conclusion, biodosimetry techniques have different properties, and knowledge of their properties for meeting the different needs for different stages will result in their most effective use in a nuclear disaster mass-casualty event. PMID:20065671
Muñoz, David J.; Miller, David A.W.; Sutherland, Chris; Grant, Evan H. Campbell
2016-01-01
The cryptic behavior and ecology of herpetofauna make estimating the impacts of environmental change on demography difficult; yet, the ability to measure demographic relationships is essential for elucidating mechanisms leading to the population declines reported for herpetofauna worldwide. Recently developed spatial capture–recapture (SCR) methods are well suited to standard herpetofauna monitoring approaches. Individually identifying animals and their locations allows accurate estimates of population densities and survival. Spatial capture–recapture methods also allow estimation of parameters describing space-use and movement, which generally are expensive or difficult to obtain using other methods. In this paper, we discuss the basic components of SCR models, the available software for conducting analyses, and the experimental designs based on common herpetological survey methods. We then apply SCR models to Red-backed Salamander (Plethodon cinereus), to determine differences in density, survival, dispersal, and space-use between adult male and female salamanders. By highlighting the capabilities of SCR, and its advantages compared to traditional methods, we hope to give herpetologists the resource they need to apply SCR in their own systems.
Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2011-01-01
A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
Estimation of Land Surface Temperature from GCOM-W1 AMSR2 Data over the Chinese Landmass
NASA Astrophysics Data System (ADS)
Zhou, Ji; Dai, Fengnan; Zhang, Xiaodong
2016-04-01
As one of the most important parameters at the interface between the earth's surface and the atmosphere, land surface temperature (LST) plays a crucial role in many fields, such as climate change monitoring and hydrological modeling. Satellite remote sensing provides the unique possibility of observing the LST of large regions at diverse spatial and temporal scales. Compared with thermal infrared (TIR) remote sensing, passive microwave (PW) remote sensing is better able to overcome the influence of clouds; thus, it can be used to improve the temporal resolution of current satellite TIR LST. However, most current methods for estimating LST from PW remote sensing are empirical and generalize poorly. In this study, a semi-empirical method is proposed to estimate LST from the observations of the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the Global Change Observation Mission 1st-WATER "SHIZUKU" satellite (GCOM-W1). The method is based on the PW radiative transfer equation, which is simplified based on (1) the linear relationship between the emissivities of the horizontal and vertical polarization channels at the same frequency and (2) the significant relationship between atmospheric parameters and the atmospheric water vapor content. An iteration approach is used to best fit the pixel-based coefficients in the simplified radiative transfer equation of the horizontal and vertical polarization channels at each frequency. Then an integration approach is proposed to generate the ensemble estimate from the estimates at multiple frequencies for different land cover types. This method is trained with the AMSR2 brightness temperature and MODIS LST in 2013 over the entire Chinese landmass and then tested with the data in 2014. Validation based on in situ LSTs measured in northwestern China demonstrates that the proposed method has a better accuracy than the polarization radiation method, with a root-mean-square error of 3 K. Although the proposed method is applied to AMSR2 data here, it can readily be extended to other satellite PW sensors, such as the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) on board the Aqua satellite and the Special Sensor Microwave/Imager (SSM/I) on board the Defense Meteorological Satellite Program (DMSP) satellites. It would be beneficial in providing LST to applications at continental and global scales.
Efficient species-level monitoring at the landscape scale
Barry R. Noon; Larissa L. Bailey; Thomas D. Sisk; Kevin S. McKelvey
2012-01-01
Monitoring the population trends of multiple animal species at a landscape scale is prohibitively expensive. However, advances in survey design, statistical methods, and the ability to estimate species presence on the basis of detection–nondetection data have greatly increased the feasibility of species-level monitoring. For example, recent advances in monitoring make...
Gender and Transportation Access among Community-Dwelling Seniors
ERIC Educational Resources Information Center
Dupuis, Josette; Weiss, Deborah R.; Wolfson, Christina
2007-01-01
Purpose: This study estimates the prevalence of problems with transportation in a sample of community-dwelling seniors residing in an urban setting and investigates the role that gender plays in the ability of seniors to remain mobile in their communities. Design and Methods: Data collected as part of a study assessing the prevalence and…
Abdelnour, A. Farras; Huppert, Theodore
2009-01-01
Near-infrared spectroscopy is a non-invasive neuroimaging method which uses light to measure changes in cerebral blood oxygenation associated with brain activity. In this work, we demonstrate the ability to record and analyze images of brain activity in real-time using a 16-channel continuous wave optical NIRS system. We propose a novel real-time analysis framework using an adaptive Kalman filter and a state–space model based on a canonical general linear model of brain activity. We show that our adaptive model has the ability to estimate single-trial brain activity events as we apply this method to track and classify experimental data acquired during an alternating bilateral self-paced finger tapping task. PMID:19457389
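A minimal scalar Kalman filter illustrating the recursive estimation idea behind such real-time frameworks is sketched below; the random-walk state model, noise variances, and synthetic signal are assumptions for illustration, not the authors' canonical GLM state-space model.

```python
import numpy as np

rng = np.random.default_rng(2)
true_amplitude = np.concatenate([np.zeros(100), np.ones(100), np.zeros(100)])  # hypothetical activation
measurements = true_amplitude + rng.normal(0, 0.5, true_amplitude.size)

q, r = 1e-3, 0.25          # process and measurement noise variances (assumed)
x_hat, p = 0.0, 1.0        # initial state estimate and its variance
estimates = []
for z in measurements:
    # Predict: random-walk state model
    p = p + q
    # Update with the new measurement
    k = p / (p + r)        # Kalman gain
    x_hat = x_hat + k * (z - x_hat)
    p = (1 - k) * p
    estimates.append(x_hat)
print("final estimate:", round(estimates[-1], 3))
```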
Measuring and Specifying Combinatorial Coverage of Test Input Configurations
Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu
2015-01-01
A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
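A small sketch of (static) combinatorial coverage measurement follows: the fraction of all t-way parameter-value combinations that appear in at least one test. The toy test set and parameter domains are made up and this is not the NIST tooling itself.

```python
from itertools import combinations, product

def t_way_coverage(tests, domains, t):
    """Fraction of t-way value combinations covered by the test set."""
    k = len(domains)
    covered = total = 0
    for cols in combinations(range(k), t):
        for values in product(*(domains[c] for c in cols)):
            total += 1
            if any(all(test[c] == v for c, v in zip(cols, values)) for test in tests):
                covered += 1
    return covered / total

# Hypothetical example: 3 binary parameters, 4 tests forming a pairwise covering array
domains = [[0, 1], [0, 1], [0, 1]]
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print("2-way coverage:", t_way_coverage(tests, domains, 2))  # covers all pairs -> 1.0
```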
Carlock, Jon M; Smith, Sarah L; Hartman, Michael J; Morris, Robert T; Ciroslan, Dragomir A; Pierce, Kyle C; Newton, Robert U; Harman, Everett A; Sands, William A; Stone, Michael H
2004-08-01
The purpose of this study was to assess the usefulness of the vertical jump and estimated vertical-jump power as a field test for weightlifting. Estimated peak power (PP) output from the vertical jump was correlated with lifting ability among 64 USA national-level weightlifters (junior and senior men and women). Vertical jump was measured using the Kinematic Measurement System, consisting of a switch mat interfaced with a laptop computer. Vertical jumps were measured using a hands-on-hips method. A counter-movement vertical jump (CMJ) and a static vertical jump (SJ, 90 degrees knee angle) were measured. Two trials were given for each condition. Test-retest reliability for jump height was intra-class correlation (ICC) = 0.98 (CMJ) and ICC = 0.96 (SJ). Athletes warmed up on their own for 2-3 minutes, followed by 2 practice jumps at each condition. PP was estimated using the equations developed by Sayers et al. (24). The athletes' current lifting capabilities were assessed by a questionnaire, and USA national coaches checked the listed values. Differences between groups (i.e., men versus women, juniors versus resident lifters) were determined using t-tests (p < or = 0.05). Correlations were determined using Pearson's r. Results indicate that vertical jumping PP is strongly associated with weightlifting ability. Thus, these results indicate that PP derived from the vertical jump (CMJ or SJ) can be a valuable tool in assessing weightlifting performance.
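For reference, the Sayers-style peak power estimate referred to above is a simple linear equation in jump height and body mass; a sketch follows. The coefficients shown are the commonly cited values and should be treated as an assumption to be verified against the original paper before use.

```python
def sayers_peak_power(jump_height_cm: float, body_mass_kg: float) -> float:
    """Estimated peak power (W) from vertical jump height and body mass.

    Coefficients follow the commonly cited Sayers et al. regression;
    treat them as an assumption and verify against the source.
    """
    return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

# Example: a hypothetical 75 kg lifter with a 55 cm countermovement jump
print(round(sayers_peak_power(55.0, 75.0), 1), "W")
```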
A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors
Hwang, Beomsoo; Jeon, Doyoung
2015-01-01
In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can be used to estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
NASA Technical Reports Server (NTRS)
Gnoffo, P. A.
1978-01-01
The ability of a method of integral relations to calculate inviscid, zero-degree-angle-of-attack radiative heating distributions over blunt, sonic-corner bodies for some representative outer planet entry conditions is investigated. Comparisons have been made with a more detailed numerical method, a time-asymptotic technique, using the same equilibrium chemistry and radiation transport subroutines. An effort to produce a second-order approximation (two-strip) method of integral relations code to aid in this investigation is also described and a modified two-strip routine is presented. Results indicate that the one-strip method of integral relations cannot be used to obtain accurate estimates of the radiative heating distribution because of its inability to resolve thermal gradients near the wall. The two-strip method can sometimes be used to improve these estimates; however, the two-strip method has only a small range of conditions over which it will yield significant improvement over the one-strip method.
Flood Extent Mapping Using Dual-Polarimetric SENTINEL-1 Synthetic Aperture Radar Imagery
NASA Astrophysics Data System (ADS)
Jo, M.-J.; Osmanoglu, B.; Zhang, B.; Wdowinski, S.
2018-04-01
Rapid generation of synthetic aperture radar (SAR) based flood extent maps provides valuable data in disaster response efforts thanks to the cloud-penetrating ability of microwaves. We present a method using dual-polarimetric SAR imagery acquired by the Sentinel-1a/b satellites. A false-colour map is generated using pre- and post-disaster imagery, allowing operators to distinguish between existing standing water pre-flooding and recently flooded areas. The method works best in areas of standing water and provides mixed results in urban areas. A flood depth map is also estimated by using an external DEM. We will present the methodology, its estimated accuracy, as well as investigations into improving the response in urban areas.
Method of estimation of scanning system quality
NASA Astrophysics Data System (ADS)
Larkin, Eugene; Kotov, Vladislav; Kotova, Natalya; Privalov, Alexander
2018-04-01
Estimation of scanner parameters is an important part of developing an electronic document management system. This paper suggests considering the scanner as a system that contains two main channels: a photoelectric conversion channel and a channel for measuring the spatial coordinates of objects. Although both channels consist of the same elements, their parameters should be tested separately. A special structure of the two-dimensional reference signal is offered for this purpose. In this structure, the fields for testing various parameters of the scanner are spatially separated. Characteristics of the scanner are associated with the loss of information when a document is digitized. Methods to test grayscale transmission ability, resolution, and aberration level are offered.
The legacy of disadvantage: multigenerational neighborhood effects on cognitive ability.
Sharkey, Patrick; Elwert, Felix
2011-05-01
This study examines how the neighborhood environments experienced over multiple generations of a family influence children's cognitive ability. Building on recent research showing strong continuity in neighborhood environments across generations of family members, the authors argue for a revised perspective on "neighborhood effects" that considers the ways in which the neighborhood environment in one generation may have a lingering impact on the next generation. To analyze multigenerational effects, the authors use newly developed methods designed to estimate unbiased treatment effects when treatments and confounders vary over time. The results confirm a powerful link between neighborhoods and cognitive ability that extends across generations. A family's exposure to neighborhood poverty across two consecutive generations reduces child cognitive ability by more than half a standard deviation. A formal sensitivity analysis suggests that results are robust to unobserved selection bias.
Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft
NASA Technical Reports Server (NTRS)
Harman, R.; Thienel, J.; Oshman, Yaakov
2004-01-01
This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for the Far Ultraviolet Spectroscopic Explorer (FUSE). The results of two approaches are presented: one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudolinear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the failure of two of the reaction wheels. The question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed.
Agrillo, Christian; Piffer, Laura; Adriano, Andrea
2013-07-01
A significant debate surrounds the nature of the cognitive mechanisms involved in non-symbolic number estimation. Several studies have suggested the existence of the same cognitive system for estimation of time, space, and number, called "a theory of magnitude" (ATOM). In addition, researchers have proposed the theory that non-symbolic number abilities might support our mathematical skills. Despite the large number of studies carried out, no firm conclusions can be drawn on either topic. In the present study, we correlated the performance of adults on non-symbolic magnitude estimations and symbolic numerical tasks. Non-symbolic magnitude abilities were assessed by asking participants to estimate which auditory tone lasted longer (time), which line was longer (space), and which group of dots was more numerous (number). To assess symbolic numerical abilities, participants were required to perform mental calculations and mathematical reasoning. We found a positive correlation between non-symbolic and symbolic numerical abilities. On the other hand, no correlation was found among non-symbolic estimations of time, space, and number. Our study supports the idea that mathematical abilities rely on rudimentary numerical skills that predate verbal language. By contrast, the lack of correlation among non-symbolic estimations of time, space, and number is incompatible with the idea that these magnitudes are entirely processed by the same cognitive system.
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
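A brief sketch comparing two bandwidth-selection strategies of the kind discussed above, Silverman's rule of thumb and a cross-validated bandwidth (here chosen by held-out log-likelihood, a close cousin of the cross-validation selectors studied in the article), on a synthetic bimodal sample; the data and search grid are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
# Hypothetical bimodal sample
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 1.0, 300)])

# Silverman's rule of thumb (scipy)
kde_silverman = gaussian_kde(x, bw_method="silverman")

# Cross-validated bandwidth (sklearn; scored by held-out log-likelihood)
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(0.05, 1.0, 20)}, cv=5)
grid.fit(x[:, None])
print("Silverman bandwidth factor:", round(kde_silverman.factor, 3))
print("CV-selected bandwidth:", round(grid.best_params_["bandwidth"], 3))
```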
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Estimated maximal and current brain volume predict cognitive ability in old age
Royle, Natalie A.; Booth, Tom; Valdés Hernández, Maria C.; Penke, Lars; Murray, Catherine; Gow, Alan J.; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E.; Deary, Ian J.; Wardlaw, Joanna M.
2013-01-01
Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and in measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age but that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. PMID:23850342
The geography of hospital admission in a national health service with patient choice.
Fabbri, Daniele; Robone, Silvana
2010-09-01
Each year about 20% of the 10 million hospital inpatients in Italy get admitted to hospitals outside the Local Health Authority of residence. In this paper we carefully explore this phenomenon and estimate gravity equations for 'trade' in hospital care using a Poisson pseudo-maximum likelihood method. Consistency of the PPML estimator is guaranteed under the null of independence provided that the conditional mean is correctly specified. In our case we find that patients' flows are affected by network autocorrelation. We correct for it by relying upon spatial filtering. Our results suggest that the gravity model is a good framework for explaining patient mobility in most of the examined diagnostic groups. We find that the ability to restrain patients' outflows increases with the size of the pool of enrollees. Moreover, the ability to attract patients' inflows is reduced by the size of the pool of enrollees for all LHAs except the very big LHAs. For LHAs in the top quintile of size of enrollees, the ability to attract inflows increases with the size of the pool. Copyright (c) 2010 John Wiley & Sons, Ltd.
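A sketch of Poisson pseudo-maximum likelihood gravity estimation of the kind described above, using statsmodels' Poisson GLM with robust standard errors; the flow counts, covariates, and variable names are hypothetical, and the spatial-filtering correction for network autocorrelation is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500  # hypothetical origin-destination pairs
df = pd.DataFrame({
    "log_distance": rng.normal(3, 1, n),
    "log_pool_origin": rng.normal(10, 1, n),
    "log_pool_dest": rng.normal(10, 1, n),
})
mu = np.exp(2 + 0.8 * df["log_pool_dest"] - 1.2 * df["log_distance"] - 6)
df["flow"] = rng.poisson(mu)

# PPML: Poisson GLM on levels with log covariates, robust (sandwich) covariance
X = sm.add_constant(df[["log_distance", "log_pool_origin", "log_pool_dest"]])
ppml = sm.GLM(df["flow"], X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(ppml.params.round(3))
```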
Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes
2017-01-01
Genome-wide selection (GWS) is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, the maximum prediction ability for GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.
Optimum sensor placement for microphone arrays
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.
Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. The shape and gain advantage of the capture region for these techniques is described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
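A compact sketch of TDOA estimation for a single microphone pair using the cross-power spectrum phase (GCC-PHAT) idea referenced above; the signals, sample rate, and delay are synthetic, and this is the standard weighting rather than the dissertation's modified expression.

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Estimate the delay (in seconds) of sig relative to ref via GCC-PHAT."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12          # phase transform weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Synthetic example: a noise source delayed by 25 samples at 16 kHz
fs, delay = 16000, 25
rng = np.random.default_rng(5)
src = rng.normal(0, 1, 4096)
mic_ref = src
mic_sig = np.concatenate([np.zeros(delay), src[:-delay]])
print("estimated delay (ms):", round(1000 * gcc_phat_delay(mic_sig, mic_ref, fs), 3))
```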
Focused ultrasound transducer spatial peak intensity estimation: a comparison of methods
NASA Astrophysics Data System (ADS)
Civale, John; Rivens, Ian; Shaw, Adam; ter Haar, Gail
2018-03-01
Characterisation of the spatial peak intensity at the focus of high intensity focused ultrasound transducers is difficult because of the risk of damage to hydrophone sensors at the high focal pressures generated. Hill et al (1994 Ultrasound Med. Biol. 20 259-69) provided a simple equation for estimating spatial-peak intensity for solid spherical bowl transducers using measured acoustic power and focal beamwidth. This paper demonstrates theoretically and experimentally that this expression is only strictly valid for spherical bowl transducers without a central (imaging) aperture. A hole in the centre of the transducer results in over-estimation of the peak intensity. Improved strategies for determining focal peak intensity from a measurement of total acoustic power are proposed. Four methods are compared: (i) a solid spherical bowl approximation (after Hill et al 1994 Ultrasound Med. Biol. 20 259-69), (ii) a numerical method derived from theory, (iii) a method using measured sidelobe to focal peak pressure ratio, and (iv) a method for measuring the focal power fraction (FPF) experimentally. Spatial-peak intensities were estimated for 8 transducers at three drive power levels: low (approximately 1 W), moderate (~10 W) and high (20-70 W). The calculated intensities were compared with those derived from focal peak pressure measurements made using a calibrated hydrophone. The FPF measurement method was found to provide focal peak intensity estimates that agreed most closely (within 15%) with the hydrophone measurements, followed by the pressure ratio method (within 20%). The numerical method was found to consistently over-estimate focal peak intensity (+40% on average); however, for transducers with a central hole it was more accurate than using the solid bowl assumption (+70% over-estimation). In conclusion, the ability to make use of an automated beam plotting system, and a hydrophone with good spatial resolution, greatly facilitates characterisation of the FPF, and consequently gives improved confidence in estimating spatial peak intensity from measurement of acoustic power.
Three estimates of the association between linear growth failure and cognitive ability.
Cheung, Y B; Lam, K F
2009-09-01
To compare three estimators of the association between growth stunting, as measured by height-for-age Z-score, and cognitive ability in children, and to examine the extent to which statistical adjustment for covariates is useful for removing confounding due to socio-economic status. Three estimators, namely the random-effects, within-, and between-cluster estimators, for panel data were used to estimate the association in a survey of 1105 pairs of siblings who were assessed for anthropometry and cognition. Furthermore, a 'combined' model was formulated to simultaneously provide the within- and between-cluster estimates. The random-effects and between-cluster estimators showed a strong association between linear growth and cognitive ability, even after adjustment for a range of socio-economic variables. In contrast, the within-cluster estimator showed a much more modest association: for every increase of one Z-score in linear growth, cognitive ability increased by about 0.08 standard deviation (P < 0.001). The combined model verified that the between-cluster estimate was significantly larger than the within-cluster estimate (P = 0.004). Residual confounding by socio-economic circumstances may explain a substantial proportion of the observed association between linear growth and cognition in studies that attempt to control the confounding by means of multivariable regression analysis. The within-cluster estimator provides more convincing and modest results about the strength of the association.
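A schematic of the within- and between-cluster decomposition described above, using ordinary least squares on sibling-demeaned data and on cluster means; the simulated heights, scores, and family effects are purely illustrative and do not reproduce the study's random-effects or combined models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_families = 500
fam = np.repeat(np.arange(n_families), 2)             # sibling pairs
family_effect = np.repeat(rng.normal(0, 1, n_families), 2)
haz = family_effect + rng.normal(0, 1, fam.size)       # height-for-age Z-score
cog = 0.08 * haz + 0.4 * family_effect + rng.normal(0, 1, fam.size)
df = pd.DataFrame({"fam": fam, "haz": haz, "cog": cog})

# Within-cluster estimator: deviations from the family means
within = df - df.groupby(df["fam"]).transform("mean")
b_within = sm.OLS(within["cog"], within[["haz"]]).fit().params["haz"]

# Between-cluster estimator: regression on the family means
means = df.groupby("fam").mean()
b_between = sm.OLS(means["cog"], sm.add_constant(means[["haz"]])).fit().params["haz"]
print("within:", round(b_within, 3), "between:", round(b_between, 3))
```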
View Estimation Based on Value System
NASA Astrophysics Data System (ADS)
Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru
Estimating a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention behind the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating the behavior observed from the caregiver, the child updates a model of his/her own estimated view by minimizing the estimation error of the reward during the behavior. Following this view, this paper presents a method for acquiring such a capability based on a value system in which values are obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view while imitating the observed behavior of the caregiver, is discussed.
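The update rule sketched above mirrors temporal-difference learning; a minimal tabular illustration follows, with the states, rewards, and step sizes invented for clarity. The paper's view-estimation parameters are updated analogously, not by this exact code.

```python
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.9
values = np.zeros(n_states)            # state-value estimates
episode = [(0, 0.0), (1, 0.0), (2, 0.0), (3, 0.0), (4, 1.0)]  # (state, reward) pairs

for (s, r), (s_next, _) in zip(episode[:-1], episode[1:]):
    td_error = r + gamma * values[s_next] - values[s]   # estimation error of the state value
    values[s] += alpha * td_error                        # update toward the TD target
# Terminal transition: the last state receives its reward with no successor value
s_last, r_last = episode[-1]
values[s_last] += alpha * (r_last - values[s_last])
print(values.round(3))
```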
Flexible nonlinear estimates of the association between height and mental ability in early life.
Murasko, Jason E
2014-01-01
To estimate associations between early-life mental ability and height/height-growth in contemporary US children. Structured additive regression models are used to flexibly estimate the associations between height and mental ability at approximately 24 months of age. The sample is taken from the Early Childhood Longitudinal Study-Birth Cohort, a national study whose target population was children born in the US during 2001. A nonlinear association is indicated between height and mental ability at approximately 24 months of age. There is an increasing association between height and mental ability below the mean value of height, but a flat association thereafter. Annualized growth shows the same nonlinear association to ability when controlling for baseline length at 9 months. Restricted growth at lower values of the height distribution is associated with lower measured mental ability in contemporary US children during the first years of life. Copyright © 2013 Wiley Periodicals, Inc.
Stability basin estimates fall risk from observed kinematics, demonstrated on the Sit-to-Stand task.
Shia, Victor; Moore, Talia Yuki; Holmes, Patrick; Bajcsy, Ruzena; Vasudevan, Ram
2018-04-27
The ability to quantitatively measure stability is essential to ensuring the safety of locomoting systems. While the response to perturbation directly reflects the stability of a motion, this experimental method puts human subjects at risk. Unfortunately, existing indirect methods for estimating stability from unperturbed motion have been shown to have limited predictive power. This paper leverages recent advances in dynamical systems theory to accurately estimate the stability of human motion without requiring perturbation. This approach relies on kinematic observations of a nominal Sit-to-Stand motion to construct an individual-specific dynamic model, input bounds, and feedback control that are then used to compute the set of perturbations from which the model can recover. This set, referred to as the stability basin, was computed for 14 individuals, and was able to successfully differentiate between less and more stable Sit-to-Stand strategies for each individual with greater accuracy than existing methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Satagopan, Jaya M; Sen, Ananda; Zhou, Qin; Lan, Qing; Rothman, Nathaniel; Langseth, Hilde; Engel, Lawrence S
2016-06-01
Matched case-control studies are popular designs used in epidemiology for assessing the effects of exposures on binary traits. Modern studies increasingly enjoy the ability to examine a large number of exposures in a comprehensive manner. However, several risk factors often tend to be related in a nontrivial way, undermining efforts to identify the risk factors using standard analytic methods due to inflated type-I errors and possible masking of effects. Epidemiologists often use data reduction techniques by grouping the prognostic factors using a thematic approach, with themes deriving from biological considerations. We propose shrinkage-type estimators based on Bayesian penalization methods to estimate the effects of the risk factors using these themes. The properties of the estimators are examined using extensive simulations. The methodology is illustrated using data from a matched case-control study of polychlorinated biphenyls in relation to the etiology of non-Hodgkin's lymphoma. © 2015, The International Biometric Society.
Kang, Le; Chen, Weijie; Petrick, Nicholas A; Gallas, Brandon D
2015-02-20
The area under the receiver operating characteristic curve is often used as a summary index of diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of the area under the receiver operating characteristic curve, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. Copyright © 2014 John Wiley & Sons, Ltd.
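For context, a minimal pairwise implementation of Harrell's C index for right-censored data is sketched below; it is only the plain concordance calculation, not the authors' variance estimator or z-score test, and the example data are fabricated.

```python
import numpy as np

def harrell_c(time, event, risk_score):
    """Harrell's C: fraction of usable pairs where the higher-risk subject fails first."""
    concordant = tied = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # Pair is usable if subject i had an observed event strictly before subject j's time
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk_score[i] > risk_score[j]:
                    concordant += 1
                elif risk_score[i] == risk_score[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / usable

time = np.array([5.0, 7.0, 2.0, 9.0, 4.0])
event = np.array([1, 0, 1, 1, 1])           # 1 = event observed, 0 = censored
risk = np.array([0.9, 0.3, 0.8, 0.1, 0.7])  # hypothetical biomarker (higher = riskier)
print("C index:", round(harrell_c(time, event, risk), 3))
```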
Comparative study of age estimation using dentinal translucency by digital and conventional methods
Bommannavar, Sushma; Kulkarni, Meena
2015-01-01
Introduction: Estimating age using the dentition plays a significant role in identification of the individual in forensic cases. Teeth are one of the most durable and strongest structures in the human body. The morphology and arrangement of teeth vary from person to person and are unique to an individual, as are fingerprints. Therefore, the use of dentition is the method of choice in the identification of the unknown. Root dentin translucency is considered to be one of the best parameters for dental age estimation. Traditionally, root dentin translucency was measured using calipers. Recently, the use of custom-built software programs has been proposed for the same. Objectives: The present study describes a method to measure root dentin translucency on sectioned teeth using a custom-built software program, Adobe Photoshop version 7.0 (Adobe Systems Inc., Mountain View, California). Materials and Methods: A total of 50 single-rooted teeth were sectioned longitudinally to a uniform thickness of 0.25 mm, and the root dentin translucency was measured using digital and caliper methods and compared. Gustafson's morphohistologic approach is used in this study. Results: Correlation coefficients of translucency measurements to age were statistically significant for both methods (P < 0.125), and linear regression equations derived from both methods revealed a better ability of the digital method to assess age. Conclusion: The custom-built software program used in the present study is commercially available and widely used image-editing software. Furthermore, this method is easy to use and less time consuming. The measurements obtained using this method are more precise and thus help in more accurate age estimation. Considering these benefits, the present study recommends the use of the digital method to assess translucency for age estimation. PMID:25709325
Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A
2018-06-01
Electronic health records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though it was inefficient and not smooth. The logistic-Weibull model performed well, except when event times did not follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
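A small simulation illustrating the core point: applying a right-censored estimator (Kaplan-Meier, here in a bare-bones implementation) to interval-censored screening data biases risk estimates. The event-time distribution, visit schedule, and naive "event at detection visit" convention are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
true_times = rng.weibull(1.5, n) * 5.0          # latent disease onset times (years)
visits = np.array([1.0, 3.0, 5.0])              # screening visits; onset seen only at a visit
detected = np.array([visits[visits >= t][0] if t <= visits[-1] else np.inf for t in true_times])

# Naive approach: treat the detection visit as the exact event time, censor at the last visit
obs_time = np.where(np.isfinite(detected), detected, visits[-1])
event = np.isfinite(detected).astype(int)

def km_risk_at(t_eval, times, events):
    """Kaplan-Meier cumulative risk at t_eval (simple implementation)."""
    surv = 1.0
    for t in np.unique(times[events == 1]):
        if t > t_eval:
            break
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        surv *= 1 - deaths / at_risk
    return 1 - surv

true_risk_2y = np.mean(true_times <= 2.0)
print("true 2-year risk:", round(true_risk_2y, 3),
      "| naive KM 2-year risk:", round(km_risk_at(2.0, obs_time, event), 3))
```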
Computational tools for multi-linked flexible structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.
1990-01-01
A software module which designs and tests controllers and filters in Kalman estimator form, based on a polynomial state-space model, is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.
Meyer, Andreas L S; Wiens, John J
2018-01-01
Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree, and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
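For reference, a sketch of a stem-age method-of-moments net diversification estimator of the kind compared above (in the commonly cited Magallon-and-Sanderson form) follows; the clade sizes, ages, and relative-extinction values are invented, and the exact formula should be checked against the original derivation before use.

```python
import numpy as np

def stem_mom_rate(n_species: int, stem_age: float, epsilon: float = 0.0) -> float:
    """Method-of-moments net diversification rate from stem age.

    Assumes the standard correction for relative extinction epsilon;
    treat the formula as an assumption to verify against the source.
    """
    return np.log(n_species * (1 - epsilon) + epsilon) / stem_age

# Hypothetical clade: 120 extant species, 30 Myr stem age, two extinction assumptions
for eps in (0.0, 0.9):
    print(f"epsilon={eps}: r = {stem_mom_rate(120, 30.0, eps):.4f} per Myr")
```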
Estimation of pyrethroid pesticide intake using regression ...
Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation of pesticide intakes for a defined demographic community, and (2) comparison of dietary pesticide intakes between the composite and individual samples. Extant databases were useful for assigning individual samples to composites, but they could not provide the breadth of information needed to facilitate measurable levels in every composite. Composite sample measurements were found to be good predictors of pyrethroid pesticide levels in their individual sample constituents where sufficient measurements are available above the method detection limit. Statistical inference shows little evidence of differences between individual and composite measurements and suggests that regression modeling of food groups based on composite dietary samples may provide an effective tool for estimating dietary pesticide intake for a defined population. The research presented in the journal article will improve the community's ability to determine exposures through the dietary route with a less burdensome and costly method.
Models of Pilot Behavior and Their Use to Evaluate the State of Pilot Training
NASA Astrophysics Data System (ADS)
Jirgl, Miroslav; Jalovecky, Rudolf; Bradac, Zdenek
2016-07-01
This article discusses the possibilities of obtaining new information related to human behavior, namely the changes or progressive development of pilots' abilities during training. The main assumption is that a pilot's ability can be evaluated based on a corresponding behavioral model whose parameters are estimated using mathematical identification procedures. The mean values of the identified parameters are obtained via statistical methods. These parameters are then monitored and their changes evaluated. In this context, the paper introduces and examines relevant mathematical models of human (pilot) behavior, the pilot-aircraft interaction, and an example of the mathematical analysis.
Sundstrup, Emil; Jakobsen, Markus Due; Mortensen, Ole Steen; Andersen, Lars Louis
2017-03-01
Objectives The aim of this study was to determine the joint association of multimorbidity and work ability with the risk of long-term sickness absence (LTSA) in the general working population. Methods Cox regression analysis censoring for competing events (statutory retirement, early retirement, disability pension, immigration, or death) was performed to estimate the joint association of chronic diseases and work ability in relation to physical and mental demands of the job with the prospective risk for LTSA (defined as ≥6 consecutive weeks during 2-year follow-up) among 10 427 wage earners from the general working population (2010 Danish Work Environment Cohort Study). Control variables were age, gender, psychosocial work environment, smoking, leisure physical activity, body mass index, job group, and previous LTSA. Results Of the 10 427 respondents, 56.8% had experienced ≥1 chronic disease at baseline. The fully adjusted model showed an association between number of chronic diseases and risk of LTSA. This association was stronger among employees with poor work ability (either physical or mental). Compared to employees with no diseases and good physical work ability, the risk estimate for LTSA was 1.95 [95% confidence interval (95% CI) 1.50-2.52] for employees with ≥3 chronic diseases and good physical work ability, whereas it was 3.60 (95% CI 2.50-5.19) for those with ≥3 chronic diseases and poor physical work ability. Overall, the joint association of chronic disease and work ability with LTSA appears to be additive. Conclusions Poor work ability combined with ≥1 chronic diseases is associated with high risk of long-term sickness absence in the general working population. Initiatives to improve or maintain work ability should be highly prioritized to secure sustainable employability among workers with ≥1 chronic diseases.
Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...
2016-04-13
Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gases in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak which may arise as a failure mode for carbon dioxide storage. The leak is assumed to be at a constant rate, giving rise to a plume with a concentration and distribution that depend on the wind velocity. Lastly, we demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.
Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.
Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane
2017-07-01
This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematics data only. The proposed method also allows identification of the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. It is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted and a sensitivity analysis is performed in order to robustly reproduce human external wrench distribution. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS errors lower than 20 N and 6 N·m). The simplicity and generalization abilities of the proposed method pave the way for future diagnostic solutions and rehabilitation applications, including in-home use.
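The wrench-distribution step can be illustrated with a small constrained quadratic program. The sketch below is a simplified stand-in for the authors' hybrid energetic criterion: it distributes an assumed total vertical force among three contacts with hypothetical weights.

```python
import numpy as np
from scipy.optimize import minimize

F_total = 600.0                      # hypothetical total vertical force [N]
w = np.array([1.0, 0.5, 4.0])        # hypothetical weights penalizing each contact

def cost(f):
    return np.sum(w * f**2)          # quadratic stand-in for the energetic criterion

constraints = [{"type": "eq", "fun": lambda f: np.sum(f) - F_total}]
bounds = [(0.0, None)] * 3           # contact forces cannot pull

res = minimize(cost, x0=np.full(3, F_total / 3),
               bounds=bounds, constraints=constraints)
print("buttocks, feet, hands forces [N]:", np.round(res.x, 1))
```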
NASA Technical Reports Server (NTRS)
Meier, M. J.; Evans, W. E.
1975-01-01
Snow-covered areas on LANDSAT (ERTS) images of the Santiam River basin, Oregon, and other basins in Washington were measured using several operators and methods. Seven methods were used: (1) Snowline tracing followed by measurement with planimeter, (2) mean snowline altitudes determined from many locations, (3) estimates in 2.5 x 2.5 km boxes of snow-covered area with reference to snow-free images, (4) single radiance-threshold level for entire basin, (5) radiance-threshold setting locally edited by reference to altitude contours and other images, (6) two-band color-sensitive extraction locally edited as in (5), and (7) digital (spectral) pattern recognition techniques. The seven methods are compared in regard to speed of measurement, precision, the ability to recognize snow in deep shadow or in trees, relative cost, and whether useful supplemental data are produced.
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
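A minimal sketch of the multi-chain idea, assuming a toy one-dimensional posterior rather than an expensive forward model: several Metropolis chains are run from dispersed starting points and the Gelman-Rubin statistic is monitored to judge ensemble convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
log_post = lambda x: -0.5 * (x - 3.0) ** 2          # toy Gaussian posterior

def run_chain(x0, n_steps, step=1.0):
    x, out = x0, []
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        out.append(x)
    return np.array(out)

chains = np.array([run_chain(x0, 2000) for x0 in (-10.0, 0.0, 10.0, 20.0)])

def gelman_rubin(chains):
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()            # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)          # between-chain variance
    return np.sqrt(((n - 1) / n * W + B / n) / W)

print("R-hat:", round(gelman_rubin(chains[:, 1000:]), 3))  # ~1 once the chains agree
```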
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, has become increasingly popular in recent years. An attractive feature of multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show that a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of the individual pooled estimates themselves, the standard variance estimator and the robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m ≥ 50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through two meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.
Comparative study of age estimation using dentinal translucency by digital and conventional methods.
Bommannavar, Sushma; Kulkarni, Meena
2015-01-01
Estimating age using the dentition plays a significant role in the identification of individuals in forensic cases. Teeth are among the most durable and strongest structures in the human body. The morphology and arrangement of teeth vary from person to person and are unique to an individual, as are fingerprints. Therefore, the use of the dentition is the method of choice in the identification of the unknown. Root dentin translucency is considered to be one of the best parameters for dental age estimation. Traditionally, root dentin translucency was measured using calipers. Recently, the use of custom-built software programs has been proposed for the same purpose. The present study describes a method to measure root dentin translucency on sectioned teeth using the software program Adobe Photoshop 7.0 (Adobe Systems Inc., Mountain View, California). A total of 50 single-rooted teeth were sectioned longitudinally to a uniform thickness of 0.25 mm, and the root dentin translucency was measured using the digital and caliper methods and compared. Gustafson's morphohistologic approach was used in this study. Correlation coefficients of translucency measurements to age were statistically significant for both methods (P < 0.125), and linear regression equations derived from both methods revealed a better ability of the digital method to assess age. The software program used in the present study is commercially available and widely used image-editing software. Furthermore, this method is easy to use and less time consuming. The measurements obtained using this method are more precise and thus help in more accurate age estimation. Considering these benefits, the present study recommends the use of the digital method to assess translucency for age estimation.
Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling
2006-01-01
Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083
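For illustration, a generic percentile bootstrap around a sample prevalence conveys the resampling idea; note that the paper's method resamples along recruitment chains, which this sketch does not attempt to reproduce, and all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
trait = rng.binomial(1, 0.30, size=500)      # hypothetical 0/1 trait indicators

def prevalence(x):
    return x.mean()

# Percentile bootstrap: resample the sample with replacement many times
boot = np.array([prevalence(rng.choice(trait, size=trait.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"prevalence = {prevalence(trait):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```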
Painter, Jaime A.; Torak, Lynn J.; Jones, John W.
2015-09-30
Methods to estimate irrigation withdrawal using nationally available datasets and techniques that are transferable to other agricultural regions were evaluated by the U.S. Geological Survey as part of the Apalachicola-Chattahoochee-Flint (ACF) River Basin focus area study of the National Water Census (ACF–FAS). These methods investigated the spatial, temporal, and quantitative distributions of water withdrawal for irrigation in the southwestern Georgia region of the ACF–FAS, filling a vital need to inform science-based decisions regarding resource management and conservation. The crop-demand method assumed that only enough water is pumped onto a crop to satisfy the deficit between evapotranspiration and precipitation. A second method applied a geostatistical regimen of variography and conditional simulation to monthly metered irrigation withdrawal to estimate irrigation withdrawal where data do not exist. A third method analyzed Landsat satellite imagery using an automated approach to generate monthly estimates of irrigated lands. These methods were evaluated independently and compared collectively with measured water withdrawal information available in the Georgia part of the ACF–FAS, principally in the Chattahoochee-Flint River Basin. An assessment of each method's contribution to the National Water Census program was also made to identify the transfer value of the methods to the national program and other water census studies. None of the three methods evaluated represent a turnkey process to estimate irrigation withdrawal on any spatial (local or regional) or temporal (monthly or annual) extent. Each method requires additional information on agricultural practices during the growing season to complete the withdrawal estimation process. Spatial and temporal limitations inherent in identifying irrigated acres during the growing season, and in designing spatially and temporally representative monitor (meter) networks, can limit the ability of the methods to produce accurate irrigation-withdrawal estimates that can be used to produce dependable and consistent assessments of water availability and use for the National Water Census. Emerging satellite-data products and techniques for data analysis can generate high spatial-resolution estimates of irrigated-acres distributions with near-term temporal frequencies compatible with the needs of the ACF–FAS and the National Water Census.
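A minimal sketch of the crop-demand logic described above, with assumed monthly values and a simplified unit conversion; this is not the study's implementation.

```python
import numpy as np

et_mm = np.array([110, 140, 160, 150, 120, 90])      # hypothetical monthly crop ET [mm]
precip_mm = np.array([80, 60, 40, 70, 100, 120])     # hypothetical monthly rainfall [mm]
irrigated_km2 = 12.5                                 # hypothetical irrigated area

deficit_mm = np.clip(et_mm - precip_mm, 0, None)     # pump only enough to meet the deficit
withdrawal_m3 = deficit_mm / 1000.0 * irrigated_km2 * 1e6   # depth [m] x area [m^2] -> m^3
print("monthly withdrawal [million m^3]:", np.round(withdrawal_m3 / 1e6, 2))
```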
Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M
2018-06-01
Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess the error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than the source attribution method for identifying transmission risk factors, but neither method provides robust estimates of transmission risk ratios. The source attribution method can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Robust position estimation of a mobile vehicle
NASA Astrophysics Data System (ADS)
Conan, Vania; Boulanger, Pierre; Elgazzar, Shadia
1994-11-01
The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometry information and viewpoint, robust to noisy data and spurious points and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial; it runs in O(m^4 n^4), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is very occluded.
Dong, M C; van Vleck, L D
1989-03-01
Variance and covariance components for milk yield, survival to second freshening, and calving interval in first lactation were estimated by REML with the expectation-maximization algorithm for an animal model which included herd-year-season effects. Cows without a calving interval but with milk yield were included. Each of the four data sets of 15 herds included about 3000 Holstein cows. Relationships across herds were ignored to enable inversion of the coefficient matrix of the mixed model equations. Quadratics and their expectations were accumulated herd by herd. Heritability of milk yield (.32) agrees with reports using the same methods. Heritabilities of survival (.11) and calving interval (.15) are slightly larger, and genetic correlations smaller, than results from different methods of estimation. The genetic correlation between milk yield and calving interval (.09) indicates that the genetic ability to produce more milk is slightly associated with decreased fertility.
Flood hydrology for Dry Creek, Lake County, Northwestern Montana
Parrett, C.; Jarrett, R.D.
2004-01-01
Dry Creek drains about 22.6 square kilometers of rugged mountainous terrain upstream from Tabor Dam in the Mission Range near St. Ignatius, Montana. Because of uncertainty about plausible peak discharges and concerns regarding the ability of the Tabor Dam spillway to safely convey these discharges, the flood hydrology for Dry Creek was evaluated on the basis of three hydrologic and geologic methods. The first method involved determining an envelope line relating flood discharge to drainage area on the basis of regional historical data and calculating a 500-year flood for Dry Creek using a regression equation. The second method involved paleoflood methods to estimate the maximum plausible discharge for 35 sites in the study area. The third method involved rainfall-runoff modeling for the Dry Creek basin in conjunction with regional precipitation information to determine plausible peak discharges. All of these methods resulted in estimates of plausible peak discharges that are substantially less than those predicted by the more generally applied probable maximum flood technique. Copyright ASCE 2004.
FlowCam: Quantification and Classification of Phytoplankton by Imaging Flow Cytometry.
Poulton, Nicole J
2016-01-01
The ability to enumerate, classify, and determine biomass of phytoplankton from environmental samples is essential for determining ecosystem function and their role in the aquatic community and microbial food web. Traditional micro-phytoplankton quantification methods using microscopic techniques require preservation and are slow, tedious and very laborious. The availability of more automated imaging microscopy platforms has revolutionized the way particles and cells are detected within their natural environment. The ability to examine cells unaltered and without preservation is key to providing more accurate cell concentration estimates and overall phytoplankton biomass. The FlowCam(®) is an imaging cytometry tool that was originally developed for use in aquatic sciences and provides a more rapid and unbiased method for enumerating and classifying phytoplankton within diverse aquatic environments.
Estimated maximal and current brain volume predict cognitive ability in old age.
Royle, Natalie A; Booth, Tom; Valdés Hernández, Maria C; Penke, Lars; Murray, Catherine; Gow, Alan J; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E; Deary, Ian J; Wardlaw, Joanna M
2013-12-01
Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and between general cognitive ability and measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age, but that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. Copyright © 2013 Elsevier Inc. All rights reserved.
Hatch, Stephani L.; Feinstein, Leon; Link, Bruce G.; Wadsworth, Michael E. J.; Richards, Marcus
2007-01-01
Objectives. Evidence shows education positively impacts cognitive ability. However, researchers have given little attention to the potential impact of adult education on cognitive ability, still malleable in midlife. The primary study aim was to examine whether there were continuing effects of education over the life course on midlife cognitive ability. Methods. This study used data from the Medical Research Council National Survey of Health and Development, also known as the British 1946 birth cohort, and multivariate regression to estimate the continuing effects of adult education on multiple measures of midlife cognitive ability. Results. Educational attainment completed by early adulthood was associated with all measures of cognitive ability in late midlife. The continued effect of education was apparent in the associations between adult education and higher verbal ability, verbal memory, and verbal fluency in late midlife. We found no association between adult education and mental speed and concentration. Discussion. Associations between adult education and midlife cognitive ability indicate wider benefits of education to health that may be important for social integration, well-being, and the delay of cognitive decline in later life. PMID:18079429
Optimal Designs for the Rasch Model
ERIC Educational Resources Information Center
Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer
2012-01-01
In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
ERIC Educational Resources Information Center
Sinharay, Sandip
2015-01-01
The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
Numerosity but Not Texture-Density Discrimination Correlates with Math Ability in Children
ERIC Educational Resources Information Center
Anobile, Giovanni; Castaldi, Elisa; Turi, Marco; Tinelli, Francesca; Burr, David C.
2016-01-01
Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-years-old) children in 3 tasks: numerosity of patterns of relatively…
Muschalla, Beate
2017-03-01
Aims Work-anxiety may produce overly negative views of the workplace that impair provider efforts to assess work ability from patient self-report. This study explores the empirical relationships between patient-reported workplace characteristics, work-anxiety, and subjective and objective work ability measures. Methods 125 patients in medical rehabilitation before vocational reintegration were interviewed concerning their vocational situation, and filled in a questionnaire on work-anxiety, subjective mental work ability and perceived workplace characteristics. Treating physicians gave independent socio-medical judgments concerning the patients' work ability and impairment, and need for supportive means for vocational reintegration. Results Patients with high work-anxiety reported more negative workplace characteristics. Low judgments of work ability were correlated with problematic workplace characteristics. When controlled for work-anxiety, subjective work ability remained related only with social workplace characteristics and with work achievement demands, but independent from situational or task characteristics. Sick leave duration and physicians' judgment of work ability were not significantly related to patient-reported workplace characteristics. Conclusions In socio-medical work ability assessments, patients with high work-anxiety may over-report negative workplace characteristics that can confound provider estimates of work ability. Assessing work-anxiety may be important to assess readiness for returning to work and initiating work-directed treatments.
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds requested to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness in detection of the proposed method both in terms of detection promptness and accuracy.
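A simplified stand-in for the windowed comparison idea (using a histogram density difference rather than the authors' least-squares density-difference estimator and reservoir sampling; the threshold below is hypothetical rather than derived from a false-positive target):

```python
import numpy as np

rng = np.random.default_rng(2)
stream = np.concatenate([rng.normal(0, 1, 1000), rng.normal(1.5, 1, 1000)])  # change at t=1000

def density_difference_score(ref, test, bins=20):
    edges = np.histogram_bin_edges(np.concatenate([ref, test]), bins=bins)
    p, _ = np.histogram(ref, bins=edges, density=True)
    q, _ = np.histogram(test, bins=edges, density=True)
    return np.sum((p - q) ** 2 * np.diff(edges))     # squared L2 norm of the difference

reference = stream[:500]
threshold = 0.05                                     # hypothetical detection threshold
for start in range(500, len(stream) - 200, 200):
    score = density_difference_score(reference, stream[start:start + 200])
    print(start, round(score, 3), "CHANGE" if score > threshold else "ok")
```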
Correction of bias in belt transect studies of immotile objects
Anderson, D.R.; Pospahala, R.S.
1970-01-01
Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects is not counted. An approach, useful for correcting this bias when sampling immotile populations using transects of a fixed width, is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method uses a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
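A minimal sketch of the correction idea, assuming a half-normal detection function rather than the paper's specific equation; the distances and transect width below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical perpendicular distances (m) of detected nests from the transect centerline
distances = np.array([1, 2, 2, 3, 4, 5, 5, 6, 8, 9, 11, 14, 15, 18, 22])
half_width = 25.0                                   # fixed transect half-width (m)

# Bin the detections and fit a half-normal detection curve g(x), with g(0) = 1
counts, edges = np.histogram(distances, bins=5, range=(0, half_width))
mids = 0.5 * (edges[:-1] + edges[1:])
g = lambda x, sigma: np.exp(-x**2 / (2 * sigma**2))
rel = counts / counts[0]                            # detectability relative to the near-center bin
popt, _ = curve_fit(g, mids, rel, p0=[10.0])
sigma = popt[0]

x = np.linspace(0, half_width, 200)
p_bar = g(x, sigma).mean()                          # mean detection probability across the strip
corrected_count = distances.size / p_bar            # inflate the raw count
print(f"sigma={sigma:.1f} m, mean detectability={p_bar:.2f}, corrected count={corrected_count:.1f}")
```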
Lead burdens and behavioral impairments of the lined shore crab Pachygrapsus crassipes
Hui, Clifford A.
2002-01-01
Mukaka, Mavuto; White, Sarah A; Terlouw, Dianne J; Mwapasa, Victor; Kalilani-Phiri, Linda; Faragher, E Brian
2016-07-22
Missing outcomes can seriously impair the ability to make correct inferences from randomized controlled trials (RCTs). Complete case (CC) analysis is commonly used, but it reduces the sample size and is perceived to lead to reduced statistical efficiency of estimates while increasing the potential for bias. As multiple imputation (MI) methods preserve sample size, they are generally viewed as the preferred analytical approach. We examined this assumption, comparing the performance of CC and MI methods for determining risk difference (RD) estimates in the presence of missing binary outcomes. We conducted simulation studies of 5000 simulated RCT data sets, each with one primary follow-up endpoint and 50 imputations, at different underlying levels of RD (3-25%) and missing outcomes (5-30%). For missing at random (MAR) or missing completely at random (MCAR) outcomes, CC method estimates generally remained unbiased and achieved precision similar to or better than MI methods, with high statistical coverage. Missing not at random (MNAR) scenarios yielded invalid inferences with both methods. Effect size estimate bias was reduced in MI methods by always including group membership even if this was unrelated to missingness. Surprisingly, under MAR and MCAR conditions in the assessed scenarios, MI offered no statistical advantage over CC methods. While MI must inherently accompany CC methods for intention-to-treat analyses, these findings endorse CC methods for per-protocol risk difference analyses in these conditions. These findings provide an argument for the use of the CC approach to always complement MI analyses, with the usual caveat that the validity of the mechanism for missingness be thoroughly discussed. More importantly, researchers should strive to collect as much data as possible.
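A minimal sketch of this kind of simulation, assuming MCAR missingness and showing only the complete-case risk difference (the full study also ran multiple imputation and other missingness mechanisms); all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_arm, true_rd, p_control, p_missing = 500, 0.10, 0.20, 0.20
cc_estimates = []

for _ in range(5000):
    y_c = rng.binomial(1, p_control, n_per_arm)               # control-arm outcomes
    y_t = rng.binomial(1, p_control + true_rd, n_per_arm)     # treatment-arm outcomes
    keep_c = rng.uniform(size=n_per_arm) > p_missing          # MCAR missingness
    keep_t = rng.uniform(size=n_per_arm) > p_missing
    cc_estimates.append(y_t[keep_t].mean() - y_c[keep_c].mean())

cc_estimates = np.array(cc_estimates)
print(f"mean CC estimate = {cc_estimates.mean():.4f} (truth {true_rd}), "
      f"empirical SE = {cc_estimates.std(ddof=1):.4f}")
```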
How to Estimate Demand Charge Savings from PV on Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagnon, Pieter J; Bird, Lori A
Rooftop photovoltaic (PV) systems are compensated through retail electricity tariffs - and for commercial and industrial customers, these typically comprise three components: a fixed monthly charge, energy charges, and demand charges. Of these, PV's ability to reduce demand charges has traditionally been the most difficult to estimate. In this fact sheet we explain the basics of demand charges and provide a new method that a potential customer or PV developer can use to estimate a range of potential demand charge savings for a proposed PV system. These savings can then be added to other project cash flows in assessing the project's financial performance.
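A minimal sketch of the basic demand-charge arithmetic, with hypothetical load, PV, and tariff numbers rather than the fact sheet's procedure: savings in a billing month are the demand rate times the reduction in that month's peak net load after subtracting PV output.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = np.arange(24 * 30)                                        # one month at hourly resolution
load_kw = 300 + 150 * np.exp(-((hours % 24 - 14) ** 2) / 18) + rng.normal(0, 10, hours.size)
pv_kw = 200 * np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)  # crude daytime solar shape

demand_rate = 15.0                                                # hypothetical $/kW per month
peak_before = load_kw.max()
peak_after = (load_kw - pv_kw).max()                              # peak may shift to later hours
print(f"peak {peak_before:.0f} kW -> {peak_after:.0f} kW; "
      f"estimated savings ~ ${demand_rate * (peak_before - peak_after):.0f}/month")
```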
NASA Astrophysics Data System (ADS)
Techavipoo, Udomchai
Manual palpation to sense variations in tissue stiffness for disease diagnosis has been performed regularly by clinicians for centuries. However, it is generally limited to large, superficial structures and by the ability of the physician performing the palpation. Imaging of tissue stiffness or elastic properties with the aid of modern imaging modalities such as ultrasound and magnetic resonance imaging, referred to as elastography, enhances the capability for disease diagnosis. In addition, elastography could be used for monitoring tissue response to minimally invasive ablative therapies, which are performed percutaneously to destroy tumors with minimal damage to surrounding tissue. Monitoring tissue temperature during ablation is another approach to estimate tissue damage. The ultimate goal of this dissertation is to improve the image quality of elastograms and temperature profiles for visualizing thermal lesions during and after ablative therapies. Elastographic imaging of thermal lesions is evaluated by comparison of sizes, shapes, and volumes with the results obtained using gross pathology. Semiautomated segmentation of lesion boundaries on elastograms is also developed; it provides results comparable to those obtained with manual segmentation. Elastograms imaged during radiofrequency ablation in vitro show that the impact of gas bubbles during ablation on the ability to delineate the thermal lesion is small. Two novel methods are proposed to reduce noise artifacts in elastograms and to accurately estimate displacement vectors. The first method applies wavelet-denoising algorithms to the displacement estimates. The second method utilizes angular compounding of the elastograms generated using ultrasound signal frames acquired from different insonification angles. These angular frames are also utilized to estimate all tissue displacement vector components in response to a deformation. These enable the generation of normal and shear strain elastograms and Poisson's ratio elastograms, which provide additional valuable information for disease diagnosis. Finally, measurements of temperature-dependent variables, including sound speed, attenuation coefficient, and thermal expansion in canine liver tissue, are performed. This information is necessary for the estimation of the temperature profile during ablation. A mapping function between the gradient of time shifts and tissue temperature is calculated using this information and subsequently applied to estimate temperature profiles.
An efficient incremental learning mechanism for tracking concept drift in spam filtering
Sheu, Jyh-Jian; Chu, Ko-Tsung; Li, Nien-Feng; Lee, Cheng-Chi
2017-01-01
This research presents an in-depth analysis of the characteristics of spam and proposes an efficient spam filtering method with the ability to adapt to a dynamic environment. We focus on the analysis of email headers and apply a decision-tree data mining technique to look for association rules about spam. We then propose an efficient systematic filtering method based on these association rules. Our systematic method has the following major advantages: (1) It checks only the header sections of emails, unlike current spam filtering methods that must fully analyze the email content, while the filtering accuracy is expected to be enhanced. (2) To address the problem of concept drift, we propose a window-based technique to estimate the degree of concept drift for each unknown email, which helps our filtering method recognize the occurrence of spam. (3) We propose an incremental learning mechanism for our filtering method to strengthen its ability to adapt to a dynamic environment. PMID:28182691
Wilkins, Bryce; Lee, Namgyun; Gajawelli, Niharika; Law, Meng; Leporé, Natasha
2015-01-01
Advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have led to many alternative diffusion sampling strategies and analysis methodologies. A common objective among methods is estimation of white matter fiber orientations within each voxel, as doing so permits in-vivo fiber-tracking and the ability to study brain connectivity and networks. Knowledge of how DW-MRI sampling schemes affect fiber estimation accuracy, and consequently tractography and the ability to recover complex white-matter pathways, as well as differences between results due to choice of analysis method and which method(s) perform optimally for specific data sets, all remain important problems, especially as tractography-based studies become common. In this work we begin to address these concerns by developing sets of simulated diffusion-weighted brain images which we then use to quantitatively evaluate the performance of six DW-MRI analysis methods in terms of estimated fiber orientation accuracy, false-positive (spurious) and false-negative (missing) fiber rates, and fiber-tracking. The analysis methods studied are: 1) a two-compartment “ball and stick” model (BSM) (Behrens et al., 2003); 2) a non-negativity constrained spherical deconvolution (CSD) approach (Tournier et al., 2007); 3) analytical q-ball imaging (QBI) (Descoteaux et al., 2007); 4) q-ball imaging with Funk-Radon and Cosine Transform (FRACT) (Haldar and Leahy, 2013); 5) q-ball imaging within constant solid angle (CSA) (Aganj et al., 2010); and 6) a generalized Fourier transform approach known as generalized q-sampling imaging (GQI) (Yeh et al., 2010). We investigate these methods using 20, 30, 40, 60, 90 and 120 evenly distributed q-space samples of a single shell, and focus on a signal-to-noise ratio (SNR = 18) and diffusion-weighting (b = 1000 s/mm2) common to clinical studies. We found the BSM and CSD methods consistently yielded the least fiber orientation error and simultaneously greatest detection rate of fibers. Fiber detection rate was found to be the most distinguishing characteristic between the methods, and a significant factor for complete recovery of tractography through complex white-matter pathways. For example, while all methods recovered similar tractography of prominent white matter pathways of limited fiber crossing, CSD (which had the highest fiber detection rate, especially for voxels containing three fibers) recovered the greatest number of fibers and largest fraction of correct tractography for a complex three-fiber crossing region. The synthetic data sets, ground-truth, and tools for quantitative evaluation are publically available on the NITRC website as the project “Simulated DW-MRI Brain Data Sets for Quantitative Evaluation of Estimated Fiber Orientations” at http://www.nitrc.org/projects/sim_dwi_brain PMID:25555998
Comparison of methods to assess change in children’s body composition
Elberg, Jane; McDuffie, Jennifer R; Sebring, Nancy G; Salaita, Christine; Keil, Margaret; Robotham, Delphine; Reynolds, James C; Yanovski, Jack A
2008-01-01
Background Little is known about how simpler and more available methods to measure change in body fatness compare with criterion methods such as dual-energy X-ray absorptiometry (DXA) in children. Objective Our objective was to determine the ability of air-displacement plethysmography (ADP) and formulas based on triceps skinfold thickness (TSF) and bioelectrical impedance analysis (BIA) to estimate changes in body fat over time in children. Design Eighty-six nonoverweight and overweight boys (n = 34) and girls (n = 52) with an average age of 11.0 ± 2.4 y underwent ADP, TSF measurement, BIA, and DXA to estimate body fatness at baseline and 1 ± 0.3 y later. Recent equations were used to estimate percentage body fat by TSF measurement (Dezenberg equation) and by BIA (Suprasongsin and Lewy equations). Percentage body fat estimates by ADP, TSF measurement, and BIA were compared with those by DXA. Results All methods were highly correlated with DXA (P < 0.001). No mean bias for estimates of percentage body fat change was found for ADP (Siri equation) compared with DXA for all subjects examined together, and agreement between body fat estimation by ADP and DXA did not vary with race or sex. Magnitude bias was present for ADP relative to DXA (P < 0.01). Estimates of change in percentage body fat were systematically overestimated by BIA equations (1.37 ± 6.98%; P < 0.001). TSF accounted for only 13% of the variance in percentage body fat change. Conclusion Compared with DXA, there appears to be no noninvasive and simple method to measure changes in children’s percentage body fat accurately and precisely, but ADP performed better than did TSF or BIA. ADP could prove useful for measuring changes in adiposity in children. PMID:15213029
2SLS versus 2SRI: Appropriate methods for rare outcomes and/or rare exposures.
Basu, Anirban; Coe, Norma B; Chapman, Cole G
2018-06-01
This study used Monte Carlo simulations to examine the ability of the two-stage least squares (2SLS) estimator and two-stage residual inclusion (2SRI) estimators with varying forms of residuals to estimate the local average and population average treatment effect parameters in models with binary outcome, endogenous binary treatment, and single binary instrument. The rarity of the outcome and the treatment was varied across simulation scenarios. Results showed that 2SLS generated consistent estimates of the local average treatment effects (LATE) and biased estimates of the average treatment effects (ATE) across all scenarios. 2SRI approaches, in general, produced biased estimates of both LATE and ATE under all scenarios. 2SRI using generalized residuals minimized the bias in ATE estimates. Use of 2SLS and 2SRI is illustrated in an empirical application estimating the effects of long-term care insurance on a variety of binary health care utilization outcomes among the near-elderly using the Health and Retirement Study. Copyright © 2018 John Wiley & Sons, Ltd.
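A minimal sketch contrasting the two estimators on simulated data, using linear-probability second stages for brevity (the study evaluates nonlinear models and several residual definitions); all data-generating values are assumed, and only point estimates are shown.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
z = rng.binomial(1, 0.5, n)                        # binary instrument
u = rng.normal(size=n)                             # unobserved confounder
d = (0.3 * z + 0.5 * u + rng.normal(size=n) > 0.4).astype(float)   # endogenous binary treatment
y = (0.2 * d + 0.5 * u + rng.normal(size=n) > 0.5).astype(float)   # binary outcome

def ols(X, y):
    # Least-squares coefficients with an intercept prepended
    return np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)[0]

# 2SLS: regress d on z, then regress y on the fitted d-hat
d_hat = np.column_stack([np.ones(n), z]) @ ols(z, d)
beta_2sls = ols(d_hat, y)[1]

# 2SRI: include the observed d plus the first-stage residual in the second stage
resid = d - d_hat
beta_2sri = ols(np.column_stack([d, resid]), y)[1]

print(f"2SLS: {beta_2sls:.3f}   2SRI: {beta_2sri:.3f}")
```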
Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Higuchi, Takahiro; Uchida, Hayato; Imanaka, Kuniyasu
2013-05-07
Older adults cannot safely step over an obstacle unless they correctly estimate that their physical ability is sufficient for a successful step-over action. Thus, incorrect estimation (overestimation) of the ability to step over an obstacle could result in severe accidents such as falls in older adults. We investigated whether older adults tended to overestimate step-over ability compared with young adults and whether such overestimation in stepping over obstacles was associated with falls. Three groups of adults, young-old (age, 60-74 years; n, 343), old-old (age, >74 years; n, 151), and young (age, 18-35 years; n, 71), performed our original step-over test (SOT). In the SOT, participants observed a horizontal bar at a 7-m distance and estimated the maximum height (EH) that they could step over. After estimation, they performed real SOT trials to measure the actual maximum height (AH). We also identified participants who had experienced falls in the 1-year period before the study. Thirty-nine young-old adults (11.4%) and 49 old-old adults (32.5%) failed to step over the bar at EH (overestimation), whereas all young adults succeeded (underestimation). There was a significant negative correlation between actual performance (AH) and self-estimation error (the difference between EH and AH) in the older adults, indicating that older adults with lower AH (SOT ability) tended to overestimate their actual ability (EH > AH) and vice versa. Furthermore, the percentage of participants who overestimated SOT ability among the fallers (28%) was almost double that among the non-fallers (16%), with the fallers showing significantly lower SOT ability than the non-fallers. Older adults appear to be unaware of age-related physical decline and tend to overestimate step-over ability. Both the age-related decline in step-over ability and, more importantly, the overestimation or decreased underestimation of this ability may raise the potential risk of falls.
NASA Astrophysics Data System (ADS)
Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.
2018-07-01
Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei (AGNs) detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared (IR), X-ray, and optically selected AGNs - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGNs are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole coevolution and for cosmological studies.
Direct estimation of evoked hemoglobin changes by multimodality fusion imaging
Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.
2009-01-01
In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, these two modalities are based on distinct physical and biophysical principles leading to both limitations and strengths for each method. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to more robustly estimate hemoglobin changes in comparison to DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
Taking the Missing Propensity Into Account When Estimating Competence Scores
Pohl, Steffi; Carstensen, Claus H.
2014-01-01
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) The missing propensity is unidimensional and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could, thus, pose a threat to the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396) and an adult sample (N = 7,256) from the National Educational Panel Study. Our interest was to investigate whether violations of unidimensionality and the normal distribution assumption severely affect the performance of the model-based approach in terms of differences in ability estimates. We propose a model with a competence dimension, a unidimensional missing propensity and a distributional assumption more flexible than a multivariate normal. Using this model for ability estimation results in different ability estimates compared with a model ignoring missing responses. Implications for ability estimation in large-scale assessments are discussed. PMID:29795844
Carriquiry, Alicia L; Bailey, Regan L; Sempos, Christopher T; Yetley, Elizabeth A
2013-01-01
Background: There are questions about the appropriate method for the accurate estimation of the population prevalence of nutrient inadequacy on the basis of a biomarker of nutrient status (BNS). Objective: We determined the applicability of a statistical probability method to a BNS, specifically serum 25-hydroxyvitamin D [25(OH)D]. The ability to meet required statistical assumptions was the central focus. Design: Data on serum 25(OH)D concentrations in adults aged 19–70 y from the 2005–2006 NHANES were used (n = 3871). An Institute of Medicine report provided reference values. We analyzed key assumptions of symmetry, differences in variance, and the independence of distributions. We also corrected observed distributions for within-person variability (WPV). Estimates of vitamin D inadequacy were determined. Results: We showed that the BNS [serum 25(OH)D] met the criteria to use the method for the estimation of the prevalence of inadequacy. The difference between observations corrected compared with uncorrected for WPV was small for serum 25(OH)D but, nonetheless, showed enhanced accuracy because of correction. The method estimated a 19% prevalence of inadequacy in this sample, whereas misclassification inherent in the use of the more traditional 97.5th percentile high-end cutoff inflated the prevalence of inadequacy (36%). Conclusions: When the prevalence of nutrient inadequacy for a population is estimated by using serum 25(OH)D as an example of a BNS, a statistical probability method is appropriate and more accurate in comparison with a high-end cutoff. Contrary to a common misunderstanding, the method does not overlook segments of the population. The accuracy of population estimates of inadequacy is enhanced by the correction of observed measures for WPV. PMID:23097269
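A minimal sketch of the probability-method logic, with assumed values and a simple shrinkage correction for within-person variability; this is not the paper's exact procedure and the cutoff below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
observed = rng.normal(65, 22, 4000)        # hypothetical serum 25(OH)D values (nmol/L)
within_person_var = 120.0                  # hypothetical within-person variance from replicates
cutoff = 40.0                              # hypothetical EAR-like reference value

# Remove within-person variability by shrinking observations toward the group mean
total_var = observed.var(ddof=1)
shrink = np.sqrt(max(total_var - within_person_var, 0.0) / total_var)
usual = observed.mean() + shrink * (observed - observed.mean())

print(f"prevalence of inadequacy: uncorrected {np.mean(observed < cutoff):.1%}, "
      f"corrected {np.mean(usual < cutoff):.1%}")
```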
O'Connell, Allan F.; Talancy, Neil W.; Bailey, Larissa L.; Sauer, John R.; Cook, Robert; Gilbert, Andrew T.
2006-01-01
Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.
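A minimal sketch of a generic single-season occupancy likelihood (MacKenzie-style, without the covariate structure used in the study), estimating occupancy and detection probability jointly from repeat-visit detection histories on simulated sites:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_sites, n_visits, psi_true, p_true = 200, 5, 0.6, 0.3
occupied = rng.binomial(1, psi_true, n_sites)
detections = rng.binomial(1, p_true, (n_sites, n_visits)) * occupied[:, None]
y = detections.sum(axis=1)                       # detections per site

def neg_log_lik(params):
    psi, p = 1 / (1 + np.exp(-params))           # logit scale -> probabilities
    lik = psi * p**y * (1 - p)**(n_visits - y) + (1 - psi) * (y == 0)
    return -np.log(lik).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"psi = {psi_hat:.2f} (true {psi_true}), p = {p_hat:.2f} (true {p_true})")
```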
Estimating Spectra from Photometry
NASA Astrophysics Data System (ADS)
Kalmbach, J. Bryce; Connolly, Andrew J.
2017-12-01
Measuring the physical properties of galaxies such as redshift frequently requires the use of spectral energy distributions (SEDs). SED template sets are, however, often small in number and cover limited portions of photometric color space. Here we present a new method to estimate SEDs as a function of color from a small training set of template SEDs. We first cover the mathematical background behind the technique before demonstrating our ability to reconstruct spectra based upon colors and then comparing our results to other common interpolation and extrapolation methods. When the photometric filters and spectra overlap, we show that the error in the estimated spectra is reduced by more than 65% compared to the more commonly used techniques. We also show an expansion of the method to wavelengths beyond the range of the photometric filters. Finally, we demonstrate the usefulness of our technique by generating 50 additional SED templates from an original set of 10 and by applying the new set to photometric redshift estimation. We are able to reduce the photometric redshift standard deviation by at least 22.0% and the outlier-rejected bias by over 86.2% compared to the original set for z ≤ 3.
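A minimal sketch of the general idea, assuming hypothetical data shapes and random placeholder values: a Gaussian process regression from broadband colors to a coarsely binned SED using scikit-learn, which is one standard way to set up such an estimator rather than the paper's specific pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
n_templates, n_colors, n_wavelength_bins = 10, 4, 50

colors = rng.normal(size=(n_templates, n_colors))                  # hypothetical template colors
seds = np.abs(rng.normal(size=(n_templates, n_wavelength_bins)))   # hypothetical binned SEDs

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(colors, seds)                               # multi-output regression over wavelength bins

new_colors = rng.normal(size=(1, n_colors))
predicted_sed = gp.predict(new_colors)             # estimated SED for an unseen set of colors
print(predicted_sed.shape)                         # (1, 50)
```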
ERIC Educational Resources Information Center
de la Torre, Jimmy
2009-01-01
For one reason or another, various sources of information, namely, ancillary variables and correlational structure of the latent abilities, which are usually available in most testing situations, are ignored in ability estimation. A general model that incorporates these sources of information is proposed in this article. The model has a general…
Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.
2014-01-01
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.
Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart
2015-03-01
Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra-high field MRI. Copyright © 2014 Elsevier Inc. All rights reserved.
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
Computer Aided Multi-Data Fusion Dismount Modeling
2012-03-22
Timothy J. Veverica; Evan S. Kane; Eric S. Kasischke
2012-01-01
Organic layer consumption during forest fires is hard to quantify. These data suggest that the adventitious root methods developed for reconstructing organic layer depths following wildfires in boreal black spruce forests can also be applied to mixed tamarack forests growing in temperate regions with glacially transported soils.
Social cost impact assessment of pipeline infrastructure projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, John C., E-mail: matthewsj@battelle.org; Allouche, Erez N., E-mail: allouche@latech.edu; Sterling, Raymond L., E-mail: sterling@latech.edu
A key advantage of trenchless construction methods compared with traditional open-cut methods is their ability to install or rehabilitate underground utility systems with limited disruption to the surrounding built and natural environments. The equivalent monetary values of these disruptions are commonly called social costs. Social costs are often ignored by engineers or project managers during project planning and design phases, partially because they cannot be calculated using standard estimating methods. In recent years some approaches for estimating social costs were presented. Nevertheless, the cost data needed for validation of these estimating methods is lacking. Development of such social cost databases can be accomplished by compiling relevant information reported in various case histories. This paper identifies the eight most important social cost categories, presents mathematical methods for calculating them, and summarizes the social cost impacts for two pipeline construction projects. The case histories are analyzed in order to identify trends for the various social cost categories. The effectiveness of the methods used to estimate these values is also discussed. These findings are valuable for pipeline infrastructure engineers making renewal technology selection decisions by providing a more accurate process for the assessment of social costs and impacts. - Highlights: • Identified the eight most important social cost factors for pipeline construction • Presented mathematical methods for calculating those social cost factors • Summarized social cost impacts for two pipeline construction projects • Analyzed those projects to identify trends for the social cost factors.
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean modeling, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
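For readers unfamiliar with the ensemble Kalman approach mentioned above, here is a minimal stochastic EnKF analysis step for a single uncertain parameter standing in for a Manning's n coefficient. The forward model, ensemble size, and error levels are invented placeholders, not the ADCIRC/OSSE setup, and inflation is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_model(n_manning):
    """Placeholder for the coastal model: maps a Manning's n value to two 'observed' water levels."""
    return np.array([2.0 * n_manning + 0.1, 5.0 * n_manning ** 2])

# Prior ensemble of parameter values and synthetic observations from a "true" n
n_ens = 100
params = rng.normal(0.03, 0.01, n_ens)
obs = forward_model(0.025) + rng.normal(0, 0.002, 2)
obs_err = 0.002

# Stochastic EnKF analysis step
predictions = np.array([forward_model(p) for p in params])          # (n_ens, n_obs)
perturbed_obs = obs + rng.normal(0, obs_err, predictions.shape)     # perturbed observations
P_xy = np.cov(params, predictions.T, ddof=1)[0, 1:]                 # parameter-prediction covariances
P_yy = np.cov(predictions.T, ddof=1) + obs_err ** 2 * np.eye(2)     # prediction covariance + R
gain = P_xy @ np.linalg.inv(P_yy)                                   # Kalman gain
params_post = params + (perturbed_obs - predictions) @ gain
print(f"prior mean {params.mean():.4f} -> posterior mean {params_post.mean():.4f}")
```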
In-flight wind identification and soft landing control for autonomous unmanned powered parafoils
NASA Astrophysics Data System (ADS)
Luo, Shuzhen; Tan, Panlong; Sun, Qinglin; Wu, Wannan; Luo, Haowen; Chen, Zengqiang
2018-04-01
For an autonomous unmanned powered parafoil, the ability to perform a final flare manoeuvre against the wind direction allows a considerable reduction of horizontal and vertical velocities at impact, enabling a soft landing for the safe delivery of sensitive payloads; lack of knowledge about the surface-layer winds can compromise the terminal flare manoeuvre. Moreover, unknown or erroneous winds can also prevent the parafoil system from reaching the target area. To realize accurate trajectory tracking and a terminal soft landing in an unknown wind environment, an efficient in-flight wind identification method using only Global Positioning System (GPS) data and a recursive least squares method is proposed to identify the variable wind information online. Furthermore, a novel linear extended state observation filter is proposed to filter the groundspeed of the powered parafoil system computed from the GPS information, providing a best estimate of the present wind during flight. Simulation experiments and real airdrop tests demonstrate the ability of this method to identify the variable wind field in flight, enabling the powered parafoil system to achieve accurate tracking control and a soft landing in an unknown wind field with high landing accuracy and strong wind resistance.
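The abstract identifies the wind from GPS data with recursive least squares. A minimal sketch of that idea, under the simplifying assumption that groundspeed equals the air-relative velocity plus a constant wind (the paper's exact formulation and its extended state observation filter are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
true_wind = np.array([3.0, -1.5])          # m/s, unknown to the estimator
airspeed = 12.0                            # assumed known commanded airspeed

# Recursive least squares with exponential forgetting on the 2D wind vector
wind_hat = np.zeros(2)
P = np.eye(2) * 100.0                      # large initial covariance
lam = 0.98                                 # forgetting factor

for k in range(200):
    heading = 0.05 * k                                       # parafoil slowly turning
    v_air = airspeed * np.array([np.cos(heading), np.sin(heading)])
    v_ground = v_air + true_wind + rng.normal(0, 0.5, 2)     # noisy GPS groundspeed
    y = v_ground - v_air                                     # wind "measurement"
    # RLS update with H = I (each wind axis observed directly)
    K = P @ np.linalg.inv(P + lam * np.eye(2))
    wind_hat = wind_hat + K @ (y - wind_hat)
    P = (np.eye(2) - K) @ P / lam

print("estimated wind:", np.round(wind_hat, 2), "true wind:", true_wind)
```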
Accuracy of Blood Loss Measurement during Cesarean Delivery.
Doctorvaladan, Sahar V; Jelks, Andrea T; Hsieh, Eric W; Thurer, Robert L; Zakowski, Mark I; Lagrew, David C
2017-04-01
Objective This study aims to compare the accuracy of visual, quantitative gravimetric, and colorimetric methods used to determine blood loss during cesarean delivery procedures employing a hemoglobin extraction assay as the reference standard. Study Design In 50 patients having cesarean deliveries, blood loss determined by assays of hemoglobin content on surgical sponges and in suction canisters was compared with obstetricians' visual estimates, a quantitative gravimetric method, and the blood loss determined by a novel colorimetric system. Agreement between the reference assay and other measures was evaluated by the Bland-Altman method. Results Compared with the blood loss measured by the reference assay (470 ± 296 mL), the colorimetric system (572 ± 334 mL) was more accurate than either visual estimation (928 ± 261 mL) or gravimetric measurement (822 ± 489 mL). The correlation between the assay method and the colorimetric system was more predictive (standardized coefficient = 0.951, adjusted R² = 0.902) than either visual estimation (standardized coefficient = 0.700, adjusted R² = 0.479) or the gravimetric determination (standardized coefficient = 0.564, adjusted R² = 0.304). Conclusion During cesarean delivery, measuring blood loss using colorimetric image analysis is superior to visual estimation and a gravimetric method. Implementation of colorimetric analysis may enhance the ability of management protocols to improve clinical outcomes.
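The Bland-Altman agreement analysis referenced above reduces to a mean difference (bias) and 95% limits of agreement; a small sketch with made-up paired measurements follows.

```python
import numpy as np

# Hypothetical paired blood-loss measurements (mL): reference assay vs. another method
reference = np.array([350, 480, 610, 290, 720, 455, 530, 400, 665, 510], dtype=float)
method = np.array([420, 530, 700, 360, 810, 500, 600, 470, 740, 580], dtype=float)

diff = method - reference
mean_pair = (method + reference) / 2.0

bias = diff.mean()                                        # mean over- or under-estimation
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd    # 95% limits of agreement

print(f"bias = {bias:.1f} mL, limits of agreement = [{loa_low:.1f}, {loa_high:.1f}] mL")
# A Bland-Altman plot would show diff against mean_pair with these three horizontal lines.
```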
Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems
Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal
2018-04-06
Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results.
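Two of the bounding models named above (the three-dimensional convex hull, which tends to overestimate, and a voxel count, which is sensitive to resolution and to the surface-only nature of TLS returns) can be illustrated on a synthetic point cloud; this is not the authors' TLS processing chain.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)

# Synthetic "scan": points on the surface of a unit sphere (true solid volume ~4.19)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Convex hull volume: bounds a convex-ish object from above (up to discretization)
hull_volume = ConvexHull(pts).volume

# Voxel volume: count occupied cells; only the shell is occupied for a surface-only
# scan, so this badly underestimates the solid volume, mimicking occlusion effects.
voxel_size = 0.1
idx = np.floor((pts - pts.min(axis=0)) / voxel_size).astype(int)
voxel_volume = len(set(map(tuple, idx))) * voxel_size ** 3

print(f"convex hull: {hull_volume:.2f}, voxel estimate: {voxel_volume:.2f}, true: {4/3*np.pi:.2f}")
```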
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
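As a point of reference for the normal-approximation approach the paper compares against the binomial likelihood, here is a univariate DerSimonian-Laird random-effects pooling of sensitivity on the logit scale with invented 2x2 counts; it ignores the bivariate correlation with specificity discussed in the abstract.

```python
import numpy as np

# Hypothetical per-study counts for a diagnostic test: true positives and false negatives
tp = np.array([45, 30, 80, 12, 60])
fn = np.array([5, 10, 15, 8, 6])

# Normal-approximation approach: logit-transform each study's sensitivity
sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1.0 / tp + 1.0 / fn                 # approximate variance of the logit

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1.0 / var
mu_fixed = np.sum(w * logit) / np.sum(w)
Q = np.sum(w * (logit - mu_fixed) ** 2)
df = len(tp) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled sensitivity
w_re = 1.0 / (var + tau2)
mu_re = np.sum(w_re * logit) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
pooled = 1.0 / (1.0 + np.exp(-mu_re))
print(f"pooled sensitivity = {pooled:.3f}, tau^2 = {tau2:.4f}, SE(logit) = {se_re:.3f}")
```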
Chen, Gongbo; Li, Shanshan; Knibbs, Luke D; Hamm, N A S; Cao, Wei; Li, Tiantian; Guo, Jianping; Ren, Hongyan; Abramson, Michael J; Guo, Yuming
2018-09-15
Machine learning algorithms have very high predictive ability. However, no study has used machine learning to estimate historical concentrations of PM2.5 (particulate matter with aerodynamic diameter ≤ 2.5 μm) at a daily time scale in China at a national level. To estimate daily concentrations of PM2.5 across China during 2005-2016. Daily ground-level PM2.5 data were obtained from 1479 stations across China during 2014-2016. Data on aerosol optical depth (AOD), meteorological conditions and other predictors were downloaded. A random forests model (a non-parametric machine learning algorithm) and two traditional regression models were developed to estimate ground-level PM2.5 concentrations. The best-fit model was then utilized to estimate the daily concentrations of PM2.5 across China with a resolution of 0.1° (≈10 km) during 2005-2016. The daily random forests model showed much higher predictive accuracy than the other two traditional regression models, explaining the majority of spatial variability in daily PM2.5 [10-fold cross-validation (CV) R² = 83%, root mean squared prediction error (RMSE) = 28.1 μg/m³]. At the monthly and annual time scales, the explained variability of average PM2.5 increased up to 86% (RMSE = 10.7 μg/m³ and 6.9 μg/m³, respectively). Taking advantage of a novel application of the modeling framework and the most recent ground-level PM2.5 observations, the machine learning method showed higher predictive ability than previous studies. The random forests approach can be used to estimate historical exposure to PM2.5 in China with high accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
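A minimal sketch of the workflow described above (random forest regression evaluated by 10-fold cross-validation), using synthetic stand-ins for the AOD and meteorological predictors rather than the authors' national dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)

# Synthetic predictors standing in for AOD, temperature, humidity, wind speed, boundary-layer height
n = 2000
X = rng.normal(size=(n, 5))
pm25 = 50 + 20 * X[:, 0] - 8 * X[:, 1] + 5 * X[:, 2] * X[:, 3] + rng.normal(0, 10, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(model, X, pm25, cv=cv)

r2 = r2_score(pm25, pred)
rmse = np.sqrt(mean_squared_error(pm25, pred))
print(f"10-fold CV R^2 = {r2:.2f}, RMSE = {rmse:.1f} ug/m^3")
```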
White, M.A.; de Beurs, K. M.; Didan, K.; Inouye, D.W.; Richardson, A.D.; Jensen, O.P.; O'Keefe, J.; Zhang, G.; Nemani, R.R.; van, Leeuwen; Brown, Jesslyn F.; de Wit, A.; Schaepman, M.; Lin, X.; Dettinger, M.; Bailey, A.S.; Kimball, J.; Schwartz, M.D.; Baldocchi, D.D.; Lee, J.T.; Lauenroth, W.K.
2009-01-01
Shifts in the timing of spring phenology are a central feature of global change research. Long-term observations of plant phenology have been used to track vegetation responses to climate variability but are often limited to particular species and locations and may not represent synoptic patterns. Satellite remote sensing is instead used for continental to global monitoring. Although numerous methods exist to extract phenological timing, in particular start-of-spring (SOS), from time series of reflectance data, a comprehensive intercomparison and interpretation of SOS methods has not been conducted. Here, we assess 10 SOS methods for North America between 1982 and 2006. The techniques include consistent inputs from the 8 km Global Inventory Modeling and Mapping Studies Advanced Very High Resolution Radiometer NDVIg dataset, independent data for snow cover, soil thaw, lake ice dynamics, spring streamflow timing, over 16 000 individual measurements of ground-based phenology, and two temperature-driven models of spring phenology. Compared with an ensemble of the 10 SOS methods, we found that individual methods differed in average day-of-year estimates by ±60 days and in standard deviation by ±20 days. The ability of the satellite methods to retrieve SOS estimates was highest in northern latitudes and lowest in arid, tropical, and Mediterranean ecoregions. The ordinal rank of SOS methods varied geographically, as did the relationships between SOS estimates and the cryospheric/hydrologic metrics. Compared with ground observations, SOS estimates were more related to the first leaf and first flowers expanding phenological stages. We found no evidence for time trends in spring arrival from ground- or model-based data; using an ensemble estimate from two methods that were more closely related to ground observations than other methods, SOS trends could be detected for only 12% of North America and were divided between trends towards both earlier and later spring.
The Big Finger: the second to fourth digit ratio is a predictor of sporting ability in women
Paul, S N; Kato, B S; Hunkin, J L; Vivekanandan, S; Spector, T D
2006-01-01
Background The second to fourth finger length ratio (2d:4d) is thought to be related to diverse traits including cognitive ability, disease susceptibility, and sexuality. Objective To examine the relationship between 2d:4d and sports ability in women. Methods Hand radiographs from 607 women (mean age 54 years) were used to estimate 2d:4d. Ranking of sports ability was on a scale (1–5). Results The highest achieved level of participation in any sport was significantly negatively associated with 2d:4d (b = −4.93, p = 0.01) as was the relationship between 2d:4d and running level (b = −6.81, p = 0.034). Ability in other sports also showed a negative relationship albeit non‐significant. Conclusions These results suggest that a low 2d:4d ratio is related to increased female sports ability. It can be postulated that this ratio may predict potential sports ability. Understanding the mechanisms underpinning this relationship may give important insights into musculoskeletal fitness, health and disease. PMID:17008344
Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya
2017-06-01
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its computational cost becomes very high when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and without requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
Estimation of effective connectivity using multi-layer perceptron artificial neural network.
Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman
2018-02-01
Studies on interactions between brain regions estimate effective connectivity, usually based on causality inferences made from temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mappings and to learn from training examples without the need for detailed knowledge of the underlying system. At any time instant, the past samples of data are placed at the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, a "causality coefficient" is defined based on the network structure, the connecting weights, and the parameters of the hidden-layer activation function. Simulation analysis demonstrates that the method, called "CREANN" (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method shows robustness with respect to the noise level of the data. Furthermore, the estimates are not significantly influenced by the model order (considered time lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can reveal changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate the causal relationship among brain signals.
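CREANN's causality coefficient is derived from the trained network's weights and activation parameters, which the abstract does not fully specify. The sketch below therefore illustrates only the underlying idea in a Granger-style way: train an MLP to predict a target signal from past samples and compare the prediction error with and without the candidate source channel. The signal model and network size are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Two simulated signals where x drives y with a one-sample lag
n = 3000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def lagged_features(signals, lag):
    """Rows containing the past `lag` samples of each signal, aligned to predict time t."""
    length = len(signals[0])
    return np.array([np.concatenate([s[t - lag:t] for s in signals]) for t in range(lag, length)])

lag = 3
X_full = lagged_features([x, y], lag)     # past of x and y
X_self = lagged_features([y], lag)        # past of y only
target = y[lag:]

def prediction_error(X, target):
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:2000], target[:2000])
    resid = target[2000:] - model.predict(X[2000:])
    return float(np.mean(resid ** 2))

err_full = prediction_error(X_full, target)
err_self = prediction_error(X_self, target)
print(f"MSE with x's past: {err_full:.3f}, without: {err_self:.3f}")  # lower with x -> x influences y
```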
Genetic influence on contrast sensitivity in middle-aged male twins.
Cronin-Golomb, Alice; Panizzon, Matthew S; Lyons, Michael J; Franz, Carol E; Grant, Michael D; Jacobson, Kristen C; Eisen, Seth A; Laudate, Thomas M; Kremen, William S
2007-07-01
Contrast sensitivity is strongly associated with daily functioning among older adults, but the genetic and environmental contributions to this ability are unknown. Using the classical twin method, we addressed this issue by examining contrast sensitivity at five spatial frequencies (1.5-18 cycles per degree) in 718 middle-aged male twins from the Vietnam Era Twin Study of Aging (VETSA). Heritability estimates were modest (14-38%), whereas individual-specific environmental influences accounted for 62-86% of the variance. Identifying the types of individual-specific events that impact contrast sensitivity may suggest interventions to modulate this ability and thereby improve overall quality of life as adults age.
Number of discernible object colors is a conundrum.
Masaoka, Kenichiro; Berns, Roy S; Fairchild, Mark D; Moghareh Abed, Farhad
2013-02-01
Widely varying estimates of the number of discernible object colors have been made by using various methods over the past 100 years. To clarify the source of the discrepancies in the previous, inconsistent estimates, the number of discernible object colors is estimated over a wide range of color temperatures and illuminance levels using several chromatic adaptation models, color spaces, and color difference limens. Efficient and accurate models are used to compute optimal-color solids and count the number of discernible colors. A comprehensive simulation reveals limitations in the ability of current color appearance models to estimate the number of discernible colors even if the color solid is smaller than the optimal-color solid. The estimates depend on the color appearance model, color space, and color difference limen used. The fundamental problem lies in the von Kries-type chromatic adaptation transforms, which have an unknown effect on the ranking of the number of discernible colors at different color temperatures.
Lin, Tiger W.; Das, Anup; Krishnan, Giri P.; Bazhenov, Maxim; Sejnowski, Terrence J.
2017-01-01
With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship of multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods by using differential signals based on simulated intracellular voltage recordings. It is equivalent to a regularized AR(2) model. We also expand the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals. PMID:28777719
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945
Steinberg, Idan; Tamir, Gil; Gannot, Israel
2018-03-16
Solid malignant tumors are one of the leading causes of death worldwide. Often, complete removal is not possible, and alternative methods such as focused hyperthermia are used. Precise control of the hyperthermia process is imperative for the successful application of such treatment. To that end, this research presents a fast method that enables the estimation of deep tissue heat distribution by capturing and processing the transient temperature at the boundary based on a bio-heat transfer model. The theoretical model is rigorously developed and thoroughly validated by a series of experiments. A 10-fold improvement is demonstrated in resolution and visibility on tissue-mimicking phantoms. The inverse problem is demonstrated as well, with a successful application of the model to imaging deep-tissue embedded heat sources, thereby allowing the physician to dynamically evaluate the hyperthermia treatment efficiency in real time.
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a count of prediction errors within a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we use a modified Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) method along with Directional Forgetting (DF) and three standard regularized methods.
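The specific REFACM and directional-forgetting updates are not given in the abstract; the sketch below shows plain recursive least squares with exponential forgetting tracking a slowly drifting parameter vector, together with the covariance-matrix eigenvalue that the paper monitors as supplementary information.

```python
import numpy as np

rng = np.random.default_rng(7)

n_steps = 400
theta_hat = np.zeros(2)
P = np.eye(2) * 1000.0
lam = 0.95                                  # exponential forgetting factor

errors, max_eigs = [], []
for k in range(n_steps):
    # True parameters drift slowly over time
    theta_true = np.array([1.0 + 0.002 * k, -0.5 + 0.001 * k])
    phi = rng.normal(size=2)                              # regressor (excitation)
    y = phi @ theta_true + 0.05 * rng.normal()            # noisy measurement

    # RLS update with forgetting factor lambda
    K = P @ phi / (lam + phi @ P @ phi)
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = (P - np.outer(K, phi) @ P) / lam

    errors.append(np.linalg.norm(theta_hat - theta_true))
    max_eigs.append(np.linalg.eigvalsh(P).max())          # covariance "blow-up" indicator

print(f"final parameter error: {errors[-1]:.3f}, largest P eigenvalue: {max_eigs[-1]:.3f}")
```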
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved with a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD: the wavelength set is chosen so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the noise robustness of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low sensitivity to the shape of the distribution. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
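A minimal sketch of the non-parametric retrieval step (not the ADA forward model or the wavelength-selection procedure): build a placeholder kernel matrix, simulate noisy spectral extinction, and invert with scipy's LSQR, whose damping term acts as a simple regularizer.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(8)

# Discretized particle radii (um) and measurement wavelengths (um); the kernel below is
# a smooth placeholder, not the Anomalous Diffraction Approximation used in the paper.
radii = np.linspace(0.1, 2.0, 40)
wavelengths = np.linspace(0.4, 1.0, 25)
A = np.array([[np.exp(-((np.pi * r / w) - 3.0) ** 2 / 4.0) * r ** 2 for r in radii]
              for w in wavelengths])

# "True" log-normal size distribution and simulated noisy extinction measurements
n_true = np.exp(-(np.log(radii) - np.log(0.5)) ** 2 / (2 * 0.4 ** 2))
tau = A @ n_true
tau_noisy = tau * (1.0 + 0.01 * rng.normal(size=tau.shape))

# Non-parametric retrieval of the size distribution with LSQR (damping = regularization)
n_est = lsqr(A, tau_noisy, damp=1e-2)[0]
print("relative retrieval error:",
      float(np.linalg.norm(n_est - n_true) / np.linalg.norm(n_true)))
```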
Comparison of in silico models for prediction of mutagenicity.
Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas
2013-01-01
Using a dataset with more than 6000 compounds, the performance of eight quantitative structure-activity relationship (QSAR) models was evaluated: ACD/Tox Suite, Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) predictor, Derek, Toxicity Estimation Software Tool (T.E.S.T.), TOxicity Prediction by Komputer Assisted Technology (TOPKAT), Toxtree, CEASAR, and SARpy (SAR in python). In general, the results showed a high level of performance. To have a realistic estimate of the predictive ability, the results for chemicals inside and outside the training set for each model were considered. The effect of applicability domain tools (when available) on the prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. Models based on statistical QSAR methods gave better results.
Nowrouzi, Behdin; Lightfoot, Nancy; Carter, Lorraine; Larivière, Michel; Rukholm, Ellen; Schinke, Robert; Belanger-Gardner, Diane
2015-01-01
The aim of this study was to determine: 1) whether quality of work life (QWL), location of cross-training, stress variables, and various demographic factors in nurses are associated with work ability, and 2) whether nursing occupational stress, QWL, and various associated factors are related to nurses' work ability. There is limited research examining the obstetrical nursing environment. Given the amount of time and energy people expend at the workplace, it is crucial for employees to be satisfied with their lives at work. This cross-sectional study was conducted in 2012 in four hospitals in northeastern Ontario, Canada. A stratified random sample of registered nurses (n = 111) was selected. The majority of participants were female (94.6%), ranging in age from 24 to 64 years (M = 41.9, s.d. = 10.2). In the stress and QWL model, QWL home-work support (see Methods for definition; p = 0.015), being cross-trained (see Methods for definition; p = 0.048), and having more than 4 patients per shift (p = 0.024) contributed significantly to the variance in work ability scores. In the logistic regression model, the odds of higher work ability for nurses who received home-work support were estimated to be 1.32 (95% CI, 1.06 to 1.66) times the odds of higher work ability for nurses who did not receive home-work support. Work ability in the work environment of obstetrical nursing is important. To be high functioning, workplaces should maximize the use of their employees' actual and potential skills.
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.
Wang, Fei; Li, Hong; Lu, Mingquan
2017-06-30
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
State estimation for autopilot control of small unmanned aerial vehicles in windy conditions
NASA Astrophysics Data System (ADS)
Poorman, David Paul
The use of small unmanned aerial vehicles (UAVs) both in the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large scale aircraft have been well known and understood for decades, which usually involve a complex array of expensive high accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than state estimation for larger UAVs due to small UAVs employing limited sensor suites due to cost, and the fact that small UAVs are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings level ascent. It is shown that in zero wind, the first method produces significant steady state attitude errors in both a coordinated turn and in a wings level ascent. In Dryden wind, it produces large noise on the estimates for its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown to not exhibit any steady state error in the tested scenarios that is inherent to its design. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.
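The second method described above propagates inertial measurements between GPS fixes and corrects the estimate with GPS information. A heavily simplified one-axis sketch of that predict-and-correct structure is given below (a constant-gain correction rather than a full Kalman filter, and no wind or bias states); note how the uncorrected gyro bias leaves a residual error, consistent with the abstract's discussion.

```python
import numpy as np

rng = np.random.default_rng(9)

dt = 0.01                                  # 100 Hz gyro
gps_period = 100                           # GPS-derived heading available at 1 Hz
gyro_bias = 0.02                           # rad/s, unknown to the filter
gain = 0.2                                 # correction gain applied at each GPS fix

true_heading, est_heading = 0.0, 0.0
errors = []
for k in range(6000):
    true_rate = 0.1 * np.sin(0.002 * k)                     # slowly varying turn rate
    true_heading += true_rate * dt
    gyro = true_rate + gyro_bias + 0.01 * rng.normal()      # biased, noisy gyro

    est_heading += gyro * dt                                 # propagate between GPS fixes
    if k % gps_period == 0:
        gps_heading = true_heading + 0.02 * rng.normal()     # course-over-ground from GPS
        est_heading += gain * (gps_heading - est_heading)    # correct with GPS information

    errors.append(abs(est_heading - true_heading))

print(f"mean heading error: {np.mean(errors):.4f} rad")
```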
Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.
Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm
2018-05-16
This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy rate in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
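The paper's network architecture and training pipeline are not given in the abstract; the following is a deliberately small PyTorch classifier for nine growth-stage classes, with a dummy batch in place of the 9649 training images, only to show the shape of such a model.

```python
import torch
import torch.nn as nn

class GrowthStageNet(nn.Module):
    """Tiny CNN mapping an RGB image to logits over 9 growth-stage classes (placeholder architecture)."""
    def __init__(self, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = GrowthStageNet()
images = torch.randn(4, 3, 128, 128)          # dummy batch of RGB weed images
labels = torch.randint(0, 9, (4,))            # growth-stage classes 0..8

loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                               # one training step's gradient computation
print("logits shape:", tuple(model(images).shape), "loss:", float(loss))
```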
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
NASA Astrophysics Data System (ADS)
Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick
2016-12-01
Cine-MRI is widely used for the analysis of cardiac function in clinical routine, because of its high soft tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray level distribution in cardiac cine-MRI is relatively homogeneous within the myocardium, which can make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure, and implemented a bilateral filter for denoising and edge preservation. The monogenic features distance is used in lieu of the color space distance in the bilateral filter. Results obtained from four realistic simulated sequences outperformed two other state-of-the-art methods even in the presence of noise. The motion estimation errors (end-point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunctions and one healthy volunteer. The derived strain fields compared favorably in their ability to identify myocardial regions with impaired motion.
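Zero-mean normalized cross-correlation, the matching measure chosen above, can be sketched with plain intensity block matching (the paper matches monogenic features rather than raw intensities, and adds a bilateral filter, neither of which is shown here).

```python
import numpy as np

rng = np.random.default_rng(10)

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Synthetic frame pair: the second frame is the first shifted by (2, 3) pixels
frame0 = rng.normal(size=(64, 64))
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))

# Match one block from frame0 within a small search window in frame1
y0, x0, half, search = 30, 30, 8, 5
ref = frame0[y0 - half:y0 + half, x0 - half:x0 + half]

best, best_score = (0, 0), -np.inf
for dy in range(-search, search + 1):
    for dx in range(-search, search + 1):
        cand = frame1[y0 + dy - half:y0 + dy + half, x0 + dx - half:x0 + dx + half]
        score = zncc(ref, cand)
        if score > best_score:
            best, best_score = (dy, dx), score

print("estimated displacement (dy, dx):", best, "ZNCC:", round(best_score, 3))
```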
ERIC Educational Resources Information Center
Wang, Zhen; Yao, Lihua
2013-01-01
The current study used simulated data to investigate the properties of a newly proposed method (Yao's rater model) for modeling rater severity and its distribution under different conditions. Our study examined the effects of rater severity, distributions of rater severity, the difference between item response theory (IRT) models with rater effect…
ERIC Educational Resources Information Center
Shaw, James B.; McCormick, Ernest J.
The study was directed towards the further exploration of the use of attribute ratings as the basis for establishing the job component validity of tests, in particular by using different methods of combining "attribute-based" data with "job analysis" data to form estimates of the aptitude requirements of jobs. The primary focus…
A Method to Estimate Fabric Particle Penetration Performance
2014-09-08
I've Fallen and I Can't Get up: Can High-Ability Students Recover from Early Mistakes in CAT?
ERIC Educational Resources Information Center
Rulison, Kelly L.; Loken, Eric
2009-01-01
A difficult result to interpret in Computerized Adaptive Tests (CATs) occurs when an ability estimate initially drops and then ascends continuously until the test ends, suggesting that the true ability may be higher than implied by the final estimate. This study explains why this asymmetry occurs and shows that early mistakes by high-ability…
Probing Inflation Using Galaxy Clustering On Ultra-Large Scales
NASA Astrophysics Data System (ADS)
Dalal, Roohi; de Putter, Roland; Dore, Olivier
2018-01-01
A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -(5/12)(n_s - 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an Optimal Quadratic Estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.
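The core of an FKP-style measurement, gridding the galaxies, forming the overdensity field, and averaging |δ(k)|² in shells, can be sketched as below; survey geometry, FKP weighting, and shot-noise subtraction are all omitted, and the Poisson catalog has no real clustering, so the recovered power should sit near the shot-noise level box³/N.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy "survey": Poisson-distributed galaxy positions in a periodic box (no clustering)
box, n_grid, n_gal = 1000.0, 64, 20000            # Mpc/h, grid cells per side, galaxies
positions = rng.uniform(0, box, size=(n_gal, 3))

# Paint galaxies onto the grid (nearest grid point) and form the overdensity field
idx = np.floor(positions / box * n_grid).astype(int) % n_grid
counts = np.zeros((n_grid,) * 3)
np.add.at(counts, tuple(idx.T), 1.0)
delta = counts / counts.mean() - 1.0

# FFT and spherically averaged power spectrum P(k)
delta_k = np.fft.rfftn(delta) * (box / n_grid) ** 3
power = np.abs(delta_k) ** 2 / box ** 3
kx = 2 * np.pi * np.fft.fftfreq(n_grid, d=box / n_grid)
ky = 2 * np.pi * np.fft.fftfreq(n_grid, d=box / n_grid)
kz = 2 * np.pi * np.fft.rfftfreq(n_grid, d=box / n_grid)
k = np.sqrt(kx[:, None, None] ** 2 + ky[None, :, None] ** 2 + kz[None, None, :] ** 2)

k_bins = np.linspace(0.01, 0.15, 8)
for lo, hi in zip(k_bins[:-1], k_bins[1:]):
    mask = (k >= lo) & (k < hi)
    print(f"k ~ {(lo + hi) / 2:.3f} h/Mpc  P(k) ~ {power[mask].mean():.1f} (Mpc/h)^3")
# For a pure Poisson sample this scatters around the shot-noise level box^3 / n_gal = 5.0e4.
```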
Dawson, D.K.; Ralph, C. John; Scott, J. Michael
1981-01-01
Work in rugged terrain poses some unique problems that should be considered before research is initiated. Besides the obvious physical difficulties of crossing uneven terrain, topography can influence the bird species' composition of a forest and the observer's ability to detect birds and estimate distances. Census results can also be affected by the slower rate of travel on rugged terrain. Density figures may be higher than results obtained from censuses in similar habitat on level terrain because of the greater likelihood of double-recording of individuals and of recording species that sing infrequently. In selecting a census technique, the researcher should weigh the efficiency and applicability of a technique for the objectives of his study in light of the added difficulties posed by rugged terrain. The variable circular-plot method is probably the most effective technique for estimating bird numbers. Bird counts and distance estimates are facilitated because the observer is stationary, and calculations of species' densities take into account differences in effective area covered amongst stations due to variability in terrain or vegetation structure. Institution of precautions that minimize the risk of injury to field personnel can often enhance the observer's ability to detect birds.
Pos, Edwin; Guevara Andino, Juan Ernesto; Sabatier, Daniel; Molino, Jean-François; Pitman, Nigel; Mogollón, Hugo; Neill, David; Cerón, Carlos; Rivas-Torres, Gonzalo; Di Fiore, Anthony; Thomas, Raquel; Tirado, Milton; Young, Kenneth R; Wang, Ophelia; Sierra, Rodrigo; García-Villacorta, Roosevelt; Zagt, Roderick; Palacios Cuenca, Walter; Aulestia, Milton; Ter Steege, Hans
2017-06-01
With many sophisticated methods available for estimating migration, ecologists face the difficult decision of choosing one for their specific line of work. Here we test and compare several methods, performing sanity and robustness tests, applying them to large-scale data and discussing the results and interpretation. Five methods were selected and compared for their ability to estimate migration from spatially implicit and semi-explicit simulations based on three large-scale field datasets from South America (Guyana, Suriname, French Guiana and Ecuador). Space was incorporated semi-explicitly by a discrete probability mass function for local recruitment, migration from adjacent plots or from a metacommunity. Most methods were able to accurately estimate migration from spatially implicit simulations. For spatially semi-explicit simulations, estimation was shown to be the additive effect of migration from adjacent plots and the metacommunity. Estimation was accurate only when migration from the metacommunity outweighed that from adjacent plots; discriminating between the two, however, proved impossible. We show that migration should be considered more an approximation of the resemblance between communities and the summed regional species pool. Application of migration estimates to simulate field datasets did show reasonably good fits and indicated consistent differences between sets in comparison with earlier studies. We conclude that estimates of migration using these methods are more an approximation of the homogenization among local communities over time rather than a direct measurement of migration and hence have a direct relationship with beta diversity. As beta diversity is the result of many (non-)neutral processes, migration as estimated in a spatially explicit world encompasses not only direct migration but is an ecological aggregate of these processes. The parameter m of neutral models then appears more as an emerging property revealed by neutral theory instead of being an effective mechanistic parameter, and spatially implicit models should be rejected as an approximation of forest dynamics.
Montague, Marjorie; van Garderen, Delinda
2003-01-01
This study investigated students' mathematics achievement, estimation ability, use of estimation strategies, and academic self-perception. Students with learning disabilities (LD), average achievers, and intellectually gifted students (N = 135) in fourth, sixth, and eighth grade participated in the study. They were assessed to determine their mathematics achievement, ability to estimate discrete quantities, knowledge and use of estimation strategies, and perception of academic competence. The results indicated that the students with LD performed significantly lower than their peers on the math achievement measures, as expected, but viewed themselves to be as academically competent as the average achievers did. Students with LD and average achievers scored significantly lower than gifted students on all estimation measures, but they differed significantly from one another only on the estimation strategy use measure. Interestingly, even gifted students did not seem to have a well-developed understanding of estimation and, like the other students, did poorly on the first estimation measure. The accuracy of their estimates seemed to improve, however, when students were asked open-ended questions about the strategies they used to arrive at their estimates. Although students with LD did not differ from average achievers in their estimation accuracy, they used significantly fewer effective estimation strategies. Implications for instruction are discussed.
NASA Technical Reports Server (NTRS)
Frith, James; Barker, Ed; Cowardin, Heather; Buckalew, Brent; Anz-Meado, Phillip; Lederer, Susan
2017-01-01
The NASA Orbital Debris Program Office (ODPO) recently commissioned the Meter Class Autonomous Telescope (MCAT) on Ascension Island with the primary goal of obtaining population statistics of the geosynchronous (GEO) orbital debris environment. To help facilitate this, studies have been conducted using MCAT's known and projected capabilities to estimate the accuracy and timeliness with which it can survey the GEO environment. A simulated GEO debris population is created and sampled at various cadences and run through the Constrained Admissible Region Multi Hypotheses Filter (CAR-MHF). The orbits computed from the results are then compared to the simulated data to assess MCAT's ability to accurately determine the orbits of debris at various sample rates. Additionally, estimates of the rate at which MCAT will be able to produce a complete GEO survey are presented using collected weather data and the proposed observation data collection cadence. The specific methods and results are presented here.
Advances and applications of occupancy models
Bailey, Larissa; MacKenzie, Darry I.; Nichols, James D.
2013-01-01
Summary: The past decade has seen an explosion in the development and application of models aimed at estimating species occurrence and occupancy dynamics while accounting for possible non-detection or species misidentification. We discuss some recent occupancy estimation methods and the biological systems that motivated their development. Collectively, these models offer tremendous flexibility, but simultaneously place added demands on the investigator. Unlike many mark–recapture scenarios, investigators utilizing occupancy models have the ability, and responsibility, to define their sample units (i.e. sites), replicate sampling occasions, time period over which species occurrence is assumed to be static and even the criteria that constitute ‘detection’ of a target species. Subsequent biological inference and interpretation of model parameters depend on these definitions and the ability to meet model assumptions. We demonstrate the relevance of these definitions by highlighting applications from a single biological system (an amphibian–pathogen system) and discuss situations where the use of occupancy models has been criticized. Finally, we use these applications to suggest future research and model development.
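As an illustration of the basic model class being reviewed, the sketch below writes down the single-season occupancy likelihood with constant occupancy (psi) and detection (p) probabilities and fits it by maximum likelihood; the detection histories are hypothetical, and the covariate, misidentification and dynamics extensions discussed above are not included.

```python
import numpy as np
from scipy.optimize import minimize

def occupancy_nll(params, histories):
    """Negative log-likelihood of the basic single-season occupancy model with
    constant occupancy (psi) and detection (p), parameterized on the logit scale."""
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(params)))
    nll = 0.0
    for y in histories:                      # y = 0/1 detections over repeat visits
        y = np.asarray(y)
        lik = psi * np.prod(p ** y * (1 - p) ** (1 - y))
        if y.sum() == 0:                     # never detected: site may be unoccupied
            lik += 1 - psi
        nll -= np.log(lik)
    return nll

histories = [[1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0], [1, 1, 0]]  # hypothetical sites
fit = minimize(occupancy_nll, x0=[0.0, 0.0], args=(histories,))
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(psi_hat, p_hat)
```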
Simulation of the Continuous Casting and Cooling Behavior of Metallic Glasses
Pei, Zhipu; Ju, Dongying
2017-01-01
The development of the melt spinning technique for preparation of metallic glasses was summarized. The limitations as well as restrictions of the melt spinning embodiments were also analyzed. As an improvement and variation of the melt spinning method, the vertical-type twin-roll casting (VTRC) process was discussed. Because the thermal history experienced by the cast metal largely determines the quality of the final product, the cooling rate in the quenching process is believed to have a significant effect on glass formation. In order to estimate the ability to produce metallic glasses by the VTRC method, temperature and flow phenomena of the melt in the molten pool were computed, and cooling rates under different casting conditions were calculated from the simulation results. Considering the fluid character of the casting process, the material derivative method based on continuum theory was adopted in the cooling rate calculation. Results show that the VTRC process is well suited to the continuous casting of metallic glassy ribbons. PMID:28772779
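The cooling-rate quantity mentioned above, obtained with the material derivative method, is DT/Dt = ∂T/∂t + v·∇T evaluated on the simulated temperature and velocity fields. The snippet below is a generic finite-difference sketch of that calculation on a 2-D grid, not the paper's CFD implementation; the grid spacing, time step and field names are assumptions.

```python
import numpy as np

def cooling_rate(T_prev, T_curr, vx, vy, dx, dy, dt):
    """Material-derivative cooling rate DT/Dt = dT/dt + vx*dT/dx + vy*dT/dy on a
    2-D grid (axis 0 = x, axis 1 = y), from two temperature snapshots and the
    velocity field; a generic finite-difference sketch, not the paper's solver."""
    dTdt = (T_curr - T_prev) / dt
    dTdx, dTdy = np.gradient(T_curr, dx, dy)
    return dTdt + vx * dTdx + vy * dTdy
```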
Ginsburg, Shoshana B.; Taimen, Pekka; Merisaari, Harri; Vainio, Paula; Boström, Peter J.; Aronen, Hannu J.; Jambor, Ivan; Madabhushi, Anant
2017-01-01
Purpose To develop and evaluate a prostate-based method (PBM) for estimating pharmacokinetic parameters on dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) by leveraging inherent differences in pharmacokinetic characteristics between the peripheral zone (PZ) and transition zone (TZ). Materials and Methods This retrospective study, approved by the Institutional Review Board, included 40 patients who underwent a multiparametric 3T MRI examination and subsequent radical prostatectomy. A two-step PBM for estimating pharmacokinetic parameters exploited the inherent differences in pharmacokinetic characteristics associated with the TZ and PZ. First, the reference region model was implemented to estimate ratios of Ktrans between normal TZ and PZ. Subsequently, the reference region model was leveraged again to estimate values for Ktrans and ve for every prostate voxel. The parameters of PBM were compared with those estimated using an arterial input function (AIF) derived from the femoral arteries. The ability of the parameters to differentiate prostate cancer (PCa) from benign tissue was evaluated on a voxel and lesion level. Additionally, the effect of temporal downsampling of the DCE MRI data was assessed. Results Significant differences (P < 0.05) in PBM Ktrans between PCa lesions and benign tissue were found in 26/27 patients with TZ lesions and in 33/38 patients with PZ lesions; significant differences in AIF-based Ktrans occurred in 26/27 and 30/38 patients, respectively. The 75th and 100th percentiles of Ktrans and ve estimated using PBM positively correlated with lesion size (P < 0.05). Conclusion Pharmacokinetic parameters estimated via PBM outperformed AIF-based parameters in PCa detection. PMID:27285161
Estimating tree bole volume using artificial neural network models for four species in Turkey.
Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V
2010-01-01
Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used and produced the Back propagation (BPANN) and the Cascade Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each one of them are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
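Two of the building blocks mentioned above are easy to sketch: the reference volume obtained by summing short sections with Smalian's formula, and a small feed-forward network regressing volume on simple tree measurements. The example below uses scikit-learn's MLPRegressor as a stand-in for the BPANN/CCANN architectures and entirely hypothetical diameter, height and volume values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def smalian_volume(diams_cm, section_len_m=1.0):
    """Reference bole volume (m^3): sum of section volumes V = L*(A_bottom + A_top)/2,
    with cross-sectional areas A = pi*d^2/4 measured at the ends of ~1 m sections."""
    a = np.pi * (np.asarray(diams_cm, float) / 100.0) ** 2 / 4.0
    return float(np.sum(section_len_m * (a[:-1] + a[1:]) / 2.0))

print(smalian_volume([32, 28, 24, 19, 13, 6]))     # diameters measured every metre

# Hypothetical training data: dbh (cm) and total height (m) -> measured volume (m^3).
X = np.array([[22, 14], [31, 18], [40, 22], [27, 16], [35, 20], [45, 24]], float)
y = np.array([0.28, 0.62, 1.15, 0.45, 0.85, 1.55])

# A small feed-forward network standing in for the BPANN of the abstract.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0))
net.fit(X, y)
print(net.predict([[30, 17]]))                     # predicted volume for a new tree
```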
The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?
Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping
2013-01-01
Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
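The two fitting strategies being compared can be reproduced in a few lines: linear regression on log-transformed data (which assumes multiplicative, log-normal error) versus nonlinear regression of a power law on the raw scale (additive error). The data below are hypothetical, not the 159 trees used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stem diameter (cm) and coarse-root biomass (kg) pairs.
d = np.array([5, 8, 12, 16, 22, 30, 38, 45], float)
m = np.array([0.6, 1.9, 5.2, 11.0, 24.0, 55.0, 98.0, 150.0])

# Method 1: linear regression on log-transformed data, ln m = ln a + b ln d.
b_lr, ln_a_lr = np.polyfit(np.log(d), np.log(m), 1)

# Method 2: nonlinear regression of m = a * d^b on the raw scale.
(a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x**b, d, m, p0=(0.1, 2.0))

print("LR :", np.exp(ln_a_lr), b_lr)
print("NLR:", a_nlr, b_nlr)
```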
McCommis, Kyle S.; Koktzoglou, Ioannis; Zhang, Haosen; Goldstein, Thomas A.; Northrup, Benjamin E.; Li, Debiao; Gropler, Robert J.; Zheng, Jie
2010-01-01
Myocardial oxygen extraction fraction (OEF) during hyperemia can be estimated using a double-inversion-recovery (DIR) prepared T2-weighted black-blood sequence. Severe irregular ECG-triggering due to elevated heart rate and/or arrhythmias may render it difficult to adequately suppress the flowing left ventricle blood signal and thus potentially cause errors in the estimates of myocardial OEF. Thus, the goal of this study was to evaluate another black-blood technique, a diffusion-weighted (DW)-prepared TSE sequence for its ability to determine regional myocardial OEF during hyperemia. Control dogs and dogs with acute coronary artery stenosis were imaged with both the DIR- and DW-prepared TSE sequences at rest and during either dipyridamole or dobutamine hyperemia. Validation of MRI OEF estimates was performed using blood sampling from the artery and coronary sinus in control dogs. The two methods showed comparable correlations with blood sampling results (R2 = 0.9). Similar OEF estimations for all dogs were observed except for the group of dogs with severe coronary stenosis during dobutamine stress. In these dogs, the DW method provided more physiologically reasonable OEF (hyperemic OEF = 0.75 ± 0.08 vs resting OEF of 0.6) than the DIR method (hyperemic OEF = 0.56 ± 0.10). DW-preparation may be a valuable alternative for more accurate oxygenation measurements during irregular ECG-triggering. PMID:20512871
Remote sensing of Myriophyllum spicatum L. in a shallow, eutrophic lake
NASA Technical Reports Server (NTRS)
Gustafson, T. D.; Adams, M. S.
1973-01-01
An aerial 35 mm system was used for the acquisition of vertical color and color infrared imagery of the submergent aquatic macrophytes of Lake Wingra, Wisconsin. A method of photographic interpretation of stem density classes is tested for its ability to make standing crop biomass estimates of Myriophyllum spicatum. The results of film image density analysis are significantly correlated with stem densities and standing crop biomass of Myriophyllum and with the biomass of Oedogonium mats. Photographic methods are contrasted with conventional harvest procedures for efficiency and accuracy.
Dose-volume histogram prediction using density estimation.
Skarpman Munter, Johanna; Sjölund, Jens
2015-09-07
Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
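A minimal sketch of the probabilistic pipeline described above, using a simple 2-D histogram as the density estimate, the signed distance to the target as the single predictive feature, and hypothetical voxel data; the published method's choice of density estimator may differ.

```python
import numpy as np

def predict_dvh(train_dist, train_dose, new_dist, dose_edges, n_dist_bins=25):
    """Histogram-based stand-in for the density-estimation step: estimate
    p(dose | signed distance) from training voxels, marginalize it over the new
    patient's distance distribution, and integrate to a cumulative DVH."""
    lo = min(train_dist.min(), new_dist.min())
    hi = max(train_dist.max(), new_dist.max())
    dist_edges = np.linspace(lo, hi, n_dist_bins + 1)
    joint, _, _ = np.histogram2d(train_dist, train_dose,
                                 bins=[dist_edges, dose_edges], density=True)
    cond = joint / np.maximum(joint.sum(axis=1, keepdims=True), 1e-12)  # p(dose | dist)
    p_dist, _ = np.histogram(new_dist, bins=dist_edges)
    p_dist = p_dist / p_dist.sum()                                      # p(dist), new patient
    p_dose = p_dist @ cond                                              # marginalize over dist
    return 1.0 - np.cumsum(p_dose)        # fraction of volume receiving at least each dose

# Hypothetical voxels: dose falls off with signed distance outside the target.
rng = np.random.default_rng(0)
train_dist = rng.uniform(-2, 30, 5000)                      # mm from the target boundary
train_dose = 50 * np.exp(-np.maximum(train_dist, 0) / 10) + rng.normal(0, 2, 5000)
new_dist = rng.uniform(-2, 30, 2000)
dvh = predict_dvh(train_dist, train_dose, new_dist, dose_edges=np.linspace(0, 60, 31))
```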
Green, Linda V; Savin, Sergei; Lu, Yina
2013-01-01
Most existing estimates of the shortage of primary care physicians are based on simple ratios, such as one physician for every 2,500 patients. These estimates do not consider the impact of such ratios on patients' ability to get timely access to care. They also do not quantify the impact of changing patient demographics on the demand side and alternative methods of delivering care on the supply side. We used simulation methods to provide estimates of the number of primary care physicians needed, based on a comprehensive analysis considering access, demographics, and changing practice patterns. We show that the implementation of some increasingly popular operational changes in the ways clinicians deliver care-including the use of teams or "pods," better information technology and sharing of data, and the use of nonphysicians-have the potential to offset completely the increase in demand for physician services while improving access to care, thereby averting a primary care physician shortage.
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. of error = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. of error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
Methods for estimating dispersal probabilities and related parameters using marked animals
Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.
2001-01-01
Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.
NASA Astrophysics Data System (ADS)
Kulisek, J. A.; Schweppe, J. E.; Stave, S. C.; Bernacki, B. E.; Jordan, D. V.; Stewart, T. N.; Seifert, C. E.; Kernan, W. J.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this challenge, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements without the need for human analyst intervention. The method can be calibrated using radiation transport simulations along with data from previous flights over areas for which the isotopic composition need not be known. Over the examined measured and simulated data sets, the method generated accurate background estimates even in the presence of a strong, 60Co source. The potential to track large and abrupt changes in background spectral shape and magnitude was demonstrated. The method can be implemented fairly easily in most modern computing languages and environments.
An evaluation of methods for scaling aircraft noise perception
NASA Technical Reports Server (NTRS)
Ollerhead, J. B.
1971-01-01
One hundred and twenty recorded sounds, including jets, turboprops, piston engined aircraft and helicopters were rated by a panel of subjects in a paired comparison test. The results were analyzed to evaluate a number of noise rating procedures in terms of their ability to accurately estimate both relative and absolute perceived noise levels. It was found that the complex procedures developed by Stevens, Zwicker and Kryter are superior to other scales. The main advantage of these methods over the more convenient weighted sound pressure level scales lies in their ability to cope with signals over a wide range of bandwidth. However, Stevens' loudness level scale and the perceived noise level scale both overestimate the growth of perceived level with intensity because of an apparent deficiency in the band level summation rule. A simple correction is proposed which will enable these scales to properly account for the experimental observations.
Computing moment to moment BOLD activation for real-time neurofeedback
Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.
2013-01-01
Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
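The core of the feedback computation, predicting the new acquisition from a nuisance GLM fitted to the data so far and scaling the residual into a statistic, can be sketched as below. The published method updates the GLM incrementally; this sketch refits from scratch at each acquisition and uses synthetic regressors, so it approximates the idea rather than reproducing the authors' implementation.

```python
import numpy as np

def feedback_statistic(y_hist, X_hist, x_new, y_new):
    """Fit a GLM of nuisance regressors to the time series acquired so far,
    predict the new acquisition, and scale the residual by the residual SD
    to obtain a unit-variance feedback statistic."""
    beta, *_ = np.linalg.lstsq(X_hist, y_hist, rcond=None)
    dof = max(len(y_hist) - X_hist.shape[1], 1)
    sigma = np.sqrt(np.sum((y_hist - X_hist @ beta) ** 2) / dof)
    return (y_new - x_new @ beta) / max(sigma, 1e-12)

# Hypothetical usage: 40 volumes of ROI mean intensity, constant + linear drift as nuisance terms.
rng = np.random.default_rng(0)
t = np.arange(40, dtype=float)
X = np.column_stack([np.ones_like(t), t])
y = 100.0 + 0.05 * t + rng.normal(0, 0.5, size=40)
print(feedback_statistic(y, X, np.array([1.0, 40.0]), y_new=102.5))
```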
Uncertainty Estimation Improves Energy Measurement and Verification Procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Travis; Price, Phillip N.; Sohn, Michael D.
2014-05-14
Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
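The cross-validation idea is straightforward to sketch: hold out folds of the meter data, collect out-of-sample residuals from the baseline regression, and use their spread as an empirical uncertainty band. The predictors and data below are synthetic, and the regression is a plain linear model rather than the report's time-of-week-and-temperature formulation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cv_uncertainty_band(X, y, n_splits=5):
    """Out-of-sample residuals of the baseline model collected by k-fold
    cross-validation, summarized as a central 90% uncertainty band."""
    resid = np.empty(len(y))
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[train], y[train])
        resid[test] = y[test] - model.predict(X[test])
    return np.percentile(resid, [5, 95])

# Hypothetical short-interval meter data: outdoor temperature and hour-of-week as predictors.
rng = np.random.default_rng(1)
temp = rng.uniform(0, 30, 2000)
hour = rng.integers(0, 168, 2000).astype(float)
X = np.column_stack([temp, hour])
y = 50 + 2.0 * temp + 0.1 * hour + rng.normal(0, 5, 2000)
print(cv_uncertainty_band(X, y))
```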
Microarray image analysis: background estimation using quantile and morphological filters.
Bengtsson, Anders; Bengtsson, Henrik
2006-02-28
In a microarray experiment the difference in expression between genes on the same slide is up to 10^3-fold or more. At low expression, even a small error in the estimate will have great influence on the final test and reference ratios. In addition to the true spot intensity the scanned signal consists of different kinds of noise referred to as background. In order to assess the true spot intensity background must be subtracted. The standard approach to estimate background intensities is to assume they are equal to the intensity levels between spots. In the literature, morphological opening is suggested to be one of the best methods for estimating background this way. This paper examines fundamental properties of rank and quantile filters, which include morphological filters at the extremes, with focus on their ability to estimate between-spot intensity levels. The bias and variance of these filter estimates are driven by the number of background pixels used and their distributions. A new rank-filter algorithm is implemented and compared to methods available in Spot by CSIRO and GenePix Pro by Axon Instruments. Spot's morphological opening has a mean bias between -47 and -248 compared to a bias between 2 and -2 for the rank filter, and the variability of the morphological opening estimate is 3 times higher than for the rank filter. The mean bias of Spot's second method, morph.close.open, is between -5 and -16 and the variability is approximately the same as for morphological opening. The variability of GenePix Pro's region-based estimate is more than ten times higher than the variability of the rank-filter estimate and with slightly more bias. The large variability is because the size of the background window changes with spot size. To overcome this, a non-adaptive region-based method is implemented. Its bias and variability are comparable to that of the rank filter. The performance of more advanced rank filters is equal to the best region-based methods. However, in order to get unbiased estimates these filters have to be implemented with great care. The performance of morphological opening is in general poor with a substantial spatial-dependent bias.
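The two estimators being contrasted, morphological opening and a rank/quantile filter, are both available as neighborhood filters in SciPy, which makes the comparison easy to reproduce on synthetic data; the window size, percentile and image below are illustrative choices, not the paper's algorithm or settings.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.gamma(shape=2.0, scale=50.0, size=(256, 256))   # synthetic between-spot background
img[64:72, 64:72] += 2000.0                               # one bright "spot"

# Morphological opening (erosion followed by dilation), the Spot-style estimate.
bg_opening = ndimage.grey_opening(img, size=(31, 31))

# Rank/quantile filter: a local percentile rather than the local minimum that
# drives the opening; the paper's algorithm is more careful than this one-liner.
bg_rank = ndimage.percentile_filter(img, percentile=40, size=31)

print(img.mean(), bg_opening.mean(), bg_rank.mean())
```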
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morise, A.P.; Duval, R.D.
To determine whether recent refinements in Bayesian methods have led to improved diagnostic ability, 3 methods using Bayes' theorem and the independence assumption for estimating posttest probability after exercise stress testing were compared. Each method differed in the number of variables considered in the posttest probability estimate (method A = 5, method B = 6 and method C = 15). Method C is better known as CADENZA. There were 436 patients (250 men and 186 women) who underwent stress testing (135 had concurrent thallium scintigraphy) followed within 2 months by coronary arteriography. Coronary artery disease (CAD; at least 1 vessel with greater than or equal to 50% diameter narrowing) was seen in 169 (38%). Mean pretest probabilities using each method were not different. However, the mean posttest probabilities for CADENZA were significantly greater than those for method A or B (p less than 0.0001). Each decile of posttest probability was compared to the actual prevalence of CAD in that decile. At posttest probabilities less than or equal to 20%, there was underestimation of CAD. However, at posttest probabilities greater than or equal to 60%, there was overestimation of CAD by all methods, especially CADENZA. Comparison of sensitivity and specificity at every fifth percentile of posttest probability revealed that CADENZA was significantly more sensitive and less specific than methods A and B. Therefore, at lower probability thresholds, CADENZA was a better screening method. However, methods A or B still had merit as a means to confirm higher probabilities generated by CADENZA (especially greater than or equal to 60%).
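Under the independence assumption used by all three methods, the posttest probability is obtained by converting the pretest probability to odds and multiplying by the likelihood ratio of each test finding. The likelihood ratios in the example below are placeholders for illustration, not the values used in methods A, B or CADENZA.

```python
def posttest_probability(pretest_p, likelihood_ratios):
    """Sequential Bayes with the independence assumption: posttest odds equal
    pretest odds times the product of the likelihood ratios of the findings."""
    odds = pretest_p / (1.0 - pretest_p)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# e.g. 30% pretest probability combined with two positive findings
# (placeholder likelihood ratios of 3 and 5) gives roughly 0.87.
print(posttest_probability(0.30, [3.0, 5.0]))
```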
Genetic Influence on Contrast Sensitivity in Middle-Aged Male Twins
Cronin-Golomb, Alice; Panizzon, Matthew S.; Lyons, Michael J.; Franz, Carol E.; Grant, Michael D.; Jacobson, Kristen C.; Eisen, Seth A.; Laudate, Thomas M.; Kremen, William S.
2007-01-01
Contrast sensitivity is strongly associated with daily functioning among older adults, but the genetic and environmental contributions to this ability are unknown. Using the classical twin method, we addressed this issue by examining contrast sensitivity at five spatial frequencies (1.5–18 cycles per degree) in 718 middle-aged male twins from the Vietnam Era Twin Study of Aging (VETSA). Heritability estimates were modest (14%–38%), whereas individual-specific environmental influences accounted for 62%–86% of the variance. Identifying the types of individual-specific events that impact contrast sensitivity may suggest interventions to modulate this ability and thereby improve overall quality of life as adults age. PMID:17604073
Integrating resource selection into spatial capture-recapture models for large carnivores
Proffitt, Kelly M.; Goldberg, Joshua; Hebblewite, Mark; Russell, Robin E.; Jimenez, Ben; Robinson, Hugh S.; Pilgrim, Kristine; Schwartz, Michael K.
2015-01-01
Wildlife managers need reliable methods to estimate large carnivore densities and population trends; yet large carnivores are elusive, difficult to detect, and occur at low densities making traditional approaches intractable. Recent advances in spatial capture-recapture (SCR) models have provided new approaches for monitoring trends in wildlife abundance and these methods are particularly applicable to large carnivores. We applied SCR models in a Bayesian framework to estimate mountain lion densities in the Bitterroot Mountains of west central Montana. We incorporate an existing resource selection function (RSF) as a density covariate to account for heterogeneity in habitat use across the study area and include data collected from harvested lions. We identify individuals through DNA samples collected by (1) biopsy darting mountain lions detected in systematic surveys of the study area, (2) opportunistically collecting hair and scat samples, and (3) sampling all harvested mountain lions. We included 80 DNA samples collected from 62 individuals in the analysis. Including information on predicted habitat use as a covariate on the distribution of activity centers reduced the median estimated density by 44%, the standard deviation by 7%, and the width of 95% credible intervals by 10% as compared to standard SCR models. Within the two management units of interest, we estimated a median mountain lion density of 4.5 mountain lions/100 km2 (95% CI = 2.9, 7.7) and 5.2 mountain lions/100 km2 (95% CI = 3.4, 9.1). Including harvested individuals (dead recovery) did not create a significant bias in the detection process by introducing individuals that could not be detected after removal. However, the dead recovery component of the model did have a substantial effect on results by increasing sample size. The ability to account for heterogeneity in habitat use provides a useful extension to SCR models, and will enhance the ability of wildlife managers to reliably and economically estimate density of wildlife populations, particularly large carnivores.
Why are they late? Timing abilities and executive control among students with learning disabilities.
Grinblat, Nufar; Rosenblum, Sara
2016-12-01
While a deficient ability to perform daily tasks on time has been reported among students with learning disabilities (LD), the underlying mechanism behind their 'being late' is still unclear. This study aimed to evaluate the organization in time, time estimation abilities, and actual performance time pertaining to specific daily activities, as well as the executive functions of students with LD in comparison to those of controls, and to assess the relationships between these domains among each group. The participants were 27 students with LD, aged 20-30, and 32 gender- and age-matched controls who completed the Time Organization and Participation Scale (TOPS) and the Behavioral Rating Inventory of Executive Function-Adult version (BRIEF-A). In addition, their ability to estimate the time needed to complete the task of preparing a cup of coffee as well as their actual performance time were evaluated. The results indicated that in comparison to controls, students with LD showed significantly inferior organization in time (TOPS) and executive function abilities (BRIEF-A). Furthermore, their time estimation abilities were significantly inferior and they required significantly more time to prepare a cup of coffee. Regression analysis identified the variables that predicted organization in time and task performance time among each group. The significance of the results for both theoretical and clinical implications is discussed. What this paper adds? This study examines the underlying mechanism of the phenomenon of being late among students with LD. Following a recent call for using ecologically valid assessments, the functional daily ability of students with LD to prepare a cup of coffee and to organize time was investigated. Furthermore, their time estimation and executive control abilities were examined as a possible underlying mechanism for their lateness. Although previous studies have indicated executive control deficits among students with LD, to our knowledge, this is the first analysis of the relationships between their executive control and time estimation deficits and their influence upon their daily function and organization in time abilities. Our findings demonstrate that students with LD need more time in order to execute simple daily activities, such as preparing a cup of coffee. Deficient working memory, retrospective time estimation ability and inhibition predicted their performance time and organization in time abilities. Therefore, this paper sheds light on the mechanism behind daily performance in time among students with LD and emphasizes the need for future development of focused intervention programs to meet their unique needs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Command Recognition of Robot with Low Dimension Whole-Body Haptic Sensor
NASA Astrophysics Data System (ADS)
Ito, Tatsuya; Tsuji, Toshiaki
The authors have developed “haptic armor”, a whole-body haptic sensor that has an ability to estimate contact position. Although it is developed for safety assurance of robots in human environment, it can also be used as an interface. This paper proposes a command recognition method based on finger trace information. This paper also discusses some technical issues for improving recognition accuracy of this system.
A Framework for Measuring Low-Value Care.
Miller, George; Rhyan, Corwin; Beaudin-Seiler, Beth; Hughes-Cromwick, Paul
2018-04-01
It has been estimated that more than 30% of health care spending in the United States is wasteful, and that low-value care, which drives up costs unnecessarily while increasing patient risk, is a significant component of wasteful spending. There is a need for the ability to measure the magnitude of low-value care nationwide, to identify the clinical services that are the greatest contributors to waste, and to track progress toward eliminating low-value use of these services. Such an ability could provide valuable input to the efforts of policymakers and health systems to improve efficiency. We reviewed existing methods that could contribute to measuring low-value care and developed an integrated framework that combines multiple methods to comprehensively estimate and track the magnitude and principal sources of clinical waste. We also identified a process and needed research for implementing the framework. A comprehensive methodology for measuring and tracking low-value care in the United States would provide an important contribution toward reducing waste. Implementation of the framework described in this article appears feasible, and the proposed research program will allow moving incrementally toward full implementation while providing a near-term capability for measuring low-value care that can be enhanced over time. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Deriving Global Discharge Records from SWOT Observations
NASA Astrophysics Data System (ADS)
Pan, M.; Fisher, C. K.; Wood, E. F.
2017-12-01
River flows are poorly monitored in many regions of the world, hindering our ability to accurately estimate global water usage, and thus to estimate global water and energy budgets or the variability in the global water cycle. Recent developments in satellite remote sensing, such as water surface elevations from radar altimetry or surface water extents from visible/infrared imagery, aim to fill this void; however, the streamflow estimates derived from these are inherently intermittent in both space and time. There is then a need for new methods that are able to derive spatially and temporally continuous records of discharge from the many available data sources. One particular application of this will be the Surface Water and Ocean Topography (SWOT) mission, which is designed to provide global observations of water surface elevation and slope from which river discharge can be estimated. Within the 21-day repeat cycle, a river reach will be observed 2-4 times on average. Due to the relationship between the basin orientation and the orbit, these observations are not evenly distributed in time or space. In this study, we investigate how SWOT will observe global river basins and how the temporal and spatial sampling impacts our ability to reconstruct discharge records. River flows can be estimated throughout a basin by assimilating SWOT observations using the Inverse Streamflow Routing (ISR) model of Pan and Wood [2013]. This method is applied to 32 global basins with different geometries and crossing patterns for the future orbit, assimilating theoretical SWOT-retrieved "gauges". Results show that the model is able to reconstruct basin-wide discharge from SWOT observations alone; however, the performance varies significantly across basins and is driven by the orientation, flow distance, and travel time in each, as well as the sensitivity of the reconstruction method to errors in the satellite retrieval. These properties are combined to estimate the "observability" of each basin. We then apply this metric globally and relate it to the discharge reconstruction performance to gain a better understanding of the impact that spatially and temporally sparse observations, such as those from SWOT, may have in basins with limited in-situ observations. Pan, M; Wood, E F 2013 Inverse streamflow routing, HESS 17(11):4577-4588
Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias
2018-01-22
Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities (e.g., mental calculation and ratio processing skills) were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. In particular, being able to make accurate risk estimations seemed to contribute to superior choices. We recommend that approximation skills and approximate number processing be a subject of future investigations on decision making under risk.
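Rapid dot comparisons of the kind used in the paradigm are commonly modeled with a Weber-fraction account of the approximate number system, in which accuracy depends on the ratio of the two quantities. The sketch below implements that generic psychophysical model with an assumed Weber fraction; it is not the scoring procedure used in the study.

```python
import numpy as np
from scipy.stats import norm

def p_correct_dot_comparison(n1, n2, w=0.2):
    """Standard approximate-number-system model: internal magnitudes are noisy
    with SD proportional to n (Weber fraction w), so
    P(correct) = Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2)))."""
    return norm.cdf(abs(n1 - n2) / (w * np.sqrt(n1**2 + n2**2)))

print(p_correct_dot_comparison(16, 20))   # easy ratio, high accuracy
print(p_correct_dot_comparison(19, 20))   # hard ratio, closer to chance
```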
Cunningham, Marc; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana
2015-01-01
Background: Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Methods: Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. Results: For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Conclusions: Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. PMID:26374805
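The simplest of the three models, direct estimation through CYP conversion factors, divides couple-years of protection derived from logistics data by the number of women of reproductive age. The factors and input numbers below are commonly cited illustrative values, not those used in the analysis, and should be checked against current guidance.

```python
# Commonly cited CYP conversion factors (units distributed per couple-year of
# protection); treat these as illustrative assumptions.
CYP_FACTORS = {"condom": 120, "pill_cycle": 15, "injectable_3mo": 4}

def cyp_prevalence(units_distributed, method, women_reproductive_age):
    """Direct CYP-based estimate of public-sector prevalence for one method:
    couple-years of protection as a percentage of women of reproductive age."""
    cyp = units_distributed / CYP_FACTORS[method]
    return 100.0 * cyp / women_reproductive_age

# Hypothetical example: 2.4 million pill cycles distributed, 1.5 million women
# of reproductive age -> roughly a 10.7% public-sector prevalence for the pill.
print(cyp_prevalence(2_400_000, "pill_cycle", 1_500_000))
```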
New Analysis Methods Estimate a Critical Property of Ethanol Fuel Blends
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-03-01
To date there have been no adequate methods for measuring the heat of vaporization of complex mixtures. This research developed two separate methods for measuring this key property of ethanol and gasoline blends, including the ability to estimate heat of vaporization at multiple temperatures. Methods for determining heat of vaporization of gasoline-ethanol blends by calculation from a compositional analysis and by direct calorimetric measurement were developed. Direct measurement produced values for pure compounds in good agreement with literature. A range of hydrocarbon gasolines were shown to have heats of vaporization of 325 kJ/kg to 375 kJ/kg. The effect of adding ethanol at 10 vol percent to 50 vol percent was significantly larger than the variation between hydrocarbon gasolines (E50 blends at 650 kJ/kg to 700 kJ/kg). The development of these new and accurate methods allows researchers to begin to both quantify the effect of fuel evaporative cooling on knock resistance, and exploit this effect for combustion of hydrocarbon-ethanol fuel blends in high-efficiency SI engines.
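The "calculation from a compositional analysis" approach can be illustrated at its crudest as a mass-weighted average of component heats of vaporization. The property values below (densities and pure-component heats of vaporization) are typical literature numbers assumed for illustration, not the report's measurements.

```python
def blend_hov(vol_frac_etoh, hov_gas=350.0, hov_etoh=920.0,
              rho_gas=0.74, rho_etoh=0.789):
    """Mass-weighted heat of vaporization (kJ/kg) of an ethanol-gasoline blend
    from the ethanol volume fraction; assumed typical property values, ignoring
    mixing effects and temperature dependence."""
    m_etoh = vol_frac_etoh * rho_etoh
    m_gas = (1.0 - vol_frac_etoh) * rho_gas
    w = m_etoh / (m_etoh + m_gas)
    return w * hov_etoh + (1.0 - w) * hov_gas

print(blend_hov(0.50))   # ~645 kJ/kg with these assumed values, near the E50 range cited above
```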
Validation of new and existing decision rules for the estimation of beat-to-beat pulse transit time.
Zhou, Xiaolin; Peng, Rongchao; Ding, Hongxia; Zhang, Ningling; Li, Pan
2015-01-01
Pulse transit time (PTT) is a pivotal marker of vascular stiffness. Because the true PTT in vivo is unknown and pulse waveforms can vary in complicated ways, robust determination of the characteristic point remains a difficult task in PTT estimation. Our objective was to devise a method for real-time estimation of beat-to-beat PTT from the pulse wave that reduces the interference caused by both high- and low-frequency noise. The reproducibility and performance of these methods are assessed on both artificial and clinical pulse data. Artificial data are generated to investigate the reproducibility with various signal-to-noise ratios. For all artificial data, the mean biases obtained from all methods are less than 1 ms; collectively, this newly proposed method has the minimum standard deviation (SD, <1 ms). A set of data from 33 participants, together with synchronously recorded continuous blood pressure data, is used to investigate the correlation coefficient (CC). The statistical analysis shows that our method has the maximum values of mean CC (0.5231), sum of CCs (17.26), and median CC (0.5695) and has the minimum SD of CCs (0.1943). Overall, the test results in this study indicate that the newly developed method has advantages over traditional decision rules for PTT measurement.
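For context, one widely used decision rule for the characteristic point is the intersecting-tangents definition of the pulse foot: the crossing of the tangent at maximum upslope with the level of the preceding diastolic minimum. The sketch below implements that generic rule, not the new rule proposed in the paper; beat-to-beat PTT then follows as the delay between a proximal timing reference (for example the ECG R-peak) and the detected foot at the distal site.

```python
import numpy as np

def pulse_foot_intersecting_tangents(beat, fs):
    """Intersecting-tangents rule: intersect the tangent at the point of maximum
    upslope with the horizontal line through the preceding diastolic minimum.
    Returns the foot time in seconds from the start of the single-beat window
    `beat`, sampled at fs Hz."""
    beat = np.asarray(beat, float)
    i_peak = int(np.argmax(beat))
    i_min = int(np.argmin(beat[: i_peak + 1]))               # diastolic minimum before the peak
    slope = np.gradient(beat, 1.0 / fs)                      # first derivative (units per second)
    i_up = i_min + int(np.argmax(slope[i_min: i_peak + 1]))  # sample of maximum upslope
    t_up = i_up / fs
    return t_up - (beat[i_up] - beat[i_min]) / slope[i_up]   # tangent meets the minimum level
```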
Effects of CASP5 gene overexpression on angiogenesis of HMEC-1 cells.
Li, Haiyan; Li, Yuzhen; Cai, Limin; Bai, Bingxue; Wang, Yanhua
2015-01-01
The effects of overexpression of CASP5, a caspase family member, on angiogenesis in vitro and its mechanisms were investigated. The human full-length CASP5 gene was delivered into human microvascular endothelial HMEC-1 cells by recombinant lentivirus. The infection was estimated by green fluorescent protein. The MTT method was used to analyze the effect of gene overexpression on cell proliferation ability, and Matrigel was used to estimate its effects on the angiogenesis ability of the cells. Meanwhile, Western blot was used to analyze the effects of CASP5 gene overexpression on the expression levels of angpt-1, angpt-2, Tie2 and VEGF-1 in the cells, which are signaling pathway factors related to angiogenesis. Recombinant lentivirus containing the human full-length CASP5 gene was packaged and purified successfully, with a virus titer of 1×10^8 TU/ml. The recombinant lentivirus was used to infect HMEC-1 cells at an MOI of 1, leading to a cell infection rate of 100%. There were no significant effects of CASP5 gene overexpression on either cell proliferation ability or the expression level of angpt-1. Meanwhile, expression of angpt-2 and VEGF-1 was enhanced, while Tie2 expression was inhibited. The results indicated that CASP5 gene overexpression promoted angiogenesis of HMEC-1 cells. CASP5 gene overexpression significantly promoted the angiogenesis ability of HMEC-1 cells, which was probably achieved by inhibiting angpt-1/Tie2 and promoting the VEGF-1 signaling pathway.
Berry, Christopher M; Zhao, Peng
2015-01-01
Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (c) 2015 APA, all rights reserved.
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699
Merfeld, Daniel M
2003-01-01
Normally, the nervous system must process ambiguous graviceptor (e.g., otolith) cues to estimate tilt and translation. The neural processes that help perform these estimation processes must adapt upon exposure to weightlessness and readapt upon return to Earth. In this paper we present a review of evidence supporting a new hypothesis that explains some aspects of these adaptive processes. This hypothesis, which we label the rotation otolith tilt-translation reinterpretation (ROTTR) hypothesis, suggests that the neural processes resulting in spaceflight adaptation include deterioration in the ability of the nervous system to use rotational cues to help accurately estimate the relative orientation of gravity ("tilt"). Changes in the ability to estimate gravity then also influence the ability of the nervous system to estimate linear acceleration ("translation"). We explicitly hypothesize that such changes in the ability to estimate "tilt" and "translation" will be measurable upon return to Earth and will, at least partially, explain the disorientation experienced when astronauts return to Earth. In this paper, we present the details and implications of ROTTR, review data related to ROTTR, and discuss the relationship of ROTTR to the influential otolith tilt-translation reinterpretation (OTTR) hypothesis as well as discuss the distinct differences between ROTTR and OTTR.
A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2001-01-01
In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
NASA Astrophysics Data System (ADS)
Kelbert, Anna; Balch, Christopher C.; Pulkkinen, Antti; Egbert, Gary D.; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko
2017-07-01
Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.
Ability Self-Estimates and Self-Efficacy: Meaningfully Distinct?
ERIC Educational Resources Information Center
Bubany, Shawn T.; Hansen, Jo-Ida C.
2010-01-01
Conceptual differences between self-efficacy and ability self-estimate scores, used in vocational psychology and career counseling, were examined with confirmatory factor analysis, discriminate relations, and reliability analysis. Results suggest that empirical differences may be due to measurement error or scale content, rather than due to the…
Smart, C E; Ross, K; Edge, J A; King, B R; McElduff, P; Collins, C E
2010-03-01
Carbohydrate (CHO) counting allows children with Type 1 diabetes to adjust mealtime insulin dose to carbohydrate intake. Little is known about the ability of children to count CHO and whether a particular method for assessing CHO quantity is better than others. We investigated how accurately children and their caregivers estimate carbohydrate, and whether counting in gram increments improves accuracy compared with CHO portions or exchanges. One hundred and two children and adolescents (age range 8.3-18.1 years) on intensive insulin therapy and 110 caregivers independently estimated the CHO content of 17 standardized meals (containing 8-90 g CHO), using whichever method of carbohydrate quantification they had been taught (gram increments, 10-g portions or 15-g exchanges). Seventy-three per cent (n = 2530) of all estimates were within 10-15 g of actual CHO content. There was no relationship between the mean percentage error and method of carbohydrate counting or glycated haemoglobin (HbA(1c)) (P > 0.05). Mean gram error and meal size were negatively correlated (r = -0.70, P < 0.0001). The longer children had been CHO counting the greater the mean percentage error (r = 0.173, P = 0.014). Core foods in non-standard quantities were most frequently inaccurately estimated, while individually labelled foods were most often accurately estimated. Children with Type 1 diabetes and their caregivers can estimate the carbohydrate content of meals with reasonable accuracy. Teaching CHO counting in gram increments did not improve accuracy compared with CHO portions or exchanges. Large meals tended to be underestimated and snacks overestimated. Repeated age-appropriate education appears necessary to maintain accuracy in carbohydrate estimations.
NASA Astrophysics Data System (ADS)
Girard, Catherine; Dufour, Anne-Béatrice; Charruault, Anne-Lise; Renaud, Sabrina
2018-01-01
Benthic foraminifera have been used as proxies for various paleoenvironmental variables such as food availability, carbon flux from surface waters, microhabitats, and indirectly water depth. Estimating assemblage composition based on morphotypes, as opposed to genus- or species-level identification, potentially loses important ecological information but opens the way to the study of ancient time periods. However, the ability to accurately constrain benthic foraminiferal assemblages has been questioned when the most abundant foraminifera are fragile agglutinated forms, particularly prone to fragmentation. Here we test an alternate method for accurately estimating the composition of fragmented assemblages. The cumulated area per morphotype method is assessed, i.e., the sum of the area of all tests or fragments of a given morphotype in a sample. The percentage of each morphotype is calculated as a portion of the total cumulated area. Percentages of different morphotypes based on counting and cumulated area methods are compared one by one and analyzed using principal component analyses, a co-inertia analysis, and Shannon diversity indices. Morphotype percentages are further compared to an estimate of water depth based on microfacies description. Percentages of the morphotypes are not related to water depth. In all cases, counting and cumulated area methods deliver highly similar results, suggesting that the less time-consuming traditional counting method may provide robust estimates of assemblages. The size of each morphotype may deliver paleobiological information, for instance regarding biomass, but should be considered carefully due to the pervasive issue of fragmentation.
Hierarchical State-Space Estimation of Leatherback Turtle Navigation Ability
Mills Flemming, Joanna; Jonsen, Ian D.; Field, Christopher A.
2010-01-01
Remotely sensed tracking technology has revealed remarkable migration patterns that were previously unknown; however, models to optimally use such data have developed more slowly. Here, we present a hierarchical Bayes state-space framework that allows us to combine tracking data from a collection of animals and make inferences at both individual and broader levels. We formulate models that allow the navigation ability of animals to be estimated and demonstrate how information can be combined over many animals to allow improved estimation. We also show how formal hypothesis testing regarding navigation ability can easily be accomplished in this framework. Using Argos satellite tracking data from 14 leatherback turtles, 7 males and 7 females, during their southward migration from Nova Scotia, Canada, we find that the circle of confusion (the radius around an animal's location within which it is unable to determine its location precisely) is approximately 96 km. This estimate suggests that the turtles' navigation does not need to be highly accurate, especially if they are able to use more reliable cues as they near their destination. Moreover, for the 14 turtles examined, there is little evidence to suggest that male and female navigation abilities differ. Because of the minimal assumptions made about the movement process, our approach can be used to estimate and compare navigation ability for many migratory species that are able to carry electronic tracking devices. PMID:21203382
Gervès, Chloé; Bellanger, Martine Marie; Ankri, Joël
2013-01-01
Valuation of the intangible impacts of informal care remains a great challenge for economic evaluation, especially in the framework of care recipients with cognitive impairment. Our main objective was to explore the influence of intangible impacts of caring on both informal caregivers' ability to estimate their willingness to pay (WTP) to be replaced and their WTP value. We mapped characteristics that influence ability or inability to estimate WTP by using a multiple correspondence analysis. We ran a bivariate probit model with sample selection to further analyze the caregivers' WTP value conditional on their ability to estimate their WTP. A distinction exists between the opportunity costs of the caring dimension and those of the intangible costs and benefits of caring. Informal caregivers' ability to estimate WTP is negatively influenced by both intangible benefits from caring (P < 0.001) and negative intangible impacts of caring (P < 0.05). Caregivers' WTP value is negatively associated with positive intangible impacts of informal care (P < 0.01). Informal caregivers' WTP and their ability to estimate WTP are both influenced by intangible burden and benefit of caring. These results call into question the relevance of a hypothetical generalized financial compensation system as the optimal way to motivate caregivers to continue providing care. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Dear, Blake F; Heller, Gillian Z; Crane, Monique F; Titov, Nickolai
2018-01-01
Background Missing cases following treatment are common in Web-based psychotherapy trials. Without the ability to directly measure and evaluate the outcomes for missing cases, measuring and evaluating the effects of treatment is challenging. Although common, little is known about the characteristics of Web-based psychotherapy participants who present as missing cases, their likely clinical outcomes, or the suitability of different statistical assumptions that can characterize missing cases. Objective Using a large sample of individuals who underwent Web-based psychotherapy for depressive symptoms (n=820), the aim of this study was to explore the characteristics of cases who present as missing cases at posttreatment (n=138), their likely treatment outcomes, and to compare statistical methods for replacing their missing data. Methods First, common participant and treatment features were tested through binary logistic regression models, evaluating their ability to predict missing cases. Second, the same variables were screened for their ability to increase or impede the rate of symptom change observed following treatment. Third, using recontacted cases at 3-month follow-up to proximally represent the outcomes of missing cases following treatment, various simulated replacement scores were compared and evaluated against observed clinical follow-up scores. Results Missing cases were predominantly predicted by lower treatment adherence and increased symptoms at pretreatment. Statistical methods that ignored these characteristics can overlook an important clinical phenomenon and consequently produce inaccurate replacement outcomes, with symptom estimates that can swing from −32% to 70% relative to the observed outcomes of recontacted cases. In contrast, longitudinal statistical methods that adjusted their estimates of missing cases' outcomes by treatment adherence rates and baseline symptom scores resulted in minimal measurement bias (<8%). Conclusions Certain variables can characterize and predict the likelihood of missing cases and jointly predict lesser clinical improvement. Under such circumstances, individuals with potentially worse treatment outcomes can become concealed, and failure to adjust for this can lead to substantial clinical measurement bias. Together, this preliminary research suggests that missing cases in Web-based psychotherapeutic interventions may not occur as random events and can be systematically predicted. Critically, at the same time, missing cases may experience outcomes that are distinct and important for a complete understanding of the treatment effect. PMID:29674311
Nordgren, Lena; Söderlund, Anne
2016-04-01
Little is known about sick leave and the ability to return to work (RTW) for people with heart failure (HF). Previous research findings raise questions about the significance of encounters with social insurance officers (SIOs) and sociodemographics in people sick-listed due to HF. The aim was to investigate how people on sick leave due to HF experience encounters with SIOs and the associations between sociodemographic factors, experiences of positive/negative encounters with SIOs, and self-estimated ability to RTW. This was a population-based study with a cross-sectional design. The sample consisted of 590 sick-listed people with HF in Sweden. A register-based investigation supplemented with a postal survey questionnaire was conducted. Bivariate correlations and logistic regression analyses were used to test associations between sociodemographic factors, positive and negative encounters, and self-estimated ability to RTW. People with low income were more likely to receive sickness compensation. A majority of the responders experienced encounters with SIOs as positive. Being married was significantly associated with positive encounters. Having a low income was related to negative encounters. More than a third of the responders agreed that positive encounters with SIOs facilitated self-estimated ability to RTW. High income was strongly associated with the impact of positive encounters on self-estimated ability to RTW. Encounters between SIOs and people on sick leave due to HF need to be characterized by a person-centred approach including confidence and trust. People with low income need special attention. © The European Society of Cardiology 2015.
Wu, Hau-Tieng; Lewis, Gregory F; Davila, Maria I; Daubechies, Ingrid; Porges, Stephen W
2016-10-17
With recent advances in sensor and computer technologies, the ability to monitor peripheral pulse activity is no longer limited to the laboratory and clinic. Now inexpensive sensors, which interface with smartphones or other computer-based devices, are expanding into the consumer market. When appropriate algorithms are applied, these new technologies enable ambulatory monitoring of dynamic physiological responses outside the clinic in a variety of applications including monitoring fatigue, health, workload, fitness, and rehabilitation. Several of these applications rely upon measures derived from peripheral pulse waves measured via contact or non-contact photoplethysmography (PPG). As technologies move from contact to non-contact PPG, there are new challenges. The technology necessary to estimate average heart rate over a few seconds from a noncontact PPG is available. However, a technology to precisely measure instantaneous heart rate (IHR) from non-contact sensors, on a beat-to-beat basis, is more challenging. The objective of this paper is to develop an algorithm with the ability to accurately monitor IHR from peripheral pulse waves, which provides an opportunity to measure the neural regulation of the heart from the beat-to-beat heart rate pattern (i.e., heart rate variability). The adaptive harmonic model is applied to model the contact or non-contact PPG signals, and a new methodology, the Synchrosqueezing Transform (SST), is applied to extract IHR. The body sway rhythm inherent in the non-contact PPG signal is modeled and handled by the notion of wave-shape function. The SST optimizes the extraction of IHR from the PPG signals and the technique functions well even during periods of poor signal-to-noise ratio. We contrast the contact and non-contact indices of PPG-derived heart rate with a criterion electrocardiogram (ECG). ECG and PPG signals were monitored in 21 healthy subjects performing tasks with different physical demands. The root mean square error of IHR estimated by SST is significantly better than that of commonly applied methods such as the autoregressive (AR) method. In the walking situation, while the AR method fails, SST still provides a reasonably good result. The SST-processed PPG data provided an accurate estimate of the ECG-derived IHR and consistently performed better than commonly applied methods such as the AR method.
Fused methods for visual saliency estimation
NASA Astrophysics Data System (ADS)
Danko, Amanda S.; Lyu, Siwei
2015-02-01
In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements upon recall and f-measure metrics with comparable precisions. In addition, we find that all image searches using our fused method return more correct images and additionally rank them higher than the searches using the original methods alone.
Robust Estimation of Latent Ability in Item Response Models
ERIC Educational Resources Information Center
Schuster, Christof; Yuan, Ke-Hai
2011-01-01
Because of response disturbances such as guessing, cheating, or carelessness, item response models often can only approximate the "true" individual response probabilities. As a consequence, maximum-likelihood estimates of ability will be biased. Typically, the nature and extent to which response disturbances are present is unknown, and, therefore,…
Gunn, Cameron Allan; Dickson, Jennifer L; Pretty, Christopher G; Alsweiler, Jane M; Lynn, Adrienne; Shaw, Geoffrey M; Chase, J Geoffrey
2014-07-01
Hyperglycaemia is a common complication of stress and prematurity in extremely low-birth-weight infants. Model-based insulin therapy protocols have the ability to safely improve glycaemic control for this group. Estimating non-insulin-mediated brain glucose uptake by the central nervous system in these models is typically done using population-based body weight models, which may not be ideal. A head circumference-based model that separately treats small-for-gestational-age (SGA) and appropriate-for-gestational-age (AGA) infants is compared to a body weight model in a retrospective analysis of 48 patients with a median birth weight of 750g and median gestational age of 25 weeks. Estimated brain mass, model-based insulin sensitivity (SI) profiles, and projected glycaemic control outcomes are investigated. SGA infants (5) are also analyzed as a separate cohort. Across the entire cohort, estimated brain mass deviated by a median 10% between models, with a per-patient median difference in SI of 3.5%. For the SGA group, brain mass deviation was 42%, and per-patient SI deviation 13.7%. In virtual trials, 87-93% of recommended insulin rates were equal or slightly reduced (Δ<0.16mU/h) under the head circumference method, while glycaemic control outcomes showed little change. The results suggest that body weight methods are not as accurate as head circumference methods. Head circumference-based estimates may offer improved modelling accuracy and a small reduction in insulin administration, particularly for SGA infants. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Zhang, Ao; Liu, Tingting; Zheng, Kaiyuan; Liu, Ningbo; Huang, Fei; Li, Weidong; Liu, Tong; Fu, Weihua
2017-01-01
Laparoscopic colorectal surgery has been widely used for colorectal cancer patients and has shown a favorable outcome on the postoperative morbidity rate. We attempted to evaluate the physiological status of patients by means of the Estimation of Physiologic Ability and Surgical Stress (E-PASS) system and to analyze how the postoperative morbidity rate of open and laparoscopic colorectal cancer surgery differs in patients with different physiological status. In total, 550 colorectal cancer patients who underwent surgical treatment were included. E-PASS and some conventional scoring systems were reviewed to examine their mortality prediction ability. The preoperative risk score (PRS) in the E-PASS system was used to evaluate the physiological status of patients. The difference in postoperative morbidity rate between open and laparoscopic colorectal cancer surgeries was analyzed in patients with different physiological status. E-PASS had better prediction ability than the other conventional scoring systems in colorectal cancer surgeries. Postoperative morbidities developed in 143 patients. The parameters in the E-PASS system had positive correlations with postoperative morbidity. The overall postoperative morbidity rate of laparoscopic surgeries was lower than that of open surgeries (19.61% and 28.46%), but the postoperative morbidity rate of laparoscopic surgeries increased more significantly than that of open surgery as PRS increased. When PRS was more than 0.7, the postoperative morbidity rate of laparoscopic surgeries exceeded that of open surgeries. The E-PASS system was capable of evaluating the physiological and surgical risk of colorectal cancer surgery. PRS could assist preoperative decision-making on the surgical method. Colorectal cancer patients assessed with a low physiological risk by PRS would be safe to undergo laparoscopic surgery. On the contrary, surgeons should make decisions prudently on the operation method for patients with a high physiological risk. PMID:28816959
Zier, Lucas S.; Burack, Jeffrey H.; Micco, Guy; Chipman, Anne K.; Frank, James A.; Luce, John M.; White, Douglas B.
2009-01-01
Objectives: Although discussing a prognosis is a duty of physicians caring for critically ill patients, little is known about surrogate decision-makers' beliefs about physicians' ability to prognosticate. We sought to determine: 1) surrogates' beliefs about whether physicians can accurately prognosticate for critically ill patients; and 2) how individuals use prognostic information in their role as surrogate decision-makers. Design, Setting, and Patients: Multicenter study in intensive care units of a public hospital, a tertiary care hospital, and a veterans' hospital. We conducted semistructured interviews with 50 surrogate decision-makers of critically ill patients. We analyzed the interview transcripts using grounded theory methods to inductively develop a framework to describe surrogates' beliefs about physicians' ability to prognosticate. Validation methods included triangulation by multidisciplinary analysis and member checking. Measurements and Main Results: Overall, 88% (44 of 50) of surrogates expressed doubt about physicians' ability to prognosticate for critically ill patients. Four distinct themes emerged that explained surrogates' doubts about prognostic accuracy: a belief that God could alter the course of the illness, a belief that predicting the future is inherently uncertain, prior experiences where physicians' prognostications were inaccurate, and experiences with prognostication during the patient's intensive care unit stay. Participants also identified several factors that led to belief in physicians' prognostications, such as receiving similar prognostic estimates from multiple physicians and prior experiences with accurate prognostication. Surrogates' doubts about prognostic accuracy did not prevent them from wanting prognostic information. Instead, most surrogate decision-makers view physicians' prognostications as rough estimates that are valuable in informing decisions, but are not determinative. Surrogates identified the act of prognostic disclosure as a key step in preparing emotionally and practically for the possibility that a patient may not survive. Conclusions: Although many surrogate decision-makers harbor some doubt about the accuracy of physicians' prognostications, they highly value discussions about prognosis and use the information for multiple purposes. (Crit Care Med 2008; 36: 2341–2347) PMID:18596630
NASA Astrophysics Data System (ADS)
Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.
2017-08-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
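As an illustration of the dimensionality-reduction step described above, the following minimal sketch (Python, using the PyWavelets package) applies a discrete wavelet transform to a synthetic daily rainfall series and keeps only the largest coefficients; the wavelet family, decomposition level, and threshold are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' code): using a discrete wavelet transform
# to represent a daily rainfall series with far fewer coefficients, which is
# the dimensionality-reduction idea described above. Requires numpy and PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(0)
# Synthetic "daily rainfall" series: mostly dry days with occasional storms.
rain = np.where(rng.random(512) > 0.8, rng.gamma(2.0, 10.0, 512), 0.0)

# Multi-level DWT; the wavelet family and level are illustrative choices.
coeffs = pywt.wavedec(rain, wavelet="db4", level=5)

# Keep only the largest coefficients (the reduced parameter set an inversion
# scheme would actually estimate) and zero out the rest.
flat, slices = pywt.coeffs_to_array(coeffs)
keep = np.abs(flat) >= np.quantile(np.abs(flat), 0.9)   # keep the top 10%
reduced = np.where(keep, flat, 0.0)

# Reconstruct the series from the reduced coefficient set.
rain_hat = pywt.waverec(
    pywt.array_to_coeffs(reduced, slices, output_format="wavedec"),
    wavelet="db4")[: rain.size]

print(f"coefficients kept: {keep.sum()} of {flat.size}")
print(f"reconstruction RMSE: {np.sqrt(np.mean((rain - rain_hat) ** 2)):.2f} mm")
```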
Emergency Physician Estimation of Blood Loss
Ashburn, Jeffery C.; Harrison, Tamara; Ham, James J.; Strote, Jared
2012-01-01
Introduction Emergency physicians (EP) frequently estimate blood loss, which can have implications for clinical care. The objectives of this study were to examine EP accuracy in estimating blood loss on different surfaces and compare attending physician and resident performance. Methods A sample of 56 emergency department (ED) physicians (30 attending physicians and 26 residents) were asked to estimate the amount of moulage blood present in 4 scenarios: 500 mL spilled onto an ED cot; 25 mL spilled onto a 10-pack of 4 × 4-inch gauze; 100 mL on a T-shirt; and 150 mL in a commode filled with water. Standard estimate error (the absolute value of (estimated volume − actual volume)/actual volume × 100) was calculated for each estimate. Results The mean standard error for all estimates was 116% with a range of 0% to 1233%. Only 8% of estimates were within 20% of the true value. Estimates were most accurate for the sheet scenario and worst for the commode scenario. Residents and attending physicians did not perform significantly differently (P > 0.05). Conclusion Emergency department physicians do not estimate blood loss well in a variety of scenarios. Such estimates could potentially be misleading if used in clinical decision making. Clinical experience does not appear to improve estimation ability in this limited study. PMID:22942938
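The standard estimate error defined above is simple to compute; the sketch below works through two hypothetical estimates for the 500 mL cot scenario (the values are illustrative, not study data).

```python
# Worked example of the "standard estimate error" defined above:
# |estimated - actual| / actual * 100. Values are illustrative, not study data.
def standard_estimate_error(estimated_ml: float, actual_ml: float) -> float:
    return abs(estimated_ml - actual_ml) / actual_ml * 100.0

# A physician who guesses 900 mL for the 500 mL cot scenario:
print(standard_estimate_error(900, 500))   # 80.0 (%), i.e. an 80% over-estimate
# A guess of 450 mL would fall within the 20% band used above:
print(standard_estimate_error(450, 500))   # 10.0 (%)
```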
A practical guideline for intracranial volume estimation in patients with Alzheimer's disease
2015-01-01
Background Intracranial volume (ICV) is an important normalization measure used in morphometric analyses to correct for head size in studies of Alzheimer Disease (AD). Inaccurate ICV estimation could introduce bias in the outcome. The current study provides a decision aid for defining protocols for ICV estimation in patients with Alzheimer disease in terms of the sampling frequencies that can be optimally used on the volumetric MRI data and the type of software most suitable for estimating the ICV measure. Methods Two groups of 22 subjects are considered, including adult controls (AC) and patients with Alzheimer Disease (AD). Reference measurements were calculated for each subject by manually tracing the intracranial cavity by means of visual inspection. The reliability of the reference measurements was assured through intra- and inter-variation analyses. Three well-known, publicly available software packages (Freesurfer, FSL, and SPM) were examined for their ability to automatically estimate ICV across the groups. Results Analysis of the results supported the significant effect of estimation method, gender, cognitive condition of the subject, and the interaction between method and cognitive condition on the measured ICV. Sub-sampling studies with 95% confidence showed that in order to keep the accuracy of the interleaved slice sampling protocol above 99%, the sampling period cannot exceed 20 millimeters for AC and 15 millimeters for AD. Freesurfer showed promising estimates for both adult groups. However, SPM showed more consistency in its ICV estimation over the different phases of the study. Conclusions This study emphasized the importance of selecting the appropriate protocol, choosing the sampling period in manual ICV estimation, and selecting suitable software for automated ICV estimation. The current study serves as an initial framework for establishing an appropriate protocol in both manual and automatic ICV estimations with different subject populations. PMID:25953026
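For readers unfamiliar with interleaved slice sampling, the following hedged sketch shows the usual Cavalieri-style arithmetic behind such manual ICV protocols (traced slice areas multiplied by the sampling period); the slice areas, thickness, and sampling periods are synthetic assumptions and do not reproduce the study's data.

```python
# Hedged sketch of a Cavalieri-style volume estimate from traced slices, the
# usual arithmetic behind interleaved-slice manual ICV protocols; all numbers
# are illustrative, not the study's data or its exact protocol.
import numpy as np

slice_thickness_mm = 1.0                                               # acquisition slice thickness
areas_mm2 = np.random.default_rng(1).uniform(12000, 16000, size=200)   # traced slice areas, mm^2

def icv_estimate(areas_mm2, slice_thickness_mm, sampling_period_mm):
    """Trace every k-th slice and scale the summed areas by the sampling period."""
    step = int(round(sampling_period_mm / slice_thickness_mm))
    sampled = areas_mm2[::step]
    return sampled.sum() * sampling_period_mm / 1000.0                 # mm^3 -> cm^3

dense = icv_estimate(areas_mm2, slice_thickness_mm, 1.0)
for period in (5, 10, 15, 20, 25):
    est = icv_estimate(areas_mm2, slice_thickness_mm, period)
    print(f"period {period:2d} mm: {est:8.1f} cm^3 ({100 * est / dense:5.1f}% of dense estimate)")
```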
Estimating the surface area of birds: using the homing pigeon (Columba livia) as a model.
Perez, Cristina R; Moye, John K; Pritsos, Chris A
2014-05-08
Estimation of the surface area of the avian body is valuable for thermoregulation and metabolism studies as well as for assessing exposure to oil and other surface-active organic pollutants from a spill. The use of frozen carcasses for surface area estimations prevents the ability to modify the posture of the bird. The surface area of six live homing pigeons in the fully extended flight position was estimated using a noninvasive method. An equation was derived to estimate the total surface area of a pigeon based on its body weight. A pigeon's surface area in the fully extended flight position is approximately 4 times larger than the surface area of a pigeon in the perching position. The surface area of a bird is dependent on its physical position, and, therefore, the fully extended flight position exhibits the maximum area of a bird and should be considered the true surface area of a bird. © 2014. Published by The Company of Biologists Ltd | Biology Open.
Progress in Turbulence Detection via GNSS Occultation Data
NASA Technical Reports Server (NTRS)
Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.
2012-01-01
The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.
Improvements in Virtual Sensors: Using Spatial Information to Estimate Remote Sensing Spectra
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Srivastava, Ashok N.; Stroeve, Julienne
2005-01-01
Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. Sometimes these instruments are built in a phased approach, with additional measurement capabilities added in later phases. In other cases, technology may mature to the point that the instrument offers new measurement capabilities that were not planned in the original design of the instrument. In still other cases, high resolution spectral measurements may be too costly to perform on a large sample and therefore lower resolution spectral instruments are used to take the majority of measurements. Many applied science questions that are relevant to the earth science remote sensing community require analysis of enormous amounts of data that were generated by instruments with disparate measurement capabilities. In past work [1], we addressed this problem using Virtual Sensors: a method that uses models trained on spectrally rich (high spectral resolution) data to "fill in" unmeasured spectral channels in spectrally poor (low spectral resolution) data. We demonstrated this method by using models trained on the high spectral resolution Terra MODIS instrument to estimate what the equivalent of the MODIS 1.6 micron channel would be for the NOAA AVHRR2 instrument. The scientific motivation for the simulation of the 1.6 micron channel is to improve the ability of the AVHRR2 sensor to detect clouds over snow and ice. This work contains preliminary experiments demonstrating that the use of spatial information can improve our ability to estimate these spectra.
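A minimal sketch of the Virtual Sensors idea follows, under the assumption that a regression model is trained on channels shared by both instruments to predict the channel only the spectrally rich instrument measures; the synthetic data and the random-forest regressor are illustrative stand-ins for the MODIS/AVHRR2 radiances and models used in the actual work.

```python
# Minimal sketch of the "Virtual Sensors" idea described above: learn a mapping
# from channels both instruments share to a channel only the rich instrument has,
# then apply it to data from the poor instrument. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
shared = rng.uniform(0.0, 1.0, size=(n, 4))            # channels both sensors measure
# The "missing" channel, defined here as a nonlinear function of the shared ones.
missing = 0.6 * shared[:, 0] - 0.3 * shared[:, 2] ** 2 + 0.05 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(shared, missing, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"RMSE on held-out 'spectrally poor' scenes: {rmse:.3f}")
```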
Wells, Ruth; Swaminathan, Vaidy; Sundram, Suresh; Weinberg, Danielle; Bruggemann, Jason; Jacomb, Isabella; Cropley, Vanessa; Lenroot, Rhoshel; Pereira, Avril M; Zalesky, Andrew; Bousman, Chad; Pantelis, Christos; Weickert, Cynthia Shannon; Weickert, Thomas W
2015-01-01
Background: Cognitive heterogeneity among people with schizophrenia has been defined on the basis of premorbid and current intelligence quotient (IQ) estimates. In a relatively large, community cohort, we aimed to independently replicate and extend cognitive subtyping work by determining the extent of symptom severity and functional deficits in each group. Methods: A total of 635 healthy controls and 534 patients with a diagnosis of schizophrenia or schizoaffective disorder were recruited through the Australian Schizophrenia Research Bank. Patients were classified into cognitive subgroups on the basis of the Wechsler Test of Adult Reading (a premorbid IQ estimate) and current overall cognitive abilities into preserved, deteriorated, and compromised groups using both clinical and empirical (k-means clustering) methods. Additional cognitive, functional, and symptom outcomes were compared among the resulting groups. Results: A total of 157 patients (29%) classified as ‘preserved’ performed within one s.d. of control means in all cognitive domains. Patients classified as ‘deteriorated’ (n=239, 44%) performed more than one s.d. below control means in all cognitive domains except estimated premorbid IQ and current visuospatial abilities. A separate 138 patients (26%), classified as ‘compromised,’ performed more than one s.d. below control means in all cognitive domains and displayed greater impairment than other groups on symptom and functional measures. Conclusions: In the present study, we independently replicated our previous cognitive classifications of people with schizophrenia. In addition, we extended previous work by demonstrating worse functional outcomes and symptom severity in the compromised group. PMID:27336046
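A hedged sketch of the empirical classification step mentioned above follows: clustering synthetic premorbid and current IQ estimates into three subgroups with k-means. The scores are simulated and are not the Australian Schizophrenia Research Bank data; only the group sizes echo the proportions reported above.

```python
# Hedged sketch of the empirical (k-means) classification step described above:
# cluster patients on premorbid and current IQ estimates into three subgroups.
# Synthetic scores standing in for WTAR and current-IQ measures.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
premorbid = np.concatenate([rng.normal(105, 8, 157), rng.normal(100, 8, 239), rng.normal(85, 8, 138)])
current   = np.concatenate([rng.normal(102, 8, 157), rng.normal(85, 8, 239),  rng.normal(78, 8, 138)])
X = StandardScaler().fit_transform(np.column_stack([premorbid, current]))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    sel = labels == k
    print(f"cluster {k}: n={sel.sum():3d}, premorbid={premorbid[sel].mean():5.1f}, current={current[sel].mean():5.1f}")
```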
Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro
2013-01-01
This paper proposes a novel hypnosis control method using Auditory Evoked Potential Index (aepEX) as a hypnosis index. In order to avoid side effects of an anesthetic drug, it is desirable to reduce the amount of an anesthetic drug during surgery. For this purpose many studies of hypnosis control systems have been done. Most of them use Bispectral Index (BIS), another hypnosis index, but it has problems of dependence on anesthetic drugs and nonsmooth change near some particular values. On the other hand, aepEX has an ability of clear distinction between patient consciousness and unconsciousness and independence of anesthetic drugs. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis and adjusting infusion rate of an anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated utilizing the property of aepEX pharmacodynamics. The infusion rate of propofol is adjusted so that effect-site concentration of propofol may be kept near and always above the minimum effect-site concentration. Simulation results of hypnosis control using the proposed method show that the minimum concentration can be estimated appropriately and that the proposed control method can maintain hypnosis adequately and reduce the total infusion amount of propofol.
Estimate Soil Erodibility Factors Distribution for Maioli Block
NASA Astrophysics Data System (ADS)
Lee, Wen-Ying
2014-05-01
The natural conditions in Taiwan are harsh. Because of steep slopes, rushing rivers, and fragile geology, soil erosion has become a serious problem. It not only degrades sloping landscapes but also creates sediment disasters such as reservoir sedimentation and river obstruction. Therefore, predicting and controlling the amount of soil erosion has become an important research topic. The soil erodibility factor (K) is a quantitative index of the ability of soil to resist erosion detachment and transport. Wann and Huang (1989) calculated erodibility factors for 280 Taiwan soil samples using the Wischmeier and Smith nomograph. In this study, 221 samples were collected in the Maioli block in Miaoli. The coordinates of every sample point and the land use situation were recorded, and the physical properties were analyzed for each sample. Three estimation methods, Kriging, Inverse Distance Weighted (IDW), and Spline, were applied to estimate the soil erodibility factor distribution for the Maioli block using 181 points, with the remaining 40 points reserved for validation. SPSS regression analysis was then used to compare the accuracy of the training and validation data across the three methods so that the best method could be determined. In the future, this method can be used to predict soil erodibility factors in other areas.
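Of the three interpolation methods compared above, inverse distance weighting is the simplest to express directly; the sketch below is a generic IDW implementation on synthetic sample coordinates and K values, not the study's Maioli data.

```python
# Minimal inverse-distance-weighting (IDW) sketch, one of the three interpolation
# methods compared above; coordinates and K values are synthetic placeholders for
# the 181 training points and 40 validation points.
import numpy as np

rng = np.random.default_rng(0)
xy_known = rng.uniform(0, 10_000, size=(181, 2))          # sample coordinates (m)
k_known = rng.uniform(0.02, 0.06, size=181)               # soil erodibility K

def idw(xy_query, xy_known, values, power=2.0):
    """Weight each known value by inverse distance to the query point."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-9)                                # avoid division by zero
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

xy_valid = rng.uniform(0, 10_000, size=(40, 2))
k_estimated = idw(xy_valid, xy_known, k_known)
print(np.round(k_estimated[:5], 4))
```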
NASA Astrophysics Data System (ADS)
Czirjak, Daniel
2017-04-01
Remote sensing platforms have consistently demonstrated the ability to detect, and in some cases identify, specific targets of interest, and photovoltaic solar panels are shown to have a unique spectral signature that is consistent across multiple manufacturers and construction methods. Solar panels are proven to be detectable in hyperspectral imagery using common statistical target detection methods such as the adaptive cosine estimator, and false alarms can be mitigated through the use of a spectral verification process that eliminates pixels that do not have the key spectral features of the photovoltaic solar panel reflectance spectrum. The normalized solar panel index is described and is a key component in the false-alarm mitigation process. After spectral verification, these solar panel arrays are confirmed on openly available literal imagery and can be measured using numerous open-source algorithms and tools. The measurements allow for the assessment of overall solar power generation capacity using an equation that accounts for solar insolation, the area of solar panels, and the efficiency of the solar panels' conversion of solar energy to power. Using a known location with readily available information, the methods outlined in this paper estimate the power generation capabilities to within 6% of the rated power.
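The capacity equation described above reduces to a single multiplication; the sketch below shows the arithmetic with illustrative values for insolation, measured panel area, and assumed module efficiency (none taken from the paper's test site).

```python
# Back-of-the-envelope version of the capacity estimate described above:
# power = insolation * panel area * conversion efficiency. Illustrative numbers only.
insolation_w_per_m2 = 1000.0    # standard test-condition irradiance
panel_area_m2 = 250.0           # panel area measured from imagery
efficiency = 0.18               # assumed module conversion efficiency

rated_power_kw = insolation_w_per_m2 * panel_area_m2 * efficiency / 1000.0
print(f"estimated capacity: {rated_power_kw:.1f} kW")   # 45.0 kW
```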
Proximity Navigation of Highly Constrained Spacecraft
NASA Technical Reports Server (NTRS)
Scarritt, S.; Swartwout, M.
2007-01-01
Bandit is a 3-kg automated spacecraft in development at Washington University in St. Louis. Bandit's primary mission is to demonstrate proximity navigation, including docking, around a 25-kg student-built host spacecraft. However, because of extreme constraints in mass, power and volume, traditional sensing and actuation methods are not available. In particular, Bandit carries only 8 fixed-magnitude cold-gas thrusters to control its 6 DOF motion. Bandit lacks true inertial sensing, and the ability to sense position relative to the host has error bounds that approach the size of the Bandit itself. Some of the navigation problems are addressed through an extremely robust, error-tolerant soft dock. In addition, we have identified a control methodology that performs well in this constrained environment: behavior-based velocity potential functions, which use a minimum-seeking method similar to Lyapunov functions. We have also adapted the discrete Kalman filter for use on Bandit for position estimation and have developed a similar measurement vs. propagation weighting algorithm for attitude estimation. This paper provides an overview of Bandit and describes the control and estimation approach. Results using our 6DOF flight simulator are provided, demonstrating that these methods show promise for flight use.
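As an illustration of the propagate/update structure of the discrete Kalman filter mentioned above, the sketch below runs a generic constant-velocity filter on a noisy one-dimensional range measurement; the matrices and noise levels are placeholders, not Bandit's flight parameters.

```python
# Generic discrete Kalman filter sketch for 1-D relative-position estimation,
# illustrating the propagate/update structure mentioned above. The matrices are
# simple constant-velocity placeholders, not Bandit's actual flight filter.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity state transition
H = np.array([[1.0, 0.0]])                   # only position is measured
Q = 1e-4 * np.eye(2)                         # process noise
R = np.array([[0.05]])                       # coarse range-sensor noise (m^2)

x = np.zeros((2, 1))                         # state: [position, velocity]
P = np.eye(2)

def kf_step(x, P, z):
    # Propagate the state and covariance forward one step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the noisy position measurement z.
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
truth = 5.0
for _ in range(50):
    truth -= 0.02                            # host closing at 0.2 m/s
    x, P = kf_step(x, P, truth + rng.normal(0, 0.2))
print(f"estimated range: {x[0, 0]:.2f} m (truth {truth:.2f} m)")
```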
Cunningham, Marc; Bock, Ariella; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana
2015-09-01
Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. © Cunningham et al.
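The direct CYP-based model referred to above amounts to converting distributed quantities into couple-years of protection and dividing by the population of women of reproductive age; the sketch below uses commonly cited CYP conversion factors and an illustrative population figure, not the paper's calibrated inputs.

```python
# Hedged sketch of the direct CYP-based estimate described above: convert
# quantities of commodities distributed into couple-years of protection (CYP)
# and divide by the number of women of reproductive age. Conversion factors and
# population figures are commonly cited illustrative values, not the paper's inputs.
def cyp_prevalence(quantity_distributed, units_per_cyp, women_reproductive_age):
    couple_years = quantity_distributed / units_per_cyp
    return 100.0 * couple_years / women_reproductive_age    # percent

wra = 2_500_000                                                    # women aged 15-49 (illustrative)
print(f"injectables: {cyp_prevalence(1_200_000, 4, wra):.1f}%")    # 4 doses per CYP
print(f"pills:       {cyp_prevalence(1_800_000, 15, wra):.1f}%")   # 15 cycles per CYP
print(f"condoms:     {cyp_prevalence(6_000_000, 120, wra):.1f}%")  # 120 units per CYP
```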
Application of geostatistics to risk assessment.
Thayer, William C; Griffith, Daniel A; Goodrum, Philip E; Diamond, Gary L; Hassett, James M
2003-10-01
Geostatistics offers two fundamental contributions to environmental contaminant exposure assessment: (1) a group of methods to quantitatively describe the spatial distribution of a pollutant and (2) the ability to improve estimates of the exposure point concentration by exploiting the geospatial information present in the data. The second contribution is particularly valuable when exposure estimates must be derived from small data sets, which is often the case in environmental risk assessment. This article addresses two topics related to the use of geostatistics in human and ecological risk assessments performed at hazardous waste sites: (1) the importance of assessing model assumptions when using geostatistics and (2) the use of geostatistics to improve estimates of the exposure point concentration (EPC) in the limited data scenario. The latter topic is approached here by comparing design-based estimators that are familiar to environmental risk assessors (e.g., Land's method) with geostatistics, a model-based estimator. In this report, we summarize the basics of spatial weighting of sample data, kriging, and geostatistical simulation. We then explore the two topics identified above in a case study, using soil lead concentration data from a Superfund site (a skeet and trap range). We also describe several areas where research is needed to advance the use of geostatistics in environmental risk assessment.
Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.
2012-01-01
Control-theoretic modeling of the human operator's dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to attempt to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator's ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy with which either estimator predicted pilot stick input. These methods were also able to predict pilot stick input during changing aircraft dynamics, and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
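A minimal sketch of a least squares estimator with exponential forgetting, the second estimator mentioned above, follows; it predicts a scalar stick input from a small regressor vector, and the signals, gains, and forgetting factor are synthetic assumptions rather than the study's identified pilot models.

```python
# Minimal recursive least squares sketch with exponential forgetting, predicting
# a scalar "stick input" from a small regressor vector. Synthetic signals; the
# forgetting factor and initial covariance are illustrative, not the study's values.
import numpy as np

rng = np.random.default_rng(0)
lam = 0.98                         # forgetting factor
theta = np.zeros(3)                # estimated operator parameters
P = 1e3 * np.eye(3)                # parameter covariance

true_theta = np.array([0.8, -0.4, 0.2])
errs = []
for t in range(500):
    phi = rng.normal(size=3)                       # regressors (e.g., tracking error history)
    y = true_theta @ phi + 0.05 * rng.normal()     # observed stick input
    y_hat = theta @ phi
    errs.append(y - y_hat)
    # RLS update with exponential forgetting.
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - y_hat)
    P = (P - np.outer(K, phi @ P)) / lam

print("estimated parameters:", np.round(theta, 3))
print("final prediction error:", round(float(errs[-1]), 4))
```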
Learning free energy landscapes using artificial neural networks.
Sidky, Hythem; Whitmer, Jonathan K
2018-03-14
Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.
Effective Fingerprint Quality Estimation for Diverse Capture Sensors
Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun
2010-01-01
Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features used to assess the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted for different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality and consistency. The proposed system extracts basic features and generates next-level features which are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. Moreover, the proposed method is able to eliminate residue images from the optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632
ERIC Educational Resources Information Center
Kirby, Russell S.; Wingate, Martha S.; Van Naarden Braun, Kim; Doernberg, Nancy S.; Arneson, Carrie L.; Benedict, Ruth E.; Mulvihill, Beverly; Durkin, Maureen S.; Fitzgerald, Robert T.; Maenner, Matthew J.; Patz, Jean A.; Yeargin-Allsopp, Marshalyn
2011-01-01
Aim: To estimate the prevalence of cerebral palsy (CP) and the frequency of co-occurring developmental disabilities (DDs), gross motor function (GMF), and walking ability using the largest surveillance DD database in the US. Methods: We conducted population-based surveillance of 8-year-old children in 2006 (N = 142,338), in areas of Alabama,…
NASA Astrophysics Data System (ADS)
Zhou, Shuai; Huang, Danian
2015-11-01
We have developed a new method for the interpretation of gravity tensor data based on the generalized Tilt-depth method. Cooper (2011, 2012) extended the magnetic Tilt-depth method to gravity data. We take the gradient-ratio method of Cooper (2011, 2012) and modify it so that the source type does not need to be specified a priori. We develop the new method by generalizing the Tilt-depth method for depth estimation for different types of source bodies. The new technique uses only the three vertical tensor components of the full gravity tensor data, observed or calculated at different height planes, to estimate the depth of the buried bodies without a priori specification of their structural index. For severely noise-corrupted data, our method utilizes data from different upward continuation heights, which can effectively reduce the influence of noise. Theoretical simulations of the gravity source model with and without noise illustrate the ability of the method to provide source depth information. Additionally, the simulations demonstrate that the new method is simple, computationally fast and accurate. Finally, we apply the method to the gravity data acquired over the Humble Salt Dome in the USA as an example. The results show a good correspondence to previous drilling and seismic interpretation results.
Is Approximate Number Precision a Stable Predictor of Math Ability?
ERIC Educational Resources Information Center
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research shows that children's ability to estimate numbers of items using their Approximate Number System (ANS) predicts later math ability. To more closely examine the predictive role of early ANS acuity on later abilities, we assessed the ANS acuity, math ability, and expressive vocabulary of preschoolers twice, six months apart. We…
Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions
NASA Technical Reports Server (NTRS)
Teubert, Christopher; Daigle, Matthew J.
2014-01-01
Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
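As a rough illustration of the lookup-table idea described above, the sketch below (Python, with an invented one-parameter steady-state model and made-up numbers; steady_state_output, the wear threshold, and all values are assumptions, not the paper's transducer model) maps steady-state measurements to wear estimates and regresses them against time to extrapolate a remaining-useful-life estimate.

```python
import numpy as np

# Hypothetical steady-state model: output as a function of a single wear (fault)
# parameter. In practice this would come from the physics-based I/P transducer model.
def steady_state_output(wear):
    return 100.0 - 40.0 * wear  # assumed monotonic response

# Build the lookup table offline over the plausible wear range.
wear_grid = np.linspace(0.0, 1.0, 1001)
output_grid = steady_state_output(wear_grid)

def estimate_wear(measured_output):
    """Map a steady-state measurement back to a wear estimate via the table."""
    idx = np.argmin(np.abs(output_grid - measured_output))
    return wear_grid[idx]

# Example: noisy steady-state measurements collected over time (hours).
times = np.array([0, 100, 200, 300, 400], dtype=float)
measurements = np.array([99.8, 97.9, 96.1, 93.8, 92.2])
wear_estimates = np.array([estimate_wear(m) for m in measurements])

# Fit a linear damage-progression model wear(t) = w0 + r*t by regression,
# then extrapolate to a wear threshold to get a remaining-useful-life estimate.
r, w0 = np.polyfit(times, wear_estimates, 1)
wear_threshold = 0.5  # assumed end-of-life wear level
rul = (wear_threshold - wear_estimates[-1]) / r
print(f"wear rate ~ {r:.2e}/h, RUL ~ {rul:.0f} h from last measurement")
```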
Changes of chromium concentration in alluvial sediments of the Obra river valley
NASA Astrophysics Data System (ADS)
Młynarczyk, Z.; Sobczyński, T.; Słowik, M.
2006-06-01
In this research work, changes in the concentration of a chosen chemical element in alluvial sediments have been used to estimate the relative age of floodplain deposits. The research concerning changes of chromium concentration in alluvial deposits was done in the Obra river valley near Międzyrzecz (Western Poland). Chromium was chosen because of its low ability to migrate in the groundwater environment. Moreover, this chemical element was used in the process of dyeing textiles in Międzyrzecz between the sixteenth and the nineteenth century. Comparison of changes in chromium concentration and the age of alluvial sediments (age estimated in years BP using the radiocarbon method) has shown that the sediments with higher chromium contents are much older than the period of development of the weaving industry in Międzyrzecz. Therefore, it is not possible to use changes in chromium concentration to estimate the relative age of floodplain sediments. Despite information in the literature about the low migration ability of this chemical component (Macioszczyk and Dobrzyński in Hydrogeochemia: strefy aktywnej wymiany wód podziemnych. PWN, Warszawa, 2002; Ball and Izbicki in Appl Geochem 19:1123–1135, 2004), the migration of chromium is so intensive that distinct changes in its concentration are observed even before the period of increased human activity.
Sano, Yuko; Kandori, Akihiko; Shima, Keisuke; Yamaguchi, Yuki; Tsuji, Toshio; Noda, Masafumi; Higashikawa, Fumiko; Yokoe, Masaru; Sakoda, Saburo
2016-06-01
We propose a novel index of Parkinson's disease (PD) finger-tapping severity, called "PDFTsi," for quantifying the severity of symptoms related to the finger tapping of PD patients with high accuracy. To validate the efficacy of PDFTsi, the finger-tapping movements of normal controls and PD patients were measured by using magnetic sensors, and 21 characteristics were extracted from the finger-tapping waveforms. To distinguish motor deterioration due to PD from that due to aging, the aging effect on finger tapping was removed from these characteristics. Principal component analysis (PCA) was applied to the age-normalized characteristics, and principal components that represented the motion properties of finger tapping were calculated. Multiple linear regression (MLR) with stepwise variable selection was applied to the principal components, and PDFTsi was calculated. The results indicate that PDFTsi has a high estimation ability, with a mean square error of 0.45. The estimation ability of PDFTsi is higher than that of the alternative method, MLR with stepwise variable selection without PCA, which has a mean square error of 1.30. This result suggests that PDFTsi can quantify PD finger-tapping severity accurately. Furthermore, the result of interpreting the model for calculating PDFTsi indicated that motion wideness and rhythm disorder are important for estimating PD finger-tapping severity.
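A minimal sketch of the PCA-plus-regression pipeline described above, using scikit-learn on synthetic data (the data, the number of retained components, and the omission of stepwise selection are all assumptions for illustration, not the authors' implementation):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical data: rows are subjects, columns are 21 age-normalized
# finger-tapping characteristics; y is a clinical severity rating.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 21))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=80)

# PCA to obtain principal components of the tapping characteristics, followed
# by a linear regression on the retained components.
model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(X, y)

pred = model.predict(X)
mse = np.mean((pred - y) ** 2)
print(f"in-sample mean square error: {mse:.2f}")
```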
Semmens, Darius J.; Diffendorfer, James E.; López-Hoffman, Laura; Shapiro, Carl D.
2011-01-01
Migratory species support ecosystem process and function in multiple areas, establishing ecological linkages between their different habitats. As they travel, migratory species also provide ecosystem services to people in many different locations. Previous research suggests there may be spatial mismatches between locations where humans use services and the ecosystems that produce them. This occurs with migratory species, between the areas that most support the species' population viability – and hence their long-term ability to provide services – and the locations where species provide the most ecosystem services. This paper presents a conceptual framework for estimating how much a particular location supports the provision of ecosystem services in other locations, and for estimating the extent to which local benefits are dependent upon other locations. We also describe a method for estimating the net payment, or subsidy, owed by or to a location that balances benefits received and support provided by locations throughout the migratory range of multiple species. The ability to quantify these spatial subsidies could provide a foundation for the establishment of markets that incentivize cross-jurisdictional cooperative management of migratory species. It could also provide a mechanism for resolving conflicts over the sustainable and equitable allocation of exploited migratory species.
Zisman, David A.; Karlamangla, Arun S.; Kawut, Steven M.; Shlobin, Oksana A.; Saggar, Rajeev; Ross, David J.; Schwarz, Marvin I.; Belperio, John A.; Ardehali, Abbas; Lynch, Joseph P.; Nathan, Steven D.
2008-01-01
Background We have developed a method to screen for pulmonary hypertension (PH) in idiopathic pulmonary fibrosis (IPF) patients, based on a formula to predict mean pulmonary artery pressure (MPAP) from standard lung function measurements. The objective of this study was to validate this method in a separate group of IPF patients. Methods Cross-sectional study of 60 IPF patients from two institutions. The accuracy of the MPAP estimation was assessed by examining the correlation between the predicted and measured MPAPs and the magnitude of the estimation error. The discriminatory ability of the method for PH was assessed using the area under the receiver operating characteristic curve (AUC). Results There was strong correlation in the expected direction between the predicted and measured MPAPs (r = 0.72; p < 0.0001). The estimated MPAP was within 5 mm Hg of the measured MPAP 72% of the time. The AUC for predicting PH was 0.85, and did not differ by institution. A formula-predicted MPAP > 21 mm Hg was associated with a sensitivity, specificity, positive predictive value, and negative predictive value of 95%, 58%, 51%, and 96%, respectively, for PH defined as MPAP from right-heart catheterization > 25 mm Hg. Conclusions A prediction formula for MPAP using standard lung function measurements can be used to screen for PH in IPF patients. PMID:18198245
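The screening statistics quoted above can be reproduced for any dataset with a few lines of code; the sketch below (assumed cutoffs of 21 mm Hg for the formula-predicted MPAP and 25 mm Hg for catheterization-defined PH, and made-up patient values) shows one way to compute them.

```python
import numpy as np

def screening_metrics(predicted_mpap, measured_mpap, screen_cutoff=21.0, ph_cutoff=25.0):
    """Sensitivity/specificity/PPV/NPV of a formula-based MPAP screen for PH.

    PH is defined here as catheterization MPAP > ph_cutoff; a positive screen
    is a formula-predicted MPAP > screen_cutoff (cutoffs taken from the abstract).
    """
    predicted_mpap = np.asarray(predicted_mpap, dtype=float)
    measured_mpap = np.asarray(measured_mpap, dtype=float)
    screen_pos = predicted_mpap > screen_cutoff
    ph = measured_mpap > ph_cutoff
    tp = np.sum(screen_pos & ph)
    fp = np.sum(screen_pos & ~ph)
    fn = np.sum(~screen_pos & ph)
    tn = np.sum(~screen_pos & ~ph)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative (made-up) values for a handful of patients:
print(screening_metrics([28, 19, 24, 17, 30], [31, 20, 22, 18, 27]))
```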
A comparison of the weights-of-evidence method and probabilistic neural networks
Singer, Donald A.; Kouda, Ryoichi
1999-01-01
The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake-Andeson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Although these data contain too few deposits, these tests demonstrate the neural network's ability to make unbiased probability estimates and to achieve lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias where most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake-Andeson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits, as can correlations of -1.0. Studies done in the 1970s on methods that use Bayes rule show that moderate correlations among attributes seriously affect estimates, and even small correlations lead to increases in misclassifications. Adverse effects have been observed with small to moderate correlations when only six to eight variables were used. Consistent evidence of upward-biased probability estimates from multivariate methods founded on Bayes rule must be of considerable concern to institutions and governmental agencies where unbiased estimates are required. In addition to increasing the misclassification rate, biased probability estimates make classification into deposit and nondeposit classes an arbitrary subjective decision. The probabilistic neural network has no problem dealing with correlated variables; its performance depends strongly on having a thoroughly representative training set. Probabilistic neural networks or logistic regression should receive serious consideration where unbiased estimates are required. The weights-of-evidence method would serve to estimate thresholds between anomalies and background and for exploratory data analysis.
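For readers unfamiliar with probabilistic neural networks, the following is a minimal Parzen-window sketch of the idea (the toy data and the smoothing parameter sigma are assumptions; a real application would use the full set of evidence layers and a thoroughly representative training set, as the abstract stresses):

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal probabilistic neural network (Parzen-window) classifier.

    Each training pattern contributes a Gaussian kernel; a test point is
    assigned to the class with the largest average kernel response, which
    approximates the class-conditional density used in Bayes' rule.
    """
    X_train = np.asarray(X_train, float)
    X_test = np.asarray(X_test, float)
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Toy example with two evidence layers (e.g. geochemical and geophysical scores).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y = np.array(["nondeposit", "nondeposit", "deposit", "deposit"])
print(pnn_predict(X, y, [[0.85, 0.75], [0.15, 0.25]]))
```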
Tools of Robustness for Item Response Theory.
ERIC Educational Resources Information Center
Jones, Douglas H.
This paper briefly demonstrates a few of the possibilities of a systematic application of robustness theory, concentrating on the estimation of ability when the true item response model does and does not fit the data. The definition of the maximum likelihood estimator (MLE) of ability is briefly reviewed. After introducing the notion of…
Blackman, Ian R; Giles, Tracey M
2017-04-01
In order to meet national Australian nursing registration requisites, nurses need to meet competency requirements for evidence-based practices (EBPs). A hypothetical model was formulated to explore factors that influenced Australian nursing students' ability and achievement in understanding and employing EBPs related to health care provision. A nonexperimental, descriptive survey method was used to identify self-reported EBP efficacy estimates of 375 completing undergraduate nursing students. Factors influencing participants' self-rated EBP abilities were validated by Rasch analysis and then modeled using the partial least squares path analysis (PLS Path) program. Graduating nursing students' ability to understand and apply EBPs for clinical improvement can be directly and indirectly predicted by eight variables, including their understanding of the analysis, critique and synthesis of clinically based nursing research, their ability to communicate research to others, and whether they had actually witnessed other staff delivering EBP. Forty-one percent of the variance in the nursing students' self-rated EBP efficacy scores can be accounted for by this model. Previous exposure to EBP studies facilitates participants' confidence with EBP, particularly with concurrent clinical EBP experiences. © 2017 Sigma Theta Tau International.
Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko
2013-01-01
The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
Robust and transferable quantification of NMR spectral quality using IROC analysis
NASA Astrophysics Data System (ADS)
Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.
2017-12-01
Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis
Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.
2015-01-01
Summary The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
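For reference, Harrell's c-index mentioned above counts concordance only over informative (usable) pairs; a minimal sketch of that computation, on toy data, might look like this:

```python
import numpy as np

def harrell_c_index(time, event, marker):
    """Harrell's c-index for a marker where larger values imply shorter survival.

    Only usable (informative) pairs are counted: pairs where the ordering of
    survival times is established despite censoring. Marker ties contribute 0.5,
    a common convention.
    """
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    marker = np.asarray(marker, float)
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # the pair (i, j) is usable if subject i has an observed event before time j
            if event[i] and time[i] < time[j]:
                usable += 1
                if marker[i] > marker[j]:
                    concordant += 1.0
                elif marker[i] == marker[j]:
                    concordant += 0.5
    return concordant / usable

# Toy example: higher marker should indicate earlier death.
print(harrell_c_index(time=[2, 4, 6, 8], event=[1, 1, 0, 1], marker=[0.9, 0.7, 0.5, 0.2]))
```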
Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C
2014-09-01
In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly if teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, non-experts and the Demonstrator of a developed software on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered slightly less accurate MAE (1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of pedo-pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted on images.
Early Life Conditions, Adverse Life Events, and Chewing Ability at Middle and Later Adulthood
Watt, Richard G.; Tsakos, Georgios
2014-01-01
Objectives. We sought to determine the extent to which early life conditions and adverse life events impact chewing ability in middle and later adulthood. Methods. Secondary analyses were conducted based on data from waves 2 and 3 of the Survey of Health, Ageing, and Retirement in Europe (SHARE), collected in the years 2006 to 2009 and encompassing information on current chewing ability and the life history of persons aged 50 years or older from 13 European countries. Logistic regression models were estimated with sequential inclusion of explanatory variables representing living conditions in childhood and adverse life events. Results. After controlling for current determinants of chewing ability at age 50 years or older, certain childhood and later life course socioeconomic, behavioral, and cognitive factors became evident as correlates of chewing ability at age 50 years or older. Specifically, childhood financial hardship was identified as an early life predictor of chewing ability at age 50 years or older (odds ratio = 1.58; 95% confidence interval = 1.22, 2.06). Conclusions. Findings suggest a potential enduring impact of early life conditions and adverse life events on oral health in middle and later adulthood and are relevant for public health decision-makers who design strategies for optimal oral health. PMID:24625140
Scene-based nonuniformity correction algorithm based on interframe registration.
Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao
2011-06-01
In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. This method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images to make any two detectors with the same scene produce the same output value. In this way, the accumulation of the registration error can be avoided and the NUC can be achieved. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied with infrared image sequences with simulated nonuniformity and infrared imagery with real nonuniformity. It shows a significantly fast and reliable fixed-pattern noise reduction and obtains an effective frame-by-frame adaptive estimation of each detector's gain and offset.
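A heavily simplified, offset-only sketch of the interframe-registration idea (phase correlation for the global shift, then an LMS-style update of per-detector offsets); the learning rate, the neglect of gain terms and of boundary wrap-around are simplifications for illustration, not the authors' algorithm:

```python
import numpy as np

def estimate_translation(frame_a, frame_b):
    """Integer global shift between two frames via phase correlation."""
    F = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def update_offsets(frame_a, frame_b, offsets, lr=0.05):
    """One offset-correction step: after registration, detectors observing the
    same scene should agree, so the residual drives the offset estimate."""
    dy, dx = estimate_translation(frame_a, frame_b)
    shifted_b = np.roll(frame_b, shift=(dy, dx), axis=(0, 1))
    shifted_off = np.roll(offsets, shift=(dy, dx), axis=(0, 1))
    residual = (frame_a - offsets) - (shifted_b - shifted_off)
    return offsets + lr * residual
```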
Attenuating Stereo Pixel-Locking via Affine Window Adaptation
NASA Technical Reports Server (NTRS)
Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.
2006-01-01
For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches, and it is more general as it applies not only to the ground plane.
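The standard parabolic sub-pixel refinement that gives rise to pixel-locking can be written in a few lines; the sketch below uses a toy cost curve and is only meant to show the interpolation step being discussed.

```python
import numpy as np

def subpixel_disparity(costs, d_int):
    """Standard parabolic refinement of an integer disparity estimate.

    costs: matching cost per integer disparity; d_int: index of the minimum.
    Fitting a parabola through the costs at d_int-1, d_int, d_int+1 and taking
    its vertex gives the sub-pixel correction.
    """
    c_m, c_0, c_p = costs[d_int - 1], costs[d_int], costs[d_int + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0:
        return float(d_int)
    return d_int + 0.5 * (c_m - c_p) / denom

costs = np.array([9.0, 4.0, 2.5, 3.5, 8.0])  # toy cost curve over disparities 0..4
print(subpixel_disparity(costs, int(np.argmin(costs))))
```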
Daily pan evaporation modelling using a neuro-fuzzy computing technique
NASA Astrophysics Data System (ADS)
Kişi, Özgür
2006-10-01
Evaporation, as a major component of the hydrologic cycle, is important in water resources development and management. This paper investigates the ability of the neuro-fuzzy (NF) technique to improve the accuracy of daily evaporation estimation. Five different NF models comprising various combinations of daily climatic variables, that is, air temperature, solar radiation, wind speed, pressure and humidity, are developed to evaluate the degree of effect of each of these variables on evaporation. A comparison is made between the estimates provided by the NF model and artificial neural networks (ANNs). The Stephens-Stewart (SS) method is also considered for the comparison. Various statistical measures are used to evaluate the performance of the models. Based on the comparisons, it was found that the NF computing technique could be employed successfully in modelling the evaporation process from the available climatic data. The ANN was also found to perform better than the SS method.
Schlund, M W
2000-10-01
Bedside hearing screenings are routinely conducted by speech and language pathologists for brain injury survivors during rehabilitation. Cognitive deficits resulting from brain injury, however, may interfere with obtaining estimates of auditory thresholds. Poor comprehension or attention deficits often compromise patient abilities to follow procedural instructions. This article describes the effects of jointly applying behavioral methods and psychophysical methods to improve two severely brain-injured survivors' attending and reporting on auditory test stimuli presentation. Treatment consisted of stimulus control training that involved differentially reinforcing responding in the presence and absence of an auditory test tone. Subsequent hearing screenings were conducted with novel auditory test tones and a common titration procedure. Results showed that prior stimulus control training improved attending and reporting such that hearing screenings were conducted and estimates of auditory thresholds were obtained.
Atmospheric Turbulence Estimates from a Pulsed Lidar
NASA Technical Reports Server (NTRS)
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
NASA Astrophysics Data System (ADS)
Cao, Lu; Li, Hengnian
2016-10-01
For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors compared with the traditional unscented Kalman filter (UKF).
Extraction of small boat harmonic signatures from passive sonar.
Ogden, George L; Zurk, Lisa M; Jones, Mark E; Peterson, Mary E
2011-06-01
This paper investigates the extraction of acoustic signatures from small boats using a passive sonar system. Noise radiated from small boats consists of broadband noise and harmonically related tones that correspond to engine and propeller specifications. A signal processing method to automatically extract the harmonic structure of noise radiated from small boats is developed. The Harmonic Extraction and Analysis Tool (HEAT) estimates the instantaneous fundamental frequency of the harmonic tones, refines the fundamental frequency estimate using a Kalman filter, and automatically extracts the amplitudes of the harmonic tonals to generate a harmonic signature for the boat. Results are presented that show the HEAT algorithm's ability to extract these signatures. © 2011 Acoustical Society of America
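As a rough sketch of the harmonic-signature idea (not the HEAT implementation: the fundamental frequency is assumed known here rather than tracked with a Kalman filter, and the window and bin-picking strategy are arbitrary choices):

```python
import numpy as np

def harmonic_signature(signal, fs, f0, n_harmonics=10):
    """Amplitudes of the first n harmonics of an (assumed known) fundamental f0.

    A windowed FFT is evaluated and the magnitude at the bin nearest each
    harmonic frequency is taken as that harmonic's amplitude.
    """
    x = np.asarray(signal, float) * np.hanning(len(signal))
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))
        amps.append(spec[idx])
    return np.array(amps)

# Toy example: 40 Hz fundamental with one harmonic in noise.
fs = 4000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 40 * t) + 0.4 * np.sin(2 * np.pi * 80 * t) + 0.05 * np.random.randn(t.size)
print(harmonic_signature(x, fs, f0=40.0, n_harmonics=5))
```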
A Portable Electronic Nose For Toxic Vapor Detection, Identification, and Quantification
NASA Technical Reports Server (NTRS)
Linnell, B. R.; Young, R. C.; Griffin, T. P.; Meneghelli, B. J.; Peterson, B. V.; Brooks, K. B.
2005-01-01
A new prototype instrument based on electronic nose (e-nose) technology has demonstrated the ability to identify and quantify many vapors of interest to the Space Program at their minimum required concentrations for both single vapors and two-component vapor mixtures, and may easily be adapted to detect many other toxic vapors. To do this, it was necessary to develop algorithms to classify unknown vapors, recognize when a vapor is not any of the vapors of interest, and estimate the concentrations of the contaminants. This paper describes the design of the portable e-nose instrument, test equipment setup, test protocols, pattern recognition algorithms, concentration estimation methods, and laboratory test results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, J; Fan, J; Hu, W
Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: A consistent result has been found between these two DVHs for each cancer, and the average of relative point-wise differences is about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
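A much-simplified sketch of KDE-based DVH prediction with a single predictive feature (distance to target); the simulated data, dose fall-off, grid sizes and subsampling are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Training data (hypothetical): per-voxel (distance-to-target, dose) pairs pooled
# over previous plans, simulated here with dose falling off with distance.
rng = np.random.default_rng(1)
dist_train = rng.uniform(0, 5, 2000)
dose_train = np.clip(60 * np.exp(-0.6 * dist_train) + rng.normal(0, 3, 2000), 0, None)

# Joint 2D KDE of (feature, dose) and marginal feature KDE from the training set.
joint_kde = gaussian_kde(np.vstack([dist_train, dose_train]))
feat_kde = gaussian_kde(dist_train)

# New patient: only the geometric feature is known for each OAR voxel.
dist_new = rng.uniform(0, 5, 2000)

# Evaluate p(dose | feature) on a dose grid and marginalize over a subsample of
# the new patient's feature values.
dose_grid = np.linspace(0, 70, 71)
feat_sample = rng.choice(dist_new, size=200)
grid_f, grid_d = np.meshgrid(feat_sample, dose_grid)
density = joint_kde(np.vstack([grid_f.ravel(), grid_d.ravel()])).reshape(grid_d.shape)
cond = density / feat_kde(feat_sample)[np.newaxis, :]   # p(dose | feature)
p_dose = cond.mean(axis=1)                               # marginalize over new features
p_dose /= np.trapz(p_dose, dose_grid)

# The predicted DVH is the complementary cumulative distribution of the dose.
dvh = 1.0 - np.cumsum(p_dose) * (dose_grid[1] - dose_grid[0])
print(np.interp(30.0, dose_grid, dvh))  # predicted fractional volume receiving >= 30 Gy
```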
The degree-related clustering coefficient and its application to link prediction
NASA Astrophysics Data System (ADS)
Liu, Yangyang; Zhao, Chengli; Wang, Xiaojie; Huang, Qiangjuan; Zhang, Xue; Yi, Dongyun
2016-07-01
Link prediction plays a significant role in explaining the evolution of networks. However, it is still a challenging problem that has been addressed only with topological information in recent years. Based on the belief that network nodes with a great number of common neighbors are more likely to be connected, many similarity indices have achieved considerable accuracy and efficiency. Motivated by the natural assumption that the effect of missing links on the estimation of a node's clustering ability could be related to node degree, in this paper we propose a degree-related clustering coefficient index to quantify the clustering ability of nodes. Unlike the classical clustering coefficient, our new coefficient is highly robust when the observed bias of links is considered. Furthermore, we propose a degree-related clustering ability path (DCP) index, which applies the proposed coefficient to the link prediction problem. Experiments on 12 real-world networks show that our proposed method is highly accurate and robust compared with four common-neighbor-based similarity indices (Common Neighbors (CN), Adamic-Adar (AA), Resource Allocation (RA), and Preferential Attachment (PA)), and the recently introduced clustering ability (CA) index.
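For context, the four baseline common-neighbor indices named above can be computed directly from the graph; a small sketch using networkx (the node pair and graph are arbitrary examples):

```python
import math
import networkx as nx

def similarity_scores(G, u, v):
    """Classical common-neighbor-based link-prediction scores for a node pair."""
    cn = set(nx.common_neighbors(G, u, v))
    return {
        "CN": len(cn),
        # degree-1 common neighbors are skipped to avoid division by log(1) = 0
        "AA": sum(1.0 / math.log(G.degree(w)) for w in cn if G.degree(w) > 1),
        "RA": sum(1.0 / G.degree(w) for w in cn),
        "PA": G.degree(u) * G.degree(v),
    }

G = nx.karate_club_graph()
print(similarity_scores(G, 0, 33))
```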
Weitz, Melissa; Coburn, Jeffrey B; Salinas, Edgar
2008-05-01
This paper estimates national methane emissions from solid waste disposal sites in Panama over the time period 1990-2020 using both the 2006 Intergovernmental Panel on Climate Change (IPCC) Waste Model spreadsheet and the default emissions estimate approach presented in the 1996 IPCC Good Practice Guidelines. The IPCC Waste Model has the ability to calculate emissions from a variety of solid waste disposal site types, taking into account country- or region-specific waste composition and climate information, and can be used with a limited amount of data. Countries with detailed data can also run the model with country-specific values. The paper discusses methane emissions from solid waste disposal; explains the differences between the two methodologies in terms of data needs, assumptions, and results; describes solid waste disposal circumstances in Panama; and presents the results of this analysis. It also demonstrates the Waste Model's ability to incorporate landfill gas recovery data and to make projections. The former default method methane emissions estimates are 25 Gg in 1994, and range from 23.1 Gg in 1990 to a projected 37.5 Gg in 2020. The Waste Model estimates are 26.7 Gg in 1994, ranging from 24.6 Gg in 1990 to 41.6 Gg in 2020. Emissions estimates for Panama produced by the new model were, on average, 8% higher than estimates produced by the former default methodology. The increased estimate can be attributed to the inclusion of all solid waste disposal in Panama (as opposed to only disposal in managed landfills), but the increase was offset somewhat by the different default factors and regional waste values between the 1996 and 2006 IPCC guidelines, and the use of the first-order decay model with a time delay for waste degradation in the IPCC Waste Model.
Vorberg, Susann
2013-01-01
Abstract Biodegradability describes the capacity of substances to be mineralized by free‐living bacteria. It is a crucial property in estimating a compound’s long‐term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660. PMID:27485201
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
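For reference, the single-marker mixed linear model reviewed here is commonly written as follows (the notation is assumed for illustration; K is the kinship matrix, and the variance components are typically estimated by REML, with BLUPs of the random effects absorbing relatedness and stratification):

```latex
% y: phenotypes, X: fixed effects including the tested SNP, Z: incidence matrix,
% u: random polygenic effects, K: kinship matrix, e: residuals.
\begin{aligned}
\mathbf{y} &= \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{e},\\
\mathbf{u} &\sim \mathcal{N}\!\left(\mathbf{0},\, \sigma_g^{2}\mathbf{K}\right), \qquad
\mathbf{e} \sim \mathcal{N}\!\left(\mathbf{0},\, \sigma_e^{2}\mathbf{I}\right).
\end{aligned}
```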
Calculation of Cardiac Kinetic Energy Index from PET images.
Sims, John; Oliveira, Marco Antônio; Meneghetti, José Claudio; Gutierrez, Marco Antônio
2015-01-01
Cardiac function can be assessed from displacement measurements in nuclear medicine imaging modalities. Using positron emission tomography (PET) image sequences with Rubidium-82, we propose and estimate the total Kinetic Energy Index (KEf), obtained from the velocity field, which was calculated using 3D optical flow (OF) methods applied over the temporal image sequence. However, it was found that the brightness of the image varied unexpectedly between frames, violating the constant-brightness assumption of the OF method and causing large errors in estimating the velocity field. Therefore, total brightness was equalized across image frames, and the adjusted configuration was tested with rest perfusion images acquired from individuals with normal (n=30) and low (n=33) cardiac function. For these images KEf was calculated as 0.5731±0.0899 and 0.3812±0.1146 for individuals with normal and low cardiac function, respectively. The ability of KEf to properly classify patients into the two groups was tested with a ROC analysis, with the area under the curve estimated as 0.906. To our knowledge this is the first time that KEf has been applied to PET images.
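A minimal sketch of a kinetic-energy summary computed from an optical-flow velocity field (unit mass per voxel and the omission of the cycle-wise normalization that makes KEf an index are simplifying assumptions):

```python
import numpy as np

def kinetic_energy(velocity_field, mask):
    """Kinetic-energy summary of a 3D optical-flow velocity field.

    velocity_field: array of shape (3, nx, ny, nz) with per-voxel velocity
    components; mask: boolean myocardial mask. The kinetic energy per voxel
    is taken as 0.5*|v|^2 (unit mass assumed) and summed over the mask.
    """
    v2 = np.sum(velocity_field ** 2, axis=0)
    return 0.5 * np.sum(v2[mask])

# Toy example on a 4x4x4 volume.
rng = np.random.default_rng(0)
v = rng.normal(size=(3, 4, 4, 4))
mask = np.ones((4, 4, 4), dtype=bool)
print(kinetic_energy(v, mask))
```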
Association Between Connecticut’s Permit-to-Purchase Handgun Law and Homicides
Rudolph, Kara E.; Stuart, Elizabeth A.; Vernick, Jon S.
2015-01-01
Objectives. We sought to estimate the effect of Connecticut’s implementation of a handgun permit-to-purchase law in October 1995 on subsequent homicides. Methods. Using the synthetic control method, we compared Connecticut’s homicide rates after the law’s implementation to rates we would have expected had the law not been implemented. To estimate the counterfactual, we used longitudinal data from a weighted combination of comparison states identified based on the ability of their prelaw homicide trends and covariates to predict prelaw homicide trends in Connecticut. Results. We estimated that the law was associated with a 40% reduction in Connecticut’s firearm homicide rates during the first 10 years that the law was in place. By contrast, there was no evidence for a reduction in nonfirearm homicides. Conclusions. Consistent with prior research, this study demonstrated that Connecticut’s handgun permit-to-purchase law was associated with a subsequent reduction in homicide rates. As would be expected if the law drove the reduction, the policy’s effects were only evident for homicides committed with firearms. PMID:26066959
Noguchi, Yoshinori; Matsui, Kunihiko; Imura, Hiroshi; Kiyota, Masatomo; Fukui, Tsuguya
2004-05-01
Quite often medical students or novice residents have difficulty in ruling out diseases even though they are quite unlikely and, due to this difficulty, such students and novice residents unnecessarily repeat laboratory or imaging tests. To explore whether or not a carefully designed short training course teaching Bayesian probabilistic thinking improves the diagnostic ability of medical students. Ninety students at 2 medical schools were presented with clinical scenarios of coronary artery disease corresponding to high, low, and intermediate pretest probabilities. The students' estimates of test characteristics of exercise stress test, and pretest and posttest probability for each scenario were evaluated before and after the short course. The pretest probability estimates by the students, as well as their proficiency in applying Bayes's theorem, were improved in the high pretest probability scenario after the short course. However, estimates of pretest probability in the low pretest probability scenario, and their proficiency in applying Bayes's theorem in the intermediate and low pretest probability scenarios, showed essentially no improvement. A carefully designed, but traditionally administered, short course could not improve the students' abilities in estimating pretest probability in a low pretest probability setting, and subsequently students remained incompetent in ruling out disease. We need to develop educational methods that cultivate a well-balanced clinical sense to enable students to choose a suitable diagnostic strategy as needed in a clinical setting without being one-sided to the "rule-in conscious paradigm."
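The probabilistic reasoning being taught can be condensed into a few lines: convert the pretest probability to odds, multiply by the likelihood ratio of the test result, and convert back. The sketch below uses assumed exercise-stress-test operating characteristics purely for illustration.

```python
def posttest_probability(pretest_prob, sensitivity, specificity, test_positive=True):
    """Bayes' theorem in odds-likelihood-ratio form for a dichotomous test."""
    lr = sensitivity / (1 - specificity) if test_positive else (1 - sensitivity) / specificity
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Low pretest probability scenario (assumed test characteristics):
# a positive test leaves the probability modest,
print(posttest_probability(0.05, sensitivity=0.68, specificity=0.77, test_positive=True))
# while a negative test essentially rules the disease out.
print(posttest_probability(0.05, sensitivity=0.68, specificity=0.77, test_positive=False))
```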
Diallel analysis for technological traits in upland cotton.
Queiroz, D R; Farias, F J C; Cavalcanti, J J V; Carvalho, L P; Neder, D G; Souza, L S S; Farias, F C; Teodoro, P E
2017-09-21
Final cotton quality is of great importance, and it depends on intrinsic and extrinsic fiber characteristics. The objective of this study was to estimate general (GCA) and specific (SCA) combining abilities for technological fiber traits among six upland cotton genotypes and their fifteen hybrid combinations, as well as to determine the effective genetic effects in controlling the traits evaluated. In 2015, six cotton genotypes: FM 993, CNPA 04-2080, PSC 355, TAM B 139-17, IAC 26, and TAMCOT-CAMD-E and fifteen hybrid combinations were evaluated at the Experimental Station of Embrapa Algodão, located in Patos, PB, Brazil. The experimental design was a randomized block with three replications. Technological fiber traits evaluated were: length (mm); strength (gf/tex); fineness (Micronaire index); uniformity (%); short fiber index (%), and spinning index. The diallel analysis was carried out according to the methodology proposed by Griffing, using method II and model I. Significant differences were detected between the treatments and combining abilities (GCA and SCA), indicating the variability of the study material. There was a predominance of additive effects for the genetic control of all traits. TAM B 139-17 presented the best GCA estimates for all traits. The best combinations were: FM 993 x TAM B 139-17, CNPA 04-2080 x PSC 355, FM 993 x TAMCOT-CAMD-E, PSC 355 x TAM B 139-17, and TAM B 139-17 x TAMCOT-CAMD-E, by obtaining the best estimates of SCA, with one of the parents having favorable estimates for GCA.
Problems with sampling desert tortoises: A simulation analysis based on field data
Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.
2005-01-01
The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km² (1-mi²) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from two 1-mi² plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi² data using 1-km² and 0.25-km² plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi² plots. We also tested distance sampling by saturating a 1-mi² site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km² plots did not differ significantly from the estimates derived at 1-mi². The 0.25-km² subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that the poor performance and bias of both sampling procedures were driven by insufficient sample size, suggesting that all efforts must be directed to increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
Dictionary-based fiber orientation estimation with improved spatial consistency.
Ye, Chuyang; Prince, Jerry L
2018-02-01
Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that FORNI+ produces FOs with better quality compared with competing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Foeniculum vulgare essential oils: chemical composition, antioxidant and antimicrobial activities.
Miguel, Maria Graça; Cruz, Cláudia; Faleiro, Leonor; Simões, Mariana T F; Figueiredo, Ana Cristina; Barroso, José G; Pedro, Luis G
2010-02-01
The essential oils from Foeniculum vulgare commercial aerial parts and fruits were isolated by hydrodistillation, with different distillation times (30 min, 1 h, 2 h and 3 h), and analyzed by GC and GC-MS. The antioxidant ability was estimated using four distinct methods. Antibacterial activity was determined by the agar diffusion method. Remarkable differences, worrying from the quality and safety point of view, were detected in the essential oils. trans-Anethole (31-36%), alpha-pinene (14-20%) and limonene (11-13%) were the main components of the essential oils isolated from F. vulgare dried aerial parts, whereas methyl chavicol (= estragole) (79-88%) was dominant in the fruit oils. With the DPPH method, the plant oils showed better antioxidant activity than the fruit oils. With the TBARS method and at higher concentrations, fennel essential oils showed a pro-oxidant activity. None of the oils showed a hydroxyl radical scavenging capacity > 50%, but they showed an ability to inhibit 5-lipoxygenase. The essential oils showed a very low antimicrobial activity.
Estimation hydrophilic-lipophilic balance number of surfactants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawignya, Harsa, E-mail: harsa-paw@yahoo.co.id; Chemical Engineering Departement University of Pembangunan Nasional Yogyakarta; Prasetyaningrum, Aji, E-mail: ajiprasetyaningrum@gmail.com
Any type of surfactant has a different hydrophilic-lipophilic balance (HLB) number. There are several methods for determining the HLB number: from physical properties of the surfactant (solubility, cloud point and interfacial tension), from CMC methods, and from thermodynamic properties (Gibbs free energy). This paper proposes to determine HLB numbers from these interrelated methods. The results of the study indicated that the CMC method described by Hair and Moulik is especially suitable for nonionic surfactants. The application of excess Gibbs free energy, and by implication the activity coefficient, provides the ability to predict the behavior of surfactants in multicomponent mixtures of different concentrations. Determination of the HLB number by the solubility and cloud point parameters is specific to anionic and nonionic surfactants, but these methods are not available for cationic surfactants.
2012-01-01
Background Although data from longitudinal studies are sparse, effort-reward imbalance (ERI) seems to affect work ability. However, the potential pathway from restricted work ability to ERI must also be considered. Therefore, the aim of our study was to analyse cross-sectional and longitudinal associations between ERI and work ability and vice versa. Methods Data come from the Second German Sociomedical Panel of Employees. Logistic regression models were estimated to determine cross-sectional and longitudinal associations. The sample used to predict new cases of poor or moderate work ability was restricted to cases with good or excellent work ability at baseline. The sample used to predict new cases of ERI was restricted to persons without ERI at baseline. Results The cross-sectional analysis included 1501 full-time employed persons. The longitudinal analyses considered 600 participants with good or excellent baseline work ability and 666 participants without baseline ERI, respectively. After adjustment for socio-demographic variables, health-related behaviour and factors of the work environment, ERI was cross-sectionally associated with poor or moderate work ability (OR = 1.980; 95% CI: 1.428 to 2.747). Longitudinally, persons with ERI had 2.1 times higher odds of poor or moderate work ability after one year (OR = 2.093; 95% CI: 1.047 to 4.183). Conversely, persons with poor or moderate work ability had 2.6 times higher odds of an ERI after one year (OR = 2.573; 95% CI: 1.314 to 5.041). Conclusions Interventions that enable workers to cope with ERI or address indicators of ERI directly could promote the maintenance of work ability. Integration management programmes for persons with poor work ability should also consider their psychosocial demands. PMID:23067110
Myocardial strains from 3D displacement encoded magnetic resonance imaging
2012-01-01
Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
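A minimal sketch of the polynomial-fit idea for a single neighborhood, restricted to a first-order (linear) model for brevity (the data are synthetic; the paper's method fits local polynomial models over regions of the DENSE displacement field):

```python
import numpy as np

def green_lagrange_strain(points, displacements):
    """Strain from a first-order polynomial fit of a local displacement field.

    points: (n, 3) reference coordinates; displacements: (n, 3) displacement
    vectors. A linear model u(X) = a + G X is fitted by least squares; G
    approximates the displacement gradient, from which the Green-Lagrange
    strain tensor E = 0.5 (F^T F - I), with F = I + G, follows.
    """
    X = np.hstack([np.ones((points.shape[0], 1)), points])   # design matrix
    coeffs, *_ = np.linalg.lstsq(X, displacements, rcond=None)
    G = coeffs[1:].T                                          # displacement gradient du/dX
    F = np.eye(3) + G
    return 0.5 * (F.T @ F - np.eye(3))

# Toy example: 10% uniform stretch along x gives E_xx ~ 0.105.
pts = np.random.default_rng(0).uniform(size=(20, 3))
disp = np.column_stack([0.1 * pts[:, 0], np.zeros(20), np.zeros(20)])
print(np.round(green_lagrange_strain(pts, disp), 3))
```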
Feasibility of ultra-low-volume indoor space spraying for dengue control in Southern Thailand.
Ditsuwan, Thanittha; Liabsuetrakul, Tippawan; Ditsuwan, Vallop; Thammapalo, Suwich
2013-02-01
To assess the feasibility of conducting standard indoor space spraying using ultra-low-volume (SID-ULV) in terms of willingness to pay (WTP) and ability to pay (ATP) and the ability of local administrative organisations (LAO) in lower Southern Thailand to conduct space spraying. Cross-sectional study. The executive leaders of each LAO were asked to state their WTP and ATP for SID-ULV. Willingness to pay was measured by the payment card and open-ended question methods. Ability to pay was calculated using the budget allocation for space spraying and estimated expenditure for SID-ULV. Ability to conduct the SID-ULV was assessed by interviewing the spraymen. Average WTP and ATP were calculated and uncertainties were estimated using a bootstrapping technique. Ninety-three percent of executive leaders were willing to pay for SID-ULV. The average WTP per case was USD 259 (95% confidence interval [CI] 217-303). Thirty-eight percent of all LAO had actual ATP and 60% had ideal ATP. The average annual budget allocated for space spraying was USD 2327 (95% CI: 1654-3138). The amount of money LAO were willing to pay did not vary significantly between their different types, but ATP did. Thirty-two percent of spraymen could not complete all nine procedures of SID-ULV. Although WTP for SID-ULV space spraying was high, ATP was low, which revealed the flexibility of budget allocation for SID-ULV in each LAO. The spraymen require training in SID-ULV space spraying. © 2012 Blackwell Publishing Ltd.
Effects of Differential Item Functioning on Examinees' Test Performance and Reliability of Test
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2017-01-01
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
Development on electromagnetic impedance function modeling and its estimation
NASA Astrophysics Data System (ADS)
Sutarno, D.
2015-09-01
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process, and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances was developed based on the edge element method. In the CSAMT case, the efforts focused on addressing the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. For that objective, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) that are frequently superimposed on normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models. The full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including the near-, transition-, and far-field zones, and consequently the plane-wave correction is no longer needed for the impedances. In the resulting robust impedance estimates, outlier contamination is removed and the self-consistency between the real and imaginary parts of the impedance estimates is guaranteed. Using synthetic and real MT data, it is shown that the proposed robust estimation methods always yield impedance estimates that are better than conventional least squares (LS) estimation, even under conditions of severe noise contamination. A recent development in constrained robust CSAMT impedance estimation is also discussed. Using synthetic CSAMT data, it is demonstrated that the proposed methods can produce usable CSAMT transfer functions for all measurement zones.
Methods for LWIR Radiometric Calibration and Characterization
NASA Technical Reports Server (NTRS)
Ryan, Robert; Pagnutti, Mary; Zanoni, Vicki; Harrington, Gary; Howell, Dane; Stewart, Randy
2002-01-01
The utility of a thermal remote sensing system increases with its ability to retrieve surface temperature or radiance accurately. The radiometer measures the water surface radiant temperature. Combining these measurements with atmospheric pressure, temperature, and water vapor profiles, a top-of-the-atmosphere radiance estimate can be calculated with a radiative transfer code and compared to the sensor's output. A novel approach has been developed, using an uncooled infrared camera mounted on a boom, to quantify buoy effects.
Robbers, J E; Hong, S; Tuite, J; Carlton, W W
1978-01-01
By using thin-layer chromatography and infrared spectroscopy, xanthomegnin and viomellein have been isolated and identified from species of the Aspergillus ochraceus group. A correlation was established between the occurrence of these fungal quinones in the fungal cultural products and the ability of these products to induce mycotoxicosis in mice. In addition, a method was employed to estimate the amount of xanthomegnin and viomellein produced by the fungi. PMID:736540
NASA Astrophysics Data System (ADS)
Gado, Tamer A.; Nguyen, Van-Thanh-Van
2016-04-01
This paper, the second of a two-part paper, investigates the nonstationary behaviour of flood peaks in Quebec (Canada) by analyzing the annual maximum flow series (AMS) available for the common 1966-2001 period from a network of 32 watersheds. Temporal trends in the mean of flood peaks were examined by the nonparametric Mann-Kendall test. The significance of the detected trends over the whole province is also assessed by a bootstrap test that preserves the cross-correlation structure of the network. Furthermore, the LM-NS method (introduced in the first part) is used to parametrically model the AMS, investigating its applicability to real data, to account for temporal trends in the moments of the time series. In this study two probability distributions (GEV & Gumbel) were selected to model four different types of time-varying moments of the historical time series considered, comprising eight competing models. The selected models are: two stationary models (GEV0 & Gumbel0), two nonstationary models in the mean as a linear function of time (GEV1 & Gumbel1), two nonstationary models in the mean as a parabolic function of time (GEV2 & Gumbel2), and two nonstationary models in the mean and the log standard deviation as linear functions of time (GEV11 & Gumbel11). The eight models were applied to flood data available for each watershed and their performance was compared to identify the best model for each location. The comparative methodology involves two phases: (1) a descriptive ability based on likelihood-based optimality criteria such as the Bayesian Information Criterion (BIC) and the deviance statistic; and (2) a predictive ability based on the residual bootstrap. According to the Mann-Kendall test and the LM-NS method, a quarter of the analyzed stations show significant trends in the AMS. All of the significant trends are negative, indicating decreasing flood magnitudes in Quebec. It was found that the LM-NS method could provide accurate flood estimates in the context of nonstationarity. The results have indicated the importance of taking into consideration the nonstationary behaviour of the flood series in order to improve the quality of flood estimation. The results also provided a general impression on the possible impacts of climate change on flood estimation in the Quebec province.
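To make the model family concrete, the sketch below fits one of the nonstationary candidates, a Gumbel distribution whose location varies linearly with time (the Gumbel1 model), by direct maximum likelihood. It is an illustrative re-implementation on synthetic data, not the LM-NS code used in the study.

```python
import numpy as np
from scipy.optimize import minimize

def gumbel1_nll(params, years, ams):
    """Negative log-likelihood of a Gumbel (max) model whose location varies
    linearly with time: mu_t = mu0 + mu1 * t, constant scale sigma."""
    mu0, mu1, log_sigma = params
    sigma = np.exp(log_sigma)                 # keep sigma positive
    z = (ams - (mu0 + mu1 * years)) / sigma
    return np.sum(np.log(sigma) + z + np.exp(-z))

# Toy annual-maximum series with a weak negative trend (illustrative values only)
years = np.arange(36)                          # 1966-2001 re-indexed to 0..35
ams = 300 - 1.5 * years + 40 * np.random.gumbel(size=36)

fit = minimize(gumbel1_nll, x0=[ams.mean(), 0.0, np.log(ams.std())],
               args=(years, ams), method="Nelder-Mead")
mu0, mu1, log_sigma = fit.x
print(f"trend in location: {mu1:.2f} per year, scale: {np.exp(log_sigma):.1f}")
```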
NASA Astrophysics Data System (ADS)
Fonseca, E. S. R.; de Jesus, M. E. P.
2007-07-01
The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can considerably improve the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions better close to collimated sources and boundaries than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering to absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.
Dai, Haichao; Shi, Yan; Wang, Yilin; Sun, Yujing; Hu, Jingting; Ni, Pengjuan; Li, Zhuang
2014-03-15
In this work, we propose a facile, environmentally friendly and cost-effective assay for melamine with BSA-stabilized gold nanoclusters (AuNCs) as a fluorescence reader. Melamine, which has a multi-nitrogen heterocyclic ring, is prone to coordinate with Hg(2+). This property causes the anti-quenching ability of Hg(2+) towards AuNCs by decreasing the metallophilic interaction between Hg(2+) and Au(+). By this method, a detection limit down to 0.15 µM is obtained, which is approximately 130 times lower than the US Food and Drug Administration's estimated melamine safety limit of 20 µM. Furthermore, several real samples spiked with melamine, including raw milk and milk powder, are analyzed using the sensing system with excellent recoveries. This gold-nanocluster-based fluorescent method could find applications in highly sensitive detection of melamine in real samples. © 2013 Elsevier B.V. All rights reserved.
Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images
Frey, Eric C.; Humm, John L.; Ljungberg, Michael
2012-01-01
The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
Rosen, Lisa M.; Liu, Tao; Merchant, Roland C.
2016-01-01
BACKGROUND Blood and body fluid exposures are frequently evaluated in emergency departments (EDs). However, efficient and effective methods for estimating their incidence are not yet established. OBJECTIVE Evaluate the efficiency and accuracy of estimating statewide ED visits for blood or body fluid exposures using International Classification of Diseases, Ninth Revision (ICD-9), code searches. DESIGN Secondary analysis of a database of ED visits for blood or body fluid exposure. SETTING EDs of 11 civilian hospitals throughout Rhode Island from January 1, 1995, through June 30, 2001. PATIENTS Patients presenting to the ED for possible blood or body fluid exposure were included, as determined by prespecified ICD-9 codes. METHODS Positive predictive values (PPVs) were estimated to determine the ability of 10 ICD-9 codes to distinguish ED visits for blood or body fluid exposure from ED visits that were not for blood or body fluid exposure. Recursive partitioning was used to identify an optimal subset of ICD-9 codes for this purpose. Random-effects logistic regression modeling was used to examine variations in ICD-9 coding practices and styles across hospitals. Cluster analysis was used to assess whether the choice of ICD-9 codes was similar across hospitals. RESULTS The PPV for the original 10 ICD-9 codes was 74.4% (95% confidence interval [CI], 73.2%–75.7%), whereas the recursive partitioning analysis identified a subset of 5 ICD-9 codes with a PPV of 89.9% (95% CI, 88.9%–90.8%) and a misclassification rate of 10.1%. The ability, efficiency, and use of the ICD-9 codes to distinguish types of ED visits varied across hospitals. CONCLUSIONS Although an accurate subset of ICD-9 codes could be identified, variations across hospitals related to hospital coding style, efficiency, and accuracy greatly affected estimates of the number of ED visits for blood or body fluid exposure. PMID:22561713
Kahalley, Lisa S.; Winter-Greenberg, Amanda; Stancel, Heather; Ris, M. Douglas; Gragert, Marsha
2016-01-01
Introduction Pediatric brain tumor survivors are at risk for working memory and processing speed impairment. The General Ability Index (GAI) provides an estimate of intellectual functioning that is less influenced by working memory and processing speed than a Full Scale IQ (FSIQ). The Cognitive Proficiency Index (CPI) provides a measure of efficient information processing derived from working memory and processing speed tasks. We examined the utility of the GAI and CPI to quantify neurocognitive outcomes in a sample of pediatric brain tumor survivors. Methods GAI, CPI, and FSIQ scores from the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) were examined for 57 pediatric brain tumor survivors (ages 6–16) treated with cranial radiation therapy (RT). Results GAI scores were higher than FSIQ and CPI scores, both p < .001. Lower CPI scores were associated with history of craniospinal irradiation and time since RT. Lower FSIQ and GAI scores were associated with higher RT dose and time since RT. The rate of clinically significant GAI-FSIQ discrepancies in our sample was greater than observed in the WISC-IV standardization sample, p < .001. Estimated premorbid IQ scores were higher than GAI, p < .01, and FSIQ scores, p < .001. Conclusions Pediatric brain tumor survivors exhibit weaker cognitive proficiency than expected for age, while general reasoning ability remains relatively spared. The GAI may be useful to quantify the intellectual potential of a survivor when appropriate accommodations are in place for relative cognitive proficiency weaknesses. The CPI may be a particularly sensitive outcome measure of treatment-related cognitive change in this population. PMID:27295192
ERIC Educational Resources Information Center
Bifulco, Robert
2012-01-01
The ability of nonexperimental estimators to match impact estimates derived from random assignment is examined using data from the evaluation of two interdistrict magnet schools. As in previous within-study comparisons, nonexperimental estimates differ from estimates based on random assignment when nonexperimental estimators are implemented…
The validity of a web-based FFQ assessed by doubly labelled water and multiple 24-h recalls.
Medin, Anine C; Carlsen, Monica H; Hambly, Catherine; Speakman, John R; Strohmaier, Susanne; Andersen, Lene F
2017-12-01
The aim of this study was to validate the estimated habitual dietary intake from a newly developed web-based FFQ (WebFFQ), for use in an adult population in Norway. In total, ninety-two individuals were recruited. Total energy expenditure (TEE) measured by doubly labelled water was used as the reference method for energy intake (EI) in a subsample of twenty-nine women, and multiple 24-h recalls (24HR) were used as the reference method for the relative validation of macronutrients and food groups in the entire sample. Absolute differences, ratios, crude and deattenuated correlations, cross-classifications, Bland-Altman plot and plots between misreporting of EI (EI-TEE) and the relative misreporting of food groups (WebFFQ-24HR) were used to assess the validity. Results showed that EI on group level was not significantly different from TEE measured by doubly labelled water (0·7 MJ/d), but ranking abilities were poor (r -0·18). The relative validation showed an overestimation for the majority of the variables using absolute intakes, especially for the food groups 'vegetables' and 'fish and shellfish', but an improved agreement between the test and reference tool was observed for energy adjusted intakes. Deattenuated correlation coefficients were between 0·22 and 0·89, and low levels of grossly misclassified individuals (0-3 %) were observed for the majority of the energy adjusted variables for macronutrients and food groups. In conclusion, energy estimates from the WebFFQ should be used with caution, but the estimated absolute intakes on group level and ranking abilities seem acceptable for macronutrients and most food groups.
Pandit, Jaideep J; Tavare, Aniket
2011-07-01
It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because this can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
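The probability calculation described can be illustrated with a normal approximation in which the planned list duration has mean equal to the sum of the operations' mean durations and variance equal to the sum of their variances. The sketch below uses hypothetical durations and is not the authors' exact formula or spreadsheet.

```python
import numpy as np
from scipy.stats import norm

def prob_list_finishes(mean_durations_min, sd_durations_min, scheduled_min):
    """Normal approximation: total duration ~ N(sum of means, sum of variances).
    Returns the probability that the planned list finishes within schedule."""
    total_mean = sum(mean_durations_min)
    total_sd = np.sqrt(sum(sd ** 2 for sd in sd_durations_min))
    return norm.cdf(scheduled_min, loc=total_mean, scale=total_sd)

# Illustrative list: three procedures booked into a 240-minute session
means = [90, 75, 60]   # historical mean operation times (minutes)
sds = [20, 15, 12]     # historical standard deviations (minutes)
print(f"P(finish on time) = {prob_list_finishes(means, sds, 240):.2f}")
```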
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Brief Report: Use of DQ for Estimating Cognitive Ability in Young Children with Autism
ERIC Educational Resources Information Center
Delmolino, Lara M.
2006-01-01
The utility of Developmental Quotients (DQ) from the Psychoeducational Profile--Revised (PEP-R) to estimate cognitive ability in young children with autism was assessed. DQ scores were compared to scores from the Stanford-Binet Intelligence Scales--Fourth Edition (SB-FE) for 27 preschool students with autism. Overall and domain DQ's on the PEP-R…
Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.
2013-01-01
Summary Background and objectives Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, & measurements From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict likelihood of missing BCr. Propensity scoring identified 6502 patients with highest likelihood of missing BCr among 13,003 patients with known BCr to simulate a “missing” data scenario while preserving actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI were compared with that of eGFR 75. Results All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001). Improvements in misclassification were greater in patients with impaired kidney function (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
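A minimal sketch of the contrast between a fixed surrogate value and multiple imputation is given below, using scikit-learn's IterativeImputer on hypothetical admission data. The covariates, the constant stand-in for the eGFR-75 back-calculation, and the simple pooling are illustrative assumptions rather than the study's models.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical admission data; baseline_scr is missing for some patients
df = pd.DataFrame({
    "age": [54, 71, 63, 80, 45, 68],
    "female": [0, 1, 1, 0, 1, 0],
    "admission_scr": [1.1, 1.8, 0.9, 2.3, 0.8, 1.4],
    "baseline_scr": [1.0, np.nan, 0.8, np.nan, 0.7, 1.2],
})

# Surrogate approach: one assumed value for everyone with missing data
surrogate = df["baseline_scr"].fillna(1.0)  # stand-in for an eGFR-75 back-calculation

# Multiple imputation: draw several completed datasets and pool the estimates
imputations = []
for seed in range(5):
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = imp.fit_transform(df)
    imputations.append(completed[:, df.columns.get_loc("baseline_scr")])
pooled = np.mean(imputations, axis=0)
print(pd.DataFrame({"surrogate": surrogate, "pooled_MI": pooled}))
```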
NASA Astrophysics Data System (ADS)
Raventos-Duran, Teresa; Valorso, Richard; Aumont, Bernard; Camredon, Marie
2010-05-01
The oxidation of volatile organic compounds emitted in the atmosphere involves complex reaction mechanisms which lead to the formation of oxygenated organic intermediates, usually denoted as secondary organics. The fate of these secondary organics remains poorly quantified due to a lack of information about their speciation, distribution and evolution in the gas and condensed phases. A significant fraction of secondary organics may dissolve into the tropospheric aqueous phase owing to the presence of polar moieties generated during the oxidation processes. The partitioning of organics between the gas and the aqueous atmospheric phases is usually described on the basis of Henry's law. Atmospheric models require a knowledge of the Henry's law coefficient (H) for every water-soluble organic species described in the chemical mechanism. Methods that can predict reliable H values for the vast number of organic compounds are therefore required. We have compiled a data set of experimental Henry's law constants for compounds bearing functional groups of atmospheric relevance. This data set was then used to develop GROMHE, a structure-activity relationship to predict H values based on a group contribution approach. We assessed its performance against two other available estimation methods. The results show that for all these methods the reliability of the estimates decreases with increasing solubility. We discuss differences between the methods and find that GROMHE has greater predictive ability.
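The group-contribution idea can be sketched generically as a sum of per-group increments. The groups and increment values below are entirely hypothetical placeholders and are not the fitted GROMHE descriptors or coefficients.

```python
# Generic group-contribution estimate of log10(H) (illustrative only; the group
# increments below are hypothetical and NOT the fitted GROMHE coefficients).
GROUP_INCREMENTS = {
    "CH3": -0.60, "CH2": -0.45, "OH": 3.5, "C=O": 2.2, "COOH": 4.8, "NO3": 2.0,
}
BASELINE = -1.0  # hypothetical intercept for a saturated hydrocarbon backbone

def estimate_log_henry(group_counts):
    """Sum hypothetical group increments to approximate log10 of the Henry's
    law constant; real SARs also include interaction corrections."""
    return BASELINE + sum(GROUP_INCREMENTS[g] * n for g, n in group_counts.items())

# Example: a C3 hydroxy-carbonyl compound (hydroxyacetone-like structure)
print(estimate_log_henry({"CH3": 1, "CH2": 1, "OH": 1, "C=O": 1}))
```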
Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins
Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.
2011-01-01
Stomach content analysis (SCA) and more recently stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may result in difficulties in quantifying inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase the accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
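For reference, a two-source, single-isotope mixing model reduces to a linear interpolation between prey end-members. The sketch below uses illustrative isotope values and a hypothetical trophic discrimination factor, not the study's measured signatures.

```python
def two_source_fraction(delta_consumer, delta_prey_a, delta_prey_b, trophic_shift=0.0):
    """Fraction of prey A in the diet from a single isotope (e.g. delta 15N):
    f_A = (delta_consumer - shift - delta_B) / (delta_A - delta_B)."""
    corrected = delta_consumer - trophic_shift
    return (corrected - delta_prey_b) / (delta_prey_a - delta_prey_b)

# Illustrative values: fish vs krill delta 15N and a hypothetical +3.5 per mil shift
f_fish = two_source_fraction(delta_consumer=10.2, delta_prey_a=9.5,
                             delta_prey_b=4.0, trophic_shift=3.5)
print(f"estimated proportion of fish in diet: {f_fish:.2f}")
```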
Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A
2015-10-01
Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.
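A simplified way to see why per-sample sensitivity matters is to translate a per-sample (or per-trap) detection probability into the probability of at least one detection over repeated samples. The sketch below uses illustrative counts and a naive point estimate rather than the formal occupancy-detection models fitted in the study.

```python
def cumulative_detection(p, n_samples):
    """Probability of at least one detection in n independent samples/traps."""
    return 1.0 - (1.0 - p) ** n_samples

# Naive per-unit detection probabilities from hypothetical counts
p_trap = 4 / 40      # e.g. 4 detections in 40 trap-nights
p_edna = 18 / 30     # e.g. 18 positive eDNA samples out of 30
for n in (1, 3, 5):
    print(f"n={n}: traps {cumulative_detection(p_trap, n):.2f}, "
          f"eDNA {cumulative_detection(p_edna, n):.2f}")
```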
Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator
NASA Astrophysics Data System (ADS)
Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.
2012-09-01
This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
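A minimal sketch of the forward Monte Carlo approach is shown below: the dew-point temperature is obtained by Newton-Raphson iteration on the two-pressure relation e_s(T_d) = e_s(T_sat) x P_chamber / P_saturator, and input uncertainties are propagated by sampling. The Magnus-type saturation vapor pressure formula, the nominal operating point, and the standard uncertainties are assumptions for illustration, not the generator's actual calibration model.

```python
import numpy as np

def e_sat(t_c):
    """Magnus-type saturation vapor pressure (hPa), t_c in deg C (assumed form)."""
    return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

def dew_point(t_sat, p_sat, p_ch, n_iter=30):
    """Newton-Raphson solution of e_sat(Td) = e_sat(t_sat) * p_ch / p_sat."""
    target = e_sat(t_sat) * p_ch / p_sat
    td = np.asarray(t_sat, dtype=float).copy()   # initial guess: saturator temperature
    h = 1e-6
    for _ in range(n_iter):
        f = e_sat(td) - target
        dfdt = (e_sat(td + h) - e_sat(td - h)) / (2 * h)   # numerical derivative
        td = td - f / dfdt
    return td

# Monte Carlo propagation of input uncertainties (illustrative standard uncertainties)
rng = np.random.default_rng(1)
n = 100_000
t_sat = rng.normal(20.0, 0.01, n)     # saturator temperature, deg C
p_sat = rng.normal(2000.0, 0.5, n)    # saturator pressure, hPa
p_ch = rng.normal(1000.0, 0.5, n)     # chamber pressure, hPa
td = dew_point(t_sat, p_sat, p_ch)
print(f"dew point = {td.mean():.3f} deg C, u = {td.std(ddof=1):.3f} deg C")
```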
NASA Astrophysics Data System (ADS)
Hasan, Mohammed A.
1997-11-01
In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from the acoustic backscattered data and using the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas in signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets. Thus, accurate estimation of these time delays would help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data and thus it is computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigenstructure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces. The effectiveness of the method is demonstrated on the problem of detection of multiple specular components in the acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates for the delays associated with the specular components. Simulation results on simulated and real shallow water data are provided which show the promise of this new scheme for target detection in a heavily cluttered environment.
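As a compact illustration of the subspace idea, the sketch below forms a covariance matrix from lagged snapshots of a toy two-sinusoid signal, splits its eigenvectors into signal and noise subspaces, and evaluates a MUSIC-style pseudospectrum. The window length, grid, and signal are assumptions; this is not the dissertation's algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(x, n_sinusoids, m=20, n_grid=2000):
    """Toy MUSIC pseudospectrum: build an m x m sample covariance from lagged
    snapshots, split eigenvectors into signal/noise subspaces, and evaluate
    1 / ||E_n^H a(f)||^2 on a frequency grid (f in cycles/sample)."""
    snaps = np.array([x[i:i + m] for i in range(len(x) - m)]).T   # lagged snapshots
    R = snaps @ snaps.conj().T / snaps.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)                          # ascending eigenvalues
    noise_sub = eigvecs[:, : m - 2 * n_sinusoids]                 # 2 exponentials per real sinusoid
    freqs = np.linspace(0, 0.5, n_grid)
    k = np.arange(m)
    spectrum = np.array([1.0 / np.linalg.norm(noise_sub.conj().T
                                              @ np.exp(2j * np.pi * f * k)) ** 2
                         for f in freqs])
    return freqs, spectrum

# Toy data: two sinusoids (0.12 and 0.27 cycles/sample) in noise
t = np.arange(512)
x = np.sin(2 * np.pi * 0.12 * t) + 0.7 * np.sin(2 * np.pi * 0.27 * t) + 0.3 * np.random.randn(512)
freqs, spec = music_spectrum(x, n_sinusoids=2)
peaks, _ = find_peaks(spec)
top = peaks[np.argsort(spec[peaks])[-2:]]
print("estimated frequencies:", np.sort(freqs[top]))
```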
Hardy, Susan E.; McGurl, David J.; Studenski, Stephanie A.; Degenholtz, Howard B.
2010-01-01
Objectives To establish nationally representative estimates of the prevalence of self-reported difficulty and inability to walk ¼ mile among older adults and to identify the characteristics independently associated with difficulty or inability to walk ¼ mile. Design Cross-sectional analysis of data from the 2003 Cost and Use Medicare Current Beneficiary Survey. Setting Community. Participants 9563 community-dwelling Medicare beneficiaries aged 65 years or older, representing an estimated total population of 34.2 million older adults. Measurements Self-reported ability to walk ¼ mile, sociodemographics, chronic conditions, body mass index, smoking, and functional status. Results In 2003, an estimated 9.5 million aged Medicare beneficiaries had difficulty walking ¼ mile and 5.9 million were unable. Among the 20.2 million older adults with no difficulty in basic or instrumental activities of daily living (ADL), an estimated 4.3 million (21%) had limited ability to walk ¼ mile. Having difficulty or being unable to walk ¼ mile was independently associated with older age, female sex, non-Hispanic ethnicity, lower educational level, Medicaid entitlement, most chronic medical conditions, current smoking, and being overweight or obese. Conclusion Almost half of older adults, and 20% of those reporting no ADL limitations, report limited ability to walk ¼ mile. Among functionally independent older adults, reported ability to walk ¼ mile can identify vulnerable older adults with greater medical problems and fewer resources, and may be a valuable clinical marker in planning their care. Future work is needed to determine the association between ¼ mile walk ability and subsequent functional decline and healthcare utilization. PMID:20210817
Schneider, André; Nguyen, Christophe
2011-01-01
Organic acids released from plant roots can form complexes with cadmium (Cd) in the soil solution and influence metal bioavailability, not only through the nature and concentration of the complexes but also through their lability. The lability of a complex influences its ability to buffer changes in the concentration of free Cd ions; it depends on the association and dissociation rate constants. A resin exchange method was used to estimate these rate constants, the association constant being a conditional estimate that depends on the calcium (Ca) concentration in solution. The constants were estimated for oxalate, citrate, and malate, three low-molecular-weight organic acids commonly exuded by plant roots and expected to strongly influence Cd uptake by plants. For all three organic acids, the estimated association and dissociation rate constants were around 2.5 × 10 m mol s and 1.3 × 10 s, respectively. Based on the literature, these values indicate that the complexes formed between Cd and low-molecular-weight organic acids may be less labile than complexes formed with soil soluble organic matter but more labile than those formed with aminopolycarboxylic chelates. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Yeung, Debby; Sorbara, Luigina
2018-01-01
It is important to be able to accurately estimate the central corneal clearance when fitting scleral contact lenses. Tools available have intrinsic biases due to the angle of viewing, and therefore an idea of the amount of error in estimation will benefit the fitter. To compare the accuracy of observers' ability to estimate scleral contact lens central corneal clearance (CCC) with biomicroscopy to measurements using slit-lamp imaging and anterior segment optical coherence tomography (AS-OCT). In a Web-based survey with images of four scleral lens fits obtained with a slit-lamp video imaging system, participants were asked to estimate the CCC. Responses were compared with known values of CCC of these images determined with an image-processing program (digital CCC) and using the AS-OCT (AS-OCT CCC). Bland-Altman plots and concordance correlation coefficients were used to assess the agreement of CCC measured by the various methods. Sixty-six participants were categorized for analysis based on the amount of experience with scleral lens fitting into novice, intermediate, or advanced fitters. Comparing the estimated CCC to the digital CCC, all three groups overestimated by an average of +27.3 ± 67.3 μm. The estimated CCC was highly correlated to the digital CCC (0.79, 0.92, and 0.94 for each group, respectively). Compared with the CCC measurements using AS-OCT, the three groups of participants overestimated by +103.3 μm and had high correlations (0.79, 0.93, and 0.94 for each group). Results from this study validate the ability of contact lens practitioners to observe and estimate the CCC in scleral lens fittings through the use of biomicroscopic viewing. Increasing experience with scleral lens fitting does not improve the correlation with measured CCC from digital or the AS-OCT. However, the intermediate and advanced groups display significantly less inter-observer variability compared with the novice group.
NASA Astrophysics Data System (ADS)
Pujos, Cyril; Regnier, Nicolas; Mousseau, Pierre; Defaye, Guy; Jarny, Yvon
2007-05-01
Simulation quality is determined by the knowledge of the parameters of the model. Yet rheological models for polymers are often not very accurate, since viscosity measurements are made under approximations, such as assuming a homogeneous temperature, and with empirical corrections such as the Bagley correction. Furthermore, rheological behavior is often described by mathematical laws such as the Cross or Carreau-Yasuda models, whose parameters are fitted from viscosity values obtained with corrected experimental data and are not appropriate for every polymer. To correct these shortcomings, a table-like rheological model is proposed. This choice makes the estimation of model parameters easier, since each parameter has the same order of magnitude. As the mathematical shape of the model is not imposed, the estimation process is appropriate for each polymer. The proposed method consists in minimizing the quadratic norm of the difference between calculated variables and measured data. In this study an extrusion die is simulated, in order to provide temperature along the extrusion channel, pressure, and flow references. These data characterize the thermal transfer and flow phenomena in which the viscosity is involved. Furthermore, the different natures of the data allow viscosity to be estimated over a large range of shear rates. The estimated rheological model improves the agreement between measurements and simulation: for numerical cases, the error on the flow becomes less than 0.1% for a non-Newtonian rheology. This method, which couples measurements and simulation, constitutes a very accurate means of rheology determination and improves the predictive ability of the model.
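The inverse-estimation step can be sketched as a least-squares minimization between measured and simulated quantities. In the sketch below a simple power-law pressure-drop expression for a circular channel stands in for the full extrusion-die simulation, and all geometry, flow rates and "measurements" are hypothetical; it only illustrates the minimization of the quadratic norm described above.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, flow_rates, radius=2e-3, length=0.05):
    """Toy forward model standing in for the die simulation: pressure drop of a
    power-law fluid (consistency k, index n) in a circular channel."""
    k, n = params
    wall_shear_rate = (3 * n + 1) / n * flow_rates / (np.pi * radius ** 3)
    return (2 * k * length / radius) * wall_shear_rate ** n

def residuals(params, flow_rates, measured_dp):
    return forward_model(params, flow_rates) - measured_dp

# Hypothetical measurements (pressure drops in Pa at several flow rates)
q = np.array([1e-7, 5e-7, 1e-6, 5e-6])          # m^3/s
dp_meas = forward_model([3000.0, 0.4], q) * (1 + 0.02 * np.random.randn(4))

fit = least_squares(residuals, x0=[1000.0, 0.6], args=(q, dp_meas),
                    bounds=([1e-2, 0.05], [1e6, 1.0]))
print("estimated consistency and power-law index:", fit.x)
```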
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments, for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements. This method is built upon the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations along with basis functions, which may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background frequently changes during the course of the flight.
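A minimal sketch of the NASVD-style smoothing underlying the approach is given below: each spectral channel is scaled by an approximate Poisson noise level, a truncated SVD retains the leading components, and the result is rescaled. The simulated spectra and the number of retained components are assumptions; this is not the deployed real-time algorithm.

```python
import numpy as np

def nasvd_smooth(spectra, n_components=3):
    """Noise-adjusted SVD smoothing of a (n_spectra x n_channels) count matrix:
    scale channels to roughly unit Poisson noise, truncate the SVD, rescale."""
    mean_counts = spectra.mean(axis=0)
    scale = np.sqrt(np.maximum(mean_counts, 1e-12))   # approx. per-channel noise
    scaled = spectra / scale
    u, s, vt = np.linalg.svd(scaled, full_matrices=False)
    smoothed = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    return smoothed * scale

# Simulated airborne spectra: a slowly varying background shape with Poisson noise
rng = np.random.default_rng(0)
channels = np.arange(256)
shape = np.exp(-channels / 80.0)
rates = np.outer(50 + 10 * np.sin(np.linspace(0, 3, 200)), shape)  # 200 one-second spectra
raw = rng.poisson(rates)
print(np.abs(nasvd_smooth(raw) - rates).mean(), np.abs(raw - rates).mean())
```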
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
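To illustrate the parametric step common to these methods, the sketch below fits a Weibull model to toy reconstructed patient-level data by maximizing a censoring-aware likelihood and reports the implied mean survival, scale x Gamma(1 + 1/shape). The data and starting values are assumptions, not any trial's reconstructed IPD.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def weibull_nll(params, t, event):
    """Censored Weibull negative log-likelihood; params = (log shape, log scale)."""
    k, lam = np.exp(params)
    z = (t / lam) ** k
    log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - z   # density for events
    log_s = -z                                                # survival for censored
    return -np.sum(np.where(event == 1, log_f, log_s))

# Toy "reconstructed IPD": times in months and event indicators (1=death, 0=censored)
rng = np.random.default_rng(3)
true_t = rng.weibull(1.4, 300) * 20.0
cens_t = rng.uniform(5, 40, 300)
t = np.minimum(true_t, cens_t)
event = (true_t <= cens_t).astype(int)

fit = minimize(weibull_nll, x0=[0.0, np.log(t.mean())], args=(t, event),
               method="Nelder-Mead")
shape, scale = np.exp(fit.x)
print(f"mean survival ~ {scale * gamma(1 + 1 / shape):.1f} months")
```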
Karin, Eyal; Dear, Blake F; Heller, Gillian Z; Crane, Monique F; Titov, Nickolai
2018-04-19
Missing cases following treatment are common in Web-based psychotherapy trials. Without the ability to directly measure and evaluate the outcomes for missing cases, measuring and evaluating the effects of treatment is challenging. Although common, little is known about the characteristics of Web-based psychotherapy participants who present as missing cases, their likely clinical outcomes, or the suitability of different statistical assumptions that can characterize missing cases. Using a large sample of individuals who underwent Web-based psychotherapy for depressive symptoms (n=820), the aim of this study was to explore the characteristics of cases who present as missing cases at posttreatment (n=138) and their likely treatment outcomes, and to compare statistical methods for replacing their missing data. First, common participant and treatment features were tested through binary logistic regression models, evaluating their ability to predict missing cases. Second, the same variables were screened for their ability to increase or impede the rate of symptom change observed following treatment. Third, using recontacted cases at 3-month follow-up to proximally represent the outcomes of missing cases following treatment, various simulated replacement scores were compared and evaluated against observed clinical follow-up scores. Missing cases were predominantly predicted by lower treatment adherence and increased symptoms at pretreatment. Statistical methods that ignore these characteristics can overlook an important clinical phenomenon and consequently produce inaccurate replacement outcomes, with symptom estimates that can swing from -32% to 70% from the observed outcomes of recontacted cases. In contrast, longitudinal statistical methods that adjusted their estimates for the outcomes of missing cases by treatment adherence rates and baseline symptom scores resulted in minimal measurement bias (<8%). Certain variables can characterize and predict the likelihood of missing cases and jointly predict lesser clinical improvement. Under such circumstances, individuals with potentially worse treatment outcomes can become concealed, and failure to adjust for this can lead to substantial clinical measurement bias. Together, this preliminary research suggests that missing cases in Web-based psychotherapeutic interventions may not occur as random events and can be systematically predicted. Critically, at the same time, missing cases may experience outcomes that are distinct and important for a complete understanding of the treatment effect. ©Eyal Karin, Blake F Dear, Gillian Z Heller, Monique F Crane, Nickolai Titov. Originally published in JMIR Mental Health (http://mental.jmir.org), 19.04.2018.
Estimation of effective connectivity via data-driven neural modeling
Freestone, Dean R.; Karoly, Philippa J.; Nešić, Dragan; Aram, Parham; Cook, Mark J.; Grayden, David B.
2014-01-01
This research introduces a new method for functional brain imaging via a process of model inversion. By estimating parameters of a computational model, we are able to track effective connectivity and mean membrane potential dynamics that cannot be directly measured using electrophysiological measurements alone. The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, under the assumption the model captures the key features of the cortical circuits of interest, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements. The method is based on approximating brain networks using an interconnected neural population model. The neural population model is based on a neural mass model that describes the functional activity of the brain, capturing the mesoscopic biophysics and anatomical structure. The model is made subject-specific by estimating the strength of intra-cortical connections within a region and inter-cortical connections between regions using a novel Kalman filtering method. We demonstrate through simulation how the framework can be used to track the mechanisms involved in seizure initiation and termination. PMID:25506315
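The idea of tracking hidden quantities by filtering can be illustrated with a toy extended Kalman filter that jointly estimates a one-dimensional state and a connectivity-like gain treated as an extra, slowly varying state. This is a deliberately simplified stand-in for the neural-mass-model filter described above, with all dynamics and noise levels assumed.

```python
import numpy as np

# Toy joint state/parameter estimation with an extended Kalman filter:
# hidden dynamics x[k+1] = a * x[k] + w, observation y[k] = x[k] + v,
# with the "connectivity" gain a treated as an extra (slowly varying) state.
rng = np.random.default_rng(7)
a_true, q, r, n = 0.85, 0.05, 0.2, 400
x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    y[k] = x[k] + np.sqrt(r) * rng.standard_normal()

s = np.array([0.0, 0.5])                  # augmented state [x, a]
P = np.eye(2)
Q = np.diag([q, 1e-5])                    # small random walk on the parameter
H = np.array([[1.0, 0.0]])
for k in range(1, n):
    # predict: f(s) = [a*x, a], Jacobian F = [[a, x], [0, 1]]
    F = np.array([[s[1], s[0]], [0.0, 1.0]])
    s = np.array([s[1] * s[0], s[1]])
    P = F @ P @ F.T + Q
    # update with measurement y[k]
    innov = y[k] - s[0]
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    s = s + K.flatten() * innov
    P = (np.eye(2) - K @ H) @ P

print(f"estimated gain a = {s[1]:.2f} (true {a_true})")
```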
Ginsburg, Shoshana B; Taimen, Pekka; Merisaari, Harri; Vainio, Paula; Boström, Peter J; Aronen, Hannu J; Jambor, Ivan; Madabhushi, Anant
2016-12-01
To develop and evaluate a prostate-based method (PBM) for estimating pharmacokinetic parameters on dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) by leveraging inherent differences in pharmacokinetic characteristics between the peripheral zone (PZ) and transition zone (TZ). This retrospective study, approved by the Institutional Review Board, included 40 patients who underwent a multiparametric 3T MRI examination and subsequent radical prostatectomy. A two-step PBM for estimating pharmacokinetic parameters exploited the inherent differences in pharmacokinetic characteristics associated with the TZ and PZ. First, the reference region model was implemented to estimate ratios of K trans between normal TZ and PZ. Subsequently, the reference region model was leveraged again to estimate values for K trans and v e for every prostate voxel. The parameters of PBM were compared with those estimated using an arterial input function (AIF) derived from the femoral arteries. The ability of the parameters to differentiate prostate cancer (PCa) from benign tissue was evaluated on a voxel and lesion level. Additionally, the effect of temporal downsampling of the DCE MRI data was assessed. Significant differences (P < 0.05) in PBM K trans between PCa lesions and benign tissue were found in 26/27 patients with TZ lesions and in 33/38 patients with PZ lesions; significant differences in AIF-based K trans occurred in 26/27 and 30/38 patients, respectively. The 75 th and 100 th percentiles of K trans and v e estimated using PBM positively correlated with lesion size (P < 0.05). Pharmacokinetic parameters estimated via PBM outperformed AIF-based parameters in PCa detection. J. Magn. Reson. Imaging 2016;44:1405-1414. © 2016 International Society for Magnetic Resonance in Medicine.
Diagnosis of pneumothorax using a microwave-based detector
NASA Astrophysics Data System (ADS)
Ling, Geoffrey S. F.; Riechers, Ronald G., Sr.; Pasala, Krishna M.; Blanchard, Jeremy; Nozaki, Masako; Ramage, Anthony; Jackson, William; Rosner, Michael; Garcia-Pinto, Patricia; Yun, Catherine; Butler, Nathan; Riechers, Ronald G., Jr.; Williams, Daniel; Zeidman, Seth M.; Rhee, Peter; Ecklund, James M.; Fitzpatrick, Thomas; Lockhart, Stephen
2001-08-01
A novel method for identifying pneumothorax is presented. This method is based on a novel device that uses electromagnetic waves in the microwave radio frequency (RF) region and a modified algorithm previously used for the estimation of the angle of arrival of radar signals. In this study, we employ this radio frequency triage tool (RAFT) to the clinical condition of pneumothorax, which is a collapsed lung. In anesthetized pigs, RAFT can detect changes in the RF signature from a lung that is 20 percent or greater collapsed. These results are compared to chest x-ray. Both studies are equivalent in their ability to detect pneumothorax in pigs.
M-estimation for robust sparse unmixing of hyperspectral images
NASA Astrophysics Data System (ADS)
Toomik, Maria; Lu, Shijian; Nelson, James D. B.
2016-10-01
Hyperspectral unmixing methods often use a conventional least squares based lasso which assumes that the data follows the Gaussian distribution. The normality assumption is an approximation which is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method consists of several appropriate penalties. We propose to use an lp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results, but makes the problem non-convex. On the other hand, the problem, though non-convex, can be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers. The M-estimate function reduces the effect of errors with large amplitudes or even assigns the outliers zero weights. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in the data. This ability to mitigate the influence of such outliers can therefore offer greater robustness. Qualitative hyperspectral unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
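A generic iteratively reweighted least squares (IRLS) sketch in the spirit of the generalized M-lasso is shown below: Huber-type weights temper large residuals while (x^2 + eps)^(p/2 - 1) weights approximate the non-convex lp penalty. The toy spectral library, regularization constants, and outlier pattern are assumptions rather than the authors' settings.

```python
import numpy as np

def m_lasso_irls(A, y, lam=0.05, p=0.7, c=1.345, n_iter=50, eps=1e-6):
    """IRLS for a robust, lp-penalized unmixing problem: Huber-type weights on
    residuals (M-estimation) and (x^2 + eps)^(p/2 - 1) weights on coefficients."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12        # robust scale (MAD)
        u = np.abs(r) / scale
        w = np.where(u <= c, 1.0, c / u)                     # Huber residual weights
        v = (x ** 2 + eps) ** (p / 2 - 1)                    # lp penalty weights
        AW = A * w[:, None]
        x = np.linalg.solve(A.T @ AW + lam * np.diag(v), AW.T @ y)
    return x

# Toy unmixing: 50-band pixel, library of 30 spectra, 3 true endmembers, outlier bands
rng = np.random.default_rng(5)
A = np.abs(rng.standard_normal((50, 30)))
x_true = np.zeros(30); x_true[[2, 11, 25]] = [0.5, 0.3, 0.2]
y = A @ x_true + 0.01 * rng.standard_normal(50)
y[[5, 17]] += 3.0                                            # gross outliers
print(np.argsort(m_lasso_irls(A, y))[-3:])                   # dominant abundances
```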
Meaningful Improvement in Gait Speed in Hip Fracture Recovery
Alley, Dawn E.; Hicks, Gregory E.; Shardell, Michelle; Hawkes, William; Miller, Ram; Craik, Rebecca L.; Mangione, Kathleen K.; Orwig, Denise; Hochberg, Marc; Resnick, Barbara; Magaziner, Jay
2011-01-01
OBJECTIVES To estimate meaningful improvements in gait speed observed during recovery from hip fracture and to evaluate the sensitivity and specificity of gait speed changes in detecting change in self-reported mobility. DESIGN Secondary longitudinal data analysis from two randomized controlled trials SETTING Twelve hospitals in the Baltimore, Maryland, area. PARTICIPANTS Two hundred seventeen women admitted with hip fracture. MEASUREMENTS Usual gait speed and self-reported mobility (ability to walk 1 block and climb 1 flight of stairs) measured 2 and 12 months after fracture. RESULTS Effect size–based estimates of meaningful differences were 0.03 for small differences and 0.09 for substantial differences. Depending on the anchor (stairs vs walking) and method (mean difference vs regression), anchor-based estimates ranged from 0.10 to 0.17 m/s for small meaningful improvements and 0.17 to 0.26 m/s for substantial meaningful improvement. Optimal gait speed cut-points yielded low sensitivity (0.39–0.62) and specificity (0.57–0.76) for improvements in self-reported mobility. CONCLUSION Results from this sample of women recovering from hip fracture provide only limited support for the 0.10-m/s cut point for substantial meaningful change previously identified in community-dwelling older adults experiencing declines in walking abilities. Anchor-based estimates and cut points derived from receiver operating characteristic curve analysis suggest that greater improvements in gait speed may be required for substantial perceived mobility improvement in female hip fracture patients. Furthermore, gait speed change performed poorly in discriminating change in self-reported mobility. Estimates of meaningful change in gait speed may differ based on the direction of change (improvement vs decline) or between patient populations. PMID:21883109
Indoor Spatial Updating with Reduced Visual Information
Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.
2016-01-01
Purpose Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, greater bias was identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and more accurate uncertainty estimates compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their marked overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.
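As a rough illustration of the kind of calculation these methods support, the sketch below fits a Weibull model to a handful of hypothetical digitised survival-curve points by least squares and converts the fitted parameters into a mean survival time. It corresponds loosely to the traditional least squares approach discussed above, not to the Hoyle and Henley or Guyot et al. reconstruction methods, and all numbers are made up.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

# hypothetical (time, survival proportion) points read off a published curve
t = np.array([0.5, 1, 2, 3, 4, 5])
S = np.array([0.90, 0.78, 0.55, 0.38, 0.25, 0.16])

def weibull_surv(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/scale)^shape)."""
    return np.exp(-(t / scale) ** shape)

(shape, scale), _ = curve_fit(weibull_surv, t, S, p0=[1.0, 2.0])

# mean of a Weibull distribution: scale * Gamma(1 + 1/shape)
mean_survival = scale * gamma(1 + 1 / shape)
print(f"shape={shape:.2f}, scale={scale:.2f}, mean survival={mean_survival:.2f}")
```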
Developing population models with data from marked individuals
Ryu, Hae Yeong; Shoemaker, Kevin T.; Kneip, Eva; Pidgeon, Anna; Heglund, Patricia; Bateman, Brooke; Thogmartin, Wayne E.; Akçakaya, Reşit
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.
Digital camera auto white balance based on color temperature estimation clustering
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong
2010-11-01
Auto white balance (AWB) is an important technique for digital cameras. The human vision system has the ability to recognize the original color of an object in a scene illuminated by a light source that has a different color temperature from D65, the standard sunlight. However, recorded images or video clips can only record the light incident on the sensor. Therefore, the recorded images will appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray world assumption and white point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions that are common in daily life, represented by their color temperatures, together with thresholds for each color temperature that determine whether a light source corresponds to that kind of illumination; second, the image to be white balanced is divided into N blocks (N is determined empirically). For each block, the gray world assumption method is used to calculate the color cast, which can be used to estimate the color temperature of that block. Third, each calculated color temperature is compared with the color temperatures in the given illumination list. If the color temperature of a block is not within any of the thresholds in the given list, that block is discarded. Fourth, the remaining blocks are given a majority selection, and the color temperature having the most blocks is considered the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources. The color casts are removed and the final images look natural.
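A compact sketch of the block-voting idea is given below. The illuminant list, ratio thresholds, and block count are hypothetical placeholders; the abstract does not specify how color temperature is computed per block, so simple R/G and B/G channel ratios stand in for that step.

```python
import numpy as np

# hypothetical reference illuminants: colour temperature (K) -> expected (R/G, B/G)
ILLUMINANTS = {2800: (1.45, 0.60), 4000: (1.20, 0.80),
               5500: (1.00, 1.00), 6500: (0.92, 1.10)}
THRESHOLD = 0.15   # maximum ratio distance for a block to cast a vote

def estimate_illuminant(img, n_blocks=8):
    """Majority-vote colour-temperature estimate over image blocks (sketch)."""
    h, w, _ = img.shape
    votes = []
    for rows in np.array_split(np.arange(h), n_blocks):
        for cols in np.array_split(np.arange(w), n_blocks):
            block = img[np.ix_(rows, cols)]
            r, g, b = block[..., 0].mean(), block[..., 1].mean(), block[..., 2].mean()
            if g < 1e-3:
                continue
            rg, bg = r / g, b / g
            # nearest reference illuminant, accepted only within the threshold
            ct, d = min(((k, abs(rg - v[0]) + abs(bg - v[1]))
                         for k, v in ILLUMINANTS.items()), key=lambda x: x[1])
            if d < THRESHOLD:
                votes.append(ct)
    return max(set(votes), key=votes.count) if votes else None

# toy image with a uniform warm cast (R/G = 1.2, B/G = 0.8)
img = np.full((64, 64, 3), (180.0, 150.0, 120.0))
print(estimate_illuminant(img))   # -> 4000
```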
A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.
Gao, Xiang; Lin, Huaiying; Dong, Qunfeng
2017-01-01
Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
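The core of a Dirichlet-multinomial Bayes classifier can be written compactly. The sketch below is a generic illustration rather than the DMBC package itself: it assumes the per-class concentration parameters have already been estimated from training data, and it drops the multinomial coefficient because that term cancels when comparing classes for the same sample.

```python
import numpy as np
from scipy.special import gammaln

def dm_loglik(x, alpha):
    """Dirichlet-multinomial log-likelihood of count vector x given
    concentration parameters alpha (multinomial coefficient omitted)."""
    n, a0 = x.sum(), alpha.sum()
    return (gammaln(a0) - gammaln(n + a0)
            + np.sum(gammaln(x + alpha) - gammaln(alpha)))

def classify(x, alpha_by_class, prior):
    """Posterior class probabilities for one microbiome sample (sketch)."""
    log_post = np.array([np.log(prior[c]) + dm_loglik(x, alpha_by_class[c])
                         for c in range(len(prior))])
    log_post -= log_post.max()                 # numerical stabilisation
    post = np.exp(log_post)
    return post / post.sum()

# hypothetical parameters for two classes (e.g. healthy vs. diseased) and 3 taxa
alpha_by_class = [np.array([5.0, 2.0, 1.0]), np.array([1.0, 3.0, 4.0])]
prior = [0.7, 0.3]                             # assumed disease prevalence prior
print(classify(np.array([30, 12, 8]), alpha_by_class, prior))
```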
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
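The largest order value rule mentioned above is simple to state: jobs are ordered by sorting the components of the continuous vector in descending order. A one-line sketch with hypothetical component values:

```python
import numpy as np

def largest_order_value(x):
    """Largest-order-value rule: the job with the largest component is
    scheduled first, and so on (illustrative sketch)."""
    return np.argsort(-np.asarray(x))   # job indices in processing order

print(largest_order_value([0.3, 1.7, 0.9, 1.2]))   # -> [1 3 2 0]
```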
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Active life expectancy from annual follow-up data with missing responses.
Izmirlian, G; Brock, D; Ferrucci, L; Phillips, C
2000-03-01
Active life expectancy (ALE) at a given age is defined as the expected remaining years free of disability. In this study, three categories of health status are defined according to the ability to perform activities of daily living independently. Several studies have used increment-decrement life tables to estimate ALE, without error analysis, from only a baseline and one follow-up interview. The present work conducts an individual-level covariate analysis using a three-state Markov chain model for multiple follow-up data. Using a logistic link, the model estimates single-year transition probabilities among states of health, accounting for missing interviews. This approach has the advantages of smoothing subsequent estimates and increased power by using all follow-ups. We compute ALE and total life expectancy from these estimated single-year transition probabilities. Variance estimates are computed using the delta method. Data from the Iowa Established Population for the Epidemiologic Study of the Elderly are used to test the effects of smoking on ALE on all 5-year age groups past 65 years, controlling for sex and education.
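To make the increment-decrement idea concrete, the sketch below computes active and total life expectancy from a single-year transition matrix over three states (active, disabled, dead). The matrix values are hypothetical, a single matrix is reused for all ages for simplicity, and the covariate (logistic) modelling and delta-method variance estimation described above are not reproduced.

```python
import numpy as np

# hypothetical single-year transition probabilities; states = [active, disabled, dead]
P = np.array([[0.88, 0.07, 0.05],
              [0.15, 0.70, 0.15],
              [0.00, 0.00, 1.00]])

def life_expectancies(P, start_state=0, horizon=60):
    """Active and total life expectancy from single-year transitions (sketch)."""
    dist = np.eye(3)[start_state]      # start with probability 1 in start_state
    ale = le = 0.0
    for _ in range(horizon):
        ale += dist[0]                 # probability of being active this year
        le += dist[0] + dist[1]        # probability of being alive this year
        dist = dist @ P                # advance one year
    return ale, le

ale, le = life_expectancies(P)
print(f"ALE ~ {ale:.1f} years, total LE ~ {le:.1f} years")
```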
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki
2010-12-01
To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < 0.001) regardless of the degree of hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.
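The double-echo chemical shift acquisition is usually summarised by the standard dual-echo (in-phase/opposed-phase) fat signal fraction; the study's phantom calibration step is separate and not shown here. A minimal sketch with made-up signal intensities:

```python
def fat_signal_fraction(s_in_phase, s_opposed_phase):
    """Standard dual-echo (Dixon-type) fat signal fraction.
    Hypothetical inputs; the study's own phantom calibration is not reproduced."""
    return (s_in_phase - s_opposed_phase) / (2.0 * s_in_phase)

print(fat_signal_fraction(420.0, 300.0))   # ~0.14, i.e. ~14% fat fraction
```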
Normalization of metabolomics data with applications to correlation maps.
Jauhiainen, Alexandra; Madhu, Basetti; Narita, Masako; Narita, Masashi; Griffiths, John; Tavaré, Simon
2014-08-01
In metabolomics, the goal is to identify and measure the concentrations of different metabolites (small molecules) in a cell or a biological system. The metabolites form an important layer in the complex metabolic network, and the interactions between different metabolites are often of interest. It is crucial to perform proper normalization of metabolomics data, but current methods may not be applicable when estimating interactions in the form of correlations between metabolites. We propose a normalization approach based on a mixed model, with simultaneous estimation of a correlation matrix. We also investigate how the common use of a calibration standard in nuclear magnetic resonance (NMR) experiments affects the estimation of correlations. We show with both real and simulated data that our proposed normalization method is robust and has good performance when discovering true correlations between metabolites. The standardization of NMR data is shown in simulation studies to affect our ability to discover true correlations to a small extent. However, comparing standardized and non-standardized real data does not result in any large differences in correlation estimates. Source code is freely available at https://sourceforge.net/projects/metabnorm/. Contact: alexandra.jauhiainen@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Scolari, Enrica; Sossan, Fabrizio; Paolone, Mario
2018-01-01
Due to the increasing proportion of distributed photovoltaic (PV) production in the generation mix, the knowledge of the PV generation capacity has become a key factor. In this work, we propose to compute the PV plant maximum power starting from the indirectly-estimated irradiance. Three estimators are compared in terms of i) ability to compute the PV plant maximum power, ii) bandwidth and iii) robustness against measurement noise. The approaches rely on measurements of the DC voltage, current, and cell temperature and on a model of the PV array. We show that the considered methods can accurately reconstruct the PV maximum generation even during curtailment periods, i.e. when the measured PV power is not representative of the maximum potential of the PV array. Performance evaluation is carried out by using a dedicated experimental setup on a 14.3 kWp rooftop PV installation. Results also show that the analyzed methods can outperform pyranometer-based estimations, with a less complex sensing system. We show how the obtained PV maximum power values can be applied to train time series-based solar maximum power forecasting techniques. This is beneficial when the measured power values, commonly used as training, are not representative of the maximum PV potential.
Wu, Hui Qiong; Yan, Chang Sheng; Luo, Feng; Krishna, Rajamani
2018-04-02
Different from the established crystal engineering method for enhancing gas-separation performance, we demonstrate herein a distinct approach. In contrast to the pristine MOF (metal-organic framework) material, the C2H2/CO2 separation ability for the resultant Ag NPs (nanoparticle)@Fe2O3@MOF composite material, estimated from breakthrough calculations, is greatly enhanced by 2 times, and further magnified up to 3 times under visible light irradiation.
Self-Estimation of Blood Alcohol Concentration: A Review
Aston, Elizabeth R.; Liguori, Anthony
2013-01-01
This article reviews the history of blood alcohol concentration (BAC) estimation training, which trains drinkers to discriminate distinct BAC levels and thus avoid excessive alcohol consumption. BAC estimation training typically combines education concerning alcohol metabolism with attention to subjective internal cues associated with specific concentrations. Estimation training was originally conceived as a component of controlled drinking programs. However, dependent drinkers were unsuccessful in BAC estimation, likely due to extreme tolerance. In contrast, moderate drinkers successfully acquired this ability. A subsequent line of research translated laboratory estimation studies to naturalistic settings by studying large samples of drinkers in their preferred drinking environments. Thus far, naturalistic studies have provided mixed results regarding the most effective form of BAC feedback. BAC estimation training is important because it imparts an ability to perceive individualized impairment that may be present below the legal limit for driving. Consequently, the training can be a useful component for moderate drinkers in drunk driving prevention programs. PMID:23380489
Wang, Jun; Zhou, Bihua; Zhou, Shudao
2016-01-01
This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to estimate the parameters of the Lorenz chaotic system and Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874
Williams, Benjamin B.; Dong, Ruhong; Nicolalde, Roberto J.; Matthews, Thomas P.; Gladstone, David J.; Demidenko, Eugene; Zaki, Bassem I.; Salikhov, Ildar K.; Lesniewski, Piotr N.; Swartz, Harold M.
2014-01-01
Purpose The ability to estimate individual exposures to radiation following a large attack or incident has been identified as a necessity for rational and effective emergency medical response. In vivo electron paramagnetic resonance (EPR) spectroscopy of tooth enamel has been developed to meet this need. Materials and methods A novel transportable EPR spectrometer, developed to facilitate tooth dosimetry in an emergency response setting, was used to measure upper incisors in a model system, in unirradiated subjects, and in patients who had received total body doses of 2 Gy. Results A linear dose response was observed in the model system. A statistically significant increase in the intensity of the radiation-induced EPR signal was observed in irradiated versus unirradiated subjects, with an estimated standard error of dose prediction of 0.9 ± 0.3 Gy. Conclusions These results demonstrate the current ability of in vivo EPR tooth dosimetry to distinguish between subjects who have not been irradiated and those who have received exposures that place them at risk for acute radiation syndrome. Procedural and technical developments to further increase the precision of dose estimation and ensure reliable operation in the emergency setting are underway. With these developments EPR tooth dosimetry is likely to be a valuable resource for triage following potential radiation exposure of a large population. PMID:21696339
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties in design variables; common results are estimates of a response density, which also implies estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated, and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when they were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required fewer calculations than MC to obtain low-error answers with high confidence. It can therefore be stated that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
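The comparison of Monte Carlo and Latin hypercube estimates of a response's mean, standard deviation, and 99th percentile can be illustrated outside NESSUS with a toy response function. The sketch below uses SciPy's Latin hypercube sampler and a made-up two-variable response; it does not reproduce the SAE test cases or the NESSUS implementation.

```python
import numpy as np
from scipy.stats import norm, qmc

def response(x1, x2):
    # hypothetical nonlinear response of two standard-normal design variables
    return x1 ** 2 + 3.0 * np.sin(x2)

n = 200
rng = np.random.default_rng(1)

# plain Monte Carlo: independent standard-normal inputs
mc = response(rng.standard_normal(n), rng.standard_normal(n))

# Latin hypercube: stratified uniforms mapped through the normal inverse CDF
u = qmc.LatinHypercube(d=2, seed=1).random(n)
lhs = response(norm.ppf(u[:, 0]), norm.ppf(u[:, 1]))

for name, s in [("MC", mc), ("LHS", lhs)]:
    print(f"{name}: mean={s.mean():.2f} sd={s.std(ddof=1):.2f} "
          f"p99={np.percentile(s, 99):.2f}")
```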
ERIC Educational Resources Information Center
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.
2010-01-01
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Accounting for imperfect detection of groups and individuals when estimating abundance.
Clement, Matthew J; Converse, Sarah J; Royle, J Andrew
2017-09-01
If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The range and variance of rainfall time series able to simulate streamflow better than a traditional calibration approach demonstrate equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
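The dimensionality-reduction step can be illustrated with PyWavelets: a rainfall series is decomposed, and only the low-order approximation coefficients would be carried into the joint inference. The series, wavelet choice, and decomposition level below are hypothetical, and the DREAMZS inversion itself is not shown.

```python
import numpy as np
import pywt

# hypothetical hourly rainfall series (mm) to be re-expressed in DWT space
rain = np.array([0, 0, 2, 5, 12, 7, 3, 1, 0, 0, 0, 4, 9, 6, 2, 0], dtype=float)

# two-level decomposition: the approximation coefficients are the reduced
# parameter set that would be estimated jointly with the model parameters
coeffs = pywt.wavedec(rain, "db2", level=2)
approx = coeffs[0]
print("approximation coefficients:", np.round(approx, 2))

# reconstruct a smoothed series from the approximation coefficients alone
coeffs_smooth = [approx] + [np.zeros_like(c) for c in coeffs[1:]]
print("reconstructed:", np.round(pywt.waverec(coeffs_smooth, "db2"), 2))
```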
Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash
2017-03-01
A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to obtain reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.
Lynöe, Niels; Wessel, Maja; Olsson, Daniel; Alexanderson, Kristina; Helgesson, Gert
2013-03-23
Previous research shows that how patients perceive encounters with healthcare staff may affect their health and self-estimated ability to return to work. The aim of the present study was to explore long-term sick-listed patients' encounters with social insurance office staff and the impact of these encounters on self-estimated ability to return to work. A random sample of long-term sick-listed patients (n = 10,042) received a questionnaire containing questions about their experiences of positive and negative encounters and item lists specifying such experiences. Respondents were also asked whether the encounters made them feel respected or wronged and how they estimated the effect of these encounters on their ability to return to work. Statistical analysis was conducted using 95% confidence intervals (CI) for proportions, and attributable risk (AR) with 95% CI. The response rate was 58%. Encounter items strongly associated with feeling respected were, among others: listened to me, believed me, and answered my questions. Encounter items strongly associated with feeling wronged were, among others: did not believe me, doubted my condition, and questioned my motivation to work. Positive encounters facilitated patients' self-estimated ability to return to work [26.9% (CI: 22.1-31.7)]. This effect was significantly increased if the patients also felt respected [49.3% (CI: 47.5-51.1)]. Negative encounters impeded self-estimated ability to return to work [29.1% (CI: 24.6-33.6)]; when also feeling wronged return to work was significantly further impeded [51.3% (CI: 47.1-55.5)]. Long-term sick-listed patients find that their self-reported ability to return to work is affected by positive and negative encounters with social insurance office staff. This effect is further enhanced by feeling respected or wronged, respectively.
Davis, Amy J; Leland, Bruce; Bodenchuk, Michael; VerCauteren, Kurt C; Pepin, Kim M
2017-06-01
Population density is a key driver of disease dynamics in wildlife populations. Accurate disease risk assessment and determination of management impacts on wildlife populations requires an ability to estimate population density alongside management actions. A common management technique for controlling wildlife populations to monitor and mitigate disease transmission risk is trapping (e.g., box traps, corral traps, drop nets). Although abundance can be estimated from trapping actions using a variety of analytical approaches, inference is limited by the spatial extent to which a trap attracts animals on the landscape. If the "area of influence" were known, abundance estimates could be converted to densities. In addition to being an important predictor of contact rate and thus disease spread, density is more informative because it is comparable across sites of different sizes. The goal of our study is to demonstrate the importance of determining the area sampled by traps (area of influence) so that density can be estimated from management-based trapping designs which do not employ a trapping grid. To provide one example of how area of influence could be calculated alongside management, we conducted a small pilot study on wild pigs (Sus scrofa) using two removal methods, 1) trapping followed by 2) aerial gunning, at three sites in northeast Texas in 2015. We estimated abundance from trapping data with a removal model. We calculated empirical densities as aerial counts divided by the area searched by air (based on aerial flight tracks). We inferred the area of influence of traps by assuming consistent densities across the larger spatial scale and then solving for the area impacted by the traps. Based on our pilot study we estimated the area of influence for corral traps in late summer in Texas to be ~8.6 km². Future work showing the effects of behavioral and environmental factors on area of influence will help managers obtain estimates of density from management data, and determine conditions where trap-attraction is strongest. The ability to estimate density alongside population control activities will improve risk assessment and response operations against disease outbreaks. Published by Elsevier B.V.
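The area-of-influence arithmetic reduces to two divisions, shown below with made-up numbers (not the Texas pilot data): density comes from the aerial count over the area searched, and the trap-based abundance divided by that density gives the area effectively sampled by the traps.

```python
# hypothetical numbers illustrating the area-of-influence calculation
abundance_from_traps = 45        # removal-model abundance estimate at a site (pigs)
aerial_count = 30                # pigs counted from the air
area_searched_km2 = 5.0          # area covered by the flight tracks (km^2)

density = aerial_count / area_searched_km2            # pigs per km^2
area_of_influence = abundance_from_traps / density    # km^2 sampled by the traps
print(f"density = {density:.1f} pigs/km^2, "
      f"area of influence = {area_of_influence:.1f} km^2")
```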
NASA Astrophysics Data System (ADS)
Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae
2016-12-01
A multiple chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of fault in a target cable. A multiple resolution algorithm has the ability to localize faults, regardless of fault location. The time delay information, which is derived from the normalized cross-correlation between the incident signal and bandpass filtered reflected signals, is converted to a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering of the mixed signal of the incident signal and the reflected signal. Based on the in-phase and quadrature components, the reflection coefficient is estimated by the proposed fault estimation process including the mixing and filtering procedure. Also, the measurement uncertainty for this experiment is analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, we conducted comparative experiments to detect and measure faults under different conditions. Considering the installation environment of the high-voltage cable used in an actual vehicle, the target cable length and fault position were chosen accordingly. To simulate the degree of fault, a variety of termination impedances (10 Ω, 30 Ω, 50 Ω, and 1 kΩ) were used and estimated by the proposed method in this experiment. The proposed method demonstrates advantages in that it has multiple resolution to overcome the blind spot problem, and can assess the state of the fault.
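The time-delay step can be sketched with a normalised cross-correlation between incident and reflected signals; the lag of the correlation peak, halved and scaled by the propagation velocity, gives the distance to the fault. The chirp parameters, sampling rate, and propagation velocity below are hypothetical, and the I/Q mixing used to estimate the reflection coefficient is not reproduced.

```python
import numpy as np

def fault_distance(incident, reflected, fs, v_prop=2.0e8):
    """Distance to a fault from the lag that maximises the normalised
    cross-correlation between incident and reflected signals (sketch)."""
    xc = np.correlate(reflected, incident, mode="full")
    xc = xc / (np.linalg.norm(incident) * np.linalg.norm(reflected) + 1e-12)
    lag = np.argmax(xc) - (len(incident) - 1)    # delay in samples
    delay = lag / fs                             # delay in seconds
    return 0.5 * v_prop * delay                  # metres to the fault

# toy example: reflected copy delayed by 40 samples at 1 GHz sampling
fs = 1e9
t = np.arange(2000) / fs
incident = np.sin(2 * np.pi * (5e6 + 2e12 * t) * t)   # simple linear chirp
reflected = np.roll(incident, 40) * 0.4
print(f"estimated fault distance ~ {fault_distance(incident, reflected, fs):.2f} m")
```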
Bright, Peter; Hale, Emily; Gooch, Vikki Jayne; Myhill, Thomas; van der Linde, Ian
2018-09-01
Since publication in 1982, the 50-item National Adult Reading Test (NART; Nelson, 1982; NART-R; Nelson & Willison, 1991) has remained a widely adopted method for estimating premorbid intelligence both for clinical and research purposes. However, the NART has not been standardised against the most recent revisions of the Wechsler Adult Intelligence Scale (WAIS-III; Wechsler, 1997, and WAIS-IV; Wechsler, 2008). Our objective, therefore, was to produce reliable standardised estimates of WAIS-IV IQ from the NART. Ninety-two neurologically healthy British adults were assessed and regression equations calculated to produce population estimates of WAIS-IV full-scale IQ (FSIQ) and constituent index scores. Results showed strong NART/WAIS-IV FSIQ correlations with more moderate correlations observed between NART error and constituent index scores. FSIQ estimates were closely similar to the published WAIS and WAIS-R estimates at the high end of the distribution, but at the lower end were approximately equidistant from the highly discrepant WAIS (low) and WAIS-R (high) values. We conclude that the NART is likely to remain an important tool for estimating the impact of neurological damage on general cognitive ability. We advise caution in the use of older published WAIS and/or WAIS-R estimates for estimating premorbid WAIS-IV FSIQ, particularly for those with low NART scores.
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
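Step (2), counting plants from projection histograms, can be illustrated on a toy binary vegetation mask: summing the mask down each image column gives a 1-D profile whose peaks correspond to individual seedlings. The peak-detection thresholds below are arbitrary, and the perspective and homography steps (1) and (3) are not shown.

```python
import numpy as np
from scipy.signal import find_peaks

def count_plants(binary_mask, min_width=5):
    """Count plants in a row from the column-wise projection histogram
    of a binary vegetation mask (sketch of the counting step only)."""
    projection = binary_mask.sum(axis=0)            # vegetation pixels per column
    peaks, _ = find_peaks(projection, distance=min_width,
                          height=0.3 * projection.max())
    return len(peaks), projection

# toy mask: three "plants" as vertical blobs in a 40 x 120 image
mask = np.zeros((40, 120), dtype=int)
for centre in (20, 60, 100):
    mask[10:35, centre - 4:centre + 4] = 1
n, _ = count_plants(mask)
print("plants counted:", n)   # -> 3
```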
Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili
2014-03-01
Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
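One simple non-parametric route to such an interval is a bootstrap over early-stage subjects, sketched below with simulated marker values and a fixed decision threshold; it is only an illustration and does not reproduce the parametric or non-parametric estimators evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical paired marker values for subjects in the early diseased stage
marker_a = rng.normal(1.2, 1.0, 80)
marker_b = rng.normal(0.8, 1.0, 80)
threshold = 0.5          # decision threshold fixed in advance, for illustration

def sensitivity(x, thr):
    return np.mean(x > thr)

# paired nonparametric bootstrap for the difference in sensitivities
diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(marker_a), len(marker_a))   # resample subjects
    diffs.append(sensitivity(marker_a[idx], threshold)
                 - sensitivity(marker_b[idx], threshold))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI for the sensitivity difference: ({lo:.3f}, {hi:.3f})")
```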
Drawing causal inferences using propensity scores: a practical guide for community psychologists.
Lanza, Stephanie T; Moore, Julia E; Butera, Nicole M
2013-12-01
Confounding present in observational data impedes community psychologists' ability to draw causal inferences. This paper describes propensity score methods as a conceptually straightforward approach to drawing causal inferences from observational data. A step-by-step demonstration of three propensity score methods (weighting, matching, and subclassification) is presented in the context of an empirical examination of the causal effect of preschool experiences (Head Start vs. parental care) on reading development in kindergarten. Although the unadjusted population estimate indicated that children with parental care had substantially higher reading scores than children who attended Head Start, all propensity score adjustments reduce the size of this overall causal effect by more than half. The causal effect was also defined and estimated among children who attended Head Start. Results provide no evidence for improved reading if those children had instead received parental care. We carefully define different causal effects and discuss their respective policy implications, summarize advantages and limitations of each propensity score method, and provide SAS and R syntax so that community psychologists may conduct causal inference in their own research.
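The paper supplies SAS and R syntax; the sketch below shows only the weighting variant, in Python, on simulated data with a known treatment effect, so all names and numbers are hypothetical rather than drawn from the Head Start analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=(n, 2))                            # hypothetical confounders
treat = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1]))))
y = 2.0 + 0.5 * treat + x[:, 0] + rng.normal(size=n)   # true effect = 0.5

# 1) estimate propensity scores from the confounders
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# 2) inverse-probability-of-treatment weights (ATE weights)
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

# 3) weighted difference in means as the adjusted effect estimate
ate = (np.average(y[treat == 1], weights=w[treat == 1])
       - np.average(y[treat == 0], weights=w[treat == 0]))
print(f"unadjusted: {y[treat == 1].mean() - y[treat == 0].mean():.2f}, IPTW: {ate:.2f}")
```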
Blane, Alison; Falkmer, Torbjörn; Lee, Hoe C; Dukic Willstrand, Tania
2018-01-01
Background: Safe driving is a complex activity that requires calibration; that is, the driver can accurately assess the level of task demand required for task completion and can accurately evaluate their own driving capability. There is much debate about the calibration ability of post-stroke drivers. Objectives: The aim of this study was to assess cognition, self-rated performance, and estimation of task demand in a driving simulator among post-stroke drivers and controls. Methods: A between-groups design was employed, comprising a post-stroke driver group and a group of similarly aged older control drivers. Both groups were observed driving in two simulator-based driving scenarios and completed the NASA Task Load Index (TLX) to assess their perceived task demand and to self-rate their driving performance. Participants also completed a battery of psychometric tasks assessing attention and executive function, which was used to determine whether post-stroke cognitive impairment affected calibration. Results: There was no difference between groups in the amount of perceived task demand required to complete the driving task. Despite impairments in cognition, the post-stroke drivers were no more likely than controls to over-estimate their driving abilities. On average, the post-stroke drivers rated themselves more poorly than the controls did, and this rating was related to cognitive ability. Conclusion: This study suggests that post-stroke drivers may be aware of their deficits and adjust their driving behavior. Furthermore, using self-performance measures alongside a driving simulator and cognitive assessments may provide complementary fitness-to-drive assessments, as well as rehabilitation tools during post-stroke recovery.
Predictive model for risk of cesarean section in pregnant women after induction of labor.
Hernández-Martínez, Antonio; Pascual-Pedreño, Ana I; Baño-Garnés, Ana B; Melero-Jiménez, María R; Tenías-Burillo, José M; Molina-Alarcón, Milagros
2016-03-01
To develop a predictive model for risk of cesarean section in pregnant women after induction of labor, a retrospective cohort study was conducted of 861 induced labors during 2009, 2010, and 2011 at Hospital "La Mancha-Centro" in Alcázar de San Juan, Spain. Multivariate analysis was performed with binary logistic regression, and areas under the ROC curves were used to determine predictive ability. Two predictive models were created: model A predicts the outcome at the time the woman is admitted to the hospital (before the decision on the method of induction), and model B predicts the outcome at the time the woman is definitively admitted to the labor room. The predictive factors in the final model were: maternal height, body mass index, nulliparity, Bishop score, gestational age, macrosomia, gender of fetus, and the gynecologist's overall cesarean section rate. The predictive ability of model A was 0.77 [95% confidence interval (CI) 0.73-0.80] and of model B was 0.79 (95% CI 0.76-0.83). The predictive ability for pregnant women with a previous cesarean section was 0.79 (95% CI 0.64-0.94) with model A and 0.80 (95% CI 0.64-0.96) with model B. For an estimated probability of cesarean section ≥80%, models A and B presented positive likelihood ratios (+LR) for cesarean section of 22 and 20, respectively. Likewise, for an estimated probability of cesarean section ≤10%, models A and B presented +LRs for vaginal delivery of 13 and 6, respectively. These predictive models have good discriminative ability, both overall and for all subgroups studied. This tool can be useful in clinical practice, especially for pregnant women with previous cesarean section and diabetes.
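As a rough illustration of the modeling strategy described (binary logistic regression scored by the area under the ROC curve), the following Python sketch fits a toy model on simulated predictors. The predictor names, distributions, and outcome are stand-ins I have assumed; they are not the study's data, and the resulting AUC demonstrates the workflow only, not the reported 0.77-0.79.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 861  # cohort size reported in the abstract

# Hypothetical stand-ins for some of the reported predictors (assumed values):
X = np.column_stack([
    rng.normal(162, 6, n),        # maternal height (cm)
    rng.normal(27, 5, n),         # body mass index
    rng.integers(0, 2, n),        # nulliparity
    rng.integers(0, 10, n),       # Bishop score
    rng.normal(39, 1.5, n),       # gestational age (weeks)
])
y = rng.binomial(1, 0.25, n)      # 1 = cesarean section (toy outcome)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]

# Discriminative ability would be reported as the area under the ROC curve.
print("in-sample AUC:", round(roc_auc_score(y, prob), 2))
```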
Availability of new drugs and Americans' ability to work.
Lichtenberg, Frank R
2005-04-01
The objective of this work was to investigate the extent to which the introduction of new drugs has increased society's ability to produce goods and services by increasing the number of hours worked per member of the working-age population. Econometric models of ability-to-work measures were estimated from data on approximately 200,000 individuals with 47 major chronic conditions observed over a 15-year period (1982-1996). Under very conservative assumptions, the estimates indicate that the value of the increase in ability to work attributable to new drugs is 2.5 times as great as expenditure on new drugs. The potential of drugs to increase employee productivity should be considered in the design of drug-reimbursement policies. Conversely, policies that broadly reduce the development and utilization of new drugs may ultimately reduce our ability to produce other goods and services.
Why do we differ in number sense? Evidence from a genetically sensitive investigation
Tosto, M.G.; Petrill, S.A.; Halberda, J.; Trzaskowski, M.; Tikhomirova, T.N.; Bogdanova, O.Y.; Ly, R.; Wilmer, J.B.; Naiman, D.Q.; Germine, L.; Plomin, R.; Kovas, Y.
2014-01-01
Basic intellectual abilities of quantity and numerosity estimation have been detected across animal species. Such abilities are referred to as 'number sense'. In humans, individual differences in number sense are detectable early in life, persist into later development, and relate to general intelligence. The origins of these individual differences are unknown. To address this question, we conducted the first large-scale genetically sensitive investigation of number sense, assessing numerosity discrimination abilities in 837 monozygotic and 1,422 dizygotic pairs of 16-year-old twins. Univariate genetic analysis of the twin data revealed that number sense is modestly heritable (32%), with individual differences being largely explained by non-shared environmental influences (68%) and no contribution from shared environmental factors. Sex-limitation model fitting revealed no differences between males and females in the etiology of individual differences in number sense abilities. We also carried out Genome-wide Complex Trait Analysis (GCTA), which estimates the population variance explained by additive effects of DNA differences among unrelated individuals. For 1,118 unrelated individuals in our sample with genotyping information on 1.7 million DNA markers, GCTA estimated zero heritability for number sense, unlike other cognitive abilities in the same twin study, for which the GCTA heritability estimates were about 25%. The low heritability of number sense observed in this study is consistent with the directional-selection explanation, whereby additive genetic variance for evolutionarily important traits is reduced. PMID:24696527
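The study's estimates come from formal twin-model fitting, but the reported variance split (32% additive genetic, 0% shared environment, 68% non-shared environment) can be sanity-checked with Falconer's back-of-envelope formulas from twin correlations. The correlations below are illustrative values chosen so the arithmetic reproduces those figures; they are not the study's observed correlations.

```python
def falconer_ace(r_mz, r_dz):
    """Back-of-envelope ACE decomposition from twin correlations:
    a2 = 2*(rMZ - rDZ), c2 = 2*rDZ - rMZ, e2 = 1 - rMZ."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2 * r_dz - r_mz     # shared environmental variance
    e2 = 1 - r_mz            # non-shared environment (plus measurement error)
    return a2, c2, e2

# Illustrative correlations that roughly reproduce the reported estimates
# (A = 32%, C = 0%, E = 68%).
print(falconer_ace(r_mz=0.32, r_dz=0.16))   # -> (0.32, 0.0, 0.68)
```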
Preserved, deteriorated, and premorbidly impaired patterns of intellectual ability in schizophrenia.
Ammari, Narmeen; Heinrichs, R Walter; Pinnock, Farena; Miles, Ashley A; Muharib, Eva; McDermid Vaz, Stephanie
2014-05-01
The main purpose of this investigation was to identify patterns of intellectual performance in schizophrenia patients suggesting preserved, deteriorated, and premorbidly impaired ability, and to determine the clinical, cognitive, and functional correlates of these patterns. We assessed 101 patients with schizophrenia or schizoaffective disorder and 80 non-psychiatric control participants. The "preserved" pattern was defined by average-range estimated premorbid and current IQ with no evidence of decline (premorbid-current IQ difference <10 points). The "deteriorated" pattern was defined by a difference between estimated premorbid and current IQ of 10 points or more. The premorbidly "impaired" pattern was defined by below-average estimated premorbid and current IQ and no evidence of decline greater than 10 points. Preserved and deteriorated patterns were also identified in healthy controls and compared with the patient findings. The groups were compared on demographic, neurocognitive, clinical, and functional variables. Patients with the preserved pattern outperformed those meeting criteria for deteriorated and compromised intellectual ability on a composite measure of neurocognitive ability as well as on functional competence. Patients demonstrating the deteriorated and compromised patterns were equivalent across all measures. However, "preserved" patients failed to show any advantage in community functioning and demonstrated cognitive impairments relative to control participants. Our results suggest that the proposed patterns of intellectual decline and stability exist in both the schizophrenia and general populations, but may not hold across other cognitive abilities and do not translate into differential functional outcome.
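The three patterns are defined by simple decision rules on estimated premorbid and current IQ; a small sketch of those rules is given below. The cutoff of 90 for the "average range" and the function name are illustrative assumptions, not values taken from the paper.

```python
def iq_pattern(premorbid_iq, current_iq, average_floor=90):
    """Classify per the decision rules stated in the abstract.
    The average-range cutoff of 90 is an assumption for illustration."""
    decline = premorbid_iq - current_iq
    if decline >= 10:
        return "deteriorated"
    if premorbid_iq >= average_floor and current_iq >= average_floor:
        return "preserved"
    if premorbid_iq < average_floor and current_iq < average_floor:
        return "premorbidly impaired (compromised)"
    return "unclassified"

print(iq_pattern(104, 101))  # -> preserved
print(iq_pattern(102, 88))   # -> deteriorated
print(iq_pattern(85, 82))    # -> premorbidly impaired (compromised)
```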
Stellar Parameters in an Instant with Machine Learning. Application to Kepler LEGACY Targets
NASA Astrophysics Data System (ADS)
Bellinger, Earl P.; Angelou, George C.; Hekker, Saskia; Basu, Sarbani; Ball, Warrick H.; Guggenberger, Elisabet
2017-10-01
With the advent of dedicated photometric space missions, the ability to rapidly process huge catalogues of stars has become paramount. Bellinger and Angelou et al. [1] recently introduced a new method based on machine learning for inferring the stellar parameters of main-sequence stars exhibiting solar-like oscillations. The method makes precise predictions that are consistent with other methods, but with the advantage of being able to explore many more parameters at practically no computational cost. Here we apply the method to 52 so-called "LEGACY" main-sequence stars observed by the Kepler space mission. For each star, we present estimates and uncertainties of mass, age, radius, luminosity, core hydrogen abundance, surface helium abundance, surface gravity, initial helium abundance, and initial metallicity, as well as estimates of the evolutionary model parameters of mixing length, overshooting coefficient, and diffusion multiplication factor. We obtain median uncertainties in stellar age, mass, and radius of 14.8%, 3.6%, and 1.7%, respectively. The source code for all analyses and for all figures appearing in this manuscript can be found electronically at
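The abstract does not spell out the learning algorithm, so the following Python sketch uses a generic multi-output random forest trained on a toy grid simply to convey the idea of mapping observable quantities to stellar parameters. The feature set, parameter ranges, and the Sun-like test star are all assumptions for illustration; the authors' actual pipeline, features, and training grid of stellar models differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Toy training grid standing in for a grid of stellar evolution models:
# inputs are observable quantities, targets are model parameters.
n = 2000
X_train = np.column_stack([
    rng.uniform(5300, 6400, n),    # effective temperature (K)
    rng.uniform(-0.4, 0.4, n),     # metallicity [Fe/H]
    rng.uniform(50, 180, n),       # large frequency separation (uHz)
])
y_train = np.column_stack([
    rng.uniform(0.8, 1.4, n),      # mass (solar units)
    rng.uniform(1, 12, n),         # age (Gyr)
])

# Multi-output regression, one generic flavour of the machine-learning
# approach the abstract describes.
model = RandomForestRegressor(n_estimators=200).fit(X_train, y_train)

star = [[5777, 0.0, 135.0]]        # Sun-like observables (assumed values)
mass, age = model.predict(star)[0]
print(f"predicted mass ~ {mass:.2f} Msun, age ~ {age:.1f} Gyr")
```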
Strategies for Estimating Discrete Quantities.
ERIC Educational Resources Information Center
Crites, Terry W.
1993-01-01
Describes the benchmark and decomposition-recomposition estimation strategies and presents five techniques to develop students' estimation ability. Suggests situations involving quantities of candy and popcorn in which the teacher can model those strategies for the students. (MDH)