Sample records for combining ability estimates

  1. Efficiency of circulant diallels via mixed models in the selection of papaya genotypes resistant to foliar fungal diseases.

    PubMed

    Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G

    2014-07-02

    Diallel crossing methods provide information on the performance of genitors both among themselves and in their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limiting. One option when many parents are involved is the adoption of circulant diallels. However, information is lacking on diallel analysis using mixed models. This study aimed to evaluate the efficacy of linear mixed models in estimating, for resistance to foliar fungal diseases, components of general and specific combining ability in a circulant table with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in classifying genitors by their combining abilities relative to the complete diallels. The number of crosses (s) in which each genitor participates in the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.

  2. Three estimates of the association between linear growth failure and cognitive ability.

    PubMed

    Cheung, Y B; Lam, K F

    2009-09-01

    To compare three estimators of the association between growth stunting, as measured by height-for-age Z-score, and cognitive ability in children, and to examine the extent to which statistical adjustment for covariates is useful for removing confounding due to socio-economic status. Three estimators for panel data, namely the random-effects, within-cluster, and between-cluster estimators, were used to estimate the association in a survey of 1105 pairs of siblings who were assessed for anthropometry and cognition. Furthermore, a 'combined' model was formulated to simultaneously provide the within- and between-cluster estimates. The random-effects and between-cluster estimators showed a strong association between linear growth and cognitive ability, even after adjustment for a range of socio-economic variables. In contrast, the within-cluster estimator showed a much more modest association: for every increase of one Z-score in linear growth, cognitive ability increased by about 0.08 standard deviation (P < 0.001). The combined model verified that the between-cluster estimate was significantly larger than the within-cluster estimate (P = 0.004). Residual confounding by socio-economic circumstances may explain a substantial proportion of the observed association between linear growth and cognition in studies that attempt to control the confounding by means of multivariable regression analysis. The within-cluster estimator provides more convincing and modest results about the strength of the association.
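
    As a rough illustration of the estimators compared above (not the authors' code), the sketch below contrasts within-cluster and between-cluster slopes on simulated sibling pairs; the variable names haz and cog and all numbers are made up.

      # Sketch: within- vs. between-cluster estimators for sibling (cluster) data.
      # Assumes simulated data; 'haz' and 'cog' are hypothetical variable names.
      import numpy as np

      rng = np.random.default_rng(0)
      n_fam = 1000                                  # families (clusters) of two siblings
      fam_effect = rng.normal(0, 1, n_fam)          # shared family confounder
      haz = fam_effect[:, None] + rng.normal(0, 1, (n_fam, 2))
      cog = 0.1 * haz + 0.5 * fam_effect[:, None] + rng.normal(0, 1, (n_fam, 2))

      def slope(x, y):
          """OLS slope of y on x (both 1-D)."""
          x, y = x - x.mean(), y - y.mean()
          return (x * y).sum() / (x * x).sum()

      # Within-cluster: regress sibling differences, removing family-level confounding.
      beta_within = slope(haz[:, 0] - haz[:, 1], cog[:, 0] - cog[:, 1])

      # Between-cluster: regress family means, which absorb the shared confounder.
      beta_between = slope(haz.mean(axis=1), cog.mean(axis=1))

      print(f"within-cluster:  {beta_within:.3f}")   # close to the true 0.1
      print(f"between-cluster: {beta_between:.3f}")  # inflated by family confounding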

  3. Genetic Variation and Combining Ability Analysis of Bruising Sensitivity in Agaricus bisporus

    PubMed Central

    Gao, Wei; Baars, Johan J. P.; Dolstra, Oene; Visser, Richard G. F.; Sonnenberg, Anton S. M.

    2013-01-01

    Advanced button mushroom cultivars that are less sensitive to mechanical bruising are required by the mushroom industry, where automated harvesting still cannot be used for the fresh mushroom market. The genetic variation in bruising sensitivity (BS) of Agaricus bisporus was studied through an incomplete set of diallel crosses to gain insight into the heritability of BS and the combining ability of the parental lines used and, in this way, to estimate their breeding value. To this end, nineteen homokaryotic lines recovered from wild strains and cultivars were inter-crossed in a diallel scheme. Fifty-one successful hybrids were grown under controlled conditions, and the BS of these hybrids was assessed. BS was shown to be a trait with a very high heritability. The results also showed that brown hybrids were generally less sensitive to bruising than white hybrids. The diallel scheme allowed estimation of the general combining ability (GCA) of each homokaryotic parental line and of the specific combining ability (SCA) of each hybrid. The line with the lowest GCA is seen as the most attractive donor for improving resistance to bruising, whereas the line with the highest GCA value gave rise to hybrids sensitive to bruising. The highest negative SCA possibly indicates heterosis effects for resistance to bruising. This study provides a foundation for estimating the breeding value of parental lines to further study the genetic factors underlying bruising sensitivity and other quality-related traits, and to select potential parental lines for further heterosis breeding. The approach of studying combining ability in a diallel scheme was used for the first time in button mushroom breeding. PMID:24116171

  4. Shared-environmental contributions to high cognitive ability.

    PubMed

    Kirkpatrick, Robert M; McGue, Matt; Iacono, William G

    2009-07-01

    Using a combined sample of adolescent twins, biological siblings, and adoptive siblings, we estimated and compared the differential shared-environmentality for high cognitive ability and the shared-environmental variance for the full range of ability during adolescence. Estimates obtained via multiple methods were in the neighborhood of 0.20, and suggest a modest effect of the shared environment on both high and full-range ability. We then examined the association of ability with three measures of the family environment in a subsample of adoptive siblings: parental occupational status, parental education, and disruptive life events. Only parental education showed significant (albeit modest) association with ability in both the biological and adoptive samples. We discuss these results in terms of the need for cognitive-development research to combine genetically sensitive designs and modern statistical methods with broad, thorough environmental measurement.

  5. Hierarchical State-Space Estimation of Leatherback Turtle Navigation Ability

    PubMed Central

    Mills Flemming, Joanna; Jonsen, Ian D.; Field, Christopher A.

    2010-01-01

    Remotely sensed tracking technology has revealed remarkable migration patterns that were previously unknown; however, models to optimally use such data have developed more slowly. Here, we present a hierarchical Bayes state-space framework that allows us to combine tracking data from a collection of animals and make inferences at both individual and broader levels. We formulate models that allow the navigation ability of animals to be estimated and demonstrate how information can be combined over many animals to allow improved estimation. We also show how formal hypothesis testing regarding navigation ability can easily be accomplished in this framework. Using Argos satellite tracking data from 14 leatherback turtles, 7 males and 7 females, during their southward migration from Nova Scotia, Canada, we find that the circle of confusion (the radius around an animal's location within which it is unable to determine its location precisely) is approximately 96 km. This estimate suggests that the turtles' navigation does not need to be highly accurate, especially if they are able to use more reliable cues as they near their destination. Moreover, for the 14 turtles examined, there is little evidence to suggest that male and female navigation abilities differ. Because of the minimal assumptions made about the movement process, our approach can be used to estimate and compare navigation ability for many migratory species that are able to carry electronic tracking devices. PMID:21203382

  6. Combining computer adaptive testing technology with cognitively diagnostic assessment.

    PubMed

    McGlohen, Meghan; Chang, Hua-Hua

    2008-08-01

    A major advantage of computerized adaptive testing (CAT) is that it allows the test to home in on an examinee's ability level in an interactive manner. The aim of the new area of cognitive diagnosis is to provide information about specific content areas in which an examinee needs help. The goal of this study was to combine the benefit of specific feedback from cognitively diagnostic assessment with the advantages of CAT. In this study, three approaches to combining these were investigated: (1) item selection based on the traditional ability level estimate (theta), (2) item selection based on the attribute mastery feedback provided by cognitively diagnostic assessment (alpha), and (3) item selection based on both the traditional ability level estimate (theta) and the attribute mastery feedback provided by cognitively diagnostic assessment (alpha). The results from these three approaches were compared for theta estimation accuracy, attribute mastery estimation accuracy, and item exposure control. The theta- and alpha-based condition outperformed the alpha-based condition regarding theta estimation, attribute mastery pattern estimation, and item exposure control. Both the theta-based condition and the theta- and alpha-based condition performed similarly with regard to theta estimation, attribute mastery estimation, and item exposure control, but the theta- and alpha-based condition has an additional advantage: it uses the shadow-test method, which allows the administrator to incorporate additional constraints into item selection (such as content balancing and item-type constraints) and to select items on the basis of both the current theta and alpha estimates, and it can be built on top of existing 3PL testing programs.
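
    As a minimal sketch of the theta-based item-selection step (the shadow-test machinery of the study is not reproduced), the code below picks the unused 3PL item with maximum Fisher information at the current theta estimate; all item parameters are simulated.

      # Sketch: theta-based item selection for a 3PL CAT (maximum Fisher information).
      # Item parameters are simulated; the shadow-test constraints used in the study
      # are not implemented here.
      import numpy as np

      rng = np.random.default_rng(1)
      a = rng.uniform(0.8, 2.0, 200)      # discrimination
      b = rng.normal(0.0, 1.0, 200)       # difficulty
      c = rng.uniform(0.1, 0.25, 200)     # pseudo-guessing

      def p3pl(theta, a, b, c):
          return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

      def info(theta, a, b, c):
          """3PL item information at ability theta."""
          p = p3pl(theta, a, b, c)
          return (1.7 * a) ** 2 * ((p - c) / (1.0 - c)) ** 2 * (1.0 - p) / p

      def next_item(theta_hat, administered):
          """Index of the most informative unused item at the current theta estimate."""
          i = info(theta_hat, a, b, c)
          i[list(administered)] = -np.inf
          return int(np.argmax(i))

      print(next_item(theta_hat=0.3, administered={5, 17}))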

  7. Application of the Combination Approach for Estimating Evapotranspiration in Puerto Rico

    NASA Technical Reports Server (NTRS)

    Harmsen, Eric; Luvall, Jeffrey; Gonzalez, Jorge

    2005-01-01

    The ability to estimate short-term fluxes of water vapor from the land surface is important for validating latent heat flux estimates from high resolution remote sensing techniques. A new, relatively inexpensive method is presented for estimating the ground-based values of the surface latent heat flux or evapotranspiration.

  8. Processing Satellite Data for Slant Total Electron Content Measurements

    NASA Technical Reports Server (NTRS)

    Stephens, Philip John (Inventor); Komjathy, Attila (Inventor); Wilson, Brian D. (Inventor); Mannucci, Anthony J. (Inventor)

    2016-01-01

    A method, system, and apparatus provide the ability to estimate ionospheric observables using space-borne observations. Space-borne global positioning system (GPS) data of ionospheric delay are obtained from a satellite. The space-borne GPS data are combined with ground-based GPS observations. The combination is utilized in a model to estimate a global three-dimensional (3D) electron density field.

  9. Clinical validation of the General Ability Index--Estimate (GAI-E): estimating premorbid GAI.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Iverson, Grant L; Chelune, Gordon J; Scott, James G; Adams, Russell L

    2006-09-01

    The clinical utility of the General Ability Index--Estimate (GAI-E; Lange, Schoenberg, Chelune, Scott, & Adams, 2005) for estimating premorbid GAI scores was investigated using the WAIS-III standardization clinical trials sample (The Psychological Corporation, 1997). The GAI-E algorithms combine Vocabulary, Information, Matrix Reasoning, and Picture Completion subtest raw scores with demographic variables to predict GAI. Ten GAI-E algorithms were developed combining demographic variables with single subtest scaled scores and with two subtests. Estimated GAI are presented for participants diagnosed with dementia (n = 50), traumatic brain injury (n = 20), Huntington's disease (n = 15), Korsakoff's disease (n = 12), chronic alcohol abuse (n = 32), temporal lobectomy (n = 17), and schizophrenia (n = 44). In addition, a small sample of participants without dementia and diagnosed with depression (n = 32) was used as a clinical comparison group. The GAI-E algorithms provided estimates of GAI that closely approximated scores expected for a healthy adult population. The greatest differences between estimated GAI and obtained GAI were observed for the single subtest GAI-E algorithms using the Vocabulary, Information, and Matrix Reasoning subtests. Based on these data, recommendations for the use of the GAI-E algorithms are presented.

  10. Combining ability of S3 progenies for key agronomic traits in popcorn: comparison of testers in top-crosses.

    PubMed

    de Lima, V J; do Amaral Junior, A T; Kamphorst, S H; Pena, G F; Leite, J T; Schmitt, K F M; Vittorazzi, C; de Almeida Filho, J E; Mora, F

    2016-12-02

    The successful development of hybrid cultivars depends on the reliability of the estimated combining ability of the parent lines. The objectives of this study were to assess the combining ability of partially inbred S3 families of popcorn derived from the open-pollinated variety UENF 14, via top-crosses with four testers, and to compare the testers for their ability to discriminate the S3 progenies. The experiment was conducted in the 2015/2016 crop season, in an incomplete-block (Lattice) design with three replications. The following agronomic traits were evaluated: average plant height, grain yield (GY), popping expansion (PE), and expanded popcorn volume per hectare. The top-cross hybrid, originating from the BRS-Angela vs S3 progeny 10 combination, was indicated as promising, showing high values for specific combining ability for GY and PE. For the S3 progenies that showed high and positive GCA values for GY and PE, the continuity of the breeding program is recommended, with the advance of self-pollination generations. Fasoulas' differentiation index indicated the BRS-Angela tester as the most suitable for identifying the superior progenies.

  11. Diallel analysis for technological traits in upland cotton.

    PubMed

    Queiroz, D R; Farias, F J C; Cavalcanti, J J V; Carvalho, L P; Neder, D G; Souza, L S S; Farias, F C; Teodoro, P E

    2017-09-21

    Final cotton quality is of great importance, and it depends on intrinsic and extrinsic fiber characteristics. The objective of this study was to estimate general (GCA) and specific (SCA) combining abilities for technological fiber traits among six upland cotton genotypes and their fifteen hybrid combinations, as well as to determine the effective genetic effects in controlling the traits evaluated. In 2015, six cotton genotypes: FM 993, CNPA 04-2080, PSC 355, TAM B 139-17, IAC 26, and TAMCOT-CAMD-E and fifteen hybrid combinations were evaluated at the Experimental Station of Embrapa Algodão, located in Patos, PB, Brazil. The experimental design was a randomized block with three replications. Technological fiber traits evaluated were: length (mm); strength (gf/tex); fineness (Micronaire index); uniformity (%); short fiber index (%), and spinning index. The diallel analysis was carried out according to the methodology proposed by Griffing, using method II and model I. Significant differences were detected between the treatments and combining abilities (GCA and SCA), indicating the variability of the study material. There was a predominance of additive effects for the genetic control of all traits. TAM B 139-17 presented the best GCA estimates for all traits. The best combinations were: FM 993 x TAM B 139-17, CNPA 04-2080 x PSC 355, FM 993 x TAMCOT-CAMD-E, PSC 355 x TAM B 139-17, and TAM B 139-17 x TAMCOT-CAMD-E, by obtaining the best estimates of SCA, with one of the parents having favorable estimates for GCA.
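
    For readers unfamiliar with the Griffing Method II, Model I analysis cited above, the sketch below computes GCA and SCA effects from a half-diallel table of entry means using the standard Griffing formulas; the 4 x 4 table is illustrative, not data from this study.

      # Sketch: Griffing Method II (half diallel with parents), Model I fixed effects.
      # 'table' is a symmetric p x p matrix of entry means: diagonal = parents,
      # off-diagonal = F1 crosses. Illustrative values only, not data from the study.
      import numpy as np

      table = np.array([[30., 34., 31., 33.],
                        [34., 28., 35., 32.],
                        [31., 35., 27., 30.],
                        [33., 32., 30., 26.]])

      p = table.shape[0]
      x_dotdot = np.triu(table).sum()            # total of the half table (i <= j)
      row = table.sum(axis=1)                    # X_i. : array total for parent i
      diag = np.diag(table)                      # X_ii : parental selfs

      # General combining ability of each parent (sums to zero).
      gca = (row + diag - 2.0 * x_dotdot / p) / (p + 2)

      # Specific combining ability of each cross (and of each self on the diagonal).
      sca = (table
             - (row[:, None] + diag[:, None] + row[None, :] + diag[None, :]) / (p + 2)
             + 2.0 * x_dotdot / ((p + 1) * (p + 2)))

      print("GCA:", np.round(gca, 3))
      print("SCA of cross 1 x 2:", round(sca[0, 1], 3))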

  12. Generalized shrunken type-GM estimator and its application

    NASA Astrophysics Data System (ADS)

    Ma, C. Z.; Du, Y. L.

    2014-03-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the generalized shrunken type-GM estimators, together with methods for computing them, is established by combining GM estimators with biased estimators such as the ridge, principal components, and Liu estimators. A numerical example shows that the most attractive advantage of these new estimators is that they not only overcome the multicollinearity of the coefficient matrix and the outliers but also control the influence of leverage points.
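
    The sketch below is only in the spirit of the shrunken type-GM idea, not the authors' estimator: it combines a ridge (biased) step with iteratively reweighted Huber (M-type) weights, with the penalty k and tuning constant c chosen arbitrarily.

      # Sketch: ridge regression with iteratively reweighted Huber (GM-type) weights,
      # illustrating the idea of combining a biased estimator with a robust one.
      import numpy as np

      def robust_ridge(X, y, k=1.0, c=1.345, n_iter=20):
          """Minimize a Huber-type loss with an L2 (ridge) penalty via IRLS."""
          n, p_dim = X.shape
          beta = np.zeros(p_dim)
          for _ in range(n_iter):
              r = y - X @ beta
              s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
              u = np.abs(r) / s
              w = np.where(u <= c, 1.0, c / u)          # Huber weights downweight outliers
              W = np.diag(w)
              beta = np.linalg.solve(X.T @ W @ X + k * np.eye(p_dim), X.T @ W @ y)
          return beta

      # Ill-conditioned design (collinear columns) plus a few gross outliers in y.
      rng = np.random.default_rng(2)
      x1 = rng.normal(size=100)
      X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100), rng.normal(size=100)])
      y = X @ np.array([1.0, 1.0, -2.0]) + 0.1 * rng.normal(size=100)
      y[:5] += 20.0                                     # outliers
      print(np.round(robust_ridge(X, y), 2))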

  13. Self-Estimation of Blood Alcohol Concentration: A Review

    PubMed Central

    Aston, Elizabeth R.; Liguori, Anthony

    2013-01-01

    This article reviews the history of blood alcohol concentration (BAC) estimation training, which trains drinkers to discriminate distinct BAC levels and thus avoid excessive alcohol consumption. BAC estimation training typically combines education concerning alcohol metabolism with attention to subjective internal cues associated with specific concentrations. Estimation training was originally conceived as a component of controlled drinking programs. However, dependent drinkers were unsuccessful in BAC estimation, likely due to extreme tolerance. In contrast, moderate drinkers successfully acquired this ability. A subsequent line of research translated laboratory estimation studies to naturalistic settings by studying large samples of drinkers in their preferred drinking environments. Thus far, naturalistic studies have provided mixed results regarding the most effective form of BAC feedback. BAC estimation training is important because it imparts an ability to perceive individualized impairment that may be present below the legal limit for driving. Consequently, the training can be a useful component for moderate drinkers in drunk driving prevention programs. PMID:23380489

  14. The Predicted Cross Value for Genetic Introgression of Multiple Alleles

    PubMed Central

    Han, Ye; Cameron, John N.; Wang, Lizhi; Beavis, William D.

    2017-01-01

    We consider the plant genetic improvement challenge of introgressing multiple alleles from a homozygous donor to a recipient. First, we frame the project as an algorithmic process that can be mathematically formulated. We then introduce a novel metric for selecting breeding parents that we refer to as the predicted cross value (PCV). Unlike estimated breeding values, which represent predictions of general combining ability, the PCV predicts specific combining ability. The PCV takes estimates of recombination frequencies as an input vector and calculates the probability that a pair of parents will produce a gamete with desirable alleles at all specified loci. We compared the PCV approach with existing estimated-breeding-value approaches in two simulation experiments, in which 7 and 20 desirable alleles were to be introgressed from a donor line into a recipient line. Results suggest that the PCV is more efficient and effective for multi-allelic trait introgression. We also discuss how operations research can be used for other crop genetic improvement projects and suggest several future research directions. PMID:28122824
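
    The PCV itself is defined in the paper; the sketch below illustrates the kind of recursion it builds on, under simplifying assumptions: the probability that a single gamete from one phased parent carries the desirable allele at every locus, computed by a transfer-matrix pass over adjacent-locus recombination frequencies. The haplotypes and r values are invented.

      # Sketch: probability that one gamete from a phased diploid parent carries the
      # desirable allele at every target locus, given recombination frequencies
      # between adjacent loci. This is a building block behind PCV-style metrics;
      # the haplotypes and recombination frequencies below are made up.
      import numpy as np

      def gamete_prob(hap0, hap1, rec):
          """hap0, hap1: 0/1 arrays (1 = desirable allele) for the two parental
          haplotypes; rec[l] = recombination frequency between locus l and l+1."""
          hap = np.array([hap0, hap1], dtype=float)
          # f[h] = P(desired alleles so far, gamete currently copying haplotype h)
          f = 0.5 * hap[:, 0]
          for l, r in enumerate(rec):
              stay, switch = 1.0 - r, r
              f = np.array([f[0] * stay + f[1] * switch,
                            f[0] * switch + f[1] * stay]) * hap[:, l + 1]
          return f.sum()

      # Desirable alleles split across the two haplotypes of one parent.
      print(gamete_prob([1, 1, 0, 1], [0, 1, 1, 1], rec=[0.1, 0.2, 0.05]))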

  15. Combining ability of tropical and temperate inbred lines of popcorn.

    PubMed

    da Silva, V Q R; do Amaral Júnior, A T; Gonçalves, L S A; Freitas Júnior, S P; Candido, L S; Vittorazzi, C; Moterle, L M; Vieira, R A; Scapim, C A

    2010-08-31

    In Brazil, using combining ability of popcorn genotypes to achieve superior hybrids has been unsuccessful because the local genotypes are all members of the same heterotic group. To overcome this constraint, 10 lines (P(1) to P(10)) with different adaptations to tropical or temperate edaphoclimatic environments were used to obtain 45 F(1) hybrids in a complete diallel. These hybrids and three controls were evaluated in two environments in Rio de Janeiro State. Grain yield (GY), popping expansion (PE), plant height (PH), ear height (EH), and days to silking (FL) were evaluated in randomized complete blocks with three replications. Significant differences between genotypes (P

  16. Genetic distances between popcorn populations based on molecular markers and correlations with heterosis estimates made by diallel analysis of hybrids.

    PubMed

    Munhoz, R E F; Prioli, A J; Amaral, A T; Scapim, C A; Simon, G A

    2009-08-11

    Diallel analysis was used to obtain information on combining ability and heterosis, together with estimates of genetic distances based on random amplified polymorphic DNA (RAPD) markers and their correlations with heterosis, for the popcorn varieties RS 20, UNB2, CMS 43, CMS 42, Zélia, UEM J1, UEM M2, Beija-Flor, and Viçosa, which were crossed to obtain all possible combinations, without reciprocals. The genitors and the 36 F(1) hybrids were evaluated in field trials in Maringá during two growing seasons in a randomized complete block design with three replications. Based on the results, strategies for further studies were developed, including the construction of composites by joining varieties with high general combining ability for grain yield (UNB2 and CMS 42) with those with high general combining ability for popping expansion (Zélia, RS 20 and UEM M2). Based on the RAPD markers, UEM J1 and Zélia were the most genetically distant and RS 20 and UNB2 were the most similar. The low correlation between heterosis and genetic distances may be explained by the random dispersion of the RAPD markers, which were insufficient for the exploitation of the popcorn genome. We concluded that an association between genetic dissimilarity and heterosis based only on genetic distance is not expected without considering the effect of dominant loci.

  17. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…

  18. Estimating Driving Performance Based on EEG Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Wu, Ruei-Cheng; Jung, Tzyy-Ping; Liang, Sheng-Fu; Huang, Teng-Yi

    2005-12-01

    The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's perception, recognition, and vehicle-control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines the electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
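
    A schematic of the processing chain described above, not the authors' system: log subband power features, PCA, and linear regression of a driving-performance measure, here run on synthetic signals with placeholder band edges.

      # Sketch: EEG log subband power -> PCA -> linear regression of lane deviation.
      # Synthetic signals stand in for real EEG; band edges and shapes are placeholders.
      import numpy as np
      from numpy.fft import rfft, rfftfreq
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)
      fs, n_epochs, n_ch, n_samp = 128, 300, 8, 256          # 2-s epochs at 128 Hz
      eeg = rng.normal(size=(n_epochs, n_ch, n_samp))
      deviation = rng.normal(size=n_epochs)                   # lane deviation (target)

      bands = [(1, 4), (4, 8), (8, 13), (13, 30)]             # delta/theta/alpha/beta
      freqs = rfftfreq(n_samp, d=1.0 / fs)
      psd = np.abs(rfft(eeg, axis=-1)) ** 2                   # (epochs, channels, freqs)

      # Log power in each band and channel -> feature matrix (epochs, channels * bands).
      feats = np.stack([np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(-1))
                        for lo, hi in bands], axis=-1).reshape(n_epochs, -1)

      model = make_pipeline(PCA(n_components=10), LinearRegression())
      model.fit(feats, deviation)
      print("R^2 on training data:", round(model.score(feats, deviation), 3))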

  19. Genetic control of number of flowers and pod set in common bean.

    PubMed

    Martins, E S; Pinto Júnior, R A; Abreu, A F B; Ramalho, M A P

    2017-09-21

    This article aimed to study the genetic control of the number of flowers and pod set in common bean and to verify whether the estimates vary with environmental conditions and gene pool. A complete diallel among six lines was used, without reciprocals. The treatments were evaluated in three harvests/generations (F2, F3, and F4) in 2015/2016, in a randomized complete block design with four replications. Each plot consisted of three 4-m rows; in the center row, a receptacle was placed to collect the aborted flowers/pods. The traits considered were the number of flowers per plant (N), the percentage of pod set (V), and grain production per plant (W). A joint diallel analysis was performed, and the correlations between N, V, and W were estimated. N was 31.9 on average, and V was 40.4%. For N and V, the average of the Mesoamerican parents was higher than that of the Andean parents. Specific combining ability explained most of the variation for N, evidencing a predominance of dominance effects. For V, specific combining ability was slightly lower than general combining ability, indicating additive loci as well as dominance effects. These two traits were strongly influenced by the environment, which should be considered in strategies for greater grain yield stability of common bean.

  20. Combined methods of tolerance increasing for embedded SRAM

    NASA Astrophysics Data System (ADS)

    Shchigorev, L. A.; Shagurin, I. I.

    2016-10-01

    The possibilities of combining different methods for increasing the fault tolerance of SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. Area penalties due to using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block in a 28-nm manufacturing process. An evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant systems-on-chip.
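
    As a back-of-the-envelope companion (not the paper's 28-nm results), the sketch below counts the SECDED check bits needed per word and the resulting storage overhead for a 4K x 128 block, ignoring encoder/decoder logic.

      # Sketch: storage overhead of SECDED ECC and of a per-word parity bit for a
      # 4K x 128 SRAM block. Counts bits only; encoder/decoder logic area is ignored.

      def secded_check_bits(data_bits):
          """Hamming check bits r with 2**r >= data + r + 1, plus one overall parity."""
          r = 0
          while 2 ** r < data_bits + r + 1:
              r += 1
          return r + 1

      words, width = 4096, 128
      ecc = secded_check_bits(width)          # 9 check bits for 128-bit data
      print(f"SECDED: {ecc} bits/word ({ecc * words} bits total) -> "
            f"{100 * ecc / width:.1f}% storage overhead")
      print(f"Parity: 1 bit/word ({words} bits total) -> "
            f"{100 * 1 / width:.2f}% storage overhead")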

  1. Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities

    NASA Astrophysics Data System (ADS)

    Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan

    2017-07-01

    We estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV’s ability to meet estimated city electricity consumption varies widely—from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city’s estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.

  2. Breeding maize for resistance to ear rot caused by Fusarium moniliforme.

    PubMed

    Hefny, M; Attaa, S; Bayoumi, T; Ammar, S; El-Bramawy, M

    2012-01-15

    Maize ear rots are among the most important impediments to increased maize production in Egypt. The present research was conducted to estimate combining abilities, heterosis, and correlation coefficients for resistance to ear rot disease in seven corn inbred lines and their 21 crosses under field conditions. Results demonstrated that both additive and non-additive gene actions were responsible for the genetic expression of all characters, with a preponderance of non-additive actions for days to 50% silking. The parental line L51 was the best combiner for earliness, low infection severity (%), high phenol content, short plants, and reasonable grain yield, while L101 was a good combiner for low ear rot infection only. The crosses L122 x L84, L122 x L101, L51 x L101, L76 x L36, L76 x L84, L36 x L84, L36 x L81, and L36 x L101, which involved one or both parents with good general combining ability (GCA) effects, expressed useful significant heterosis and specific combining ability (SCA) effects for low infection severity (%), high phenol content, early silking, tall plants, and high grain yield. Phenotypic and genotypic correlation coefficients suggest that selection for resistance to ear rot should identify lines with high yielding ability, early silking, tall plants, high phenol content, and chitinase activity.

  3. Chaos synchronization and Nelder-Mead search for parameter estimation in nonlinear pharmacological systems: Estimating tumor antigenicity in a model of immunotherapy.

    PubMed

    Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel

    2018-06-19

    In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
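
    A toy sketch of the Nelder-Mead half of the approach only; the Kirschner-Panetta model and the synchronization-based objective are not reproduced. A simple logistic ODE stands in for the pharmacological system, and all parameter values are invented.

      # Sketch: parameter estimation for a toy ODE with Nelder-Mead (scipy).
      # The Kirschner-Panetta model and the chaos-synchronization objective used in
      # the paper are not reproduced; this only illustrates the search step.
      import numpy as np
      from scipy.integrate import odeint
      from scipy.optimize import minimize

      def logistic(y, t, r, K):
          return r * y * (1.0 - y / K)

      t = np.linspace(0.0, 10.0, 50)
      true_r, true_K = 0.9, 5.0
      rng = np.random.default_rng(4)
      data = odeint(logistic, 0.1, t, args=(true_r, true_K)).ravel()
      data += 0.05 * rng.normal(size=t.size)                 # measurement noise

      def sse(params):
          r, K = params
          if r <= 0 or K <= 0:
              return 1e12                                    # keep the simplex feasible
          pred = odeint(logistic, 0.1, t, args=(r, K)).ravel()
          return float(np.sum((pred - data) ** 2))

      fit = minimize(sse, x0=[0.5, 3.0], method="Nelder-Mead")
      print("estimated (r, K):", np.round(fit.x, 3))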

  4. Estimates of general combining ability in Hevea breeding at the Rubber Research Institute of Malaysia: I. Phases II and IIIA.

    PubMed

    Tan, H

    1977-01-01

    Estimates of general combining ability of parents for yield and girth obtained separately from seedlings and their corresponding clonal families in Phases II and IIIA of the RRIM breeding programme are compared. A highly significant positive correlation (r = 0.71***) is found between GCA estimates from seedling and clonal families for yield in Phase IIIA, but not in Phase II (r = -0.03(NS)) nor for girth (r = -0.27(NS)) in Phase IIIA. The correlations for Phase II yield and Phase IIIA girth, however, improve when the GCA estimates based on small sample size or reversed rankings are excluded. When the best selections (based on present clonal and seedling information) are compared, all five of the parents top-ranking for yield are common in Phase IIIA but only two parents are common for yield and girth in Phases II and IIIA respectively. However, only one parent for yield in Phase II and two parents for girth in Phase IIIA would, if selected on clonal performance, have been omitted from the top ranking selections made by previous workers using seedling information. These findings, therefore, justify the choice of parents based on GCA estimates for yield obtained from seedling performance. Similar justification cannot be offered for girth, for which analysis is confounded by uninterpretable site and seasonal effects.

  5. Estimation of phenotypic variability in symbiotic nitrogen fixation ability of common bean under drought stress using 15N natural abundance in grain.

    PubMed

    Polania, Jose; Poschenrieder, Charlotte; Rao, Idupulapati; Beebe, Stephen

    2016-09-01

    Common bean (Phaseolus vulgaris L.) is the most important food legume; it is cultivated by small farmers and is usually exposed to unfavorable conditions with minimal use of inputs. Drought and low soil fertility, especially phosphorus and nitrogen (N) deficiencies, are major limitations to bean yield in smallholder systems. Beans can derive part of their required N from the atmosphere through symbiotic nitrogen fixation (SNF). Drought stress severely limits the SNF ability of plants. The main objectives of this study were to: (i) test and validate the use of 15N natural abundance in grain to quantify phenotypic differences in SNF ability for implementation in breeding programs of common bean with bush growth habit aiming to improve SNF, and (ii) quantify phenotypic differences in SNF under drought to identify superior genotypes that could serve as parents. Field studies were conducted at CIAT-Palmira, Colombia, using a set of 36 bean genotypes belonging to the Middle American gene pool, evaluated in two seasons with two levels of water supply (irrigated and drought stress). We used the 15N natural abundance method to compare SNF ability estimated from shoot tissue sampled at the mid-pod filling growth stage vs. grain tissue sampled at harvest. Our results showed a positive and significant correlation between nitrogen derived from the atmosphere (%Ndfa) estimated using shoot tissue at mid-pod filling and %Ndfa estimated using grain tissue at harvest. Both methods showed phenotypic variability in SNF ability under both drought and irrigated conditions, and a significant reduction in SNF ability was observed under drought stress. We suggest that the method of estimating Ndfa using grain tissue (Ndfa-G) could be applied in bean breeding programs to improve SNF ability. Using this Ndfa-G method, we identified four bean lines (RCB 593, SEA 15, NCB 226, and BFS 29) that combine greater SNF ability with greater grain yield under drought stress; these could serve as potential parents to further improve the SNF ability of common bean.
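
    For reference, the 15N natural abundance estimate of %Ndfa commonly takes the form sketched below; the B value and delta-15N numbers are placeholders, not values from this study.

      # Sketch: %Ndfa from 15N natural abundance (grain or shoot tissue).
      # delta_ref: delta15N of a non-fixing reference plant; delta_legume: delta15N of
      # the bean tissue; b_value: delta15N of the legume when fully dependent on SNF.
      # The numbers below are placeholders, not data from the study.

      def ndfa_percent(delta_ref, delta_legume, b_value):
          return 100.0 * (delta_ref - delta_legume) / (delta_ref - b_value)

      print(round(ndfa_percent(delta_ref=5.2, delta_legume=2.1, b_value=-1.5), 1))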

  6. Bayesian and “Anti-Bayesian” Biases in Sensory Integration for Action and Perception in the Size–Weight Illusion

    PubMed Central

    Brayanov, Jordan B.

    2010-01-01

    Which is heavier: a pound of lead or a pound of feathers? This classic trick question belies a simple but surprising truth: when lifted, the pound of lead feels heavier—a phenomenon known as the size–weight illusion. To estimate the weight of an object, our CNS combines two imperfect sources of information: a prior expectation, based on the object's appearance, and direct sensory information from lifting it. Bayes' theorem (or Bayes' law) defines the statistically optimal way to combine multiple information sources for maximally accurate estimation. Here we asked whether the mechanisms for combining these information sources produce statistically optimal weight estimates for both perceptions and actions. We first studied the ability of subjects to hold one hand steady when the other removed an object from it, under conditions in which sensory information about the object's weight sometimes conflicted with prior expectations based on its size. Since the ability to steady the supporting hand depends on the generation of a motor command that accounts for lift timing and object weight, hand motion can be used to gauge biases in weight estimation by the motor system. We found that these motor system weight estimates reflected the integration of prior expectations with real-time proprioceptive information in a Bayesian, statistically optimal fashion that discounted unexpected sensory information. This produces a motor size–weight illusion that consistently biases weight estimates toward prior expectations. In contrast, when subjects compared the weights of two objects, their perceptions defied Bayes' law, exaggerating the value of unexpected sensory information. This produces a perceptual size–weight illusion that biases weight perceptions away from prior expectations. We term this effect “anti-Bayesian” because the bias is opposite that seen in Bayesian integration. Our findings suggest that two fundamentally different strategies for the integration of prior expectations with sensory information coexist in the nervous system for weight estimation. PMID:20089821
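
    The Bayesian-integration benchmark against which the perceptual data are judged is the precision-weighted average of prior and sensory estimates, sketched below with illustrative numbers only.

      # Sketch: Bayes-optimal fusion of a prior expectation and a sensory estimate,
      # both modeled as Gaussians. Weights are inverse variances (precisions).

      def fuse(mu_prior, var_prior, mu_sense, var_sense):
          w_prior, w_sense = 1.0 / var_prior, 1.0 / var_sense
          mu_post = (w_prior * mu_prior + w_sense * mu_sense) / (w_prior + w_sense)
          var_post = 1.0 / (w_prior + w_sense)
          return mu_post, var_post

      # A small-looking object (prior says "light") that is surprisingly heavy.
      print(fuse(mu_prior=2.0, var_prior=0.5, mu_sense=4.0, var_sense=1.0))
      # The posterior mean (~2.67) is pulled toward the prior, as in the motor illusion;
      # the perceptual judgments described above move the opposite way.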

  7. Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities

    DOE PAGES

    Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; ...

    2017-07-06

    Here, we estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV's ability to meet estimated city electricity consumption varies widely - from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city's estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.

  8. Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolis, Robert; Gagnon, Pieter; Melius, Jennifer

    Here, we estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV's ability to meet estimated city electricity consumption varies widely - from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city's estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.

  9. Entropy Econometrics for combining regional economic forecasts: A Data-Weighted Prior Estimator

    NASA Astrophysics Data System (ADS)

    Fernández-Vázquez, Esteban; Moreno, Blanca

    2017-10-01

    Forecast combination has been studied in econometrics for a long time, and the literature has shown the superior performance of forecast combination over individual predictions. However, there is still controversy over the best procedure for specifying the forecast weights. This paper explores the possibility of using a procedure based on Entropy Econometrics, which allows setting the weights for the individual forecasts as a mixture of different alternatives. In particular, we examine the ability of the Data-Weighted Prior Estimator proposed by Golan (J Econom 101(1):165-193, 2001) to combine forecasting models in a context of small sample sizes, a relatively common scenario when dealing with time series for regional economies. We test the validity of the proposed approach using a simulation exercise and a real-world example that aims at predicting gross regional product growth rates for a regional economy. The forecasting performance of the proposed Data-Weighted Prior Estimator is compared with that of other combining methods. The simulation results indicate that for heavily ill-conditioned datasets the suggested approach dominates other forecast combination strategies. The empirical results are consistent with the conclusions found in the numerical experiment.
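
    The Data-Weighted Prior estimator itself is not reproduced here; as a simple point of contrast, the sketch below implements the standard inverse-MSE weighting baseline that entropy-based combination schemes generalize. All series are simulated.

      # Sketch: simple forecast combination with inverse-MSE weights (a standard
      # baseline; it is NOT the Data-Weighted Prior estimator discussed above).
      import numpy as np

      rng = np.random.default_rng(5)
      y = rng.normal(size=40)                                  # realized growth rates
      forecasts = np.column_stack([y + rng.normal(0, s, 40)    # three imperfect models
                                   for s in (0.3, 0.6, 1.0)])

      mse = ((forecasts - y[:, None]) ** 2).mean(axis=0)
      w = (1.0 / mse) / (1.0 / mse).sum()                      # weights sum to one
      combined = forecasts @ w

      print("weights:", np.round(w, 3))
      print("combined MSE:", round(float(((combined - y) ** 2).mean()), 4))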

  10. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    NASA Astrophysics Data System (ADS)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-07-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei (AGNs) detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared (IR), X-ray, and optically selected AGNs - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGNs are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole coevolution and for cosmological studies.

  11. Spectrotemporal Modulation Sensitivity as a Predictor of Speech Intelligibility for Hearing-Impaired Listeners

    PubMed Central

    Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.

    2014-01-01

    Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210

  12. Breeding Potential of Introgression Lines Developed from Interspecific Crossing between Upland Cotton (Gossypium hirsutum) and Gossypium barbadense: Heterosis, Combining Ability and Genetic Effects

    PubMed Central

    Li, Xingli; Pei, Wenfeng

    2016-01-01

    Upland cotton (Gossypium hirsutum L.), which produces more than 95% of the world's natural cotton fibers, has a narrow genetic base which hinders progress in cotton breeding. Introducing germplasm from exotic sources, especially from another cultivated tetraploid, G. barbadense L., can broaden the genetic base of Upland cotton. However, the breeding potential of introgression lines (ILs) in Upland cotton with G. barbadense germplasm integration has not been well addressed. This study involved six ILs developed from an interspecific crossing and backcrossing between Upland cotton and G. barbadense and represented one of the first studies to investigate breeding potentials of a set of ILs using a full diallel analysis. High mid-parent heterosis was detected in several hybrids between ILs and a commercial cultivar, which also out-yielded the high-yielding cultivar parent in F1, F2 and F3 generations. A further analysis indicated that general combining ability (GCA) variance was predominant for all the traits, while specific combining ability (SCA) variance was either non-existent or much lower than GCA. The estimated GCA effects and predicted additive effects for parents in each trait were positively correlated (at P<0.01). Furthermore, GCA and additive effects for each trait were also positively correlated among generations (at P<0.05), suggesting that F2 and F3 generations can be used as a proxy to F1 in analyzing combining abilities and estimating genetic parameters. In addition, differences between reciprocal crosses in F1 and F2 were not significant for yield, yield components and fiber quality traits, but maternal effects appeared to be present for seed oil and protein contents in F3. This study identified introgression lines as good general combiners for yield and fiber quality improvement and hybrids with high heterotic vigor in yield, and therefore provided useful information for further utilization of introgression lines in cotton breeding. PMID:26730964

  13. Breeding Potential of Introgression Lines Developed from Interspecific Crossing between Upland Cotton (Gossypium hirsutum) and Gossypium barbadense: Heterosis, Combining Ability and Genetic Effects.

    PubMed

    Zhang, Jinfa; Wu, Man; Yu, Jiwen; Li, Xingli; Pei, Wenfeng

    2016-01-01

    Upland cotton (Gossypium hirsutum L.), which produces more than 95% of the world's natural cotton fibers, has a narrow genetic base which hinders progress in cotton breeding. Introducing germplasm from exotic sources, especially from another cultivated tetraploid, G. barbadense L., can broaden the genetic base of Upland cotton. However, the breeding potential of introgression lines (ILs) in Upland cotton with G. barbadense germplasm integration has not been well addressed. This study involved six ILs developed from an interspecific crossing and backcrossing between Upland cotton and G. barbadense and represented one of the first studies to investigate breeding potentials of a set of ILs using a full diallel analysis. High mid-parent heterosis was detected in several hybrids between ILs and a commercial cultivar, which also out-yielded the high-yielding cultivar parent in F1, F2 and F3 generations. A further analysis indicated that general combining ability (GCA) variance was predominant for all the traits, while specific combining ability (SCA) variance was either non-existent or much lower than GCA. The estimated GCA effects and predicted additive effects for parents in each trait were positively correlated (at P<0.01). Furthermore, GCA and additive effects for each trait were also positively correlated among generations (at P<0.05), suggesting that F2 and F3 generations can be used as a proxy to F1 in analyzing combining abilities and estimating genetic parameters. In addition, differences between reciprocal crosses in F1 and F2 were not significant for yield, yield components and fiber quality traits, but maternal effects appeared to be present for seed oil and protein contents in F3. This study identified introgression lines as good general combiners for yield and fiber quality improvement and hybrids with high heterotic vigor in yield, and therefore provided useful information for further utilization of introgression lines in cotton breeding.

  14. Estimation of diversity and combining abilities in Helianthus annuus L. under water stress and normal conditions.

    PubMed

    Saba, M; Khan, F A; Sadaqat, H A; Rana, I A

    2016-10-24

    Sunflower cannot produce high yields under water-limiting conditions. The aim of the present study was to overcome the impediments to yield and to develop varieties with high yield potential under water-scarce conditions. Achieving this objective requires identifying parents with desirable traits, which depends mainly on the action of the genes controlling the trait under improvement, the combining ability, and the genetic makeup of the parents. Heterosis can also be used to pool desirable genes from genetically divergent varieties, and such divergent parents can be detected by molecular studies. Ten tolerant lines and five susceptible tester lines were selected, crossed, and tested for genetic diversity using simple sequence repeat primers. We identified two parents (A-10.8 and G-60) that showed the maximum (46.7%) genetic dissimilarity. On average, 3.1 alleles per locus were detected for the twenty pairs of primers. Evaluation of mean values revealed that under stress conditions the mean performance of the genotypes was reduced for all traits under study. Parent A-10.8 was consistently a good general combiner for achene yield per plant under both non-stress and stress conditions. Line A-10.8 in the hybrid A-10.8 x G-60 proved to be a good combiner, as it showed negative specific combining ability (SCA) effects for plant height and internodal length and positive SCA effects for head weight, achene yield per plant, and membrane stability index. Valuable information on gene action, combining ability, and heterosis was generated, which could be used in further breeding programs.

  15. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    PubMed Central

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 success criterion combined with a learning factor was used to control the dynamic adaptive adjustment of the parameters. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data were used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938

  16. Prediction of industrial tomato hybrids from agronomic traits and ISSR molecular markers.

    PubMed

    Figueiredo, A S T; Resende, J T V; Faria, M V; Da-Silva, P R; Fagundes, B S; Morales, R G F

    2016-05-13

    Heterosis is a highly relevant phenomenon in plant breeding. It is usually observed in hybrids derived from crosses of highly divergent parents. The success of a breeder in obtaining heterosis is directly related to the correct identification of genetically contrasting parents. Currently, the diallel cross is the most commonly used methodology to detect contrasting parents; however, it is a time-consuming and costly procedure. Therefore, new tools capable of performing this task quickly and accurately are required. Thus, the purpose of this study was to estimate the genetic divergence among industrial tomato lines based on agronomic traits and to compare it with estimates obtained using inter-simple sequence repeat (ISSR) molecular markers. The genetic divergence among 10 industrial tomato lines, based on nine morphological characters and 12 ISSR primers, was analyzed. For the data analysis, Pearson and Spearman correlation coefficients were calculated between the genetic dissimilarity measures, estimated by Mahalanobis distance and Jaccard's coefficient of genetic dissimilarity, and the heterosis estimates, combining ability, and means of important traits of industrial tomato. The ISSR markers efficiently detected contrasting parents for hybrid production in tomato. Parent RVTD-08 was indicated as the most divergent by both molecular and morphological markers, and it contributed positively to increased heterosis and to the specific combining ability of the crosses in which it participated. The genetic dissimilarity estimated by ISSR molecular markers aided the identification of the best hybrids of the experiment in terms of total fruit yield, pulp yield, and soluble solids content.

  17. Network design for quantifying urban CO2 emissions: assessing trade-offs between precision and network density

    NASA Astrophysics Data System (ADS)

    Turner, Alexander J.; Shusterman, Alexis A.; McDonald, Brian C.; Teige, Virginia; Harley, Robert A.; Cohen, Ronald C.

    2016-11-01

    The majority of anthropogenic CO2 emissions are attributable to urban areas. While the emissions from urban electricity generation often occur in locations remote from consumption, many of the other emissions occur within the city limits. Evaluating the effectiveness of strategies for controlling these emissions depends on our ability to observe urban CO2 emissions and attribute them to specific activities. Cost-effective strategies for doing so have yet to be described. Here we characterize the ability of a prototype measurement network, modeled after the Berkeley Atmospheric CO2 Observation Network (BEACO2N) in California's Bay Area, in combination with an inverse model based on the coupled Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) to improve our understanding of urban emissions. The pseudo-measurement network includes 34 sites at roughly 2 km spacing covering an area of roughly 400 km2. The model uses an hourly 1 × 1 km2 emission inventory and 1 × 1 km2 meteorological calculations. We perform an ensemble of Bayesian atmospheric inversions to sample the combined effects of uncertainties of the pseudo-measurements and the model. We vary the estimates of the combined uncertainty of the pseudo-observations and model over a range of 20 to 0.005 ppm and vary the number of sites from 1 to 34. We use these inversions to develop statistical models that estimate the efficacy of the combined model-observing system in reducing uncertainty in CO2 emissions. We examine uncertainty in estimated CO2 fluxes on the urban scale, as well as for sources embedded within the city such as a line source (e.g., a highway) or a point source (e.g., emissions from the stacks of small industrial facilities). Using our inversion framework, we find that a dense network with moderate precision is the preferred setup for estimating area, line, and point sources from a combined uncertainty and cost perspective. The dense network considered here (modeled after the BEACO2N network with an assumed mismatch error of 1 ppm at an hourly temporal resolution) could estimate weekly CO2 emissions from an urban region with less than 5 % error, given our characterization of the combined observation and model uncertainty.
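
    A stylized sketch of the uncertainty bookkeeping behind such a network assessment, not the WRF-STILT/BEACO2N configuration: with a linear observation operator K, observation-error covariance S_o, and prior-error covariance S_a, the posterior flux covariance is inv(K' inv(S_o) K + inv(S_a)), so adding sites or tightening the mismatch error shrinks the posterior spread. All matrices below are made up.

      # Sketch: posterior uncertainty of a Bayesian flux inversion as a function of
      # the number of sites and the per-site mismatch error. K, S_a, and the sizes
      # are made-up placeholders, not the BEACO2N/WRF-STILT configuration.
      import numpy as np

      rng = np.random.default_rng(6)
      n_flux = 25                                    # unknown flux elements

      def posterior_sd(n_sites, mismatch_ppm):
          K = np.abs(rng.normal(0.1, 0.05, (n_sites, n_flux)))   # footprints (ppm per flux unit)
          S_o_inv = np.eye(n_sites) / mismatch_ppm ** 2
          S_a_inv = np.eye(n_flux) / 1.0 ** 2                    # 100% prior error on unit fluxes
          S_post = np.linalg.inv(K.T @ S_o_inv @ K + S_a_inv)
          return float(np.sqrt(np.diag(S_post)).mean())

      for n in (1, 10, 34):
          print(n, "sites:", round(posterior_sd(n, mismatch_ppm=1.0), 3))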

  18. Network design for quantifying urban CO2 emissions: assessing trade-offs between precision and network density

    DOE PAGES

    Turner, Alexander J.; Shusterman, Alexis A.; McDonald, Brian C.; ...

    2016-11-01

    The majority of anthropogenic CO2 emissions are attributable to urban areas. While the emissions from urban electricity generation often occur in locations remote from consumption, many of the other emissions occur within the city limits. Evaluating the effectiveness of strategies for controlling these emissions depends on our ability to observe urban CO2 emissions and attribute them to specific activities. Cost-effective strategies for doing so have yet to be described. Here we characterize the ability of a prototype measurement network, modeled after the Berkeley Atmospheric CO2 Observation Network (BEACO2N) in California's Bay Area, in combination with an inverse model based on the coupled Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) to improve our understanding of urban emissions. The pseudo-measurement network includes 34 sites at roughly 2 km spacing covering an area of roughly 400 km2. The model uses an hourly 1 × 1 km2 emission inventory and 1 × 1 km2 meteorological calculations. We perform an ensemble of Bayesian atmospheric inversions to sample the combined effects of uncertainties of the pseudo-measurements and the model. We vary the estimates of the combined uncertainty of the pseudo-observations and model over a range of 20 to 0.005 ppm and vary the number of sites from 1 to 34. We use these inversions to develop statistical models that estimate the efficacy of the combined model-observing system in reducing uncertainty in CO2 emissions. We examine uncertainty in estimated CO2 fluxes on the urban scale, as well as for sources embedded within the city such as a line source (e.g., a highway) or a point source (e.g., emissions from the stacks of small industrial facilities). Using our inversion framework, we find that a dense network with moderate precision is the preferred setup for estimating area, line, and point sources from a combined uncertainty and cost perspective. The dense network considered here (modeled after the BEACO2N network with an assumed mismatch error of 1 ppm at an hourly temporal resolution) could estimate weekly CO2 emissions from an urban region with less than 5 % error, given our characterization of the combined observation and model uncertainty.

  19. Network design for quantifying urban CO2 emissions: assessing trade-offs between precision and network density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Alexander J.; Shusterman, Alexis A.; McDonald, Brian C.

    The majority of anthropogenic CO2 emissions are attributable to urban areas. While the emissions from urban electricity generation often occur in locations remote from consumption, many of the other emissions occur within the city limits. Evaluating the effectiveness of strategies for controlling these emissions depends on our ability to observe urban CO2 emissions and attribute them to specific activities. Cost-effective strategies for doing so have yet to be described. Here we characterize the ability of a prototype measurement network, modeled after the Berkeley Atmospheric CO2 Observation Network (BEACO2N) in California's Bay Area, in combination with an inverse model based on the coupled Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) to improve our understanding of urban emissions. The pseudo-measurement network includes 34 sites at roughly 2 km spacing covering an area of roughly 400 km2. The model uses an hourly 1 × 1 km2 emission inventory and 1 × 1 km2 meteorological calculations. We perform an ensemble of Bayesian atmospheric inversions to sample the combined effects of uncertainties of the pseudo-measurements and the model. We vary the estimates of the combined uncertainty of the pseudo-observations and model over a range of 20 to 0.005 ppm and vary the number of sites from 1 to 34. We use these inversions to develop statistical models that estimate the efficacy of the combined model-observing system in reducing uncertainty in CO2 emissions. We examine uncertainty in estimated CO2 fluxes on the urban scale, as well as for sources embedded within the city such as a line source (e.g., a highway) or a point source (e.g., emissions from the stacks of small industrial facilities). Using our inversion framework, we find that a dense network with moderate precision is the preferred setup for estimating area, line, and point sources from a combined uncertainty and cost perspective. The dense network considered here (modeled after the BEACO2N network with an assumed mismatch error of 1 ppm at an hourly temporal resolution) could estimate weekly CO2 emissions from an urban region with less than 5 % error, given our characterization of the combined observation and model uncertainty.

  20. Exploiting Satellite Archives to Estimate Global Glacier Volume Changes

    NASA Astrophysics Data System (ADS)

    McNabb, R. W.; Nuth, C.; Kääb, A.; Girod, L.

    2017-12-01

    In the past decade, the availability of, and ability to process, remote sensing data over glaciers has expanded tremendously. Newly opened satellite image archives, combined with new processing techniques as well as increased computing power and storage capacity, have given the glaciological community the ability to observe and investigate glaciological processes and changes on a truly global scale. In particular, the opening of the ASTER archives provides further opportunities to both estimate and monitor glacier elevation and volume changes globally, including potentially on sub-annual timescales. With this explosion of data availability, however, comes the challenge of seeing the forest instead of the trees. The high volume of data available means that automated detection and proper handling of errors and biases in the data becomes critical, in order to properly study the processes that we wish to see. This includes holes and blunders in digital elevation models (DEMs) derived from optical data or penetration of radar signals leading to biases in DEMs derived from radar data, among other sources. Here, we highlight new advances in the ability to sift through high-volume datasets, and apply these techniques to estimate recent glacier volume changes in the Caucasus Mountains, Scandinavia, Africa, and South America. By properly estimating and correcting for these biases, we additionally provide a detailed accounting of the uncertainties in these estimates of volume changes, leading to more reliable results that have applicability beyond the glaciological community.

  1. Estimation of Genetic Relationships Between Individuals Across Cohorts and Platforms: Application to Childhood Height.

    PubMed

    Fedko, Iryna O; Hottenga, Jouke-Jan; Medina-Gomez, Carolina; Pappa, Irene; van Beijsterveldt, Catharina E M; Ehli, Erik A; Davies, Gareth E; Rivadeneira, Fernando; Tiemeier, Henning; Swertz, Morris A; Middeldorp, Christel M; Bartels, Meike; Boomsma, Dorret I

    2015-09-01

    Combining genotype data across cohorts increases the power to estimate the heritability due to common single nucleotide polymorphisms (SNPs), based on analyzing a Genetic Relationship Matrix (GRM). However, the combination of SNP data across multiple cohorts may lead to stratification, for example when different genotyping platforms are used. In the current study, we address issues of combining SNP data from different cohorts, the Netherlands Twin Register (NTR) and the Generation R (GENR) study. Both cohorts include children of Northern European Dutch background (N = 3102 + 2826, respectively) who were genotyped on different platforms. We explore imputation and phasing as tools and compare three GRM-building strategies, in which data from the two cohorts are (1) simply combined, (2) pre-combined and cross-platform imputed, and (3) cross-platform imputed and post-combined. We test these three strategies with data on childhood height for unrelated individuals (N = 3124, average age 6.7 years) to explore their effect on SNP-heritability estimates and compare results to those obtained from the independent studies. All combination strategies result in SNP-heritability estimates with a standard error smaller than those of the independent studies. We did not observe a significant difference in estimates of SNP-heritability based on the various cross-platform imputed GRMs. SNP-heritability of childhood height was on average estimated as 0.50 (SE = 0.10). Introducing cohort as a covariate resulted in a drop of about 2%. Principal component (PC) adjustment resulted in SNP-heritability estimates of about 0.39 (SE = 0.11). Strikingly, we did not find a significant difference between cross-platform imputed and combined GRMs. All estimates were significant regardless of the use of PC adjustment. Based on these analyses we conclude that imputation with a reference set helps to increase the power to estimate SNP-heritability when combining cohorts of the same ethnicity genotyped on different platforms. However, important factors should be taken into account, such as remaining cohort stratification after imputation and/or phenotypic heterogeneity between and within cohorts. Whether one should use imputation, or just combine the genotype data, depends on the number of overlapping SNPs in relation to the total number of genotyped SNPs for both cohorts, and their ability to tag all the genetic variance related to the specific trait of interest.
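
    The GRM at the heart of this approach has a compact form: with genotypes coded 0/1/2 and standardized per SNP, A = ZZ'/M. The sketch below builds such a matrix from simulated genotypes for a combined sample; the sample sizes, allele frequencies, and genotypes are invented, and none of the imputation, phasing, or stratification handling discussed above is included.

        import numpy as np

        rng = np.random.default_rng(2)
        n_ind, n_snp = 200, 5000                       # hypothetical combined-sample sizes
        freqs = rng.uniform(0.05, 0.5, size=n_snp)
        geno = rng.binomial(2, freqs, size=(n_ind, n_snp)).astype(float)   # 0/1/2 allele counts

        p = geno.mean(axis=0) / 2.0                    # allele frequencies in the combined sample
        Z = (geno - 2 * p) / np.sqrt(2 * p * (1 - p))  # standardized genotypes
        GRM = Z @ Z.T / n_snp                          # A_jk = (1/M) * sum_i z_ij * z_ik

        print(f"mean diagonal (expected near 1): {GRM.diagonal().mean():.3f}")
        off_diag = GRM[np.triu_indices(n_ind, k=1)]
        print(f"mean off-diagonal (expected near 0 for unrelateds): {off_diag.mean():.4f}")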

  2. NASA MEaSUREs Combined ASTER and MODIS Emissivity over Land (CAMEL) Uncertainty Estimation

    NASA Astrophysics Data System (ADS)

    Feltz, M.; Borbas, E. E.; Knuteson, R. O.; Hulley, G. C.; Hook, S. J.

    2016-12-01

    Under the NASA MEaSUREs project, a new global land surface emissivity database is being made available as part of the Unified and Coherent Land Surface Temperature and Emissivity Earth System Data Record. This new CAMEL emissivity database is created by merging the MODIS baseline-fit emissivity database (UWIREMIS) developed at the University of Wisconsin-Madison and the ASTER Global Emissivity Dataset v4 produced at the Jet Propulsion Laboratory. The combined CAMEL product leverages the ability of ASTER's 5 bands to more accurately resolve the TIR (8-12 micron) region and the ability of UWIREMIS to provide information throughout the 3.6-12 micron IR region. It will be made available for 2000 through 2017 at monthly mean, 5 km resolution for 13 bands within the 3.6-14.3 micron region, and will also be extended to 417 infrared spectral channels using a principal component regression approach. Uncertainty estimates of the CAMEL will be provided that combine temporal, spatial, and algorithm variability as part of a total uncertainty estimate for the emissivity product. The spatial and temporal uncertainties are calculated as the standard deviation of the surrounding 5x5 pixels and 3 neighboring months, respectively, while the algorithm uncertainty is calculated using a measure of the difference between the two CAMEL emissivity inputs, the ASTER GED and MODIS baseline-fit products. This work describes these uncertainty estimation methods in detail and shows first results. Global, monthly results for different seasons are shown as well as case study examples at locations with different land surface types. Comparisons of the case studies to both lab values and an independent emissivity climatology derived from IASI measurements (Dan Zhou et al., IEEE Trans., 2011) are included.
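
    For a single pixel and band, the three components described above can be combined in quadrature, which is one natural reading of "total uncertainty"; whether CAMEL uses quadrature or another combination rule is an assumption here, and all numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        emis_window = 0.96 + 0.01 * rng.standard_normal((5, 5))   # 5x5 spatial neighborhood
        emis_months = np.array([0.960, 0.958, 0.963])             # 3 neighboring months
        emis_aster, emis_modis_bf = 0.961, 0.957                  # the two CAMEL inputs

        u_spatial = emis_window.std(ddof=1)             # spatial variability
        u_temporal = emis_months.std(ddof=1)            # temporal variability
        u_algorithm = abs(emis_aster - emis_modis_bf)   # disagreement between input products

        u_total = np.sqrt(u_spatial**2 + u_temporal**2 + u_algorithm**2)
        print(f"total emissivity uncertainty (quadrature): {u_total:.4f}")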

  3. Improved blood glucose estimation through multi-sensor fusion.

    PubMed

    Xiong, Feiyu; Hipszer, Brian R; Joseph, Jeffrey; Kam, Moshe

    2011-01-01

    Continuous glucose monitoring systems are an integral component of diabetes management. Efforts to improve the accuracy and robustness of these systems are at the forefront of diabetes research. Towards this goal, a multi-sensor approach was evaluated in hospitalized patients. In this paper, we report on a multi-sensor fusion algorithm to combine glucose sensor measurements in a retrospective fashion. The results demonstrate the algorithm's ability to improve the accuracy and robustness of the blood glucose estimation with current glucose sensor technology.
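
    The abstract does not spell out the fusion rule, so the sketch below shows one common choice, inverse-variance weighting of simultaneous readings, purely as an illustrative stand-in for the paper's retrospective algorithm; the readings and variances are invented.

        import numpy as np

        readings = np.array([142.0, 150.0, 138.0])   # mg/dL from three hypothetical sensors
        variances = np.array([25.0, 64.0, 36.0])     # assumed per-sensor error variances, (mg/dL)^2

        weights = (1.0 / variances) / np.sum(1.0 / variances)   # inverse-variance weights
        fused = np.sum(weights * readings)
        fused_sd = np.sqrt(1.0 / np.sum(1.0 / variances))       # SD of the fused estimate
        print(f"fused glucose estimate: {fused:.1f} mg/dL (SD {fused_sd:.1f})")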

  4. Diallel analysis and growth parameters as selection tools for drought tolerance in young Theobroma cacao plants

    USDA-ARS?s Scientific Manuscript database

    Technical Abstract: This study was aimed to estimate the combining ability, through diallel crosses, of T. cacao genotypes preselected for drought tolerance. The experiment was conducted under greenhouse conditions at the Cacao Research Center (CEPEC), Ilhéus, Bahia, Brazil, in a completely randomiz...

  5. Robustness of Ability Estimation to Multidimensionality in CAST with Implications to Test Assembly

    ERIC Educational Resources Information Center

    Zhang, Yanwei; Nandakumar, Ratna

    2006-01-01

    Computer Adaptive Sequential Testing (CAST) is a test delivery model that combines features of the traditional conventional paper-and-pencil testing and item-based computerized adaptive testing (CAT). The basic structure of CAST is a panel composed of multiple testlets adaptively administered to examinees at different stages. Current applications…

  6. Comparison of clinician-predicted to measured low vision outcomes.

    PubMed

    Chan, Tiffany L; Goldstein, Judith E; Massof, Robert W

    2013-08-01

    To compare low-vision rehabilitation (LVR) clinicians' predictions of the probability of success of LVR with patients' self-reported outcomes after provision of usual outpatient LVR services and to determine if patients' traits influence clinician ratings. The Activity Inventory (AI), a self-report visual function questionnaire, was administered pre- and post-LVR to 316 low-vision patients served by 28 LVR centers that participated in a collaborative observational study. The physical component of the Short Form-36, Geriatric Depression Scale, and Telephone Interview for Cognitive Status were also administered pre-LVR to measure physical capability, depression, and cognitive status. After patient evaluation, 38 LVR clinicians estimated the probability of outcome success (POS) using their own criteria. The POS ratings and change in functional ability were used to assess the effects of patients' baseline traits on predicted outcomes. A regression analysis with a hierarchical random-effects model showed no relationship between LVR physician POS estimates and AI-based outcomes. In another analysis, kappa statistics were calculated to determine the probability of agreement between POS and AI-based outcomes for different outcome criteria. Across all comparisons, none of the kappa values were significantly different from 0, which indicates that the rate of agreement is equivalent to chance. In an exploratory analysis, hierarchical mixed-effects regression models show that POS ratings are associated with information about the patient's cognitive functioning and the combination of visual acuity and functional ability, as opposed to visual acuity or functional ability alone. Clinicians' predictions of LVR outcomes seem to be influenced by knowledge of patients' cognitive functioning and the combination of visual acuity and functional ability, information clinicians acquire from the patient's history and examination. However, clinicians' predictions do not agree with observed changes in functional ability from the patient's perspective; they are no better than chance.

  7. A designed screening study with prespecified combinations of factor settings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-cook, Christine M; Robinson, Timothy J

    2009-01-01

    In many applications, the experimenter has limited options about what factor combinations can be chosen for a designed study. Consider a screening study for a production process involving five input factors whose levels have been previously established. The goal of the study is to understand the effect of each factor on the response, a variable that is expensive to measure and results in destruction of the part. From an inventory of available parts with known factor values, we wish to identify a best collection of factor combinations with which to estimate the factor effects. Though the observational nature of the study cannot establish a causal relationship involving the response and the factors, the study can increase understanding of the underlying process. The study can also help determine where investment should be made to control input factors during production that will maximally influence the response. Because the factor combinations are observational, the chosen model matrix will be nonorthogonal and will not allow independent estimation of factor effects. In this manuscript we borrow principles from design of experiments to suggest an 'optimal' selection of factor combinations. Specifically, we consider precision of model parameter estimates, the issue of replication, and abilities to detect lack of fit and to estimate two-factor interactions. Through an example, we present strategies for selecting a subset of factor combinations that simultaneously balance multiple objectives, conduct a limited sensitivity analysis, and provide practical guidance for implementing our techniques across a variety of quality engineering disciplines.
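
    One way to pick runs from an existing inventory is to greedily maximize det(X'X), a D-optimality criterion; the manuscript balances several further objectives (replication, lack of fit, two-factor interactions), which this minimal sketch omits. The inventory, model, and run size below are invented.

        import numpy as np

        rng = np.random.default_rng(4)
        inventory = rng.choice([-1.0, 0.0, 1.0], size=(60, 5))   # hypothetical parts, 5 factors each
        n_runs = 12

        def model_matrix(rows):
            return np.column_stack([np.ones(len(rows)), rows])   # intercept + main effects

        selected = list(range(6))                                # small arbitrary starting set
        while len(selected) < n_runs:
            best_det, best_idx = -np.inf, None
            for idx in range(len(inventory)):
                if idx in selected:
                    continue
                X = model_matrix(inventory[selected + [idx]])
                d = np.linalg.det(X.T @ X)                       # information gained by adding idx
                if d > best_det:
                    best_det, best_idx = d, idx
            selected.append(best_idx)

        print("chosen inventory rows:", sorted(selected))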

  8. Differential item functioning analysis with ordinal logistic regression techniques. DIFdetect and difwithpar.

    PubMed

    Crane, Paul K; Gibbons, Laura E; Jolley, Lance; van Belle, Gerald

    2006-11-01

    We present an ordinal logistic regression model for identification of items with differential item functioning (DIF) and apply this model to a Mini-Mental State Examination (MMSE) dataset. We employ item response theory ability estimation in our models. Three nested ordinal logistic regression models are applied to each item. Model testing begins with examination of the statistical significance of the interaction term between ability and the group indicator, consistent with nonuniform DIF. Then we turn our attention to the coefficient of the ability term in models with and without the group term. If including the group term has a marked effect on that coefficient, we declare that it has uniform DIF. We examined DIF related to language of test administration in addition to self-reported race, Hispanic ethnicity, age, years of education, and sex. We used PARSCALE for IRT analyses and STATA for ordinal logistic regression approaches. We used an iterative technique for adjusting IRT ability estimates on the basis of DIF findings. Five items were found to have DIF related to language. These same items also had DIF related to other covariates. The ordinal logistic regression approach to DIF detection, when combined with IRT ability estimates, provides a reasonable alternative for DIF detection. There appear to be several items with significant DIF related to language of test administration in the MMSE. More attention needs to be paid to the specific criteria used to determine whether an item has DIF, not just the technique used to identify DIF.
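
    The nested-model logic can be illustrated for a single dichotomous item, with binary logistic regression standing in for the ordinal case; 'theta' plays the role of the IRT ability estimate and 'group' a covariate such as language of administration. The data are simulated with uniform DIF built in, and the 10%-change flag used at the end is an assumption for the example, not the paper's criterion.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 1000
        theta = rng.standard_normal(n)                   # ability estimates
        group = rng.integers(0, 2, n)                    # grouping covariate (e.g., language)
        logit_p = -0.5 + 1.2 * theta + 0.6 * group       # simulated uniform DIF
        item = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        m1 = sm.Logit(item, sm.add_constant(np.column_stack([theta]))).fit(disp=0)
        m2 = sm.Logit(item, sm.add_constant(np.column_stack([theta, group]))).fit(disp=0)
        m3 = sm.Logit(item, sm.add_constant(np.column_stack([theta, group, theta * group]))).fit(disp=0)

        print(f"nonuniform DIF: ability x group interaction p = {m3.pvalues[-1]:.3f}")
        change = abs(m2.params[1] - m1.params[1]) / abs(m1.params[1])
        print(f"uniform DIF: ability coefficient changes by {change:.1%} when group is added"
              " -> flag if above, say, 10%")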

  9. Operational wave forecasting with spaceborne SAR: Prospects and pitfalls

    NASA Technical Reports Server (NTRS)

    Beal, R. C.

    1986-01-01

    Measurements collected in the Shuttle Imaging Radar (SIR-B) Extreme Waves Experiment confirm the ability of Synthetic Aperture Radar (SAR) to yield useful estimates of wave directional energy spectra over global scales, at least for shuttle altitudes. However, azimuth fall-off effects tend to become severe for wavelengths shorter than about 100 m in most sea states. Moreover, the azimuth fall-off problem becomes increasingly severe as the platform altitude increases beyond 300 km. The most viable solution to the global wave measurement problem may be a low-altitude spacecraft carrying a combination of the SAR and the Radar Ocean Wave Spectrometer (ROWS). Such a combination could have a synergy yielding global spectral estimates superior to those from either instrument employed singly.

  10. Detection and Estimation of 2-D Distributions of Greenhouse Gas Source Concentrations and Emissions over Complex Urban Environments and Industrial Sites

    NASA Astrophysics Data System (ADS)

    Zaccheo, T. S.; Pernini, T.; Dobler, J. T.; Blume, N.; Braun, M.

    2017-12-01

    This work highlights the use of the greenhouse-gas laser imaging tomography experiment (GreenLITE™) data in conjunction with a sparse tomography approach to identify and quantify both urban and industrial sources of CO2 and CH4. The GreenLITE™ system provides a user-defined set of time-sequenced intersecting chords or integrated column measurements at a fixed height through a quasi-horizontal plane of interest. This plane, with unobstructed views along the lines of sight, may range from complex industrial facilities to a small city scale or urban sector. The continuous time-phased absorption measurements are converted to column concentrations and combined with a plume-based model to estimate the 2-D distribution of gas concentration over extended areas ranging from 0.04-25 km2. Finally, these 2-D maps of concentration are combined with ancillary meteorological and atmospheric data to identify potential emission sources and provide first-order estimates of their associated fluxes. In this presentation, we will provide a brief overview of the systems and results from both controlled release experiments and a long-term system deployment in Paris, FR. These results provide a quantitative assessment of the system's ability to detect and estimate CO2 and CH4 sources, and demonstrate its ability to perform long-term autonomous monitoring and quantification of either persistent or sporadic emissions that may have both health and safety as well as environmental impacts.

  11. Estimating the extent of impervious surfaces and turf grass across large regions

    USGS Publications Warehouse

    Claggett, Peter; Irani, Frederick M.; Thompson, Renee L.

    2013-01-01

    The ability of researchers to accurately assess the extent of impervious and pervious developed surfaces, e.g., turf grass, using land-cover data derived from Landsat satellite imagery in the Chesapeake Bay watershed is limited due to the resolution of the data and systematic discrepancies between developed land-cover classes, surface mines, forests, and farmlands. Estimates of impervious surface and turf grass area in the Mid-Atlantic, United States that were based on 2006 Landsat-derived land-cover data were substantially lower than estimates based on more authoritative and independent sources. New estimates of impervious surfaces and turf grass area derived using land-cover data combined with ancillary information on roads, housing units, surface mines, and sampled estimates of road width and residential impervious area were up to 57 and 45% higher than estimates based strictly on land-cover data. These new estimates closely approximate estimates derived from authoritative and independent sources in developed counties.

  12. The 'robust' capture-recapture design allows components of recruitment to be estimated

    USGS Publications Warehouse

    Pollock, K.H.; Kendall, W.L.; Nichols, J.D.; Lebreton, J.-D.; North, P.M.

    1993-01-01

    The 'robust' capture-recapture design (Pollock, 1982) allows analyses which combine features of closed population model analyses (Otis et al., 1978; White et al., 1982) and open population model analyses (Pollock et al., 1990). Estimators obtained under these analyses are more robust to unequal catchability than traditional Jolly-Seber estimators (Pollock, 1982; Pollock et al., 1990; Kendall, 1992). The robust design also allows estimation of population size, survival rate and recruitment numbers for all periods of the study, unlike Jolly-Seber type models. The major advantage of this design that we emphasize in this short review paper is that it allows separate estimation of immigration and in situ recruitment numbers for a two or more age class model (Nichols and Pollock, 1990). This is contrasted with the age-dependent Jolly-Seber model (Pollock, 1981; Stokes, 1984; Pollock et al., 1990), which provides separate estimates for immigration and in situ recruitment for all but the first two age classes where there is at least a three age class model. The ability to achieve this separation of recruitment components can be very important to population modelers and wildlife managers, as many species can only be separated into two easily identified age classes in the field.

  13. Predicting absenteeism: screening for work ability or burnout.

    PubMed

    Schouteten, R

    2017-01-01

    In determining the predictors of occupational health problems, two factors can be distinguished: personal (work ability) factors and work-related factors (burnout, job characteristics). However, these risk factors are hardly ever combined and it is not clear whether burnout or work ability best predicts absenteeism. To relate measures of work ability, burnout and job characteristics to absenteeism as the indicators of occupational health problems. Survey data on work ability, burnout and job characteristics from a Dutch university were related to the absenteeism data from the university's occupational health and safety database in the year following the survey study. The survey contained the Work Ability Index (WAI), Utrecht Burnout Scale (UBOS) and seven job characteristics from the Questionnaire on Experience and Evaluation of Work (QEEW). There were 242 employees in the study group. Logistic regression analyses revealed that job characteristics did not predict absenteeism. Exceptional absenteeism was most consistently predicted by the WAI dimensions 'employees' own prognosis of work ability in two years from now' and 'mental resources/vitality' and the burnout dimension 'emotional exhaustion'. Other significant predictors of exceptional absenteeism frequency included estimated work impairment due to diseases (WAI) and feelings of depersonalization or emotional distance from the work (burnout). Absenteeism among university personnel was best predicted by a combination of work ability and burnout. As a result, measures to prevent absenteeism and health problems may best be aimed at improving an individual's work ability and/or preventing the occurrence of burnout. © The Author 2016. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Derived Optimal Linear Combination Evapotranspiration (DOLCE): a global gridded synthesis ET estimate

    NASA Astrophysics Data System (ADS)

    Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna

    2018-02-01

    Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information on the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.
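
    The weighting idea can be sketched with synthetic numbers: form the covariance of each product's errors at the tower sites and take w = A^-1 1 / (1' A^-1 1), the linear combination with minimum error variance among weights that sum to one. This mirrors the kind of analytic solution described above, but the error structure, site count, and products below are invented, and the real method's bias handling and out-of-sample testing are omitted.

        import numpy as np

        rng = np.random.default_rng(6)
        n_sites, n_products = 159, 6
        truth = rng.normal(70.0, 20.0, n_sites)                 # "observed" ET at flux towers
        err_sd = np.array([5.0, 8.0, 12.0, 6.0, 9.0, 15.0])     # assumed per-product error SDs
        products = truth[:, None] + rng.standard_normal((n_sites, n_products)) * err_sd

        A = np.cov((products - truth[:, None]).T)               # error covariance between products
        ones = np.ones(n_products)
        w = np.linalg.solve(A, ones)
        w /= ones @ w                                           # w = A^-1 1 / (1' A^-1 1)

        combined = products @ w
        rmse_single = np.sqrt(((products - truth[:, None]) ** 2).mean(axis=0)).min()
        rmse_combined = np.sqrt(((combined - truth) ** 2).mean())
        print(f"weights: {np.round(w, 3)}")
        print(f"RMSE best single product: {rmse_single:.2f}, weighted combination: {rmse_combined:.2f}")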

  15. Assessing Risk Prediction Models Using Individual Participant Data From Multiple Studies

    PubMed Central

    Pennells, Lisa; Kaptoge, Stephen; White, Ian R.; Thompson, Simon G.; Wood, Angela M.; Tipping, Robert W.; Folsom, Aaron R.; Couper, David J.; Ballantyne, Christie M.; Coresh, Josef; Goya Wannamethee, S.; Morris, Richard W.; Kiechl, Stefan; Willeit, Johann; Willeit, Peter; Schett, Georg; Ebrahim, Shah; Lawlor, Debbie A.; Yarnell, John W.; Gallacher, John; Cushman, Mary; Psaty, Bruce M.; Tracy, Russ; Tybjærg-Hansen, Anne; Price, Jackie F.; Lee, Amanda J.; McLachlan, Stela; Khaw, Kay-Tee; Wareham, Nicholas J.; Brenner, Hermann; Schöttker, Ben; Müller, Heiko; Jansson, Jan-Håkan; Wennberg, Patrik; Salomaa, Veikko; Harald, Kennet; Jousilahti, Pekka; Vartiainen, Erkki; Woodward, Mark; D'Agostino, Ralph B.; Bladbjerg, Else-Marie; Jørgensen, Torben; Kiyohara, Yutaka; Arima, Hisatomi; Doi, Yasufumi; Ninomiya, Toshiharu; Dekker, Jacqueline M.; Nijpels, Giel; Stehouwer, Coen D. A.; Kauhanen, Jussi; Salonen, Jukka T.; Meade, Tom W.; Cooper, Jackie A.; Cushman, Mary; Folsom, Aaron R.; Psaty, Bruce M.; Shea, Steven; Döring, Angela; Kuller, Lewis H.; Grandits, Greg; Gillum, Richard F.; Mussolino, Michael; Rimm, Eric B.; Hankinson, Sue E.; Manson, JoAnn E.; Pai, Jennifer K.; Kirkland, Susan; Shaffer, Jonathan A.; Shimbo, Daichi; Bakker, Stephan J. L.; Gansevoort, Ron T.; Hillege, Hans L.; Amouyel, Philippe; Arveiler, Dominique; Evans, Alun; Ferrières, Jean; Sattar, Naveed; Westendorp, Rudi G.; Buckley, Brendan M.; Cantin, Bernard; Lamarche, Benoît; Barrett-Connor, Elizabeth; Wingard, Deborah L.; Bettencourt, Richele; Gudnason, Vilmundur; Aspelund, Thor; Sigurdsson, Gunnar; Thorsson, Bolli; Kavousi, Maryam; Witteman, Jacqueline C.; Hofman, Albert; Franco, Oscar H.; Howard, Barbara V.; Zhang, Ying; Best, Lyle; Umans, Jason G.; Onat, Altan; Sundström, Johan; Michael Gaziano, J.; Stampfer, Meir; Ridker, Paul M.; Michael Gaziano, J.; Ridker, Paul M.; Marmot, Michael; Clarke, Robert; Collins, Rory; Fletcher, Astrid; Brunner, Eric; Shipley, Martin; Kivimäki, Mika; Ridker, Paul M.; Buring, Julie; Cook, Nancy; Ford, Ian; Shepherd, James; Cobbe, Stuart M.; Robertson, Michele; Walker, Matthew; Watson, Sarah; Alexander, Myriam; Butterworth, Adam S.; Angelantonio, Emanuele Di; Gao, Pei; Haycock, Philip; Kaptoge, Stephen; Pennells, Lisa; Thompson, Simon G.; Walker, Matthew; Watson, Sarah; White, Ian R.; Wood, Angela M.; Wormser, David; Danesh, John

    2014-01-01

    Individual participant time-to-event data from multiple prospective epidemiologic studies enable detailed investigation into the predictive ability of risk models. Here we address the challenges in appropriately combining such information across studies. Methods are exemplified by analyses of log C-reactive protein and conventional risk factors for coronary heart disease in the Emerging Risk Factors Collaboration, a collation of individual data from multiple prospective studies with an average follow-up duration of 9.8 years (dates varied). We derive risk prediction models using Cox proportional hazards regression analysis stratified by study and obtain estimates of risk discrimination, Harrell's concordance index, and Royston's discrimination measure within each study; we then combine the estimates across studies using a weighted meta-analysis. Various weighting approaches are compared and lead us to recommend using the number of events in each study. We also discuss the calculation of measures of reclassification for multiple studies. We further show that comparison of differences in predictive ability across subgroups should be based only on within-study information and that combining measures of risk discrimination from case-control studies and prospective studies is problematic. The concordance index and discrimination measure gave qualitatively similar results throughout. While the concordance index was very heterogeneous between studies, principally because of differing age ranges, the increments in the concordance index from adding log C-reactive protein to conventional risk factors were more homogeneous. PMID:24366051
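
    The recommended weighting can be written down directly: weight each study's estimate (for example, the increment in Harrell's concordance index from adding log C-reactive protein) by its number of events and pool. The sketch below uses invented study-level numbers and assumes independent studies when propagating the standard errors.

        import numpy as np

        delta_c = np.array([0.006, 0.004, 0.009, 0.003, 0.007])        # per-study C-index increments
        se_study = np.array([0.0020, 0.0015, 0.0030, 0.0010, 0.0025])  # per-study standard errors
        events = np.array([310, 520, 150, 880, 240])                   # events per study

        w = events / events.sum()                                      # event-count weights
        pooled = np.sum(w * delta_c)
        pooled_se = np.sqrt(np.sum((w * se_study) ** 2))               # assumes independent studies
        print(f"pooled C-index increment: {pooled:.4f} (SE {pooled_se:.4f})")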

  16. Assessing risk prediction models using individual participant data from multiple studies.

    PubMed

    Pennells, Lisa; Kaptoge, Stephen; White, Ian R; Thompson, Simon G; Wood, Angela M

    2014-03-01

    Individual participant time-to-event data from multiple prospective epidemiologic studies enable detailed investigation into the predictive ability of risk models. Here we address the challenges in appropriately combining such information across studies. Methods are exemplified by analyses of log C-reactive protein and conventional risk factors for coronary heart disease in the Emerging Risk Factors Collaboration, a collation of individual data from multiple prospective studies with an average follow-up duration of 9.8 years (dates varied). We derive risk prediction models using Cox proportional hazards regression analysis stratified by study and obtain estimates of risk discrimination, Harrell's concordance index, and Royston's discrimination measure within each study; we then combine the estimates across studies using a weighted meta-analysis. Various weighting approaches are compared and lead us to recommend using the number of events in each study. We also discuss the calculation of measures of reclassification for multiple studies. We further show that comparison of differences in predictive ability across subgroups should be based only on within-study information and that combining measures of risk discrimination from case-control studies and prospective studies is problematic. The concordance index and discrimination measure gave qualitatively similar results throughout. While the concordance index was very heterogeneous between studies, principally because of differing age ranges, the increments in the concordance index from adding log C-reactive protein to conventional risk factors were more homogeneous.

  17. Combining markers with and without the limit of detection

    PubMed Central

    Dong, Ting; Liu, Catherine Chunling; Petricoin, Emanuel F.; Tang, Liansheng Larry

    2014-01-01

    In this paper, we consider the combination of markers with and without a limit of detection (LOD). An LOD is often encountered when measuring proteomic markers: because of the limited detection ability of the equipment or instrument, it is difficult to measure markers at relatively low levels. Suppose that after some monotonic transformation, the marker values approximately follow multivariate normal distributions. We propose to estimate the distribution parameters while taking the LOD into account, and then combine markers using the results from linear discriminant analysis. Our simulation results show that the ROC curve parameter estimates generated from the proposed method are much closer to the truth than those obtained by simply using linear discriminant analysis to combine markers without considering the LOD. In addition, we propose a procedure to select and combine a subset of markers when many candidate markers are available. The procedure, which is based on the correlation among markers, differs from the common understanding that a subset of the most accurate markers should be selected for the combination. The simulation studies show that the accuracy of a combined marker can be largely impacted by the correlation of marker measurements. Our methods are applied to a protein pathway dataset to combine proteomic biomarkers to distinguish cancer patients from non-cancer patients. PMID:24132938

  18. Validation of the Child Premorbid Intelligence Estimate method to predict premorbid Wechsler Intelligence Scale for Children-Fourth Edition Full Scale IQ among children with brain injury.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H; Suarez, Mariann; Brickell, Tracey A

    2008-12-01

    Determination of neuropsychological impairment involves contrasting obtained performances with a comparison standard, which is often an estimate of premorbid IQ. M. R. Schoenberg, R. T. Lange, T. A. Brickell, and D. H. Saklofske (2007) proposed the Child Premorbid Intelligence Estimate (CPIE) to predict premorbid Full Scale IQ (FSIQ) using the Wechsler Intelligence Scale for Children-4th Edition (WISC-IV; Wechsler, 2003). The CPIE includes 12 algorithms to predict FSIQ, 1 using demographic variables and 11 combining WISC-IV subtest raw scores with demographic variables. The CPIE was applied to a sample of children with acquired traumatic brain injury (TBI sample; n = 40) and a healthy, demographically matched sample (n = 40). Paired-samples t tests found that estimated premorbid FSIQ differed from obtained FSIQ when applied to the TBI sample (ps < .02). The demographics-only algorithm performed well at a group level, but estimates were restricted in range. Algorithms combining single subtest scores with demographics performed adequately. Results support the clinical application of the CPIE algorithms. However, limitations to estimating individual premorbid ability, including statistical and developmental factors, must be considered. (c) 2008 APA, all rights reserved.

  19. The detection of carbon dioxide leaks using quasi-tomographic laser absorption spectroscopy measurements in variable wind

    DOE PAGES

    Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...

    2016-04-13

    Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gases in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak which may arise as a failure mode for carbon dioxide storage. The leak is assumed to be at a constant rate, giving rise to a plume with a concentration and distribution that depend on the wind velocity. Lastly, we demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.

  20. Investigating the running abilities of Tyrannosaurus rex using stress-constrained multibody dynamic analysis.

    PubMed

    Sellers, William I; Pond, Stuart B; Brassey, Charlotte A; Manning, Philip L; Bates, Karl T

    2017-01-01

    The running ability of Tyrannosaurus rex has been intensively studied due to its relevance to interpretations of feeding behaviour and the biomechanics of scaling in giant predatory dinosaurs. Different studies using differing methodologies have produced a very wide range of top speed estimates, and there is therefore a need to develop techniques that can improve these predictions. Here we present a new approach that combines two separate biomechanical techniques (multibody dynamic analysis and skeletal stress analysis) to demonstrate that true running gaits would probably lead to unacceptably high skeletal loads in T. rex. Combining these two approaches reduces the high level of uncertainty in previous predictions associated with unknown soft tissue parameters in dinosaurs, and demonstrates that the relatively long limb segments of T. rex, long argued to indicate competent running ability, would actually have mechanically limited this species to walking gaits. Being limited to walking speeds contradicts arguments of high-speed pursuit predation for the largest bipedal dinosaurs like T. rex, and demonstrates the power of multiphysics approaches for locomotor reconstructions of extinct animals.

  1. Figure of merit and different combinations of observational data sets

    NASA Astrophysics Data System (ADS)

    Su, Qiping; Tuo, Zhong-Liang; Cai, Rong-Gen

    2011-11-01

    To constrain cosmological parameters, one often makes a joint analysis with different combinations of observational data sets. In this paper we take the figure of merit (FoM) for the Dark Energy Task Force fiducial model (the Chevallier-Polarski-Linder model) to estimate the goodness of different combinations of data sets, which include 11 widely used observational data sets (type Ia supernovae, observational Hubble parameter, baryon acoustic oscillations, cosmic microwave background, X-ray cluster baryon mass fraction, and gamma-ray bursts). We analyze different combinations and compare two types of combinations built from two basic combinations that are often adopted in the literature. We find two sets of combinations with a strong ability to constrain the dark energy parameters: one has the largest FoM, and the other contains fewer observational data while retaining a relatively large FoM and a simple fitting procedure.
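
    Under one common convention, the DETF figure of merit is the inverse of the square root of the determinant of the marginalized (w0, wa) covariance, so a larger FoM means a smaller error ellipse; whether the paper uses this exact normalization is not stated here. The covariance below is an invented stand-in for a fit to some combination of data sets.

        import numpy as np

        cov_w0_wa = np.array([[0.040, -0.110],
                              [-0.110, 0.450]])      # marginalized 2x2 covariance of (w0, wa)
        fom = 1.0 / np.sqrt(np.linalg.det(cov_w0_wa))
        print(f"FoM = {fom:.1f}  (larger = tighter joint constraint on w0 and wa)")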

  2. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  3. Feedback in Software and a Desktop Manufacturing Context for Learning Estimation Strategies in Middle School

    ERIC Educational Resources Information Center

    Malcolm, Peter

    2013-01-01

    The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…

  4. The Prediction of Job Ability Requirements Using Attribute Data Based Upon the Position Analysis Questionnaire (PAQ). Technical Report No. 1.

    ERIC Educational Resources Information Center

    Shaw, James B.; McCormick, Ernest J.

    The study was directed towards the further exploration of the use of attribute ratings as the basis for establishing the job component validity of tests, in particular by using different methods of combining "attribute-based" data with "job analysis" data to form estimates of the aptitude requirements of jobs. The primary focus…

  5. Comparison of Physician-Predicted to Measured Low Vision Outcomes

    PubMed Central

    Chan, Tiffany L.; Goldstein, Judith E.; Massof, Robert W.

    2013-01-01

    Purpose To compare low vision rehabilitation (LVR) physicians' predictions of the probability of success of LVR to patients' self-reported outcomes after provision of usual outpatient LVR services; and to determine if patients' traits influence physician ratings. Methods The Activity Inventory (AI), a self-report visual function questionnaire, was administered pre- and post-LVR to 316 low vision patients served by 28 LVR centers that participated in a collaborative observational study. The physical component of the Short Form-36, Geriatric Depression Scale, and Telephone Interview for Cognitive Status were also administered pre-LVR to measure physical capability, depression and cognitive status. Following patient evaluation, 38 LVR physicians estimated the probability of outcome success (POS), using their own criteria. The POS ratings and change in functional ability were used to assess the effects of patients' baseline traits on predicted outcomes. Results A regression analysis with a hierarchical random effects model showed no relationship between LVR physician POS estimates and AI-based outcomes. In another analysis, kappa statistics were calculated to determine the probability of agreement between POS and AI-based outcomes for different outcome criteria. Across all comparisons, none of the kappa values were significantly different from 0, which indicates the rate of agreement is equivalent to chance. In an exploratory analysis, hierarchical mixed effects regression models show that POS ratings are associated with information about the patient's cognitive functioning and the combination of visual acuity and functional ability, as opposed to visual acuity or functional ability alone. Conclusions Physicians' predictions of LVR outcomes appear to be influenced by knowledge of patients' cognitive functioning and the combination of visual acuity and functional ability, information physicians acquire from the patient's history and examination. However, physicians' predictions do not agree with observed changes in functional ability from the patient's perspective; they are no better than chance. PMID:23873036

  6. Diallel Analysis and Growth Parameters as Selection Tools for Drought Tolerance in Young Theobroma cacao Plants

    PubMed Central

    dos Santos, Emerson Alves; de Almeida, Alex-Alan Furtado; Ahnert, Dario; Branco, Marcia Christina da Silva; Valle, Raúl René; Baligar, Virupax C.

    2016-01-01

    This study aimed to estimate, through diallel crosses, the combining ability of T. cacao genotypes preselected for drought tolerance. The experiment was conducted under greenhouse conditions at the Cacao Research Center (CEPEC), Ilhéus, Bahia, Brazil, in a completely randomized block design, in a 21 x 2 experimental arrangement [21 complete diallel crosses and two water regimes (control and stressed)]. In the control, soil moisture was kept close to field capacity, with predawn leaf water potential (ΨWL) ranging from -0.1 to -0.5 MPa. In the drought regime, the soil moisture was reduced gradually by decreasing the amount of water application until ΨWL reached -2.0 to -2.5 MPa. Significant differences (p < 0.05) were observed for most morphological attributes analyzed regarding progenies, water regime and their interactions. The results of the joint diallel analysis revealed significant general combining ability (GCA) x water regime and specific combining ability (SCA) x water regime interaction effects. The SCA-6 genetic material showed high general combining ability for growth variables regardless of the water regime. In general, the water deficit influenced the production of biomass in most of the evaluated T. cacao crosses, except for SCA-6 x IMC-67, Catongo x SCA, MOC-01 x Catongo, Catongo x IMC-67 and RB-40 x Catongo. Multivariate analysis showed that stem diameter (CD), total leaf area (TLA), leaf dry biomass (LDB), stem dry biomass (SDB), root dry biomass (RDB), total dry biomass (TDB), root length (RL), root volume (RV), root diameter (RD) <1 mm and 1 <(RD) <2 mm were the most important growth parameters in the separation of T. cacao genotypes into tolerant and intolerant to soil water deficit. PMID:27504627

  7. Diallel Analysis and Growth Parameters as Selection Tools for Drought Tolerance in Young Theobroma cacao Plants.

    PubMed

    Dos Santos, Emerson Alves; Almeida, Alex-Alan Furtado de; Ahnert, Dario; Branco, Marcia Christina da Silva; Valle, Raúl René; Baligar, Virupax C

    2016-01-01

    This study aimed to estimate, through diallel crosses, the combining ability of T. cacao genotypes preselected for drought tolerance. The experiment was conducted under greenhouse conditions at the Cacao Research Center (CEPEC), Ilhéus, Bahia, Brazil, in a completely randomized block design, in a 21 x 2 experimental arrangement [21 complete diallel crosses and two water regimes (control and stressed)]. In the control, soil moisture was kept close to field capacity, with predawn leaf water potential (ΨWL) ranging from -0.1 to -0.5 MPa. In the drought regime, the soil moisture was reduced gradually by decreasing the amount of water application until ΨWL reached -2.0 to -2.5 MPa. Significant differences (p < 0.05) were observed for most morphological attributes analyzed regarding progenies, water regime and their interactions. The results of the joint diallel analysis revealed significant general combining ability (GCA) x water regime and specific combining ability (SCA) x water regime interaction effects. The SCA-6 genetic material showed high general combining ability for growth variables regardless of the water regime. In general, the water deficit influenced the production of biomass in most of the evaluated T. cacao crosses, except for SCA-6 x IMC-67, Catongo x SCA, MOC-01 x Catongo, Catongo x IMC-67 and RB-40 x Catongo. Multivariate analysis showed that stem diameter (CD), total leaf area (TLA), leaf dry biomass (LDB), stem dry biomass (SDB), root dry biomass (RDB), total dry biomass (TDB), root length (RL), root volume (RV), root diameter (RD) <1 mm and 1 <(RD) <2 mm were the most important growth parameters in the separation of T. cacao genotypes into tolerant and intolerant to soil water deficit.

  8. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  9. Cross-shift changes in FEV1 in relation to wood dust exposure: the implications of different exposure assessment methods

    PubMed Central

    Schlunssen, V; Sigsgaard, T; Schaumburg, I; Kromhout, H

    2004-01-01

    Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest. Aims: To evaluate different estimates of current average exposure in an exposure-response analysis on dust exposure and cross-shift decline in FEV1 among woodworkers. Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group-based exposure estimates, a weighted estimate of individual and group-based exposure estimates, and predicted values from mixed models. Exposure-response relations between cross-shift changes in FEV1 and the exposure estimates were explored. Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) were found for grouping by a combination of task and factory size, and the lowest slope and SE for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations revealed low slopes and high SEs, despite a high contrast. Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure-dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies when studying exposure-response relations. It is possible to optimise exposure assessment by combining information from individual and group-based exposure estimates, for instance by applying predicted values from mixed-effects models. PMID:15377768

  10. Effectiveness of Combination of Dentin and Enamel Layers on the Masking Ability of Porcelain.

    PubMed

    Boscato, Noéli; Hauschild, Fernando Gabriel; Kaizer, Marina da Rosa; De Moraes, Rafael Ratto

    2015-01-01

    This study evaluated the masking ability of different porcelain thicknesses and combinations of enamel and/or dentin porcelain layers over simulated background dental substrates with higher (A2) and lower (C4) color values. Combining the enamel (E) and dentin (D) monolayer porcelain disks with different thicknesses (0.5 mm, 0.8 mm, and 1 mm) resulted in the following bilayer groups (n=10): D1E1, D1E0.8, D1E0.5, D0.8E0.8, D0.8E0.5, and D0.5E0.5. CIELAB color coordinates were measured with a spectrophotometer. The translucency parameter of mono- and bilayer specimens and the masking ability estimated by color variation (ΔE*ab) of bilayer specimens over simulated dental substrates were evaluated. Linear regression analysis was used to investigate the relationships translucency parameter × ΔE*, translucency parameter × porcelain thickness, and ΔE* × porcelain thickness. Data were analyzed statistically (α = 0.05). Thinner porcelain disks were associated with higher translucency. Porcelain monolayers were considerably more translucent than bilayers (enamel + dentin). Dentin porcelain was less translucent than enamel porcelain of the same thickness. ΔE* was always lower when measured over the A2 background. Higher ΔE* was observed for the C4 background, indicating poorer masking ability. Increased ΔE* was significantly associated with increased translucency for both backgrounds. Decreased translucency and ΔE* were associated with increased total porcelain thickness or increased dentin thickness for both backgrounds. In conclusion, increased porcelain thickness (particularly an increased dentin layer) and increased porcelain opacity resulted in better masking of the dental backgrounds.
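
    Both reported quantities reduce to the CIELAB color difference formula, ΔE*ab = sqrt(ΔL*^2 + Δa*^2 + Δb*^2): masking ability is the difference for the same specimen measured over the two backgrounds, and the translucency parameter is conventionally the same difference over black versus white backings. The coordinate values below are invented for illustration.

        import math

        def delta_e(lab1, lab2):
            # CIELAB 1976 color difference between two (L*, a*, b*) triples
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

        over_A2 = (71.2, 1.8, 17.5)       # L*, a*, b* of a specimen over the A2 background
        over_C4 = (68.9, 2.4, 19.8)       # same specimen over the C4 background
        over_white = (74.0, 1.5, 16.0)    # over a white backing
        over_black = (65.5, 1.0, 13.0)    # over a black backing

        print(f"masking ability, dE*ab (A2 vs C4 backgrounds): {delta_e(over_A2, over_C4):.2f}")
        print(f"translucency parameter (white vs black):        {delta_e(over_white, over_black):.2f}")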

  11. Testlet-Based Multidimensional Adaptive Testing.

    PubMed

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  12. Opportunities for Improving Federally Assisted Manpower Programs Identified As a Result of Review in the Atlanta, Georgia, Area. Report to the Congress.

    ERIC Educational Resources Information Center

    Comptroller General of the U.S., Washington, DC.

    The combined impact of all federally assisted manpower programs in the Atlanta area was evaluated, with emphasis on outreach, eligibility, identification of needs and abilities, and screening for course assignment. In fiscal year 1970, training was provided for 10,300 persons and job placement for 5,600, although most of the estimated 70,000 poor…

  13. Combination of TOPEX/POSEIDON Data with a Hydrographic Inversion for Determination of the Oceanic General Circulation and its Relation to Geoid Accuracy

    NASA Technical Reports Server (NTRS)

    Ganachaud, Alexandre; Wunsch, Carl; Kim, Myung-Chan; Tapley, Byron

    1997-01-01

    A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.

  14. Quantitative genetic analysis of agronomic and morphological traits in sorghum, Sorghum bicolor

    PubMed Central

    Mohammed, Riyazaddin; Are, Ashok K.; Bhavanasi, Ramaiah; Munghate, Rajendra S.; Kavi Kishor, Polavarapu B.; Sharma, Hari C.

    2015-01-01

    The productivity in sorghum is low, owing to various biotic and abiotic constraints. Combining insect resistance with desirable agronomic and morphological traits is important to increase sorghum productivity. Therefore, it is important to understand the variability for various agronomic traits, their heritabilities and nature of gene action to develop appropriate strategies for crop improvement. Accordingly, a full diallel set of 10 parents and their 90 crosses including reciprocals were evaluated in replicated trials during the 2013–14 rainy and postrainy seasons. Crosses between early- and late-flowering parents flowered early, indicating dominance of earliness of anthesis in the test material used. Association between the shoot fly resistance, morphological, and agronomic traits suggested complex interactions between shoot fly resistance and morphological traits. Significance of the mean squares for GCA (general combining ability) and SCA (specific combining ability) of all the studied traits suggested the importance of both additive and non-additive components in inheritance of these traits. The GCA/SCA and predictability ratios indicated a predominance of additive gene effects for the majority of the traits studied. High broad-sense and narrow-sense heritability estimates were observed for most of the morphological and agronomic traits. The significance of reciprocal combining ability effects for days to 50% flowering, plant height, and 100-seed weight suggested maternal effects on the inheritance of these traits. Plant height and grain yield across seasons, days to 50% flowering, inflorescence exsertion, and panicle shape in the postrainy season showed greater specific combining ability variance, indicating the predominance of non-additive type of gene action/epistatic interactions in controlling the expression of these traits. Additive gene action in the rainy season and dominance in the postrainy season for days to 50% flowering and plant height suggested G × E interactions for these traits. PMID:26579183
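
    For orientation, the GCA/SCA decomposition invoked in this abstract is conventionally written as below. This is a generic sketch of the standard diallel model with reciprocal effects and of the commonly used predictability ratio; the notation is assumed here, not taken from the paper (g_i is the GCA effect of parent i, s_ij the SCA effect of the cross, r_ij the reciprocal effect):

      \bar{x}_{ij} = \mu + g_i + g_j + s_{ij} + r_{ij}, \qquad
      \text{predictability ratio} = \frac{2\sigma^2_{\mathrm{GCA}}}{2\sigma^2_{\mathrm{GCA}} + \sigma^2_{\mathrm{SCA}}}

    A predictability ratio close to 1 indicates that additive (GCA) variance predominates, which is the interpretation applied to most traits above.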

  15. Heritability of body size in the polar bears of Western Hudson Bay.

    PubMed

    Malenfant, René M; Davis, Corey S; Richardson, Evan S; Lunn, Nicholas J; Coltman, David W

    2018-04-18

    Among polar bears (Ursus maritimus), fitness is dependent on body size through males' abilities to win mates, females' abilities to provide for their young and all bears' abilities to survive increasingly longer fasting periods caused by climate change. In the Western Hudson Bay subpopulation (near Churchill, Manitoba, Canada), polar bears have declined in body size and condition, but nothing is known about the genetic underpinnings of body size variation, which may be subject to natural selection. Here, we combine a 4,449-individual pedigree and an array of 5,433 single nucleotide polymorphisms (SNPs) to provide the first quantitative genetic study of polar bears. We used animal models to estimate heritability (h²) among polar bears handled between 1966 and 2011, obtaining h² estimates of 0.34-0.48 for strictly skeletal traits and 0.18 for axillary girth (which is also dependent on fatness). We genotyped 859 individuals with the SNP array to test for marker-trait association and combined p-values over genetic pathways using gene-set analysis. Variation in all traits appeared to be polygenic, but we detected one region of moderately large effect size in body length near a putative noncoding RNA in an unannotated region of the genome. Gene-set analysis suggested that variation in body length was associated with genes in the regulatory cascade of cyclin expression, which has previously been associated with body size in mice. A greater understanding of the genetic architecture of body size variation will be valuable in understanding the potential for adaptation in polar bear populations challenged by climate change. © 2018 John Wiley & Sons Ltd.
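
    The animal-model heritability estimates referred to here follow the standard quantitative-genetic formulation; a generic sketch with assumed notation (A is the additive relationship matrix derived from the pedigree, a the additive genetic effects):

      \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{e}, \qquad
      \mathbf{a} \sim N(\mathbf{0}, \mathbf{A}\sigma^2_A), \quad \mathbf{e} \sim N(\mathbf{0}, \mathbf{I}\sigma^2_E), \qquad
      h^2 = \frac{\sigma^2_A}{\sigma^2_A + \sigma^2_E}

    The h² values of 0.34-0.48 quoted above are estimates of this variance ratio for the skeletal traits.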

  16. Joint association of multimorbidity and work ability with risk of long-term sickness absence: a prospective cohort study with register follow-up.

    PubMed

    Sundstrup, Emil; Jakobsen, Markus Due; Mortensen, Ole Steen; Andersen, Lars Louis

    2017-03-01

    Objectives The aim of this study was to determine the joint association of multimorbidity and work ability with the risk of long-term sickness absence (LTSA) in the general working population. Methods Cox regression analysis censoring for competing events (statutory retirement, early retirement, disability pension, immigration, or death) was performed to estimate the joint association of chronic diseases and work ability in relation to physical and mental demands of the job with the prospective risk for LTSA (defined as ≥6 consecutive weeks during 2-year follow-up) among 10 427 wage earners from the general working population (2010 Danish Work Environment Cohort Study). Control variables were age, gender, psychosocial work environment, smoking, leisure physical activity, body mass index, job group, and previous LTSA. Results Of the 10 427 respondents, 56.8% had experienced ≥1 chronic disease at baseline. The fully adjusted model showed an association between number of chronic diseases and risk of LTSA. This association was stronger among employees with poor work ability (either physical or mental). Compared to employees with no diseases and good physical work ability, the risk estimate for LTSA was 1.95 [95% confidence interval (95% CI) 1.50-2.52] for employees with ≥3 chronic diseases and good physical work ability, whereas it was 3.60 (95% CI 2.50-5.19) for those with ≥3 chronic diseases and poor physical work ability. Overall, the joint association of chronic disease and work ability with LTSA appears to be additive. Conclusions Poor work ability combined with ≥1 chronic diseases is associated with high risk of long-term sickness absence in the general working population. Initiatives to improve or maintain work ability should be highly prioritized to secure sustainable employability among workers with ≥1 chronic diseases.
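
    The cause-specific Cox regression described above can be sketched in a few lines; the snippet below is an illustrative outline only, with hypothetical column names and a generic lifelines call, not the study's actual analysis code:

      # Hedged sketch: Cox regression of long-term sickness absence (LTSA) on chronic-disease
      # count and work ability. Competing events (retirement, disability pension, emigration,
      # death) are treated as censored, i.e. event = 1 only for LTSA. Column names are invented
      # and covariates are assumed to be numerically coded.
      import pandas as pd
      from lifelines import CoxPHFitter

      df = pd.read_csv("cohort.csv")                       # one row per worker
      df["event"] = (df["outcome"] == "LTSA").astype(int)  # censor everything else

      covariates = ["n_chronic_diseases", "poor_physical_work_ability",
                    "age", "gender", "smoking", "bmi"]

      cph = CoxPHFitter()
      cph.fit(df[["followup_weeks", "event"] + covariates],
              duration_col="followup_weeks", event_col="event")
      cph.print_summary()   # hazard ratios analogous to the risk estimates reported above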

  17. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  18. Inferences about landbird abundance from count data: recent advances and future directions

    USGS Publications Warehouse

    Nichols, J.D.; Thomas, L.; Conn, P.B.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    We summarize results of a November 2006 workshop dealing with recent research on the estimation of landbird abundance from count data. Our conceptual framework includes a decomposition of the probability of detecting a bird potentially exposed to sampling efforts into four separate probabilities. Primary inference methods are described and include distance sampling, multiple observers, time of detection, and repeated counts. The detection parameters estimated by these different approaches differ, leading to different interpretations of resulting estimates of density and abundance. Simultaneous use of combinations of these different inference approaches can not only lead to increased precision but also provide the ability to decompose components of the detection process. Recent efforts to test the efficacy of these different approaches using natural systems and a new bird radio test system provide sobering conclusions about the ability of observers to detect and localize birds in auditory surveys. Recent research is reported on efforts to deal with such potential sources of error as bird misclassification, measurement error, and density gradients. Methods for inference about spatial and temporal variation in avian abundance are outlined. Discussion topics include opinions about the need to estimate detection probability when drawing inference about avian abundance, methodological recommendations based on the current state of knowledge, and suggestions for future research.

  19. Estimating oxygen diffusive conductances of gas-exchange systems: A stereological approach illustrated with the human placenta.

    PubMed

    Mayhew, Terry M

    2014-01-01

    For many organisms, respiratory gas exchange is a vital activity and different types of gas-exchange apparatus have evolved to meet individual needs. They include not only skin, gills, tracheal systems and lungs but also transient structures such as the chorioallantois of avian eggs and the placenta of eutherian mammals. The ability of these structures to allow passage of oxygen by passive diffusion can be expressed as a diffusive conductance (units: cm³ O₂ min⁻¹ kPa⁻¹). Occasionally, the ability to estimate diffusive conductance by physiological techniques is compromised by the difficulty of obtaining O₂ partial pressures on opposite sides of the tissue interface between the delivery medium (air, water, blood) and uptake medium (usually blood). An alternative strategy is to estimate a morphometric diffusive conductance by combining stereological estimates of key structural quantities (volumes, surface areas, membrane thicknesses) with complementary physicochemical data (O₂-haemoglobin chemical reaction rates and Krogh's permeability coefficients). This approach has proved valuable in a variety of comparative studies on respiratory organs from diverse species. The underlying principles were formulated in pioneering studies on the pulmonary lung but are illustrated here by taking the human placenta as the gas exchanger. Copyright © 2012 Elsevier GmbH. All rights reserved.

  20. Effects of Technology on Experienced Job Characteristics and Job Satisfaction.

    DTIC Science & Technology

    1980-07-01

    Fragment of the report's ability inventory (the full abstract is not recoverable): ability to discriminate between odors (sense of smell); ability to discriminate between salty, sour, and sweet (sense of taste); ability to remember names; ability to estimate speed; ability to estimate quality; sense of touch; sense of smell; sense of taste; and a cognitive factor (loading .833).

  1. Estimating workload using EEG spectral power and ERPs in the n-back task

    NASA Astrophysics Data System (ADS)

    Brouwer, Anne-Marie; Hogervorst, Maarten A.; van Erp, Jan B. F.; Heffelaar, Tobias; Zimmerman, Patrick H.; Oostenveld, Robert

    2012-08-01

    Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular the alpha and theta band) and event-related potentials (ERPs) (in particular the P300) can be used as a measure of mental work or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one (n instances) before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features or a combination (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter) as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
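
    The fusion step amounts to concatenating the ERP and spectral-power feature vectors for each trial and training a single classifier; a minimal sketch with placeholder features (the real features would be extracted from EEG epochs, and the classifier here is a generic choice rather than the one used in the study):

      # Minimal sketch of feature fusion for workload classification (placeholder data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials = 200
      erp_features = rng.normal(size=(n_trials, 32))    # e.g. P300 amplitudes per channel
      power_features = rng.normal(size=(n_trials, 64))  # e.g. alpha/theta band power per channel
      y = rng.integers(0, 2, size=n_trials)             # 0 = low workload, 1 = high workload

      X_fusion = np.hstack([erp_features, power_features])
      clf = LogisticRegression(max_iter=1000)
      acc = cross_val_score(clf, X_fusion, y, cv=5).mean()
      print(f"cross-validated accuracy of the fusion model: {acc:.2f}")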

  2. Intercalibration of research survey vessels on Lake Erie

    USGS Publications Warehouse

    Tyson, J.T.; Johnson, T.B.; Knight, C.T.; Bur, M.T.

    2006-01-01

    Fish abundance indices obtained from annual research trawl surveys are an integral part of fisheries stock assessment and management in the Great Lakes. It is difficult, however, to administer trawl surveys using a single vessel-gear combination owing to the large size of these systems, the jurisdictional boundaries that bisect the Great Lakes, and changes in vessels as a result of fleet replacement. When trawl surveys are administered by multiple vessel-gear combinations, systematic error may be introduced in combining catch-per-unit-effort (CPUE) data across vessels. This bias is associated with relative differences in catchability among vessel-gear combinations. In Lake Erie, five different research vessels conduct seasonal trawl surveys in the western half of the lake. To eliminate this systematic bias, the Lake Erie agencies conducted a side-by-side trawling experiment in 2003 to develop correction factors for CPUE data associated with different vessel-gear combinations. Correcting for systematic bias in CPUE data should lead to more accurate and comparable estimates of species density and biomass. We estimated correction factors for the 10 most commonly collected species age-groups for each vessel during the experiment. Most of the correction factors (70%) ranged from 0.5 to 2.0, indicating that the systematic bias associated with different vessel-gear combinations was not large. Differences in CPUE were most evident for vessels using different sampling gears, although significant differences also existed for vessels using the same gears. These results suggest that standardizing gear is important for multiple-vessel surveys, but there will still be significant differences in catchability stemming from vessel effects, and agencies must correct for this. With standardized estimates of CPUE, the Lake Erie agencies will have the ability to directly compare and combine time series for species abundance. © Copyright by the American Fisheries Society 2006.
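
    The correction-factor idea can be illustrated with a toy calculation; the numbers and the simple ratio-of-means estimator below are for illustration only and may differ from the estimator the agencies actually used:

      # Toy paired-tow correction factor: corrected CPUE for vessel B is its raw CPUE
      # multiplied by (reference-vessel mean CPUE / vessel-B mean CPUE) from side-by-side tows.
      import numpy as np

      cpue_reference = np.array([12.0, 8.5, 15.2, 9.8])  # reference vessel, paired tows
      cpue_vessel_b  = np.array([7.5, 6.1, 10.4, 6.9])   # second vessel, same tows

      correction = cpue_reference.mean() / cpue_vessel_b.mean()
      print(f"correction factor for vessel B: {correction:.2f}")

      survey_cpue_b = np.array([5.0, 9.3, 7.7])          # vessel B's own survey series
      print("corrected CPUE:", survey_cpue_b * correction)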

  3. Refinement of a Bias-Correction Procedure for the Weighted Likelihood Estimator of Ability. Research Report. ETS RR-07-23

    ERIC Educational Resources Information Center

    Zhang, Jinming; Lu, Ting

    2007-01-01

    In practical applications of item response theory (IRT), item parameters are usually estimated first from a calibration sample. After treating these estimates as fixed and known, ability parameters are then estimated. However, the statistical inferences based on the estimated abilities can be misleading if the uncertainty of the item parameter…

  4. GGOS and the EOP - the key role of SLR for a stable estimation of highly accurate Earth orientation parameters

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael

    2016-04-01

    The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above-mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of different geodetic parameter groups achieved by combining SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters, and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.

  5. Pathways to fraction learning: Numerical abilities mediate the relation between early cognitive competencies and later fraction knowledge.

    PubMed

    Ye, Ai; Resnick, Ilyse; Hansen, Nicole; Rodrigues, Jessica; Rinne, Luke; Jordan, Nancy C

    2016-12-01

    The current study investigated the mediating role of number-related skills in the developmental relationship between early cognitive competencies and later fraction knowledge using structural equation modeling. Fifth-grade numerical skills (i.e., whole number line estimation, non-symbolic proportional reasoning, multiplication, and long division skills) mapped onto two distinct factors: magnitude reasoning and calculation. Controlling for participants' (N=536) demographic characteristics, these two factors fully mediated relationships between third-grade general cognitive competencies (attentive behavior, verbal and nonverbal intellectual abilities, and working memory) and sixth-grade fraction knowledge (concepts and procedures combined). However, specific developmental pathways differed by type of fraction knowledge. Magnitude reasoning ability fully mediated paths from all four cognitive competencies to knowledge of fraction concepts, whereas calculation ability fully mediated paths from attentive behavior and verbal ability to knowledge of fraction procedures (all with medium to large effect sizes). These findings suggest that there are partly overlapping, yet distinct, developmental pathways from cognitive competencies to general fraction knowledge, fraction concepts, and fraction procedures. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
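
    The re-weighted minimum-norm step at the core of FOCUSS-type algorithms can be sketched as below; this is a generic illustration of that single ingredient, not the authors' SSLOFO implementation, and it omits the sLORETA initialization, the standardization step, and the shrinking of the source space:

      # Generic re-weighted minimum-norm (FOCUSS-style) iteration for y = L s + noise.
      import numpy as np

      def reweighted_min_norm(L, y, n_iter=10, lam=1e-6):
          s = np.ones(L.shape[1])                 # flat start (SSLOFO would start from sLORETA)
          for _ in range(n_iter):
              W = np.diag(np.abs(s))              # weights taken from the previous estimate
              LW = L @ W
              # Tikhonov-regularized weighted minimum-norm solution s = W (L W)^+ y
              s = W @ LW.T @ np.linalg.solve(LW @ LW.T + lam * np.eye(L.shape[0]), y)
          return s

      rng = np.random.default_rng(1)
      L = rng.normal(size=(32, 500))              # toy lead field: 32 sensors, 500 sources
      s_true = np.zeros(500); s_true[[40, 321]] = 1.0
      y = L @ s_true + 0.01 * rng.normal(size=32)
      print(np.argsort(np.abs(reweighted_min_norm(L, y)))[-2:])  # indices of the two strongest estimates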

  7. Diallel analysis of provitamin A carotenoid and dry matter content in cassava (Manihot esculenta Crantz)

    PubMed Central

    Esuma, Williams; Kawuki, Robert S.; Herselman, Liezel; Labuschagne, Maryke Tine

    2016-01-01

    Global efforts are underway to biofortify cassava (Manihot esculenta Crantz) with provitamin A carotenoids to help combat dietary vitamin A deficiency afflicting the health of more than 500 million resource-poor people in Sub-Saharan Africa. To further the biofortification initiative in Uganda, a 6×6 diallel analysis was conducted to estimate combining ability of six provitamin A clones and gene actions controlling total carotenoid content (TCC), dry matter content (DMC) in cassava roots and other relevant traits. Fifteen F1 families generated from the diallel crosses were evaluated in two environments using a randomized complete block design. General combining ability (GCA) effects were significant for TCC and DMC, suggesting the relative importance of additive gene effects in controlling these traits in cassava. On the other hand, non-additive effects were predominant for root and shoot weight. MH02-073HS, with the highest level of TCC, was the best general combiner for TCC while NASE 3, a popular white-fleshed variety grown by farmers in Uganda, was the best general combiner for DMC. Such progenitors with superior GCA effects could form the genetic source for future programs targeting cassava breeding for TCC and DMC. A negative correlation was observed between TCC and DMC, which will require breeding strategies to combine both traits for increased adoption of provitamin A cassava varieties. PMID:27795688

  8. Genetic control and combining ability of agronomic attributes and northern leaf blight-related attributes in popcorn.

    PubMed

    Santos, J S; Amaral Júnior, A T; Vivas, M; Mafra, G S; Pena, G F; Silva, F H L; Guimarães, A G

    2017-09-27

    The present study was conducted to investigate the genetic control and to estimate the general and specific combining abilities of popcorn for agronomic attributes and attributes related to resistance to northern leaf blight (NLB). The 56 hybrids (F1 and reciprocals), together with the eight parent lines and six controls, were evaluated in two harvests, in a randomized-block design with four replications. Dominance components were more pronounced than the additive components for grain yield and expression of resistance, and hybridization was the most suitable option for obtaining resistant and productive genotypes. For grain yield, popping expansion, and resistance to NLB, there was no significance for reciprocal effects, which indicates that the direction in which the cross is performed does not interfere with the hybrid's performance. Thus, the superior hybrids recommended for more profitable growth were P8 × L61, L61 × L76, and L61 × L77.

  9. Strategies to predict rheumatoid arthritis development in at-risk populations

    PubMed Central

    van der Helm-van Mil, Annette H.

    2016-01-01

    The development of RA is conceived as a multiple hit process and the more hits that are acquired, the greater the risk of developing clinically apparent RA. Several at-risk phases have been described, including the presence of genetic and environmental factors, RA-related autoantibodies and biomarkers and symptoms. Intervention in these preclinical phases may be more effective compared with intervention in the clinical phase. One prerequisite for preventive strategies is the ability to estimate an individual’s risk adequately. This review evaluates the ability to predict the risk of RA in the various preclinical stages. Present data suggest that a combination of genetic and environmental factors is helpful to identify persons at high risk of RA among first-degree relatives. Furthermore, a combination of symptoms, antibody characteristics and environmental factors has been shown to be relevant for risk prediction in seropositive arthralgia patients. Large prospective studies are needed to validate and improve risk prediction in preclinical disease stages. PMID:25096602

  10. Herpes Simplex Virus-based gene Therapy Enhances the Efficacy of Mitomycin-C in the Treatment of Human Bladder Transitional Cell Carcinoma

    PubMed Central

    Mullerad, Michael; Bochner, Bernard H.; Adusumilli, Prasad S.; Bhargava, Amit; Kikuchi, Eiji; Hui-Ni, Chen; Kattan, Michael W.; Chou, Ting-Chao; Fong, Yuman

    2005-01-01

    Purpose Oncolytic replication-competent herpes simplex virus type-1 (HSV) mutants have the ability to replicate in and kill malignant cells. We have previously demonstrated the ability of replication-competent HSV to control bladder cancer growth in an orthotopic murine model. We hypothesized that a combination of a chemotherapeutic agent used for intravesical treatment - mitomycin-C (MMC) - and oncolytic HSV would exert a synergistic effect in the treatment of human transitional cell carcinoma (TCC). Materials and Methods We used the mutant HSV NV1066, which is deleted for viral genes ICP0 and ICP4 and selectively infects cancer cells, to treat TCC lines, KU19-19 and SKUB. Cell survival was determined by lactate dehydrogenase (LDH) assay for each agent as well as for drug-viral combinations from days 1 to 5. The isobologram method and the combination index method of Chou-Talalay were used to assess for synergistic effect. Results NV1066 enhanced MMC mediated cytotoxicity at all combinations tested for both KU19-19 and SKUB. Combination of both agents demonstrated a synergistic effect and allowed dose reduction by 12 and 10.4 times (NV1066) and by 3 and 156 times (MMC) in the treatment of KU19-19 and SKUB respectively, while achieving an estimated 90% cell kill. Conclusion These data provide the cellular basis for the clinical investigation of combined mitomycin-C and oncolytic HSV therapy in the treatment of bladder cancer. PMID:16006968

  11. Self-estimation of physical ability in stepping over an obstacle is not mediated by visual height perception: a comparison between young and older adults.

    PubMed

    Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Yasunaga, Masashi; Ogawa, Susumu; Suzuki, Hiroyuki; Imanaka, Kuniyasu

    2017-07-01

    Older adults tend to overestimate their step-over ability. However, it is unclear as to whether this is caused by inaccurate self-estimation of physical ability or inaccurate perception of height. We, therefore, measured both visual height perception ability and self-estimation of step-over ability among young and older adults. Forty-seven older and 16 young adults performed a height perception test (HPT) and a step-over test (SOT). Participants visually judged the height of vertical bars from distances of 7 and 1 m away in the HPT, then self-estimated and, subsequently, actually performed a step-over action in the SOT. The results showed no significant difference between young and older adults in visual height perception. In the SOT, young adults tended to underestimate their step-over ability, whereas older adults either overestimated their abilities or underestimated them to a lesser extent than did the young adults. Moreover, visual height perception was not correlated with the self-estimation of step-over ability in both young and older adults. These results suggest that the self-overestimation of step-over ability which appeared in some healthy older adults may not be caused by the nature of visual height perception, but by other factor(s), such as the likely age-related nature of self-estimation of physical ability, per se.

  12. Inheritance of downy mildew (Plasmopara viticola) and anthracnose (Sphaceloma ampelinum) resistance in grapevines.

    PubMed

    Poolsawat, O; Mahanil, S; Laosuwan, P; Wongkaew, S; Tharapreuksapong, A; Reisch, B I; Tantasawat, P A

    2013-12-13

    Downy mildew (Plasmopara viticola) and anthracnose (Sphaceloma ampelinum) are two of the major diseases of most grapevine (Vitis vinifera L.) cultivars grown in Thailand. Therefore, breeding grapevines for improved downy mildew and anthracnose resistance is crucial. Factorial crosses were made between three downy mildew and/or anthracnose resistant lines ('NY88.0517.01', 'NY65.0550.04', and 'NY65.0551.05'; male parents) and two or three susceptible cultivars of V. vinifera ('Black Queen', 'Carolina Black Rose', and/or 'Italia'; female parents). F1 hybrid seedlings were evaluated for downy mildew and anthracnose resistance using a detached/excised leaf assay. For both diseases, the general combining ability (GCA) variance among male parents was significant, while the variance of GCA among females and the specific combining ability (SCA) variance were not significant, indicating the prevalence of additive over non-additive gene actions. The estimated narrow sense heritabilities of downy mildew and anthracnose resistance were 55.6 and 79.2%, respectively, suggesting that downy mildew/anthracnose resistance gene(s) were highly heritable. The 'Carolina Black Rose x NY65.0550.04' cross combination is recommended for future use.

  13. Testlet-Based Multidimensional Adaptive Testing

    PubMed Central

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range. PMID:27917132

  14. Methods for LWIR Radiometric Calibration and Characterization

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Pagnutti, Mary; Zanoni, Vicki; Harrington, Gary; Howell, Dane; Stewart, Randy

    2002-01-01

    The utility of a thermal remote sensing system increases with its ability to retrieve surface temperature or radiance accurately. The radiometer measures the water surface radiant temperature. Combining these measurements with atmospheric pressure, temperature, and water vapor profiles, a top-of-the-atmosphere radiance estimate can be calculated with a radiative transfer code for comparison with the sensor's output. A novel approach has been developed using an uncooled infrared camera mounted on a boom to quantify buoy effects.

  15. Identification of the mechanism of action of a glucokinase activator from oral glucose tolerance test data in type 2 diabetic patients based on an integrated glucose-insulin model.

    PubMed

    Jauslin, Petra M; Karlsson, Mats O; Frey, Nicolas

    2012-12-01

    A mechanistic drug-disease model was developed on the basis of a previously published integrated glucose-insulin model by Jauslin et al. A glucokinase activator was used as a test compound to evaluate the model's ability to identify a drug's mechanism of action and estimate its effects on glucose and insulin profiles following oral glucose tolerance tests. A kinetic-pharmacodynamic approach was chosen to describe the drug's pharmacodynamic effects in a dose-response-time model. Four possible mechanisms of action of antidiabetic drugs were evaluated, and the corresponding affected model parameters were identified: insulin secretion, glucose production, insulin effect on glucose elimination, and insulin-independent glucose elimination. Inclusion of drug effects in the model at these sites of action was first tested one-by-one and then in combination. The results demonstrate the ability of this model to identify the dual mechanism of action of a glucokinase activator and describe and predict its effects: Estimating a stimulating drug effect on insulin secretion and an inhibiting effect on glucose output resulted in a significantly better model fit than any other combination of effect sites. The model may be used for dose finding in early clinical drug development and for gaining more insight into a drug candidate's mechanism of action.

  16. Predicting Nitrogen in Streams: A Comparison of Two Estimates of Fertilizer Application

    NASA Astrophysics Data System (ADS)

    Mehaffey, M.; Neale, A.

    2011-12-01

    Decision makers frequently rely on water and air quality models to develop nutrient management strategies. Obviously, the results of these models (e.g., SWAT, SPARROW, CMAQ) are only as good as the nutrient source input data, and recently the Nutrient Innovations Task Group has called for a better accounting of nonpoint nutrient sources. Currently, modelers frequently rely on county-level fertilizer sales records combined with crop acreage to estimate nitrogen sources from fertilizer for counties or watersheds. However, since fertilizer sales data are based on reported amounts, they do not necessarily reflect actual use on the fields. In addition, the quality of the reported sales data varies by state, resulting in differing accuracy between states. In this study we examine an alternative method potentially providing a more uniform, spatially explicit estimate of fertilizer use. Our nitrogen application data are estimated at a 30-m pixel resolution, which allows for scalable inputs for use in water and air quality models. To develop this dataset we combined raster data from the National Cropland Data Layer (CDL) with the National Land Cover Data (NLCD). This process expanded the NLCD's 'cultivated crops' classes to include major grains, cover crops, vegetables, and fruits. The Agriculture Resource Management Survey chemical fertilizer application rate data were summarized by crop type and year for each state, encompassing the corn, soybean, spring wheat, and winter wheat crop types (ARMS, 2002-2005). The chemical fertilizer application rate data were then used to estimate annual application parameters for nitrogen, phosphate, potash, herbicide, pesticide, and total pesticide, all expressed on a mass-per-unit-crop-area basis for each state and crop type. By linking crop types to nitrogen application rates, we can better estimate where applied fertilizer would likely be in excess of the amounts used by crops or where conservation practices may improve retention and uptake, helping to offset the impacts to water. To test the accuracy of our finer-resolution nitrogen application data, we compare its ability to predict nitrogen concentrations in streams with the ability of the county sales data to do the same.
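
    At its core, the per-pixel estimate described above is a lookup: each 30-m crop pixel receives the state-level application rate for its crop type. A schematic sketch with invented crop codes and rates (not the actual CDL codes or ARMS rates):

      # Schematic per-pixel nitrogen application estimate with made-up codes and rates.
      import numpy as np

      crop_raster = np.array([[1, 1, 2],
                              [0, 2, 2],
                              [1, 0, 3]])          # 0 = non-crop, 1 = corn, 2 = soybean, 3 = winter wheat

      n_rate_by_crop = {0: 0.0, 1: 150.0, 2: 20.0, 3: 100.0}   # kg N per hectare (hypothetical)
      rate_raster = np.vectorize(n_rate_by_crop.get)(crop_raster)

      pixel_area_ha = (30 * 30) / 10_000           # one 30 m x 30 m pixel in hectares
      n_applied_kg = rate_raster * pixel_area_ha   # kg N applied per pixel
      print("total N applied over the toy raster (kg):", n_applied_kg.sum())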

  17. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.

  18. Factors determining water treatment behavior for the prevention of cholera in Chad.

    PubMed

    Lilje, Jonathan; Kessely, Hamit; Mosler, Hans-Joachim

    2015-07-01

    Cholera is a well-known and feared disease in developing countries, and is linked to high rates of morbidity and mortality. Contaminated drinking water and the lack of sufficient treatment are two of the key causes of high transmission rates. This article presents a representative health survey performed in Chad to inform future intervention strategies in the prevention and control of cholera. To identify critical psychological factors for behavior change, structured household interviews were administered to N = 1,017 primary caregivers, assessing their thoughts and attitudes toward household water treatment according to the Risk, Attitude, Norm, Ability, and Self-regulation model. The intervention potential for each factor was estimated by analyzing differences in means between groups of current performers and nonperformers of water treatment. Personal risk evaluation for diarrheal diseases and particularly for cholera was very low among the study population. Likewise, the perception of social norms was found to be rather unfavorable for water treatment behaviors. In addition, self-reported ability estimates (self-efficacy) revealed some potential for intervention. A mass radio campaign is proposed, using information and normative behavior change techniques, in combination with community meetings focused on targeting abilities and personal commitment to water treatment. © The American Society of Tropical Medicine and Hygiene.

  19. Artificial Neural Networks applied to estimate permeability, porosity and intrinsic attenuation using seismic attributes and well-log data

    NASA Astrophysics Data System (ADS)

    Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2014-08-01

    Permeability and porosity are two fundamental reservoir properties which relate to the amount of fluid contained in a reservoir and its ability to flow. The intrinsic attenuation is another important parameter since it is related to porosity, permeability, oil and gas saturation and these parameters significantly affect the seismic signature of a reservoir. We apply Artificial Neural Network (ANN) models to predict permeability (k) and porosity (ϕ) for a carbonate aquifer in southeastern Florida and to predict intrinsic attenuation (1/Q) for a sand-shale oil reservoir in northeast Texas. In this study, the Gamma test (a revolutionary estimator of the noise in a data set) has been used as a mathematically non-parametric nonlinear smooth modeling tool to choose the best input combination of seismic attributes to estimate k and ϕ, and the best combination of well-logs to estimate 1/Q. This saves time during the construction and training of ANN models and also sets a lower bound for the mean squared error to prevent over-training. The Neural Network method successfully delineates a highly permeable zone that corresponds to a high water production in the aquifer. The Gamma test found nonlinear relations that were not visible to linear regression allowing us to generalize the ANN estimations of k, ϕ and 1/Q for their respective sets of patterns that were not used during the learning phase.
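
    The Gamma test used here for input selection can be summarized as a nearest-neighbour regression whose intercept estimates the variance of the output noise; the sketch below follows the usual published formulation of the test, not the authors' implementation, and uses synthetic data:

      # Compact Gamma test: regress gamma(k) (half the mean squared output difference over
      # k-th nearest neighbours) on delta(k) (mean squared input distance); the intercept
      # estimates Var(noise) on the output, so lower values indicate a better input set.
      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def gamma_test(X, y, p=10):
          dist, idx = NearestNeighbors(n_neighbors=p + 1).fit(X).kneighbors(X)  # column 0 is the point itself
          deltas = [np.mean(dist[:, k] ** 2) for k in range(1, p + 1)]
          gammas = [np.mean((y[idx[:, k]] - y) ** 2) / 2.0 for k in range(1, p + 1)]
          slope, intercept = np.polyfit(deltas, gammas, 1)
          return intercept

      rng = np.random.default_rng(2)
      X = rng.uniform(size=(500, 3))               # e.g. three candidate seismic attributes
      y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=500)
      print(f"Gamma statistic: {gamma_test(X, y):.4f}  (true noise variance is 0.01)")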

  20. Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.

    PubMed

    Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott

    2009-05-25

    Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.
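
    The essential idea can be sketched as follows: interpolate each micro-polarizer channel to the full grid before forming the Stokes images, so that the differenced intensities refer to approximately the same IFOV. The 2x2 superpixel layout and the simple bilinear up-sampling below are illustrative assumptions, not the specific strategies evaluated in the paper:

      # Hedged sketch: bilinear channel interpolation before Stokes estimation.
      import numpy as np
      from scipy.ndimage import zoom

      def stokes_from_microgrid(mosaic):
          # Assumed superpixel layout: [[0 deg, 45 deg], [135 deg, 90 deg]]
          i0   = mosaic[0::2, 0::2].astype(float)
          i45  = mosaic[0::2, 1::2].astype(float)
          i135 = mosaic[1::2, 0::2].astype(float)
          i90  = mosaic[1::2, 1::2].astype(float)

          up = lambda ch: zoom(ch, 2, order=1)     # order=1 gives bilinear interpolation
          i0, i45, i90, i135 = (up(ch) for ch in (i0, i45, i90, i135))

          s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity
          s1 = i0 - i90                            # horizontal/vertical preference
          s2 = i45 - i135                          # diagonal preference
          return s0, s1, s2

      mosaic = np.random.default_rng(3).integers(0, 4096, size=(64, 64))
      s0, s1, s2 = stokes_from_microgrid(mosaic)
      print(s0.shape, s1.shape, s2.shape)          # all full-resolution (64, 64)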

  1. Domain-averaged, Shallow Precipitation Measurements During the Aerosol and Cloud Experiments in the Eastern North Atlantic (ACE-ENA)

    NASA Astrophysics Data System (ADS)

    Lamer, K.; Luke, E. P.; Kollias, P.; Oue, M.; Wang, J.

    2017-12-01

    The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates a fixed observatory in the Eastern North Atlantic (ENA) on Graciosa Island in the Azores. Straddling the tropics and extratropics, the Azores receive air transported from North America, the Arctic and sometimes Europe. At the ARM ENA site, marine boundary layer clouds are frequently observed all year round. Estimates of drizzle mass flux from the surface to cloud base height are documented using a combination of high sensitivity profiling 35-GHz radar and ceilometer observations. Three years of drizzle mass flux retrievals reveal that statistically, directly over the ENA site, marine boundary layer cloud drizzle rates tend to be weak with few heavy drizzle events. In the summer of 2017, this site hosted the first phase of the Aerosol and Cloud Experiments in the Eastern North Atlantic (ACE-ENA) field campaign, which is motivated by the need for comprehensive in situ characterization of boundary layer structure, low clouds and aerosols. During this phase, the 35-GHz scanning ARM cloud radar was operated as a surveillance radar, providing regional context for the profiling observations. While less sensitive, the scanning radar measurements document a larger number of heavier drizzle events and provide domain-representative estimates of shallow precipitation. A best estimate, domain averaged, shallow precipitation rate for the region around the ARM ENA site is presented. The methodology optimally combines the ability of the profiling observations to detect the weak but frequently occurring drizzle events with the scanning cloud radar's ability to capture the less frequent heavier drizzle events. The technique is also evaluated using high resolution model output and a sophisticated forward radar operator.

  2. TARANIS XGRE and IDEE detection capability of terrestrial gamma-ray flashes and associated electron beams

    NASA Astrophysics Data System (ADS)

    Sarria, David; Lebrun, Francois; Blelly, Pierre-Louis; Chipaux, Remi; Laurent, Philippe; Sauvaud, Jean-Andre; Prech, Lubomir; Devoto, Pierre; Pailot, Damien; Baronick, Jean-Pierre; Lindsey-Clark, Miles

    2017-07-01

    With a launch expected in 2018, the TARANIS microsatellite is dedicated to the study of transient phenomena observed in association with thunderstorms. On board the spacecraft, XGRE and IDEE are two instruments dedicated to studying terrestrial gamma-ray flashes (TGFs) and associated terrestrial electron beams (TEBs). XGRE can detect electrons (energy range: 1 to 10 MeV) and X- and gamma-rays (energy range: 20 keV to 10 MeV) with a very high counting capability (about 10 million counts per second) and the ability to discriminate one type of particle from another. The IDEE instrument is focused on electrons in the 80 keV to 4 MeV energy range, with the ability to estimate their pitch angles. Monte Carlo simulations of the TARANIS instruments, using a preliminary model of the spacecraft, allow sensitive area estimates for both instruments. This leads to an averaged effective area of 425 cm² for XGRE, used to detect X- and gamma-rays from TGFs, and the combination of XGRE and IDEE gives an average effective area of 255 cm² which can be used to detect electrons/positrons from TEBs. We then compare these performances to RHESSI, AGILE and Fermi GBM, using data extracted from literature for the TGF case and with the help of Monte Carlo simulations of their mass models for the TEB case. Combining this data with the help of the MC-PEPTITA Monte Carlo simulations of TGF propagation in the atmosphere, we build a self-consistent model of the TGF and TEB detection rates of RHESSI, AGILE and Fermi. It can then be used to estimate that TARANIS should detect about 200 TGFs yr⁻¹ and 25 TEBs yr⁻¹.

  3. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  4. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
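
    Schematically, the class of robust ability estimators considered solves a weighted version of the likelihood score equation; the generic form below uses common IRT notation and is given only as orientation (u_i is the scored response to item i, P_i the item response function, Q_i = 1 - P_i, r_i a residual, and w a weight function):

      \sum_{i=1}^{n} w(r_i)\,\frac{P_i'(\theta)}{P_i(\theta)\,Q_i(\theta)}\,\bigl[u_i - P_i(\theta)\bigr] = 0

    Setting w ≡ 1 recovers the ordinary maximum likelihood score equation; down-weighting responses with large residuals is what lessens the impact of aberrant responses such as random guessing.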

  5. Do Self-Efficacy and Ability Self-Estimate Scores Reflect Distinct Facets of Ability Judgments?

    ERIC Educational Resources Information Center

    Hansen, Jo-Ida C.; Bubany, Shawn T.

    2008-01-01

    Vocational psychology has generated a number of concepts and assessment instruments considered to reflect ability self-concept (i.e., one's view of one's own abilities) relevant to career development. These concepts and measures often are categorized as either self efficacy beliefs or self-estimated (i.e., self-rated, self-evaluated) abilities.…

  6. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing availability of high-resolution airborne imaging sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  7. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  8. The Precision of Mapping Between Number Words and the Approximate Number System Predicts Children’s Formal Math Abilities

    PubMed Central

    Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-01-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475

  9. The precision of mapping between number words and the approximate number system predicts children's formal math abilities.

    PubMed

    Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-10-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Investigating the Eddy Diffusivity Concept in the Coastal Ocean

    NASA Astrophysics Data System (ADS)

    Rypina, I.; Kirincich, A.; Lentz, S. J.; Sundermeyer, M. A.

    2016-12-01

    We test the validity, utility, and limitations of the lateral eddy diffusivity concept in a coastal environment through analyzing data from coupled drifter and dye releases within the footprint of a high-resolution (800 m) high-frequency radar south of Martha's Vineyard, Massachusetts. Specifically, we investigate how well a combination of radar-based velocities and drifter-derived diffusivities can reproduce observed dye spreading over an 8-h time interval. A drifter-based estimate of an anisotropic diffusivity tensor is used to parameterize small-scale motions that are unresolved and under-resolved by the radar system. This leads to a significant improvement in the ability of the radar to reproduce the observed dye spreading. Our drifter-derived diffusivity estimates are O(10 m²/s), are consistent with the diffusivity inferred from aerial images of the dye taken using the quadcopter-mounted digital camera during the dye release, and are roughly an order of magnitude larger than diffusivity estimates of Okubo (O(1 m²/s)) for similar spatial scales (≈1 km). Despite the fact that the drifter-based diffusivity approach was successful in improving the ability of the radar to reproduce the observed dye spreading, the dispersion of drifters was, for the most part, not consistent with the diffusive spreading regime.
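
    For reference, a single-particle (drifter-based) lateral diffusivity is conventionally defined through the growth rate of the displacement variance; the scalar form below is a generic sketch, with the anisotropic tensor used in the study obtained by applying it componentwise to the residual displacements:

      \kappa = \frac{1}{2}\,\frac{d}{dt}\,\bigl\langle x'(t)^{2} \bigr\rangle

    Here x'(t) is a drifter's displacement relative to the mean flow and the angle brackets denote an ensemble average over drifters.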

  11. Modeling the Biodegradability of Chemical Compounds Using the Online CHEmical Modeling Environment (OCHEM)

    PubMed Central

    Vorberg, Susann

    2013-01-01

    Abstract Biodegradability describes the capacity of substances to be mineralized by free‐living bacteria. It is a crucial property in estimating a compound’s long‐term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660. PMID:27485201

  12. Software engineering the mixed model for genome-wide association studies on large samples.

    PubMed

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
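
    The mixed model referred to throughout is, in its usual GWAS form, the following; this is a generic statement of the model with assumed notation (K is the kinship matrix, and the fixed effects include the tested marker and population-structure covariates):

      \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{e}, \qquad
      \mathbf{u} \sim N(\mathbf{0}, \mathbf{K}\sigma^{2}_{g}), \quad \mathbf{e} \sim N(\mathbf{0}, \mathbf{I}\sigma^{2}_{e})

    The computational burden discussed above comes largely from the repeated variance component estimation implied by this model as sample size and marker count grow.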

  13. Career Interests and Self-Estimated Abilities of Young Adults with Disabilities

    ERIC Educational Resources Information Center

    Turner, Sherri; Unkefer, Lesley Craig; Cichy, Bryan Ervin; Peper, Christine; Juang, Ju-Ping

    2011-01-01

    The purpose of this study was to ascertain vocational interests and self-estimated work-relevant abilities of young adults with disabilities. Results showed that young adults with both low incidence and high incidence disabilities have a wide range of interests and self-estimated work-relevant abilities that are comparable to those in the general…

  14. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…

  15. The Relationship between a Linear Combination of Intelligence, Musical Background, Rhythm Ability and Tapping Ability to Typewriting Speed and Accuracy.

    ERIC Educational Resources Information Center

    Fante, Cheryl H.

    This study was conducted in an attempt to identify any predictor or combination of predictors of a beginning typewriting student's success. Variables of intelligence, rhythmic ability, musical background, and tapping ability were combined to study their relationship to typewriting speed and accuracy. A sample of 109 high school students was…

  16. Predicting vapor-liquid phase equilibria with augmented ab initio interatomic potentials

    NASA Astrophysics Data System (ADS)

    Vlasiuk, Maryna; Sadus, Richard J.

    2017-06-01

    The ability of ab initio interatomic potentials to accurately predict vapor-liquid phase equilibria is investigated. Monte Carlo simulations are reported for the vapor-liquid equilibria of argon and krypton using recently developed accurate ab initio interatomic potentials. Seventeen interatomic potentials are studied, formulated from different combinations of two-body plus three-body terms. The simulation results are compared to either experimental or reference data for conditions ranging from the triple point to the critical point. It is demonstrated that the use of ab initio potentials enables systematic improvements to the accuracy of predictions via the addition of theoretically based terms. The contribution of three-body interactions is accounted for using the Axilrod-Teller-Muto plus other multipole contributions and the effective Marcelli-Wang-Sadus potentials. The results indicate that the predictive ability of recent interatomic potentials, obtained from quantum chemical calculations, is comparable to that of accurate empirical models. It is demonstrated that the Marcelli-Wang-Sadus potential can be used in combination with accurate two-body ab initio models for the computationally inexpensive and accurate estimation of vapor-liquid phase equilibria.
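
    To illustrate the structure of a two-body-plus-three-body potential of the kind compared above, the sketch below sums pair interactions and Axilrod-Teller-Muto triple-dipole terms over a small cluster. A Lennard-Jones form stands in for the accurate ab initio pair potential, and the coefficient values and geometry are purely illustrative.

        import numpy as np
        from itertools import combinations

        def pair_energy(r, epsilon=1.0, sigma=1.0):
            """Two-body term; a Lennard-Jones form stands in here for an
            accurate ab initio pair potential (illustrative parameters)."""
            sr6 = (sigma / r) ** 6
            return 4.0 * epsilon * (sr6 ** 2 - sr6)

        def atm_energy(ri, rj, rk, c9=1.0e-3):
            """Axilrod-Teller-Muto triple-dipole energy for one triplet;
            c9 is an illustrative dispersion coefficient."""
            rij, rjk, rki = rj - ri, rk - rj, ri - rk
            dij, djk, dki = np.linalg.norm(rij), np.linalg.norm(rjk), np.linalg.norm(rki)
            cos_i = np.dot(rij, -rki) / (dij * dki)   # interior angle at atom i
            cos_j = np.dot(-rij, rjk) / (dij * djk)   # interior angle at atom j
            cos_k = np.dot(-rjk, rki) / (djk * dki)   # interior angle at atom k
            return c9 * (1.0 + 3.0 * cos_i * cos_j * cos_k) / (dij * djk * dki) ** 3

        def total_energy(pos):
            """Sum of all pair terms plus all three-body ATM terms."""
            e2 = sum(pair_energy(np.linalg.norm(pos[i] - pos[j]))
                     for i, j in combinations(range(len(pos)), 2))
            e3 = sum(atm_energy(pos[i], pos[j], pos[k])
                     for i, j, k in combinations(range(len(pos)), 3))
            return e2, e3

        # Four atoms in a small cluster (reduced units, hypothetical geometry).
        pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],
                        [0.0, 1.1, 0.0], [0.0, 0.0, 1.1]])
        print(total_energy(pos))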

  17. Predicting vapor-liquid phase equilibria with augmented ab initio interatomic potentials.

    PubMed

    Vlasiuk, Maryna; Sadus, Richard J

    2017-06-28

    The ability of ab initio interatomic potentials to accurately predict vapor-liquid phase equilibria is investigated. Monte Carlo simulations are reported for the vapor-liquid equilibria of argon and krypton using recently developed accurate ab initio interatomic potentials. Seventeen interatomic potentials are studied, formulated from different combinations of two-body plus three-body terms. The simulation results are compared to either experimental or reference data for conditions ranging from the triple point to the critical point. It is demonstrated that the use of ab initio potentials enables systematic improvements to the accuracy of predictions via the addition of theoretically based terms. The contribution of three-body interactions is accounted for using the Axilrod-Teller-Muto plus other multipole contributions and the effective Marcelli-Wang-Sadus potentials. The results indicate that the predictive ability of recent interatomic potentials, obtained from quantum chemical calculations, is comparable to that of accurate empirical models. It is demonstrated that the Marcelli-Wang-Sadus potential can be used in combination with accurate two-body ab initio models for the computationally inexpensive and accurate estimation of vapor-liquid phase equilibria.

  18. Investigating the running abilities of Tyrannosaurus rex using stress-constrained multibody dynamic analysis

    PubMed Central

    Pond, Stuart B.; Brassey, Charlotte A.; Manning, Philip L.; Bates, Karl T.

    2017-01-01

    The running ability of Tyrannosaurus rex has been intensively studied due to its relevance to interpretations of feeding behaviour and the biomechanics of scaling in giant predatory dinosaurs. Different studies using differing methodologies have produced a very wide range of top speed estimates and there is therefore a need to develop techniques that can improve these predictions. Here we present a new approach that combines two separate biomechanical techniques (multibody dynamic analysis and skeletal stress analysis) to demonstrate that true running gaits would probably lead to unacceptably high skeletal loads in T. rex. Combining these two approaches reduces the high level of uncertainty in previous predictions associated with unknown soft tissue parameters in dinosaurs, and demonstrates that the relatively long limb segments of T. rex—long argued to indicate competent running ability—would actually have mechanically limited this species to walking gaits. Being limited to walking speeds contradicts arguments of high-speed pursuit predation for the largest bipedal dinosaurs like T. rex, and demonstrates the power of multiphysics approaches for locomotor reconstructions of extinct animals. PMID:28740745

  19. Accounting for imperfect detection of groups and individuals when estimating abundance.

    PubMed

    Clement, Matthew J; Converse, Sarah J; Royle, J Andrew

    2017-09-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  20. Accounting for imperfect detection of groups and individuals when estimating abundance

    USGS Publications Warehouse

    Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew

    2017-01-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
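
    The sketch below illustrates only the data-generating assumptions behind such an estimator: groups are detected with a half-normal distance function, and each observer's recorded group size is a binomial under-count (never an over-count) of the true size. The survey dimensions, detection parameters, and group-size distribution are hypothetical, and the full Bayesian MRDS-Nmix estimator is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical survey: groups on a strip transect of half-width W (m).
        n_groups, W = 120, 200.0
        sigma_det = 80.0          # half-normal scale for detection of a group
        p_ind = 0.7               # per-individual detection (counting) probability
        true_size = 1 + rng.poisson(3.0, n_groups)        # true group sizes >= 1
        distance = rng.uniform(0.0, W, n_groups)          # perpendicular distances

        # Group-level detection declines with distance (half-normal key function).
        p_group = np.exp(-distance**2 / (2.0 * sigma_det**2))
        detected = rng.random(n_groups) < p_group

        # Each of two observers under-counts (never over-counts) group size:
        # observed counts are Binomial(true size, p_ind), as the N-mixture
        # component of the MRDS-Nmix idea assumes.
        count_obs1 = rng.binomial(true_size[detected], p_ind)
        count_obs2 = rng.binomial(true_size[detected], p_ind)

        naive_total = count_obs1.sum()    # ignores both layers of missingness
        print("true number of individuals :", true_size.sum())
        print("naive detected-count total :", naive_total)
        print("groups detected            :", detected.sum(), "of", n_groups)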

  1. Improving the accuracy of burn-surface estimation.

    PubMed

    Nichter, L S; Williams, J; Bryant, C A; Edlich, R F

    1985-09-01

    A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Despite the ability of physicians at all levels of training to accurately sketch TBSAB, significant burn size over-estimation (p less than 0.01) and large interrater variability of potential consequence were noted. Direct benefits of a computerized system are many. These include the need for minimal user experience and the ability for wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements, hemodynamic parameters, and the ability to compare meaningfully the different treatment protocols.

  2. The Sensitivity of Parameter Estimates to the Latent Ability Distribution. Research Report. ETS RR-11-40

    ERIC Educational Resources Information Center

    Xu, Xueli; Jia, Yue

    2011-01-01

    Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…

  3. Exploring what prompts ITIC to become a superior acceptor in organic solar cell by combining molecular dynamics simulation with quantum chemistry calculation.

    PubMed

    Pan, Qing-Qing; Li, Shuang-Bao; Duan, Ying-Chen; Wu, Yong; Zhang, Ji; Geng, Yun; Zhao, Liang; Su, Zhong-Min

    2017-11-29

    The interface characteristic is a crucial factor determining the power conversion efficiency of organic solar cells (OSCs). In this work, our aim is to conduct a comparative study on the interface characteristics between the very famous non-fullerene acceptor, ITIC, and a fullerene acceptor, PC71BM by combining molecular dynamics simulations with density functional theory. Based on some typical interface models of the acceptor ITIC or PC71BM and the donor PBDB-T selected from MD simulation, besides the evaluation of charge separation/recombination rates, the relative positions of Frenkel exciton (FE) states and the charge transfer states along with their oscillator strengths are also employed to estimate the charge separation abilities. The results show that, when compared with those for the PBDB-T/PC71BM interface, the CT states are more easily formed for the PBDB-T/ITIC interface by either the electron transfer from the FE state or direct excitation, indicating the better charge separation ability of the former. Moreover, the estimation of the charge separation efficiency manifests that although these two types of interfaces have similar charge recombination rates, the PBDB-T/ITIC interface possesses the larger charge separation rates than those of the PBDB-T/PC71BM interface. Therefore, the better match between PBDB-T and ITIC together with a larger charge separation efficiency at the interface are considered to be the reasons for the prominent performance of ITIC in OSCs.

  4. A non-parametric automatic blending methodology to estimate rainfall fields from rain gauge and radar data

    NASA Astrophysics Data System (ADS)

    Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.

    2009-07-01

    Quantitative estimation of rainfall fields has been a crucial objective from early studies of the hydrological applications of weather radar. Previous studies have suggested that flow estimations are improved when radar and rain gauge data are combined to estimate input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be automatically obtained at each time step of a given rainfall event, this technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator exploiting the advantages of FFT for the automatic computation of correlograms and provides examples of its application on a case study using six rainfall events. This methodology is applied to three different alternatives to incorporate the radar information (as a secondary variable), and a comparison of performances is provided. In particular, their ability to reproduce in estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of radar fields are analyzed. Results seem to indicate that the methodology of kriging with external drift [KED], in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields with good agreement with rain gauges and with the most accurate approach to the spatial tendencies observed in the radar rainfall fields, when compared with other alternatives analyzed.
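
    A minimal sketch of the non-parametric step described above: a 2-D correlogram can be obtained directly from a gridded radar field via the FFT (Wiener-Khinchin relation), rather than by fitting a parametric semivariogram model. The synthetic anisotropic field and the simple gap handling below are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def correlogram_2d(field):
            """Non-parametric 2-D correlogram of a gridded field via FFT
            (Wiener-Khinchin): the inverse FFT of the power spectrum of the
            mean-removed field, normalized so the zero-lag value is 1.
            The result is periodic in the lags; zero-padding could be added
            to suppress wrap-around if needed."""
            f = field - np.nanmean(field)
            f = np.nan_to_num(f)                      # crude handling of gaps
            spec = np.abs(np.fft.fft2(f)) ** 2        # power spectrum
            acov = np.real(np.fft.ifft2(spec))        # unnormalized autocovariance
            corr = acov / acov[0, 0]                  # correlogram, corr[0, 0] = 1
            return np.fft.fftshift(corr)              # zero lag moved to the centre

        # Hypothetical anisotropic rain field: smoothed noise stretched in x.
        rng = np.random.default_rng(3)
        noise = rng.normal(size=(128, 128))
        kernel = np.outer(np.hanning(9), np.hanning(25))      # wider in x than y
        rain = np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(kernel, (128, 128))).real
        rho = correlogram_2d(rain)
        print(rho.shape, rho[64, 64])                          # (128, 128) 1.0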

  5. Simultaneous use of mark-recapture and radiotelemetry to estimate survival, movement, and capture rates

    USGS Publications Warehouse

    Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.

    2000-01-01

    Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (alpha hat[S_radioed, S_banded] = log[S hat_radioed/S hat_banded] = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi hat = 0.911 ± 0.020; alpha hat[psi11, psi22] = 0.0161 ± 0.047), and recapture rates (p hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha hat[p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] <2 %) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide adequate data for useful inference from this model, study designs should seek a minimum of 25 animals of each marking type observed (marked or observed via telemetry) in each time period and geographic stratum.

  6. The Ability of Patient-Symptom Questionnaires to Differentiate PVFMD From Asthma.

    PubMed

    Ye, Jinny; Nouraie, Mehdi; Holguin, Fernando; Gillespie, Amanda I

    2017-05-01

    Goals of the current study were to (1) conduct initial validation of a new Paradoxical Vocal Fold Movement Disorder Screening Questionnaire (PVFMD-SQ); (2) determine if symptom-based questionnaires can discriminate between patients with confirmed PVFMD and those with diagnosed uncontrolled asthma without clinical suspicion for PVFMD; and (3) determine if a new questionnaire with diagnostic specificity could be created from a combination of significant items on previously validated questionnaires. This is a prospective, case-controlled study of patients with PVFMD only and asthma only, who completed five questionnaires: Dyspnea Index, Reflux Symptom Index, Voice Handicap Index-10, Sino-Nasal Questionnaire, and PVFMD-SQ. Factor analysis was completed on the new PVFMD-SQ, and the discrimination ability of selected factors was assessed by receiver operating characteristics curve. The factor with the greatest discriminatory ability was selected to create one diagnostic questionnaire, and scores for each participant were calculated to estimate how well the factor correlated with a PVFMD or asthma diagnosis. Mean scores on all questionnaires were compared to test their discriminatory ability. Patients with PVFMD showed greater voice handicap and reflux symptoms than patients with asthma. A 15-item one-factor questionnaire was developed from the original PVFMD-SQ, with a sensitivity of 89% and specificity of 73% for diagnosing asthma versus PVFMD. The combined questionnaires resulted in four factors, none of which showed discriminatory ability between PVFMD and asthma. This study represents the first time that a patient symptom-based screening tool has shown diagnostic sensitivity to differentiate PVFMD from asthma in a cohort of symptomatic patients. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  7. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
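
    The sketch below illustrates the general pattern described above, with a toy stand-in for the herd simulation: uncertain parameters are drawn with a Latin hypercube design, each sampled parameter set is run through a model, and the sets are reweighted by the likelihood of reproducing observed prevalence data. The parameter ranges, the prevalence model, and the observed counts are all hypothetical.

        import numpy as np
        from scipy import stats

        def latin_hypercube(n_samples, bounds, rng):
            """Latin hypercube sample: one value per equal-probability stratum
            for each parameter, with independent random permutations."""
            n_par = len(bounds)
            u = (np.arange(n_samples) + rng.random((n_par, n_samples))) / n_samples
            for row in u:
                rng.shuffle(row)
            lo = np.array([b[0] for b in bounds])[:, None]
            hi = np.array([b[1] for b in bounds])[:, None]
            return (lo + u * (hi - lo)).T            # shape (n_samples, n_par)

        def toy_model_prevalence(theta):
            """Placeholder for the herd simulation: maps (transmission, recovery)
            to a within-herd prevalence. Purely illustrative."""
            beta, gamma = theta
            return np.clip(1.0 - gamma / beta, 0.0, 1.0)   # SIS-type endemic level

        rng = np.random.default_rng(4)
        bounds = [(0.05, 0.8), (0.01, 0.4)]          # plausible ranges for beta, gamma
        samples = latin_hypercube(2000, bounds, rng)

        # Observed prevalence data: 23 test-positive animals out of 120 sampled
        # (hypothetical numbers). Weight each parameter set by the binomial
        # likelihood of reproducing that observation.
        k_pos, n_test = 23, 120
        weights = np.array([stats.binom.pmf(k_pos, n_test, toy_model_prevalence(th))
                            for th in samples])
        weights /= weights.sum()

        # Reweighted (posterior-like) parameter summaries.
        print("weighted mean beta :", np.sum(weights * samples[:, 0]))
        print("weighted mean gamma:", np.sum(weights * samples[:, 1]))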

  8. Optimal filtering and Bayesian detection for friction-based diagnostics in machines.

    PubMed

    Ray, L R; Townsend, J R; Ramasubramanian, A

    2001-01-01

    Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
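
    As an illustration of estimating friction torque as an unknown input from position measurements and the known motor command, the sketch below uses an augmented-state (Luenberger-type) observer in which friction torque is carried as an extra, slowly varying state. This linear observer stands in for the nonlinear filter used in the paper, and all parameter values, gains, and signals are hypothetical.

        import numpy as np
        from scipy.signal import place_poles

        # Discrete-time model of a motor-driven inertia with an unknown,
        # slowly varying friction torque tau_f treated as an extra state:
        #   theta[k+1] = theta[k] + dt*omega[k]
        #   omega[k+1] = omega[k] + dt*(Kt*u[k] - tau_f[k])/J
        #   tau_f[k+1] = tau_f[k]
        # Only theta is measured. Parameter values are illustrative.
        dt, J, Kt = 1e-3, 0.01, 0.05
        A = np.array([[1.0, dt, 0.0],
                      [0.0, 1.0, -dt / J],
                      [0.0, 0.0, 1.0]])
        B = np.array([[0.0], [dt * Kt / J], [0.0]])
        C = np.array([[1.0, 0.0, 0.0]])

        # Observer gain via pole placement on the dual system (poles inside the
        # unit circle give stable discrete-time error dynamics for A - L C).
        L = place_poles(A.T, C.T, [0.5, 0.6, 0.7]).gain_matrix.T

        rng = np.random.default_rng(5)
        n = 4000
        u = 2.0 * np.sin(2 * np.pi * 1.0 * dt * np.arange(n))   # known input command
        x = np.zeros(3)          # true [theta, omega, unused]
        xhat = np.zeros(3)       # observer state [theta, omega, tau_f]
        tau_hat = np.empty(n)
        for k in range(n):
            # "True" plant with Coulomb + viscous friction (unknown to the observer).
            tau_f = 0.02 * np.sign(x[1]) + 0.005 * x[1]
            y = x[0] + rng.normal(0.0, 1e-5)                     # noisy angle measurement
            # Observer update using only y and the known input u[k].
            xhat = A @ xhat + (B * u[k]).ravel() + (L @ (np.array([y]) - C @ xhat)).ravel()
            tau_hat[k] = xhat[2]
            # Propagate the true plant.
            x = np.array([x[0] + dt * x[1],
                          x[1] + dt * (Kt * u[k] - tau_f) / J,
                          0.0])
        print("final friction torque estimate:", tau_hat[-1])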

  9. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    NASA Astrophysics Data System (ADS)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where gauging networks have continued to decline as a result of the lack of human and financial resources. The resulting lack of data in these watersheds makes it impossible to apply basic empirical models for daily forecasting, so a combination of rainfall-runoff models had to be found with which data could be generated and used to estimate the flow. Estimating design floods illustrates the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is scarce. A climate-hydrological model based on frequency analysis was constructed to estimate the design flood in the Anseghmir catchments, Morocco. This combined model was chosen because it can be applied in watersheds where hydrological information is insufficient. The method was found to be a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes, etc.). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.

  10. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    NASA Astrophysics Data System (ADS)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to be performed manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations, that were previously obtained for low resolution data sets. Our experiments in high resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring to re-engineer parameters. Furthermore, our combined approach reported state of the art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
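
    A minimal sketch of the parameter-estimation idea described above: a regression model maps simple structural properties of an image to a parameter value that was previously tuned on low-resolution data sets, and the fitted mapping is then applied to a new (e.g. higher-resolution) image. The features, training values, and predicted parameter below are hypothetical stand-ins, not the properties or parameters used in the paper.

        import numpy as np

        # Hypothetical training data: for several low-resolution data sets we know
        # simple structural properties of the images (mean vessel-like contrast,
        # image width in pixels) and the parameter value that worked best there.
        X_train = np.array([[0.12,  565.0],
                            [0.15,  700.0],
                            [0.10,  605.0],
                            [0.18,  999.0]])
        y_train = np.array([2.1, 2.6, 2.0, 3.4])        # previously tuned parameter

        # Ordinary least squares with an intercept (the "estimation model learned
        # by linear regression" idea, in its simplest possible form).
        A = np.column_stack([np.ones(len(X_train)), X_train])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

        def predict_parameter(features):
            """Predict a suitable parameter value for a new (e.g. high-resolution)
            image from its structural features."""
            return float(np.r_[1.0, features] @ coef)

        # New high-resolution image: stronger contrast, much wider image.
        print(predict_parameter([0.14, 3500.0]))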

  11. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (approximately 16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive combined with other satellite, climate, and weather data is creating never-imagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field scales.

  12. Combining-Ability Determinations for Incomplete Mating Designs

    Treesearch

    E.B. Snyder

    1975-01-01

    It is shown how general combining ability values (GCA's) from cross-, open-, and self-pollinated progeny can be derived in a single analysis. Breeding values are employed to facilitate explaining genetic models of the expected family means and the derivation of the GCA's. A FORTRAN computer program also includes computation of specific combining ability...
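
    As a simple illustration of the quantities involved, the sketch below computes general and specific combining ability effects from a table of cross means using the common textbook simplification (a parent's GCA as its mean cross performance expressed as a deviation from the grand mean, and SCA as the remaining deviation of each cross). The numbers are hypothetical, and this is not the incomplete-mating-design analysis or the FORTRAN program described in the record.

        import numpy as np

        # Hypothetical table of family means from a half-diallel (no selfs):
        # entry (i, j) is the mean of the cross between parents i and j.
        cross_mean = {(0, 1): 12.1, (0, 2): 13.4, (0, 3): 11.8,
                      (1, 2): 14.0, (1, 3): 12.9, (2, 3): 13.1}
        n_parents = 4

        grand_mean = np.mean(list(cross_mean.values()))

        # Simplified definition: a parent's GCA is the mean of all crosses in
        # which it appears, expressed as a deviation from the grand mean.
        gca = np.zeros(n_parents)
        for p in range(n_parents):
            vals = [m for (i, j), m in cross_mean.items() if p in (i, j)]
            gca[p] = np.mean(vals) - grand_mean

        # SCA of a cross: what its mean shows beyond the grand mean plus the
        # two parental GCA effects.
        sca = {(i, j): m - grand_mean - gca[i] - gca[j]
               for (i, j), m in cross_mean.items()}

        print("grand mean:", round(grand_mean, 3))
        print("GCA:", np.round(gca, 3))
        print("SCA:", {k: round(v, 3) for k, v in sca.items()})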

  13. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  14. Individual differences in non-symbolic numerical abilities predict mathematical achievements but contradict ATOM.

    PubMed

    Agrillo, Christian; Piffer, Laura; Adriano, Andrea

    2013-07-01

    A significant debate surrounds the nature of the cognitive mechanisms involved in non-symbolic number estimation. Several studies have suggested the existence of the same cognitive system for estimation of time, space, and number, called "a theory of magnitude" (ATOM). In addition, researchers have proposed the theory that non-symbolic number abilities might support our mathematical skills. Despite the large number of studies carried out, no firm conclusions can be drawn on either topic. In the present study, we correlated the performance of adults on non-symbolic magnitude estimations and symbolic numerical tasks. Non-symbolic magnitude abilities were assessed by asking participants to estimate which auditory tone lasted longer (time), which line was longer (space), and which group of dots was more numerous (number). To assess symbolic numerical abilities, participants were required to perform mental calculations and mathematical reasoning. We found a positive correlation between non-symbolic and symbolic numerical abilities. On the other hand, no correlation was found among non-symbolic estimations of time, space, and number. Our study supports the idea that mathematical abilities rely on rudimentary numerical skills that predate verbal language. By contrast, the lack of correlation among non-symbolic estimations of time, space, and number is incompatible with the idea that these magnitudes are entirely processed by the same cognitive system.

  15. Severe hypoglycaemia and late-life cognitive ability in older people with Type 2 diabetes: the Edinburgh Type 2 Diabetes Study.

    PubMed

    Aung, P P; Strachan, M W J; Frier, B M; Butcher, I; Deary, I J; Price, J F

    2012-03-01

    To determine the association between lifetime severe hypoglycaemia and late-life cognitive ability in older people with Type 2 diabetes. Cross-sectional, population-based study of 1066 men and women aged 60-75 years, with Type 2 diabetes. Frequency of severe hypoglycaemia over a person's lifetime and in the year prior to cognitive testing was assessed using a previously validated self-completion questionnaire. Results of age-sensitive neuropsychological tests were combined to derive a late-life general cognitive ability factor, 'g'. Vocabulary test scores, which are stable during ageing, were used to estimate early life (prior) cognitive ability. After age- and sex- adjustment, 'g' was lower in subjects reporting at least one prior severe hypoglycaemia episode (n = 113), compared with those who did not report severe hypoglycaemia (mean 'g'-0.34 vs. 0.05, P < 0.001). Mean vocabulary test scores did not differ significantly between the two groups (30.2 vs. 31.0, P = 0.13). After adjustment for vocabulary, difference in 'g' between the groups persisted (means -0.25 vs. 0.04, P < 0.001), with the group with severe hypoglycaemia demonstrating poorer performance on tests of Verbal Fluency (34.5 vs. 37.3, P = 0.02), Digit Symbol Testing (45.9 vs. 49.9, P = 0.002), Letter-Number Sequencing (9.1 vs. 9.8, P = 0.005) and Trail Making (P < 0.001). These associations persisted after adjustment for duration of diabetes, vascular disease and other potential confounders. Self-reported history of severe hypoglycaemia was associated with poorer late-life cognitive ability in people with Type 2 diabetes. Persistence of this association after adjustment for estimated prior cognitive ability suggests that the association may be attributable, at least in part, to an effect of hypoglycaemia on age-related cognitive decline. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.

  16. Estimated maximal and current brain volume predict cognitive ability in old age

    PubMed Central

    Royle, Natalie A.; Booth, Tom; Valdés Hernández, Maria C.; Penke, Lars; Murray, Catherine; Gow, Alan J.; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E.; Deary, Ian J.; Wardlaw, Joanna M.

    2013-01-01

    Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and in measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age but that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. PMID:23850342

  17. The utility of covariance of combining ability in plant breeding.

    PubMed

    Arunachalam, V

    1976-11-01

    The definition of covariances of half- and full sibs, and hence that of variances of general and specific combining ability with regard to a quantitative character, is extended to take into account the respective covariances between a pair of characters. The interpretation of the dispersion and correlation matrices of general and specific combining ability is discussed by considering a set of single, three- and four-way crosses, made using diallel and line × tester mating systems in Pennisetum typhoides. The general implications of the concept of covariance of combining ability in plant breeding are discussed.
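
    Following the same simplified notion of GCA, the covariance of general combining ability between a pair of characters can be summarized from the parental GCA vectors for the two traits, as sketched below with hypothetical values; the formal half-sib and full-sib covariance derivations in the paper are not reproduced.

        import numpy as np

        # GCA effects of the same set of parents for two characters
        # (hypothetical values, e.g. taken from two separate diallel analyses).
        gca_trait_a = np.array([0.8, -0.3, 1.1, -1.6, 0.0])
        gca_trait_b = np.array([0.5, -0.1, 0.9, -1.0, -0.3])

        # Covariance and correlation of general combining ability between the
        # pair of characters, summarizing whether parents that combine well for
        # one trait also tend to combine well for the other.
        cov_gca = np.cov(gca_trait_a, gca_trait_b, ddof=1)[0, 1]
        corr_gca = np.corrcoef(gca_trait_a, gca_trait_b)[0, 1]
        print(f"Cov(GCA_A, GCA_B) = {cov_gca:.3f}, r = {corr_gca:.3f}")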

  18. Flexible nonlinear estimates of the association between height and mental ability in early life.

    PubMed

    Murasko, Jason E

    2014-01-01

    To estimate associations between early-life mental ability and height/height-growth in contemporary US children. Structured additive regression models are used to flexibly estimate the associations between height and mental ability at approximately 24 months of age. The sample is taken from the Early Childhood Longitudinal Study-Birth Cohort, a national study whose target population was children born in the US during 2001. A nonlinear association is indicated between height and mental ability at approximately 24 months of age. There is an increasing association between height and mental ability below the mean value of height, but a flat association thereafter. Annualized growth shows the same nonlinear association to ability when controlling for baseline length at 9 months. Restricted growth at lower values of the height distribution is associated with lower measured mental ability in contemporary US children during the first years of life. Copyright © 2013 Wiley Periodicals, Inc.

  19. Modeling the marine resources consumed in raising a king penguin chick: an energetics approach.

    PubMed

    Halsey, L G; Butler, P J; Fahlman, A; Bost, C-A; Woakes, A J; Handrich, Y

    2008-01-01

    Accurate estimates of penguin energetics would represent an important contribution to our understanding of the trophodynamics of the Southern Ocean ecosystem and our ability to predict effects of environmental change on these species. We used the heart rate-rate of oxygen consumption technique to estimate rate of energy expenditure in adult king penguins raising a chick, in combination with data from the literature on changes in adult mass, chick energy requirements, and prey energy density. Our model estimated a variety of energetic costs and quantities of prey consumption related to raising a king penguin chick during the austral summer. The total energy requirements of a king penguin chick at the Crozet Archipelago from hatching until reaching a mass of 8 kg 90 d later is 271 MJ, representing the consumption of 38.4 kg of myctophid fish. A successfully breeding male requires 0.78 kg d(-1) of fish during the entirety of the incubation period and 1.14 kg d(-1) during the subsequent 90 d of chick rearing. Assuming the same energy requirements for females, the estimated 580,000 pairs of king penguins that breed successfully at Crozet each year, together with their chicks, consume a total of around 190,000 tons of fish during the incubation and summer rearing periods combined. If, due to depletion of fish stocks, the diet of breeders and chicks during the summer becomes identical to the typical diet of adults during the austral winter, the mass of prey required by both adults and chicks combined (where the chick still reaches 8 kg after 90 d) would increase by more than 25%.

  20. Comparison of in silico models for prediction of mutagenicity.

    PubMed

    Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas

    2013-01-01

    Using a dataset with more than 6000 compounds, the performance of eight quantitative structure activity relationships (QSAR) models was evaluated: ACD/Tox Suite, Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) predictor, Derek, Toxicity Estimation Software Tool (T.E.S.T.), TOxicity Prediction by Komputer Assisted Technology (TOPKAT), Toxtree, CEASAR, and SARpy (SAR in python). In general, the results showed a high level of performance. To have a realistic estimate of the predictive ability, the results for chemicals inside and outside the training set for each model were considered. The effect of applicability domain tools (when available) on the prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. Models based on statistical QSAR methods gave better results.

  1. Data acquisition and path selection decision making for an autonomous roving vehicle. [laser pointing control system for vehicle guidance

    NASA Technical Reports Server (NTRS)

    Shen, C. N.; Yerazunis

    1979-01-01

    The feasibility of using range/pointing angle data such as might be obtained by a laser rangefinder for the purpose of terrain evaluation in the 10-40 meter range on which to base the guidance of an autonomous rover was investigated. The decision procedure of the rapid estimation scheme for the detection of discrete obstacles has been modified to reinforce the detection ability. With the introduction of the logarithmic scanning scheme and obstacle identification scheme, previously developed algorithms are combined to demonstrate the overall performance of the integrated route designation system using the laser rangefinder. In an attempt to cover a greater range, 30 m to 100 m, the problem of estimating gradients in the presence of positioning angle noise at middle range is investigated.

  2. Experimental study and neural network modeling of sugarcane bagasse pretreatment with H2SO4 and O3 for cellulosic material conversion to sugar.

    PubMed

    Gitifar, Vahid; Eslamloueyan, Reza; Sarshar, Mohammad

    2013-11-01

    In this study, pretreatment of sugarcane bagasse and subsequent enzymatic hydrolysis is investigated using two categories of pretreatment methods: dilute acid (DA) pretreatment and combined DA-ozonolysis (DAO) method. Both methods are accomplished at different solid ratios, sulfuric acid concentrations, autoclave residence times, bagasse moisture content, and ozonolysis time. The results show that the DAO pretreatment can significantly increase the production of glucose compared to DA method. Applying k-fold cross validation method, two optimal artificial neural networks (ANNs) are trained for estimations of glucose concentrations for DA and DAO pretreatment methods. Comparing the modeling results with experimental data indicates that the proposed ANNs have good estimation abilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
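
    The sketch below illustrates the model-selection step described above: k-fold cross-validation is used to compare small feed-forward networks that map pretreatment conditions to glucose concentration. The scikit-learn calls are real, but the data, the input variables, and the candidate architectures are hypothetical stand-ins for the experimental design in the paper.

        import numpy as np
        from sklearn.model_selection import KFold, cross_val_score
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical pretreatment experiments: columns are solid ratio,
        # H2SO4 concentration (%), autoclave time (min), ozonolysis time (min);
        # the target is measured glucose concentration (g/L).
        rng = np.random.default_rng(6)
        X = rng.uniform([0.05, 0.5, 10, 0], [0.20, 3.0, 60, 45], size=(60, 4))
        y = 5 + 20 * X[:, 1] + 0.2 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 2, 60)

        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        for hidden in [(5,), (10,), (10, 5)]:
            model = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=hidden,
                                               max_iter=5000, random_state=0))
            scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
            print(hidden, "mean CV R^2 =", round(scores.mean(), 3))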

  3. The "Creative Right Brain" Revisited: Individual Creativity and Associative Priming in the Right Hemisphere Relate to Hemispheric Asymmetries in Reward Brain Function.

    PubMed

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2017-10-01

    The idea that creativity resides in the right cerebral hemisphere is persistent in popular science, but has been widely frowned upon by the scientific community due to little empirical support. Yet, creativity is believed to rely on the ability to combine remote concepts into novel and useful ideas, an ability which would depend on associative processing in the right hemisphere. Moreover, associative processing is modulated by dopamine, and asymmetries in dopamine functionality between hemispheres may imbalance the expression of their implemented cognitive functions. Here, by uniting these largely disconnected concepts, we hypothesize that relatively less dopamine function in the right hemisphere boosts creativity by releasing constraining effects of dopamine on remote associations. Indeed, participants with reduced neural responses in the dopaminergic system of the right hemisphere (estimated by functional MRI in a reward task with positive and negative feedback), displayed higher creativity (estimated by convergent and divergent tasks), and increased associative processing in the right hemisphere (estimated by a lateralized lexical decision task). Our findings offer unprecedented empirical support for a crucial and specific contribution of the right hemisphere to creativity. More importantly our study provides a comprehensive view on potential determinants of human creativity, namely dopamine-related activity and associative processing. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. The effects of whole body vibration combined biofeedback postural control training on the balance ability and gait ability in stroke patients.

    PubMed

    Uhm, Yo-Han; Yang, Dae-Jung

    2017-11-01

    [Purpose] The purpose of this study was to examine the effect of biofeedback postural control training using whole body vibration in acute stroke patients on balance and gait ability. [Subjects and Methods] Thirty stroke patients participated in this study and were divided into three groups of 10: one for biofeedback postural control training combined with whole body vibration, one for biofeedback postural control training combined with an aero-step, and one for biofeedback postural control training alone. Biorescue was used to measure the limits of stability (balance ability), and Lukotronic was used to measure step length (gait ability). [Results] In the comparison of balance ability and gait ability between the groups before and after intervention, Group I showed a significant difference in balance ability and gait ability compared to Groups II and III. [Conclusion] This study showed that biofeedback postural control training using whole body vibration is effective for improving balance ability and gait ability in stroke patients.

  5. Assessing the Utility of Compound Trait Estimates of Narrow Personality Traits.

    PubMed

    Credé, Marcus; Harms, Peter D; Blacksmith, Nikki; Wood, Dustin

    2016-01-01

    It has been argued that approximations of narrow traits can be made through linear combinations of broad traits such as the Big Five personality traits. Indeed, Hough and Ones (2001) used a qualitative analysis of scale content to arrive at a taxonomy of how Big Five traits might be combined to approximate various narrow traits. However, the utility of such compound trait approximations has yet to be established beyond specific cases such as integrity and customer service orientation. Using data from the Eugene-Springfield Community Sample (Goldberg, 2008), we explore the ability of linear composites of scores on Big Five traits to approximate scores on 127 narrow trait measures from 5 well-known non-Big-Five omnibus measures of personality. Our findings indicate that individuals' standing on more than 30 narrow traits can be well estimated from 3 different types of linear composites of scores on Big Five traits without a substantial sacrifice in criterion validity. We discuss theoretical accounts for why such relationships exist as well as the theoretical and practical implications of these findings for researchers and practitioners.

  6. Combining GRACE and Altimetry to solve for present day mass changes and GIA

    NASA Astrophysics Data System (ADS)

    Rietbroek, R.; Lück, C.; Uebbing, B.; Kusche, J.; King, M. A.

    2017-12-01

    Past and present day sea level rise is closely linked to geoid and surface deformation changes from the ongoing glacial isostatic adjustment (GIA). Sea level, as detected by radar altimetry, senses the radial deformation of the ocean floor as mantle material slowly flows back to the locations of the former glacial domes. This manifests itself as a net subsidence when averaged over the entire ocean, but can regionally be seen as an uplift for locations close to the former ice sheets. Furthermore, mass driven sea level as derived from GRACE, is even more sensitive to GIA induced mass redistribution in the solid Earth. Consequently, errors in GIA corrections, most notably errors in mantle viscosity and ice histories, have a different leverage on regional sea level estimates from GRACE and altimetry. In this study, we discuss the abilities of a GRACE-altimetry combination to co-estimate GIA corrections together with present day contributors to sea level, rather than simply prescribing a GIA correction from a model. The data is combined in a joint inversion scheme which makes use of spatial patterns to parameterize present day loading effects and GIA. We show that the GRACE-altimetry combination requires constraints, but generally steers the Antarctic GIA signal towards a weaker present day signal in Antarctica compared to a ICE5-G(VM2) derived model. Furthermore, in light of the aging GRACE mission, we show sensitivity studies of how well one could estimate GIA corrections when using other low earth orbiters such as SWARM or CHAMP. Finally, we show whether the Antarctic GNSS station network may be useful in separating GIA from present day mass signals in this type of inversion schemes.

  7. Sex estimation based on tooth measurements using panoramic radiographs.

    PubMed

    Capitaneanu, Cezar; Willems, Guy; Jacobs, Reinhilde; Fieuws, Steffen; Thevissen, Patrick

    2017-05-01

    Sex determination is an important step in establishing the biological profile of unidentified human remains. The aims of the study were, firstly, to assess the degree of sexual dimorphism in permanent teeth based on digital tooth measurements performed on panoramic radiographs and, secondly, to identify sex-related tooth position-specific measurements or combinations of such measurements and to assess their applicability for potential sex determination. Two hundred digital panoramic radiographs (100 males, 100 females; age range 22-34 years) were retrospectively collected from the dental clinic files of the Dentomaxillofacial Radiology Center of the University Hospitals Leuven, Belgium, and imported into image enhancement software. Tooth length- and width-related variables were measured on all teeth in the upper and lower left quadrant, and ratios of variables were calculated. Univariate and multivariate analyses were performed to quantify the sex discriminative value of the tooth position-specific variables and their combinations. The mandibular and maxillary canine showed the greatest sexual dimorphism, and tooth length variables had the highest discriminative potential. Compared to single variables, combining variables or ratios of variables did not substantially improve the discrimination between males and females. Considering that the discriminative ability values (area under the curve, AUC) were not higher than 0.80, the use of the currently studied dental variables for accurate sex estimation in forensic practice is not advocated.

  8. Exposing the Secrets of HIV's Success | Center for Cancer Research

    Cancer.gov

    An estimated 40 million people were living with HIV and approximately 3 million people died of AIDS worldwide in 2005, making HIV the deadliest infectious agent of the modern era. HIV owes much of its pathogenic success to two factors: its rapid and imprecise replication, which can lead to drug resistance, and its ability to survive at low levels in the presence of antiviral drugs, a phenomenon called persistence. Multipronged treatment—usually a combination of three antiviral therapies—has helped reduce the number of AIDS-related deaths in developed countries, but does not provide a cure. Drug resistance sometimes occurs with long-term combination therapy, and is even more common when suboptimal treatment strategies are employed. Furthermore, if treatment is interrupted, HIV makes a rapid return.

  9. Target-type probability combining algorithms for multisensor tracking

    NASA Astrophysics Data System (ADS)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESM's, IRST's as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations as well as missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting, implemented by a procedure that propagates the target type probability states between measurement times. Other important properties of the algorithms are their abilities to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESM's and IRST's as data sources.
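
    A minimal sketch of the recursive type-estimation idea described above: discrete target-type probabilities are propagated between measurements with an exponential-forgetting step (shrinking toward a uniform distribution so stale or erroneous evidence decays) and then updated with Bayes' rule when a new classification report arrives. The type set, confusion matrix, forgetting factor, and report sequence are all hypothetical.

        import numpy as np

        types = ["fighter", "bomber", "transport", "helicopter"]
        p = np.full(len(types), 1.0 / len(types))      # prior type probabilities

        # Hypothetical sensor confusion matrix: likelihood[r, t] is the
        # probability that the sensor reports class r given true type t.
        likelihood = np.array([[0.70, 0.15, 0.05, 0.10],
                               [0.15, 0.65, 0.15, 0.05],
                               [0.05, 0.15, 0.70, 0.10],
                               [0.10, 0.05, 0.10, 0.75]])

        def propagate(p, lam=0.95):
            """Forgetting between measurement times: shrink the state toward
            the uniform distribution so stale or erroneous evidence decays."""
            return lam * p + (1.0 - lam) / len(p)

        def update(p, report):
            """Discrete Bayes measurement update for a reported class index."""
            post = p * likelihood[report]
            return post / post.sum()

        # A short sequence of (possibly conflicting) classifier reports.
        for report in [0, 0, 1, 0, 0]:
            p = propagate(p)
            p = update(p, report)
            print([f"{t}:{q:.2f}" for t, q in zip(types, p)])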

  10. Combined Microwave Ablation and Cementoplasty in Patients with Painful Bone Metastases at High Risk of Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pusceddu, Claudio, E-mail: clapusceddu@gmail.com; Sotgia, Barbara, E-mail: barbara.sotgia@gmail.com; Fele, Rosa Maria, E-mail: rosellafele@tiscali.it

    2016-01-15

    Purpose: To retrospectively evaluate the effectiveness of computed tomography-guided percutaneous microwave ablation (MWA) and cementoplasty in patients with painful bone metastases at high risk of fracture. Materials and Methods: Thirty-five patients with 37 metastatic bone lesions underwent computed tomography-guided MWA combined with cementoplasty (polymethylmethacrylate injection). Vertebrae, femur, and acetabulum were the intervention sites and the primary end point was pain relief. Pain severity was estimated by visual analog scale (VAS) before treatment; 1 week post-treatment; and 1, 6, and 12 months post-treatment. Functional outcome was assessed by improved patient walking ability. Radiological evaluation was performed at baseline and 3 and 12 months post-procedure. Results: In all patients, pain reduction occurred from the first week after treatment. The mean reduction in the VAS score was 84, 90, and 90 % at week 1, month 1, and month 6, respectively. Improved walking ability occurred in 100 and 98 % of cases at the 1- and 6-month functional outcome evaluations, respectively. At the 1-year evaluation, 25 patients were alive, and 10 patients (28 %) had died because of widespread disease. The mean reduction in the VAS score and improvement in surviving patients’ walking ability were 90 and 100 %, respectively. No patients showed evidence of local tumor recurrence or progression and pathological fracture in the treated sites. Conclusion: Our results suggest that MWA combined with osteoplasty is safe and effective when treating painful bone metastases at high risk of fracture. The number of surviving patients at the 1-year evaluation confirms the need for an effective and long-lasting treatment.

  11. Combining the ASA Physical Classification System and Continuous Intraoperative Surgical Apgar Score Measurement in Predicting Postoperative Risk.

    PubMed

    Jering, Monika Zdenka; Marolen, Khensani N; Shotwell, Matthew S; Denton, Jason N; Sandberg, Warren S; Ehrenfeld, Jesse Menachem

    2015-11-01

    The surgical Apgar score predicts major 30-day postoperative complications using data assessed at the end of surgery. We hypothesized that evaluating the surgical Apgar score continuously during surgery may identify patients at high risk for postoperative complications. We retrospectively identified general, vascular, and general oncology patients at Vanderbilt University Medical Center. Logistic regression methods were used to construct a series of predictive models in order to continuously estimate the risk of major postoperative complications, and to alert care providers during surgery should the risk exceed a given threshold. Area under the receiver operating characteristic curve (AUROC) was used to evaluate the discriminative ability of a model utilizing a continuously measured surgical Apgar score relative to models that use only preoperative clinical factors or continuously monitored individual constituents of the surgical Apgar score (i.e., heart rate, blood pressure, and blood loss). AUROC estimates were validated internally using a bootstrap method. In total, 4,728 patients were included. Combining the ASA PS classification with the continuously measured surgical Apgar score demonstrated improved discriminative ability (AUROC 0.80) in the pooled cohort compared to ASA alone (0.73) and the surgical Apgar score alone (0.74). To optimize the tradeoff between inadequate and excessive alerting with future real-time notifications, we recommend a threshold probability of 0.24. Continuous assessment of the surgical Apgar score is predictive of major postoperative complications. In the future, real-time notifications might allow for detection and mitigation of changes in a patient's accumulating risk of complications during a surgical procedure.
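
    The modeling step described above (logistic regression on ASA class plus the intraoperative surgical Apgar score, a bootstrap-validated AUROC, and a probability-threshold alert) can be sketched as follows; the data are synthetic and the coefficients assumed, so only the workflow, not the reported performance, is reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
asa = rng.integers(1, 5, n)                    # ASA PS class 1-4 (synthetic)
apgar = rng.integers(0, 11, n)                 # intraoperative surgical Apgar score 0-10
logit = -3.0 + 0.6 * asa - 0.3 * apgar         # assumed "true" risk model
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # major complication indicator

X = np.column_stack([asa, apgar])
model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Bootstrap validation of the AUROC, analogous to the internal validation above.
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:
        continue
    aucs.append(roc_auc_score(y[idx], risk[idx]))
print(f"bootstrap AUROC = {np.mean(aucs):.2f}")

# Alerting rule analogous to the recommended threshold probability of 0.24.
alerts = risk > 0.24
```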

  12. Estimated maximal and current brain volume predict cognitive ability in old age.

    PubMed

    Royle, Natalie A; Booth, Tom; Valdés Hernández, Maria C; Penke, Lars; Murray, Catherine; Gow, Alan J; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E; Deary, Ian J; Wardlaw, Joanna M

    2013-12-01

    Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and in measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age but that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
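
    The statement above rests on the fact that the Rasch item (Fisher) information about ability θ is P(θ)(1 − P(θ)) with P(θ) = 1/(1 + exp(−(θ − b))), which is maximized when the difficulty b equals θ. A small numerical check of this property:

```python
import numpy as np

def rasch_information(theta, b):
    """Fisher information about ability theta from a Rasch item with difficulty b."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

theta = 1.0                           # ability at which the design is optimized
difficulties = np.linspace(-3, 5, 81)
info = rasch_information(theta, difficulties)
print(difficulties[np.argmax(info)])  # ~1.0: information is largest when b = theta
```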

  14. The Asymptotic Distribution of Ability Estimates: Beyond Dichotomous Items and Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2015-01-01

    The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
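
    As a concrete illustration of this setting, the sketch below computes the MLE of a unidimensional ability from dichotomous responses with known 2PL item parameters, together with its asymptotic standard error 1/√I(θ̂); the item parameters and response pattern are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])    # known item discriminations (assumed)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # known item difficulties (assumed)
u = np.array([1, 1, 1, 0, 0])              # observed dichotomous responses

def neg_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

theta_hat = minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x

# Asymptotic (long-test) standard error from the Fisher information at the MLE.
p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
info = np.sum(a ** 2 * p * (1 - p))
se = 1.0 / np.sqrt(info)
print(theta_hat, se)
```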

  15. Numerosity but Not Texture-Density Discrimination Correlates with Math Ability in Children

    ERIC Educational Resources Information Center

    Anobile, Giovanni; Castaldi, Elisa; Turi, Marco; Tinelli, Francesca; Burr, David C.

    2016-01-01

    Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-years-old) children in 3 tasks: numerosity of patterns of relatively…

  16. A method to combine spaceborne radar and radiometric observations of precipitation

    NASA Astrophysics Data System (ADS)

    Munchak, Stephen Joseph

    This dissertation describes the development and application of a combined radar-radiometer rainfall retrieval algorithm for the Tropical Rainfall Measuring Mission (TRMM) satellite. A retrieval framework based upon optimal estimation theory is proposed wherein three parameters describing the raindrop size distribution (DSD), ice particle size distribution (PSD), and cloud water path (cLWP) are retrieved for each radar profile. The retrieved rainfall rate is found to be strongly sensitive to the a priori constraints in DSD and cLWP; thus, these parameters are tuned to match polarimetric radar estimates of rainfall near Kwajalein, Republic of the Marshall Islands. An independent validation against gauge-tuned radar rainfall estimates at Melbourne, FL shows agreement within 2%, which exceeds previous algorithms' ability to match rainfall at these two sites. The algorithm is then applied to two years of TRMM data over oceans to determine the sources of DSD variability. Three correlated sets of variables representing storm dynamics, background environment, and cloud microphysics are found to account for approximately 50% of the variability in the absolute and reflectivity-normalized median drop size. Structures of radar reflectivity are also identified and related to drop size, with these relationships being confirmed by ground-based polarimetric radar data from the North American Monsoon Experiment (NAME). Regional patterns of DSD and the sources of variability identified herein are also shown to be consistent with previous work documenting regional DSD properties. In particular, mid-latitude regions and tropical regions near land tend to have larger drops for a given reflectivity, whereas the smallest drops are found in the eastern Pacific Intertropical Convergence Zone. Due to properties of the DSD and rain water/cloud water partitioning that change with column water vapor, it is shown that increases in water vapor in a global warming scenario could lead to slight (1%) underestimates of rainfall trends by radar algorithms but larger overestimates (5%) by radiometer algorithms. Further analyses are performed to compare tropical oceanic mean rainfall rates between the combined algorithm and other sources. The combined algorithm is 15% higher than version 6 of the 2A25 radar-only algorithm and 6.6% higher than the Global Precipitation Climatology Project (GPCP) estimate for the same time-space domain. Despite being higher than these two sources, the combined total is not inconsistent with estimates of the other components of the energy budget given their uncertainties.
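
    A bare-bones sketch of the optimal estimation machinery underlying such a retrieval, minimizing the usual cost (x − xa)ᵀSa⁻¹(x − xa) + (y − F(x))ᵀSe⁻¹(y − F(x)) by Gauss-Newton; the forward model, covariances, and observations below are toy stand-ins, not the TRMM radar/radiometer operator.

```python
import numpy as np

def forward(x):
    """Toy nonlinear forward model standing in for the radar/radiometer operator."""
    return np.array([x[0] + 0.5 * x[0] ** 2 + 0.2 * x[1],
                     0.5 * x[0] + x[1] + 0.3 * x[1] ** 2,
                     0.1 * x[0] * x[1]])

def jacobian(x, eps=1e-6):
    """Finite-difference Jacobian of the forward model."""
    J = np.zeros((3, 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = eps
        J[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
    return J

x_a = np.array([1.0, 1.0])            # a priori state (e.g., DSD and cLWP parameters)
S_a = np.diag([0.5, 0.5])             # a priori covariance (assumed)
S_e = np.diag([0.01, 0.01, 0.01])     # observation-error covariance (assumed)
y = np.array([2.2, 2.4, 0.16])        # "measured" reflectivities / brightness temperatures

S_a_inv, S_e_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
x = x_a.copy()
for _ in range(10):                   # Gauss-Newton iterations on the OE cost function
    K = jacobian(x)
    A = S_a_inv + K.T @ S_e_inv @ K
    rhs = K.T @ S_e_inv @ (y - forward(x)) - S_a_inv @ (x - x_a)
    x = x + np.linalg.solve(A, rhs)
print(x)                              # retrieved state balancing prior and observations
```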

  17. Generalising better: Applying deep learning to integrate deleteriousness prediction scores for whole-exome SNV studies.

    PubMed

    Korvigo, Ilia; Afanasyev, Andrey; Romashchenko, Nikolay; Skoblov, Mikhail

    2018-01-01

    Many automatic classifiers were introduced to aid inference of phenotypical effects of uncategorised nsSNVs (nonsynonymous Single Nucleotide Variations) in theoretical and medical applications. Lately, several meta-estimators have been proposed that combine different predictors, such as PolyPhen and SIFT, to integrate more information in a single score. Although many advances have been made in feature design and machine learning algorithms used, the shortage of high-quality reference data along with the bias towards intensively studied in vitro models call for improved generalisation ability in order to further increase classification accuracy and handle records with insufficient data. Since a meta-estimator basically combines different scoring systems with highly complicated nonlinear relationships, we investigated how deep learning (supervised and unsupervised), which is particularly efficient at discovering hierarchies of features, can improve classification performance. While it is believed that one should only use deep learning for high-dimensional input spaces and other models (logistic regression, support vector machines, Bayesian classifiers, etc.) for simpler inputs, we still believe that the ability of neural networks to discover intricate structure in highly heterogeneous datasets can aid a meta-estimator. We compare the performance with various popular predictors, many of which are recommended by the American College of Medical Genetics and Genomics (ACMG), as well as available deep learning-based predictors. Thanks to hardware acceleration we were able to use a computationally expensive genetic algorithm to stochastically optimise hyper-parameters over many generations. Overfitting was hindered by noise injection and dropout, limiting coadaptation of hidden units. Although we stress that this work was not conceived as a tool comparison, but rather an exploration of the possibilities of deep learning application in ensemble scores, our results show that even relatively simple modern neural networks can significantly improve both prediction accuracy and coverage. We provide open access to our best model via the website: http://score.generesearch.ru/services/badmut/.
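
    The meta-estimator idea, feeding component deleteriousness scores into a small neural network that outputs a single pathogenicity prediction, can be sketched with scikit-learn as below; the synthetic scores, labels, and network size are assumptions and do not reflect the authors' architecture, genetic-algorithm tuning, dropout, or noise injection.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
# Two synthetic component scores standing in for predictors such as PolyPhen and SIFT.
score_a = rng.random(n)
score_b = rng.random(n)
# Assumed ground truth: a variant is "pathogenic" when a nonlinear blend of scores is high.
y = (0.6 * score_a + 0.4 * (1 - score_b) + 0.1 * rng.standard_normal(n)) > 0.6

X = np.column_stack([score_a, score_b])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

meta = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
meta.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, meta.predict_proba(X_te)[:, 1]))
```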

  18. Age-related self-overestimation of step-over ability in healthy older adults and its relationship to fall risk.

    PubMed

    Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Higuchi, Takahiro; Uchida, Hayato; Imanaka, Kuniyasu

    2013-05-07

    Older adults cannot safely step over an obstacle unless they correctly estimate their physical ability to be capable of a successful step-over action. Thus, incorrect estimation (overestimation) of the ability to step over an obstacle could result in severe accidents such as falls in older adults. We investigated whether older adults tended to overestimate step-over ability compared with young adults and whether such overestimation in stepping over obstacles was associated with falls. Three groups of adults, young-old (age, 60-74 years; n, 343), old-old (age, >74 years; n, 151), and young (age, 18-35 years; n, 71), performed our original step-over test (SOT). In the SOT, participants observed a horizontal bar at a 7-m distance and estimated the maximum height (EH) that they could step over. After estimation, they performed real SOT trials to measure the actual maximum height (AH). We also identified participants who had experienced falls in the 1-year period before the study. Thirty-nine young-old adults (11.4%) and 49 old-old adults (32.5%) failed to step over the bar at EH (overestimation), whereas all young adults succeeded (underestimation). There was a significant negative correlation between actual performance (AH) and self-estimation error (difference between EH and AH) in the older adults, indicating that older adults with lower AH (SOT ability) tended to overestimate actual ability (EH > AH) and vice versa. Furthermore, the percentage of participants who overestimated SOT ability among the fallers (28%) was almost double that among the non-fallers (16%), with the fallers showing significantly lower SOT ability than the non-fallers. Older adults appear unaware of age-related physical decline and tended to overestimate step-over ability. Both age-related decline in step-over ability and, more importantly, overestimation or decreased underestimation of this ability may raise the risk of falls.

  19. Age-related self-overestimation of step-over ability in healthy older adults and its relationship to fall risk

    PubMed Central

    2013-01-01

    Background Older adults cannot safely step over an obstacle unless they correctly estimate their physical ability to be capable of a successful step-over action. Thus, incorrect estimation (overestimation) of the ability to step over an obstacle could result in severe accidents such as falls in older adults. We investigated whether older adults tended to overestimate step-over ability compared with young adults and whether such overestimation in stepping over obstacles was associated with falls. Methods Three groups of adults, young-old (age, 60–74 years; n, 343), old-old (age, >74 years; n, 151), and young (age, 18–35 years; n, 71), performed our original step-over test (SOT). In the SOT, participants observed a horizontal bar at a 7-m distance and estimated the maximum height (EH) that they could step over. After estimation, they performed real SOT trials to measure the actual maximum height (AH). We also identified participants who had experienced falls in the 1-year period before the study. Results Thirty-nine young-old adults (11.4%) and 49 old-old adults (32.5%) failed to step over the bar at EH (overestimation), whereas all young adults succeeded (underestimation). There was a significant negative correlation between actual performance (AH) and self-estimation error (difference between EH and AH) in the older adults, indicating that older adults with lower AH (SOT ability) tended to overestimate actual ability (EH > AH) and vice versa. Furthermore, the percentage of participants who overestimated SOT ability among the fallers (28%) was almost double that among the non-fallers (16%), with the fallers showing significantly lower SOT ability than the non-fallers. Conclusions Older adults appear unaware of age-related physical decline and tended to overestimate step-over ability. Both age-related decline in step-over ability and, more importantly, overestimation or decreased underestimation of this ability may raise the risk of falls. PMID:23651772

  20. Integrating biology, field logistics, and simulations to optimize parameter estimation for imperiled species

    USGS Publications Warehouse

    Lanier, Wendy E.; Bailey, Larissa L.; Muths, Erin L.

    2016-01-01

    Conservation of imperiled species often requires knowledge of vital rates and population dynamics. However, these can be difficult to estimate for rare species and small populations. This problem is further exacerbated when individuals are not available for detection during some surveys due to limited access, delaying surveys and creating mismatches between the breeding behavior and survey timing. Here we use simulations to explore the impacts of this issue using four hypothetical boreal toad (Anaxyrus boreas boreas) populations, representing combinations of logistical access (accessible, inaccessible) and breeding behavior (synchronous, asynchronous). We examine the bias and precision of survival and breeding probability estimates generated by survey designs that differ in effort and timing for these populations. Our findings indicate that the logistical access of a site and mismatch between the breeding behavior and survey design can greatly limit the ability to yield accurate and precise estimates of survival and breeding probabilities. Simulations similar to what we have performed can help researchers determine an optimal survey design(s) for their system before initiating sampling efforts.

  1. Application of a Constant Gain Extended Kalman Filter for In-Flight Estimation of Aircraft Engine Performance Parameters

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.; Litt, Jonathan S.

    2005-01-01

    An approach based on the Constant Gain Extended Kalman Filter (CGEKF) technique is investigated for the in-flight estimation of non-measurable performance parameters of aircraft engines. Performance parameters, such as thrust and stall margins, provide crucial information for operating an aircraft engine in a safe and efficient manner, but they cannot be directly measured during flight. A technique to accurately estimate these parameters is, therefore, essential for further enhancement of engine operation. In this paper, a CGEKF is developed by combining an on-board engine model and a single Kalman gain matrix. In order to make the on-board engine model adaptive to the real engine's performance variations due to degradation or anomalies, the CGEKF is designed with the ability to adjust its performance through the adjustment of artificial parameters called tuning parameters. With this design approach, the CGEKF can maintain accurate estimation performance when it is applied to aircraft engines at off-nominal conditions. The performance of the CGEKF is evaluated in a simulation environment using numerous component degradation and fault scenarios at multiple operating conditions.
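
    A minimal constant-gain Kalman-style estimator on a toy scalar system illustrates the core idea of replacing the online covariance update with a single precomputed gain; the system coefficients and noise levels are assumed, and the engine model, tuning parameters, and thrust estimation are outside this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar "engine" state: an unmeasured performance parameter that drifts slowly.
a, c = 0.98, 1.0          # state-transition and measurement coefficients (assumed)
q, r = 0.01, 0.1          # process and measurement noise variances (assumed)

# Precompute a single steady-state Kalman gain (the "constant gain").
p = 1.0
for _ in range(1000):
    p_pred = a * p * a + q
    k = p_pred * c / (c * p_pred * c + r)
    p = (1 - k * c) * p_pred
K = k

x_true, x_hat = 0.0, 0.0
for t in range(200):
    x_true = a * x_true + np.sqrt(q) * rng.standard_normal() + 0.005  # slow degradation
    z = c * x_true + np.sqrt(r) * rng.standard_normal()               # noisy measurement
    x_pred = a * x_hat                      # on-board model prediction
    x_hat = x_pred + K * (z - c * x_pred)   # constant-gain update
print(x_true, x_hat)
```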

  2. Color constancy in 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

    Face is one of the most popular biometric modalities. However, up to now, color is rarely actively used in face recognition. Yet, it is well-known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of including known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.

  3. Estimating Above-Ground Carbon Biomass in a Newly Restored Coastal Plain Wetland Using Remote Sensing

    PubMed Central

    Riegel, Joseph B.; Bernhardt, Emily; Swenson, Jennifer

    2013-01-01

    Developing accurate but inexpensive methods for estimating above-ground carbon biomass is an important technical challenge that must be overcome before a carbon offset market can be successfully implemented in the United States. Previous studies have shown that LiDAR (light detection and ranging) is well-suited for modeling above-ground biomass in mature forests; however, there has been little previous research on the ability of LiDAR to model above-ground biomass in areas with young, aggrading vegetation. This study compared the abilities of discrete-return LiDAR and high resolution optical imagery to model above-ground carbon biomass at a young restored forested wetland site in eastern North Carolina. We found that the optical imagery model explained more of the observed variation in carbon biomass than the LiDAR model (adj-R2 values of 0.34 and 0.18 respectively; root mean squared errors of 0.14 Mg C/ha and 0.17 Mg C/ha respectively). Optical imagery was also better able to predict high and low biomass extremes than the LiDAR model. Combining the optical and LiDAR data improved upon the optical model, but only marginally (adj-R2 of 0.37). These results suggest that the ability of discrete-return LiDAR to model above-ground biomass may be rather limited in areas with young, small trees and that high spatial resolution optical imagery may be the better tool in such areas. PMID:23840837

  4. Taking the Missing Propensity Into Account When Estimating Competence Scores

    PubMed Central

    Pohl, Steffi; Carstensen, Claus H.

    2014-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) The missing propensity is unidimensional and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could, thus, pose a threat to the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396) and an adult sample (N = 7,256) from the National Educational Panel Study. Our interest was to investigate whether violations of unidimensionality and the normal distribution assumption severely affect the performance of the model-based approach in terms of differences in ability estimates. We propose a model with a competence dimension, a unidimensional missing propensity and a distributional assumption more flexible than a multivariate normal. Using this model for ability estimation results in different ability estimates compared with a model ignoring missing responses. Implications for ability estimation in large-scale assessments are discussed. PMID:29795844

  5. Effects of Resistance Training and Combined Training Program on Repeated Sprint Ability in Futsal Players.

    PubMed

    Torres-Torrelo, Julio; Rodríguez-Rosell, David; Mora-Custodio, Ricardo; Pareja-Blanco, Fernando; Yañez-García, Juan Manuel; González-Badillo, Juan José

    2018-05-16

    The purpose of this study was to compare the effects of 6 weeks of resistance training (RT) with combined RT and loaded change-of-direction (CD) exercise on muscle strength and repeated sprint ability (RSA) in futsal players. Thirty-four players (age: 23.7±4.1 years; height: 1.77±0.06 m; body mass: 74.1±8.2 kg) were randomly assigned into three groups: full squat group (SG, n=12), combined full squat and CD group (S+CDG, n=12), and control group (CG, n=10). The RT for SG consisted of full squat with low load (~45-60% 1RM) and low volume (2-3 sets and 4-6 repetitions), whereas the S+CDG performed the same RT program combined with loaded CD (2-5 sets of 10 s). Estimated one-repetition maximum (1RMest) and variables derived from the RSA test, including mean sprint time (RSAmean), best sprint time (RSAbest), percent sprint decrement (Sdec), mean ground contact time (GCTmean), and mean step length (SL), were selected as testing variables. Changes in sprint time and GCT in each sprint were also analysed. Both experimental groups showed significant (P<0.05-0.001) improvements for 1RMest, RSAbest, and first and second sprint time. In addition, S+CDG achieved significant (P<0.05-0.001) improvements in RSAmean, sprint time (from fifth to ninth sprint), and GCT (from third to eighth sprint). These results indicate that only 6 weeks of low-load and low-volume RT combined with CD in addition to routine futsal training is enough to improve RSA and strength performance simultaneously in futsal players. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Making the Most of What We Have: A Practical Application of Multidimensional Item Response Theory in Test Scoring

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Patz, Richard J.

    2005-01-01

    This article proposes a practical method that capitalizes on the availability of information from multiple tests measuring correlated abilities given in a single test administration. By simultaneously estimating different abilities with the use of a hierarchical Bayesian framework, more precise estimates for each ability dimension are obtained.…

  7. Improving the Quality of Ability Estimates through Multidimensional Scoring and Incorporation of Ancillary Variables

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2009-01-01

    For one reason or another, various sources of information, namely, ancillary variables and correlational structure of the latent abilities, which are usually available in most testing situations, are ignored in ability estimation. A general model that incorporates these sources of information is proposed in this article. The model has a general…

  8. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  9. Laccase from Pycnoporus cinnabarinus and phenolic compounds: can the efficiency of an enzyme mediator for delignifying kenaf pulp be predicted?

    PubMed

    Andreu, Glòria; Vidal, Teresa

    2013-03-01

    In this work, kenaf pulp was delignified by using laccase in combination with various redox mediators, and the efficiency of the different laccase–mediator systems was assessed in terms of the changes in pulp properties after bleaching. The oxidative ability of the individual mediators used (acetosyringone, syringaldehyde, p-coumaric acid, vanillin and acetovanillone) and the laccase–mediator systems was determined by monitoring the oxidation–reduction potential (ORP) during the process. The results confirmed the production of phenoxy radicals of variable reactivity and stressed the significant role of lignin structure in the enzymatic process. Although changes in ORP were correlated with the oxidative ability of the mediators, pulp properties as determined after the bleaching stage were also influenced by condensation and grafting reactions. As shown here, ORP measurements provide a first estimation of the delignification efficiency of a laccase–mediator system. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Estimating model errors from historical data using evolutionary modeling, with the classic Lorenz (1963) equation as a prediction model

    NASA Astrophysics Data System (ADS)

    Wan, S.; He, W.

    2016-12-01

    The inverse problem of using information from historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963) equations as the prediction model and the Lorenz equations with a periodic evolutionary function as an accurate representation of reality to generate "observational data." Drawing on the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach to estimating model errors based on EM is therefore proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in effect, it combines statistics and dynamics to a certain extent.
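
    The twin-experiment setup described above can be sketched as follows: the classic Lorenz (1963) system acts as the prediction model, and the same system with a periodic forcing term acts as "reality" generating noisy observations. The forcing amplitude and noise level are assumptions, and the evolutionary-modeling step that recovers the model error is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, forcing=0.0):
    """Lorenz-63 system; a nonzero forcing adds a periodic term absent from the model."""
    x, y, z = s
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    return [sigma * (y - x),
            x * (rho - z) - y + forcing * np.sin(0.5 * t),
            x * y - beta * z]

t_eval = np.linspace(0, 20, 2001)
truth = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t_eval, args=(2.0,))
model = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t_eval, args=(0.0,))

# "Observations" are the truth plus noise; the model error is what EM would recover.
obs = truth.y + 0.1 * np.random.default_rng(0).standard_normal(truth.y.shape)
print(np.mean((model.y - obs) ** 2))   # growing mismatch caused by the structural error
```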

  11. Association Between Connecticut's Permit-to-Purchase Handgun Law and Homicides.

    PubMed

    Rudolph, Kara E; Stuart, Elizabeth A; Vernick, Jon S; Webster, Daniel W

    2015-08-01

    We sought to estimate the effect of Connecticut's implementation of a handgun permit-to-purchase law in October 1995 on subsequent homicides. Using the synthetic control method, we compared Connecticut's homicide rates after the law's implementation to rates we would have expected had the law not been implemented. To estimate the counterfactual, we used longitudinal data from a weighted combination of comparison states identified based on the ability of their prelaw homicide trends and covariates to predict prelaw homicide trends in Connecticut. We estimated that the law was associated with a 40% reduction in Connecticut's firearm homicide rates during the first 10 years that the law was in place. By contrast, there was no evidence for a reduction in nonfirearm homicides. Consistent with prior research, this study demonstrated that Connecticut's handgun permit-to-purchase law was associated with a subsequent reduction in homicide rates. As would be expected if the law drove the reduction, the policy's effects were only evident for homicides committed with firearms.
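
    A toy sketch of the synthetic control step described above: choose nonnegative donor-state weights that sum to one so the weighted combination tracks the treated unit's pre-intervention trend, then difference post-intervention outcomes. All series below are simulated, not Connecticut data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T_pre, T_post, n_donors = 10, 10, 8

# Synthetic pre/post homicide-rate series for donor states and a treated state.
donors = 5 + rng.normal(0, 0.5, (n_donors, T_pre + T_post))
treated_pre = donors[:3, :T_pre].mean(axis=0) + rng.normal(0, 0.1, T_pre)
treated_post = donors[:3, T_pre:].mean(axis=0) - 1.0      # assumed "law effect"

def pre_fit_loss(w):
    """Squared mismatch between the treated unit and the weighted donors before the law."""
    return np.sum((treated_pre - w @ donors[:, :T_pre]) ** 2)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
bounds = [(0.0, 1.0)] * n_donors
w0 = np.full(n_donors, 1.0 / n_donors)
w = minimize(pre_fit_loss, w0, bounds=bounds, constraints=cons).x

synthetic_post = w @ donors[:, T_pre:]
effect = treated_post - synthetic_post     # estimated effect of the intervention
print(effect.mean())
```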

  12. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    NASA Astrophysics Data System (ADS)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.

    2015-10-01

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.

  13. Genomic models with genotype × environment interaction for predicting hybrid performance: an application in maize hybrids.

    PubMed

    Acosta-Pech, Rocío; Crossa, José; de Los Campos, Gustavo; Teyssèdre, Simon; Claustres, Bruno; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino

    2017-07-01

    A new genomic model that incorporates genotype × environment interaction gave increased prediction accuracy of untested hybrid response for traits such as percent starch content, percent dry matter content and silage yield of maize hybrids. The prediction of hybrid performance (HP) is very important in agricultural breeding programs. In plant breeding, multi-environment trials play an important role in the selection of important traits, such as stability across environments, grain yield and pest resistance. Environmental conditions modulate gene expression causing genotype × environment interaction (G × E), such that the estimated genetic correlations of the performance of individual lines across environments summarize the joint action of genes and environmental conditions. This article proposes a genomic statistical model that incorporates G × E for general and specific combining ability for predicting the performance of hybrids in environments. The proposed model can also be applied to any other hybrid species with distinct parental pools. In this study, we evaluated the predictive ability of two HP prediction models using a cross-validation approach applied in extensive maize hybrid data, comprising 2724 hybrids derived from 507 dent lines and 24 flint lines, which were evaluated for three traits in 58 environments over 12 years; analyses were performed for each year. On average, genomic models that include the interaction of general and specific combining ability with environments have greater predictive ability than genomic models without interaction with environments (ranging from 12 to 22%, depending on the trait). We concluded that including G × E in the prediction of untested maize hybrids increases the accuracy of genomic models.

  14. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs; thus, the deviation of the estimated θ̂s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂s, which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
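
    For illustration only, the sketch below removes bias in the MLE ability estimate with a generic parametric bootstrap rather than the paper's analytic Lord's bias-correction with iteration; the item parameters and responses are invented, and only the idea of correcting θ̂ before it is reused downstream is conveyed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
a = np.array([1.2, 0.9, 1.5, 1.0, 1.3, 0.8])    # operational item discriminations (assumed)
b = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, 1.5])  # operational item difficulties (assumed)

def p(theta):
    """2PL response probabilities for the operational items."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle(u):
    """Maximum likelihood ability estimate for a dichotomous response vector u."""
    nll = lambda th: -np.sum(u * np.log(p(th)) + (1 - u) * np.log(1 - p(th)))
    return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

u_obs = (rng.random(len(a)) < p(0.8)).astype(float)   # examinee with true theta = 0.8
theta_hat = mle(u_obs)

# Parametric bootstrap estimate of the MLE bias at theta_hat, then subtract it.
boot = np.array([mle((rng.random(len(a)) < p(theta_hat)).astype(float)) for _ in range(300)])
theta_corrected = theta_hat - (boot.mean() - theta_hat)
print(theta_hat, theta_corrected)
```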

  15. Design and Evaluation of a Training Protocol for a Photographic Method of Visual Estimation of Fruit and Vegetable Intake among Kindergarten Through Second-Grade Students.

    PubMed

    Masis, Natalie; McCaffrey, Jennifer; Johnson, Susan L; Chapman-Novakofski, Karen

    2017-04-01

    To design a replicable training protocol for visual estimation of fruit and vegetable (FV) intake of kindergarten through second-grade students through digital photography of lunch trays that results in reliable data for FV served and consumed. Protocol development through literature and researcher input was followed by 3 laboratory-based trainings of 3 trainees. Lunchroom data collection sessions were conducted at 2 elementary schools for kindergarten through second-graders. Intraclass correlation coefficients (ICCs) were used. By the third training, the ICC was substantial for the amounts of FV served and consumed (0.86 and 0.95, respectively; P < .05). The ICC was moderate for the percentage of fruits consumed (0.67; P = .06). In-school ICC estimates were all significant for amounts served at school 1 and the percentage of FV consumed at both schools. The protocol resulted in reliable estimation of combined FV served and consumed using digital photography. The ability to estimate FV intake accurately will benefit intervention development and evaluation. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  16. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    PubMed

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.

  17. Additive-dominance genetic model analyses for late-maturity alpha-amylase activity in a bread wheat factorial crossing population.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Ibrahim, Amir M H

    2015-12-01

    Elevated levels of late-maturity α-amylase activity (LMAA) can result in low falling number scores, reduced grain quality, and downgrade of wheat (Triticum aestivum L.) class. A mating population was developed by crossing parents with different levels of LMAA. The F2 and F3 hybrids and their parents were evaluated for LMAA, and data were analyzed using the R software package 'qgtools' integrated with an additive-dominance genetic model and a mixed linear model approach. Simulated results showed high testing powers for additive and additive × environment variances, and comparatively low powers for dominance and dominance × environment variances. All variance components and their proportions to the phenotypic variance for the parents and hybrids were significant except for the dominance × environment variance. The estimated narrow-sense heritability and broad-sense heritability for LMAA were 14 and 54%, respectively. Highly significant negative additive effects for parents suggest that spring wheat cultivars 'Lancer' and 'Chester' can serve as good general combiners, and that 'Kinsman' and 'Seri-82' had negative specific combining ability in some hybrids despite their own significant positive additive effects, suggesting they can be used as parents to reduce LMAA levels. Seri-82 showed a very good general combining ability effect when used as a male parent, indicating the importance of reciprocal effects. Highly significant negative dominance effects and high-parent heterosis for hybrids demonstrated that the specific hybrid combinations Chester × Kinsman, 'Lerma52' × Lancer, Lerma52 × 'LoSprout' and 'Janz' × Seri-82 could be generated to produce cultivars with significantly reduced LMAA levels.

  18. Search for gravitational waves from LIGO-Virgo science run and data interpretation

    NASA Astrophysics Data System (ADS)

    Biswas, Rahul

    A search for gravitational wave events was performed on data jointly taken during LIGO's fifth science run (S5) and Virgo's first science run (VSR1). The data taken during this period were broken down into five separate months. I shall report the analysis performed on one of these months. Apart from the search, I shall describe the work related to estimation of the event rate based on the loudest event in the search. I shall demonstrate methods used in the construction of rate intervals at the 90% confidence level and the combination of rates from multiple experiments of similar duration. To have confidence in our detection, accurate estimation of the false alarm probability (F.A.P.) associated with the event candidate is required. Current false alarm estimation techniques limit our ability to measure the F.A.P. to about 1 in 100. I shall describe a method that significantly improves this estimate using information from multiple detectors. Besides accurate knowledge of the F.A.P., detection is also dependent on our ability to distinguish real signals from those due to noise. Several tests exist which use the quality of the signal to differentiate between real and noise signals. The chi-square test is one such computationally expensive test applied in our search; we shall examine the dependence of the chi-square parameter on the signal-to-noise ratio (SNR) for a given signal, which will help us to model the chi-square parameter based on SNR. The two detectors at Hanford, WA, H1 (4 km) and H2 (2 km), share the same vacuum system and hence their noise is correlated. Our present method of background estimation cannot capture this correlation and often underestimates the background when only H1 and H2 are operating. I shall describe a novel method of time-reversed filtering to correctly estimate the background.

  19. A practical tool for maximal information coefficient analysis.

    PubMed

    Albanese, Davide; Riccadonna, Samantha; Donati, Claudio; Franceschi, Pietro

    2018-04-01

    The ability to find complex associations in large omics datasets, assess their significance, and prioritize them according to their strength can be of great help in the data exploration phase. Mutual information-based measures of association are particularly promising, in particular after the recent introduction of the TICe and MICe estimators, which combine computational efficiency with superior bias/variance properties. An open-source software implementation of these two measures providing a complete procedure to test their significance would be extremely useful. Here, we present MICtools, a comprehensive and effective pipeline that combines TICe and MICe into a multistep procedure that allows the identification of relationships of various degrees of complexity. MICtools calculates their strength, assessing statistical significance using a permutation-based strategy. The performance of the proposed approach is assessed by an extensive investigation in synthetic datasets, and an example of a potential application to a metagenomic dataset is also illustrated. We show that MICtools, combining TICe and MICe, is able to highlight associations that would not be captured by conventional strategies.

  20. Inheritance in a Diallel Crossing Experiment with Longleaf Pine

    Treesearch

    E. B. Snyder; Gene Namkoong

    1978-01-01

    Seven-year-old progeny from crosses among 13 randomly selected parent trees provided genetic information on 51 growth, form, foliage, branch, bud, and pest resistance traits. Presented are heritabilities, phenotypic and genotypic variances, covariances, General Combining Ability (GCA), Specific Combining Ability (SCA), and environmental correlations for all measured...

  1. Synthetic Constraint of Ecosystem C Models Using Radiocarbon and Net Primary Production (NPP) in New Zealand Grazing Land

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.

    2011-12-01

    Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in the ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions to isolate unequivocal pools, and the utility of additional C flux data to provide additional constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970. Three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs Non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth in different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of fast cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within stabilized and passive pools from total estimates of NPP. In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from the soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as a driver of changes in plant inputs and resulting in dynamic changes in rapid and decadal soil C pools. In sites with good time series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification of, the soil C cycle as commonly depicted in ecosystem biogeochemistry models.

  2. Improving estimates of tree mortality probability using potential growth rate

    USGS Publications Warehouse

    Das, Adrian J.; Stephenson, Nathan L.

    2015-01-01

    Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.
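
    A small sketch of the growth-index idea described above, scaling the realized diameter growth rate by the species' potential rate and entering it in a logistic mortality model; the PDGR curve, mortality mechanism, and data are simulated assumptions, not the Sierra Nevada dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
dbh = rng.uniform(10, 100, n)                 # diameter at breast height, cm (synthetic)
pdgr = 0.8 * np.exp(-0.02 * dbh) + 0.1        # assumed species potential diameter growth rate
dgr = pdgr * rng.beta(2, 2, n)                # realized growth is some fraction of potential
gi = dgr / pdgr                               # growth index: realized scaled by potential

# Assumed truth: mortality risk rises as trees fall farther below their potential growth.
logit = -1.0 - 4.0 * (gi - 0.5)
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([gi, pdgr])
model = LogisticRegression().fit(X, died)
print(model.coef_, model.intercept_)          # GI should carry a strong negative coefficient
```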

  3. A temporal basis for Weber's law in value perception.

    PubMed

    Namboodiri, Vijay Mohan K; Mihalas, Stefan; Hussain Shuler, Marshall G

    2014-01-01

    Weber's law, the observation that the smallest perceivable change in a stimulus magnitude is proportional to that magnitude, is a widely observed psychophysical phenomenon. It is also believed to underlie the perception of reward magnitudes and the passage of time. Since many ecological theories state that animals attempt to maximize reward rates, errors in the perception of reward magnitudes and delays must affect decision-making. Using an ecological theory of decision-making (TIMERR), we analyze the effect of multiple sources of noise (sensory noise, time estimation noise, and integration noise) on reward magnitude and subjective value perception. We show that the precision of reward magnitude perception is correlated with the precision of time perception and that Weber's law in time estimation can lead to Weber's law in value perception. The strength of this correlation is predicted to depend on the reward history of the animal. Subsequently, we show that sensory integration noise (either alone or in combination with time estimation noise) also leads to Weber's law in reward magnitude perception in an accumulator model, if it has balanced Poisson feedback. We then demonstrate that the noise in subjective value of a delayed reward, due to the combined effect of noise in both the perception of reward magnitude and delay, also abides by Weber's law. Thus, in our theory we prove analytically that the perception of reward magnitude, time, and subjective value all approximately obey Weber's law.

  4. Dose-finding design for multi-drug combinations

    PubMed Central

    Wages, Nolan A; Conaway, Mark R; O'Quigley, John

    2012-01-01

    Background Most of the current designs used for Phase I dose-finding trials in oncology will either involve only a single cytotoxic agent or will impose some implicit ordering among the doses. The goal of the studies is to estimate the maximum tolerated dose (MTD), the highest dose that can be administered with an acceptable level of toxicity. A key working assumption of these methods is the monotonicity of the dose–toxicity curve. Purpose Here we consider situations in which the monotonicity assumption may fail. These studies are becoming increasingly common in practice, most notably, in phase I trials that involve combinations of agents. Our focus is on studies where there exist pairs of treatment combinations for which the ordering of the probabilities of a dose-limiting toxicity cannot be known a priori. Methods We describe a new dose-finding design which can be used for multiple-drug trials and can be applied to this kind of problem. Our methods proceed by laying out all possible orderings of toxicity probabilities that are consistent with the known orderings among treatment combinations and allowing the continual reassessment method (CRM) to provide efficient estimates of the MTD within these orders. The design can be seen to simplify to the CRM when the full ordering is known. Results We study the properties of the design via simulations that provide comparisons to the Bayesian approach to partial orders (POCRM) of Wages, Conaway, and O'Quigley. The POCRM was shown to perform well when compared to other suggested methods for partial orders. Therefore, we compare our approach to it in order to assess the performance of the new design. Limitations A limitation concerns the number of possible orders. There are dose-finding studies with combinations of agents that can lead to a large number of possible orders. In this case, it may not be feasible to work with all possible orders. Conclusions The proposed design demonstrates the ability to effectively estimate MTD combinations in partially ordered dose-finding studies. Because it relaxes the monotonicity assumption, it can be considered a multivariate generalization of the CRM. Hence, it can serve as a link between single- and multiple-agent dose-finding trials. PMID:21652689
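
    Both the POCRM and the proposed design reuse the one-parameter CRM within each candidate ordering; a minimal sketch of that building block (power working model, normal prior, posterior mean by numerical integration) is given below with an illustrative skeleton and illustrative accrual data.

```python
import numpy as np

skeleton = np.array([0.05, 0.12, 0.20, 0.30, 0.40])  # assumed working model for one ordering
target = 0.25                                        # target DLT probability
n_tox = np.array([0, 1, 1, 0, 0])                    # illustrative toxicity counts per dose
n_pat = np.array([3, 3, 6, 0, 0])                    # illustrative patient counts per dose

# One-parameter power model p_d(a) = skeleton_d ** exp(a), with a normal prior on a.
a_grid = np.linspace(-3, 3, 601)
da = a_grid[1] - a_grid[0]
prior = np.exp(-a_grid ** 2 / (2 * 1.34))            # unnormalized N(0, 1.34) prior

def loglik(a):
    pr = skeleton ** np.exp(a)
    return np.sum(n_tox * np.log(pr) + (n_pat - n_tox) * np.log(1 - pr))

post = prior * np.exp(np.array([loglik(a) for a in a_grid]))
post /= post.sum() * da                              # normalize the posterior density
a_hat = np.sum(a_grid * post) * da                   # posterior mean of the model parameter

p_hat = skeleton ** np.exp(a_hat)                    # updated DLT probability estimates
next_dose = int(np.argmin(np.abs(p_hat - target)))   # dose with estimated rate closest to target
print(p_hat, next_dose)
```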

  5. I've Fallen and I Can't Get up: Can High-Ability Students Recover from Early Mistakes in CAT?

    ERIC Educational Resources Information Center

    Rulison, Kelly L.; Loken, Eric

    2009-01-01

    A difficult result to interpret in Computerized Adaptive Tests (CATs) occurs when an ability estimate initially drops and then ascends continuously until the test ends, suggesting that the true ability may be higher than implied by the final estimate. This study explains why this asymmetry occurs and shows that early mistakes by high-ability…

  6. Combination of Competitive Quantitative PCR and Constant-Denaturant Capillary Electrophoresis for High-Resolution Detection and Enumeration of Microbial Cells

    PubMed Central

    Lim, Eelin L.; Tomita, Aoy V.; Thilly, William G.; Polz, Martin F.

    2001-01-01

    A novel quantitative PCR (QPCR) approach, which combines competitive PCR with constant-denaturant capillary electrophoresis (CDCE), was adapted for enumerating microbial cells in environmental samples using the marine nanoflagellate Cafeteria roenbergensis as a model organism. Competitive PCR has been used successfully for quantification of DNA in environmental samples. However, this technique is labor intensive, and its accuracy is dependent on an internal competitor, which must possess the same amplification efficiency as the target yet can be easily discriminated from the target DNA. The use of CDCE circumvented these problems, as its high resolution permitted the use of an internal competitor which differed from the target DNA fragment by a single base and thus ensured that both sequences could be amplified with equal efficiency. The sensitivity of CDCE also enabled specific and precise detection of sequences over a broad range of concentrations. The combined competitive QPCR and CDCE approach accurately enumerated C. roenbergensis cells in eutrophic, coastal seawater at abundances ranging from approximately 10 to 10⁴ cells ml⁻¹. The QPCR cell estimates were confirmed by fluorescent in situ hybridization counts, but estimates of samples with <50 cells ml⁻¹ by QPCR were less variable. This novel approach extends the usefulness of competitive QPCR by demonstrating its ability to reliably enumerate microorganisms at a range of environmentally relevant cell concentrations in complex aquatic samples. PMID:11525983

  7. An auxiliary adaptive Gaussian mixture filter applied to flowrate allocation using real data from a multiphase producer

    NASA Astrophysics Data System (ADS)

    Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal

    2017-05-01

    Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and their accuracy. In the application we show here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations to the fluid flowrates (oil, water and gas) that enter two production branches in a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges, by introducing an auxiliary step, where the particle weights are recomputed (second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from large computational time when the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior. This makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter which combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM), and demonstrate improved estimation (compared to ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.
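
    The ASIR and Gaussian mixture machinery builds on the basic particle filter; the sketch below shows only that bootstrap building block on a toy scalar "flowrate" with a step change, so the auxiliary weighting step and the mixture update of the ASGM are not represented, and all dynamics and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_particles = 100, 500

# Toy scalar "flowrate" with a step change at t = 50 (e.g., a choke adjustment).
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = x_true[t - 1] + (2.0 if t == 50 else 0.0) + 0.05 * rng.standard_normal()
obs = x_true + 0.5 * rng.standard_normal(T)           # noisy downhole measurements

particles = rng.normal(0.0, 1.0, n_particles)
estimates = []
for t in range(T):
    particles = particles + 0.3 * rng.standard_normal(n_particles)    # propagate the state
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)               # weight by likelihood
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]   # resample
    estimates.append(particles.mean())
print(x_true[60], estimates[60])   # the estimate lags briefly after the step, then recovers
```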

  8. Genomic Prediction of Single Crosses in the Early Stages of a Maize Hybrid Breeding Pipeline.

    PubMed

    Kadam, Dnyaneshwar C; Potts, Sarah M; Bohn, Martin O; Lipka, Alexander E; Lorenz, Aaron J

    2016-09-19

    Prediction of single-cross performance has been a major goal of plant breeders since the beginning of hybrid breeding. Recently, genomic prediction has been shown to be a promising approach, but only a limited number of studies have examined the accuracy of predicting single-cross performance. Moreover, no studies have examined the potential of predicting single crosses among random inbreds derived from a series of biparental families, which resembles the structure of germplasm comprising the initial stages of a hybrid maize breeding pipeline. The main objectives of this study were to evaluate the potential of genomic prediction for identifying superior single crosses early in the hybrid breeding pipeline and to optimize its application. To accomplish these objectives, we designed and analyzed a novel population of single crosses representing the Iowa Stiff Stalk Synthetic/Non-Stiff Stalk heterotic pattern commonly used in the development of North American commercial maize hybrids. The performance of single crosses was predicted using parental combining ability and covariance among single crosses. Prediction accuracies were estimated using cross-validation and ranged from 0.28 to 0.77 for grain yield, 0.53 to 0.91 for plant height, and 0.49 to 0.94 for staygreen, depending on the number of tested parents of the single cross and genomic prediction method used. The genomic estimated general and specific combining abilities showed an advantage over genomic covariances among single crosses when one or both parents of the single cross were untested. Overall, our results suggest that genomic prediction of single crosses in the early stages of a hybrid breeding pipeline holds great potential to re-design hybrid breeding and increase its efficiency. Copyright © 2016 Author et al.
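
    The combining-ability bookkeeping underlying this kind of prediction can be written in a few lines: a single cross i × j is predicted as the population mean plus the parents' general combining abilities plus, when the pair has been tested, a specific combining ability term. A minimal sketch with hypothetical GCA/SCA values and parent names (not the genomic BLUP machinery used in the study):

```python
# Hypothetical combining-ability estimates (units of grain-yield deviation).
gca = {"B73": 0.8, "Mo17": 0.5, "PH207": -0.2}   # general combining ability
sca = {("B73", "Mo17"): 0.3}                      # specific combining ability

def predict_cross(p1, p2, mu=10.0):
    """Predicted single-cross value = mean + GCA_i + GCA_j + SCA_ij.
    SCA defaults to 0 when the pair has not been tested together."""
    key = tuple(sorted((p1, p2)))
    return mu + gca[p1] + gca[p2] + sca.get(key, 0.0)

print(predict_cross("B73", "Mo17"))    # both parents tested together: 11.6
print(predict_cross("B73", "PH207"))   # untested pair, SCA assumed 0: 10.6
```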

  9. Comparison of Marker-Based Genomic Estimated Breeding Values and Phenotypic Evaluation for Selection of Bacterial Spot Resistance in Tomato.

    PubMed

    Liabeuf, Debora; Sim, Sung-Chur; Francis, David M

    2018-03-01

    Bacterial spot affects tomato crops (Solanum lycopersicum) grown under humid conditions. Major genes and quantitative trait loci (QTL) for resistance have been described, and multiple loci from diverse sources need to be combined to improve disease control. We investigated genomic selection (GS) prediction models for resistance to Xanthomonas euvesicatoria and experimentally evaluated the accuracy of these models. The training population consisted of 109 families combining resistance from four sources and directionally selected from a population of 1,100 individuals. The families were evaluated on a plot basis in replicated inoculated trials and genotyped with single nucleotide polymorphisms (SNP). We compared the prediction ability of models developed with 14 to 387 SNP. Genomic estimated breeding values (GEBV) were derived using Bayesian least absolute shrinkage and selection operator regression (BL) and ridge regression (RR). Evaluations were based on leave-one-out cross validation and on empirical observations in replicated field trials using the next generation of inbred progeny and a hybrid population resulting from selections in the training population. Prediction ability was evaluated based on correlations between GEBV and phenotypes (r_g), percentage of coselection between genomic and phenotypic selection, and relative efficiency of selection (r_g/r_p). Results were similar with BL and RR models. Models using only markers previously identified as significantly associated with resistance but weighted based on GEBV and mixed models with markers associated with resistance treated as fixed effects and markers distributed in the genome treated as random effects offered greater accuracy and a high percentage of coselection. The accuracy of these models to predict the performance of progeny and hybrids exceeded the accuracy of phenotypic selection.
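
    A stripped-down version of the ridge regression (RR) workflow described above can be sketched as follows: estimate marker effects on a training set, compute genomic estimated breeding values (GEBV) for held-out lines, and score prediction ability as the correlation r_g between GEBV and phenotype under leave-one-out cross-validation. The marker matrix, phenotypes, and shrinkage parameter are simulated assumptions, not the study's SNP data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated training families: 100 lines x 50 SNP markers coded -1/0/1.
n, p = 100, 50
X = rng.integers(-1, 2, size=(n, p)).astype(float)
beta_true = rng.normal(0, 0.3, size=p)
y = X @ beta_true + rng.normal(0, 1.0, size=n)   # phenotype = genetics + noise

def ridge_gebv(X_train, y_train, X_new, lam=10.0):
    """Ridge-regression marker effects, then GEBV for new genotypes."""
    k = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(k),
                           X_train.T @ (y_train - y_train.mean()))
    return y_train.mean() + X_new @ beta

# Leave-one-out cross-validation.
gebv = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    gebv[i] = ridge_gebv(X[mask], y[mask], X[i:i + 1])[0]

r_g = np.corrcoef(gebv, y)[0, 1]
print(f"LOO prediction ability r_g = {r_g:.2f}")
```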

  10. Reaching the Hard-to-Reach: A Probability Sampling Method for Assessing Prevalence of Driving under the Influence after Drinking in Alcohol Outlets

    PubMed Central

    De Boni, Raquel; do Nascimento Silva, Pedro Luis; Bastos, Francisco Inácio; Pechansky, Flavio; de Vasconcellos, Mauricio Teixeira Leite

    2012-01-01

    Drinking alcoholic beverages in places such as bars and clubs may be associated with harmful consequences such as violence and impaired driving. However, methods for obtaining probabilistic samples of drivers who drink at these places remain a challenge – since there is no a priori information on this mobile population – and must be continually improved. This paper describes the procedures adopted in the selection of a population-based sample of drivers who drank at alcohol selling outlets in Porto Alegre, Brazil, which we used to estimate the prevalence of intention to drive under the influence of alcohol. The sampling strategy comprises a stratified three-stage cluster sampling: 1) census enumeration areas (CEA) were stratified by alcohol outlets (AO) density and sampled with probability proportional to the number of AOs in each CEA; 2) combinations of outlets and shifts (COS) were stratified by prevalence of alcohol-related traffic crashes and sampled with probability proportional to their squared duration in hours; and, 3) drivers who drank at the selected COS were stratified by their intention to drive and sampled using inverse sampling. Sample weights were calibrated using a post-stratification estimator. 3,118 individuals were approached and 683 drivers interviewed, leading to an estimate that 56.3% (SE = 3.5%) of the drivers intended to drive after drinking in less than one hour after the interview. Prevalence was also estimated by sex and broad age groups. The combined use of stratification and inverse sampling enabled a good trade-off between resource and time allocation, while preserving the ability to generalize the findings. The current strategy can be viewed as a step forward in the efforts to improve surveys and estimation for hard-to-reach, mobile populations. PMID:22514620
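
    The first sampling stage described above, drawing census enumeration areas with probability proportional to their number of alcohol outlets, is commonly implemented with systematic PPS sampling. A small sketch with made-up outlet counts (the systematic variant is an assumption; the paper does not state which PPS algorithm was used):

```python
import numpy as np

rng = np.random.default_rng(2)

def systematic_pps(sizes, n_sample):
    """Systematic probability-proportional-to-size sampling.

    Units are laid out on a line with segment lengths equal to their size
    measure; a random start plus a fixed step picks n_sample units, so each
    unit's inclusion probability is proportional to its size (assuming no
    unit's size exceeds the sampling interval)."""
    sizes = np.asarray(sizes, dtype=float)
    cum = np.cumsum(sizes)
    step = cum[-1] / n_sample
    start = rng.uniform(0, step)
    points = start + step * np.arange(n_sample)
    return np.searchsorted(cum, points)

# Hypothetical census enumeration areas with their alcohol-outlet counts.
outlet_counts = [12, 3, 7, 25, 1, 9, 14, 4, 6, 18]
selected = systematic_pps(outlet_counts, n_sample=4)
print("selected CEAs:", selected, "with outlet counts",
      [outlet_counts[i] for i in selected])
```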

  11. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
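
    For a short test with known 2PL item parameters, the two estimators being compared can be evaluated on a grid of ability values: the plain maximum likelihood estimate, and a weighted likelihood estimate that maximizes the likelihood times the square root of the test information (for the 2PL this coincides with Warm's bias-reducing weighted likelihood estimator). The item parameters and response pattern below are made up for illustration:

```python
import numpy as np

# Known 2PL item parameters (a = discrimination, b = difficulty); made up.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
responses = np.array([1, 1, 1, 0, 0])          # one examinee's 0/1 answers

theta = np.linspace(-4, 4, 2001)               # ability grid

# Item response probabilities P(theta) for every item at every grid point.
P = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
logL = (responses[:, None] * np.log(P)
        + (1 - responses[:, None]) * np.log(1 - P)).sum(axis=0)

info = (a[:, None] ** 2 * P * (1 - P)).sum(axis=0)   # test information I(theta)

theta_mle = theta[np.argmax(logL)]                       # maximum likelihood
theta_wle = theta[np.argmax(logL + 0.5 * np.log(info))]  # weighted likelihood

print(f"MLE ability: {theta_mle:.2f}   WLE ability: {theta_wle:.2f}")
```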

  12. Estimation of Two-Parameter Logistic Item Response Curves. Research Report 83-1. Mathematical Sciences Technical Report No. 130.

    ERIC Educational Resources Information Center

    Tsutakawa, Robert K.

    This paper presents a method for estimating certain characteristics of test items which are designed to measure ability, or knowledge, in a particular area. Under the assumption that ability parameters are sampled from a normal distribution, the EM algorithm is used to derive maximum likelihood estimates of item parameters of the two-parameter…

  13. Improving phylogenetic analyses by incorporating additional information from genetic sequence databases.

    PubMed

    Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A

    2009-10-01

    Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.

  14. Estimating structural collapse fragility of generic building typologies using expert judgment

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Perkins, David M.; Aspinall, Willy P.; Kiremidjian, Anne S.

    2014-01-01

    The structured expert elicitation process proposed by Cooke (1991), hereafter referred to as Cooke's approach, is applied for the first time in the realm of structural collapse-fragility assessment for selected generic construction types. Cooke's approach works on the principle of objective calibration scoring of judgments coupled with hypothesis testing used in classical statistics. The performance-based scoring system reflects the combined measure of an expert's informativeness about variables in the problem area under consideration, and their ability to enumerate, in a statistically accurate way through expressing their true beliefs, the quantitative uncertainties associated with their assessments. We summarize the findings of an expert elicitation workshop in which a dozen earthquake-engineering professionals from around the world were engaged to estimate seismic collapse fragility for generic construction types. Development of seismic collapse fragility functions was accomplished by combining their judgments using weights derived from Cooke's method. Although substantial effort was needed to elicit the inputs of these experts successfully, we anticipate that the elicitation strategy described here will gain momentum in a wide variety of earthquake seismology and engineering hazard and risk analyses where physical model and data limitations are inherent and objective professional judgment can fill gaps.

  15. Estimating structural collapse fragility of generic building typologies using expert judgment

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Wald, D.J.; Perkins, D.; Aspinall, W.P.; Kiremidjian, Anne S.; Deodatis, George; Ellingwood, Bruce R.; Frangopol, Dan M.

    2014-01-01

    The structured expert elicitation process proposed by Cooke (1991), hereafter referred to as Cooke’s approach, is applied for the first time in the realm of structural collapse-fragility assessment for selected generic construction types. Cooke’s approach works on the principle of objective calibration scoring of judgments coupled with hypothesis testing used in classical statistics. The performance-based scoring system reflects the combined measure of an expert’s informativeness about variables in the problem area under consideration, and their ability to enumerate, in a statistically accurate way through expressing their true beliefs, the quantitative uncertainties associated with their assessments. We summarize the findings of an expert elicitation workshop in which a dozen earthquake-engineering professionals from around the world were engaged to estimate seismic collapse fragility for generic construction types. Development of seismic collapse fragility functions was accomplished by combining their judgments using weights derived from Cooke’s method. Although substantial effort was needed to elicit the inputs of these experts successfully, we anticipate that the elicitation strategy described here will gain momentum in a wide variety of earthquake seismology and engineering hazard and risk analyses where physical model and data limitations are inherent and objective professional judgment can fill gaps.
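
    Cooke's classical model scores each expert on calibration and informativeness using seed questions and then pools the experts' distributions with those scores as weights. The sketch below shows only the final pooling step, a performance-weighted linear combination of expert CDFs built from elicited 5/50/95 percent quantiles; the weights, quantiles, and piecewise-linear CDF construction are illustrative assumptions rather than the workshop's actual data or Cooke's full scoring rules.

```python
import numpy as np

# Hypothetical expert judgments: 5th, 50th, and 95th percentiles of the
# spectral acceleration (g) causing collapse of one construction type.
quantiles = {
    "expert_A": (0.3, 0.8, 1.6),
    "expert_B": (0.2, 0.6, 1.2),
    "expert_C": (0.4, 1.0, 2.0),
}
# Hypothetical performance weights (calibration x information, normalized).
weights = {"expert_A": 0.55, "expert_B": 0.10, "expert_C": 0.35}

def expert_cdf(x, q05, q50, q95):
    """Piecewise-linear CDF through the three elicited quantiles,
    with linear tails below the 5th and above the 95th percentile."""
    xs = [q05 - (q50 - q05), q05, q50, q95, q95 + (q95 - q50)]
    ps = [0.0, 0.05, 0.50, 0.95, 1.0]
    return np.interp(x, xs, ps)

def pooled_cdf(x):
    """Performance-weighted linear opinion pool of the expert CDFs."""
    return sum(w * expert_cdf(x, *quantiles[e]) for e, w in weights.items())

grid = np.linspace(0.0, 2.5, 6)
print([round(float(pooled_cdf(x)), 3) for x in grid])
```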

  16. Exploring pima and upland cross-combinations to identify fusarium oxysporum f. sp. vasinfectum race 4 resistant cottons by combining ability of superior cultivars

    USDA-ARS?s Scientific Manuscript database

    A cotton breeding program strives to identify the best performance cultivars or breeding lines which can be used as parents in crosses. Multi-cross combinations provide the means of performance of each parent and assess the combining ability or productivity of parents through the hybridization proce...

  17. A cross-sectional study of mathematics achievement, estimation skills, and academic self-perception in students of varying ability.

    PubMed

    Montague, Marjorie; van Garderen, Delinda

    2003-01-01

    This study investigated students' mathematics achievement, estimation ability, use of estimation strategies, and academic self-perception. Students with learning disabilities (LD), average achievers, and intellectually gifted students (N = 135) in fourth, sixth, and eighth grade participated in the study. They were assessed to determine their mathematics achievement, ability to estimate discrete quantities, knowledge and use of estimation strategies, and perception of academic competence. The results indicated that the students with LD performed significantly lower than their peers on the math achievement measures, as expected, but viewed themselves to be as academically competent as the average achievers did. Students with LD and average achievers scored significantly lower than gifted students on all estimation measures, but they differed significantly from one another only on the estimation strategy use measure. Interestingly, even gifted students did not seem to have a well-developed understanding of estimation and, like the other students, did poorly on the first estimation measure. The accuracy of their estimates seemed to improve, however, when students were asked open-ended questions about the strategies they used to arrive at their estimates. Although students with LD did not differ from average achievers in their estimation accuracy, they used significantly fewer effective estimation strategies. Implications for instruction are discussed.

  18. Comparison of two regression-based approaches for determining nutrient and sediment fluxes and trends in the Chesapeake Bay watershed

    USGS Publications Warehouse

    Moyer, Douglas; Hirsch, Robert M.; Hyer, Kenneth

    2012-01-01

    Nutrient and sediment fluxes and changes in fluxes over time are key indicators that water resource managers can use to assess the progress being made in improving the structure and function of the Chesapeake Bay ecosystem. The U.S. Geological Survey collects annual nutrient (nitrogen and phosphorus) and sediment flux data and computes trends that describe the extent to which water-quality conditions are changing within the major Chesapeake Bay tributaries. Two regression-based approaches were compared for estimating annual nutrient and sediment fluxes and for characterizing how these annual fluxes are changing over time. The two regression models compared are the traditionally used ESTIMATOR and the newly developed Weighted Regression on Time, Discharge, and Season (WRTDS). The model comparison focused on answering three questions: (1) What are the differences between the functional form and construction of each model? (2) Which model produces estimates of flux with the greatest accuracy and least amount of bias? (3) How different would the historical estimates of annual flux be if WRTDS had been used instead of ESTIMATOR? One additional point of comparison between the two models is how each model determines trends in annual flux once the year-to-year variations in discharge have been determined. All comparisons were made using total nitrogen, nitrate, total phosphorus, orthophosphorus, and suspended-sediment concentration data collected at the nine U.S. Geological Survey River Input Monitoring stations located on the Susquehanna, Potomac, James, Rappahannock, Appomattox, Pamunkey, Mattaponi, Patuxent, and Choptank Rivers in the Chesapeake Bay watershed. Two model characteristics that uniquely distinguish ESTIMATOR and WRTDS are the fundamental model form and the determination of model coefficients. ESTIMATOR and WRTDS both predict water-quality constituent concentration by developing a linear relation between the natural logarithm of observed constituent concentration and three explanatory variables—the natural log of discharge, time, and season. ESTIMATOR uses two additional explanatory variables—the square of the log of discharge and time-squared. Both models determine coefficients for variables for a series of estimation windows. ESTIMATOR establishes variable coefficients for a series of 9-year moving windows; all observed constituent concentration data within the 9-year window are used to establish each coefficient. Conversely, WRTDS establishes variable coefficients for each combination of discharge and time using only observed concentration data that are similar in time, season, and discharge to the day being estimated. As a result of these distinguishing characteristics, ESTIMATOR reproduces concentration-discharge relations that are closely approximated by a quadratic or linear function with respect to both the log of discharge and time. Conversely, the linear model form of WRTDS coupled with extensive model windowing for each combination of discharge and time allows WRTDS to reproduce observed concentration-discharge relations that are more sinuous in form. Another distinction between ESTIMATOR and WRTDS is the reporting of uncertainty associated with the model estimates of flux and trend. ESTIMATOR quantifies the standard error of prediction associated with the determination of flux and trends. 
The standard error of prediction enables the determination of the 95-percent confidence intervals for flux and trend as well as the ability to test whether the reported trend is significantly different from zero (where zero equals no trend). Conversely, WRTDS is unable to propagate error through the many (over 5,000) models for unique combinations of flow and time to determine a total standard error. As a result, WRTDS flux estimates are not reported with confidence intervals and a level of significance is not determined for flow-normalized fluxes. The differences between ESTIMATOR and WRTDS, with regard to model form and determination of model coefficients, have an influence on the determination of nutrient and sediment fluxes and associated changes in flux over time as a result of management activities. The comparison between the model estimates of flux and trend was made for combinations of five water-quality constituents at nine River Input Monitoring stations. The major findings with regard to nutrient and sediment fluxes are as follows: (1) WRTDS produced estimates of flux for all combinations that were more accurate, based on reduction in root mean squared error, than flux estimates from ESTIMATOR; (2) for 67 percent of the combinations, WRTDS and ESTIMATOR both produced estimates of flux that were minimally biased compared to observed fluxes (flux bias = tendency to over- or underpredict flux observations); however, for 33 percent of the combinations, WRTDS produced estimates of flux that were considerably less biased (by at least 10 percent) than flux estimates from ESTIMATOR; (3) the average percent difference in annual fluxes generated by ESTIMATOR and WRTDS was less than 10 percent at 80 percent of the combinations; and (4) the greatest differences related to flux bias and annual fluxes all occurred for combinations where the pattern in the observed concentration-discharge relation was sinuous (two points of inflection) rather than linear or quadratic (zero or one point of inflection). The major findings with regard to trends are as follows: (1) both models produce water-quality trends that have factored in the year-to-year variations in flow; (2) trends in water-quality condition are represented by ESTIMATOR as a trend in flow-adjusted concentration and by WRTDS as a flow-normalized flux; (3) for 67 percent of the combinations with trend estimates, the WRTDS trends in flow-normalized flux are in the same direction and of similar magnitude as the ESTIMATOR trends in flow-adjusted concentration, and at the remaining 33 percent the differences in trend magnitude and direction are related to fundamental differences between concentration and flux; and (4) the majority (85 percent) of the total nitrogen, nitrate, and orthophosphorus combinations exhibited long-term (1985 to 2010) trends in WRTDS flow-normalized flux that indicate improvement or reduction in associated flux and the majority (83 percent) of the total phosphorus (from 1985 to 2010) and suspended sediment (from 2001 to 2010) combinations exhibited trends in WRTDS flow-normalized flux that indicate degradation or increases in the flux delivered.
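
    The ESTIMATOR model form summarized above (the log of concentration regressed on the log of discharge, its square, decimal time, time squared, and seasonal sine and cosine terms) can be fit by ordinary least squares. The sketch below uses synthetic data and omits the retransformation-bias correction and the 9-year moving windows of the operational USGS implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily record: log discharge lnQ, decimal time T (years), log conc.
n = 2000
T = 2000 + 10 * rng.random(n)
lnQ = rng.normal(3.0, 0.8, size=n)
lnC_true = (0.5 - 0.4 * lnQ + 0.02 * lnQ**2 - 0.03 * (T - 2005)
            + 0.15 * np.sin(2 * np.pi * T) + 0.10 * np.cos(2 * np.pi * T))
lnC = lnC_true + rng.normal(0, 0.3, size=n)

# ESTIMATOR-style design matrix: intercept, lnQ, lnQ^2, time, time^2, season.
t = T - T.mean()
X = np.column_stack([np.ones(n), lnQ, lnQ**2, t, t**2,
                     np.sin(2 * np.pi * T), np.cos(2 * np.pi * T)])
coef, *_ = np.linalg.lstsq(X, lnC, rcond=None)

print("fitted coefficients:", np.round(coef, 3))
# Predicted concentration for one day (note: a retransformation/bias
# correction is needed before converting ln-scale predictions to fluxes).
print("predicted C:", float(np.exp(X[0] @ coef)))
```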

  19. A Framework for Measuring Low-Value Care.

    PubMed

    Miller, George; Rhyan, Corwin; Beaudin-Seiler, Beth; Hughes-Cromwick, Paul

    2018-04-01

    It has been estimated that more than 30% of health care spending in the United States is wasteful, and that low-value care, which drives up costs unnecessarily while increasing patient risk, is a significant component of wasteful spending. We address the need for the ability to measure the magnitude of low-value care nationwide, to identify the clinical services that are the greatest contributors to waste, and to track progress toward eliminating low-value use of these services. Such an ability could provide valuable input to the efforts of policymakers and health systems to improve efficiency. We reviewed existing methods that could contribute to measuring low-value care and developed an integrated framework that combines multiple methods to comprehensively estimate and track the magnitude and principal sources of clinical waste. We also identified a process and needed research for implementing the framework. A comprehensive methodology for measuring and tracking low-value care in the United States would provide an important contribution toward reducing waste. Implementation of the framework described in this article appears feasible, and the proposed research program will allow moving incrementally toward full implementation while providing a near-term capability for measuring low-value care that can be enhanced over time. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements

    NASA Astrophysics Data System (ADS)

    Jakub, Thomas D.

    Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. The shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing, and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution, and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal-to-noise environment. These ship tracks were only minutes long, compared to the 12- to 24-hour ship tracks normally used. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurement techniques are not available.
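
    The core ship-drift calculation is a vector difference: the current is the displacement observed between two AIS position reports minus the displacement dead-reckoned from the reported heading and speed through the water, divided by the elapsed time. A minimal sketch using a flat-earth approximation over a short baseline; the positions, heading, and speed are invented:

```python
import math

def drift_current(lat1, lon1, lat2, lon2, dt_s, heading_deg, stw_ms):
    """Estimate the ocean current (east, north, in m/s) from two AIS fixes.

    Uses a local flat-earth approximation (reasonable for tracks of a few km):
    current = (observed displacement - dead-reckoned displacement) / time.
    """
    r_earth = 6371000.0
    lat0 = math.radians((lat1 + lat2) / 2)
    dn = math.radians(lat2 - lat1) * r_earth                      # north (m)
    de = math.radians(lon2 - lon1) * r_earth * math.cos(lat0)     # east (m)
    # Dead-reckoned displacement from heading and speed through the water.
    hdg = math.radians(heading_deg)
    de_dr = stw_ms * math.sin(hdg) * dt_s
    dn_dr = stw_ms * math.cos(hdg) * dt_s
    return (de - de_dr) / dt_s, (dn - dn_dr) / dt_s

# Hypothetical 5-minute AIS segment in San Francisco Bay.
u, v = drift_current(37.8000, -122.4000, 37.8050, -122.3950,
                     dt_s=300, heading_deg=38.0, stw_ms=2.1)
print(f"current east {u:.2f} m/s, north {v:.2f} m/s")
```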

  1. Daily pan evaporation modelling using a neuro-fuzzy computing technique

    NASA Astrophysics Data System (ADS)

    Kişi, Özgür

    2006-10-01

    Evaporation, as a major component of the hydrologic cycle, is important in water resources development and management. This paper investigates the ability of the neuro-fuzzy (NF) technique to improve the accuracy of daily evaporation estimation. Five different NF models, comprising various combinations of daily climatic variables (air temperature, solar radiation, wind speed, pressure and humidity), are developed to evaluate the degree of effect of each of these variables on evaporation. A comparison is made between the estimates provided by the NF model and the artificial neural networks (ANNs). The Stephens-Stewart (SS) method is also considered for the comparison. Various statistical measures are used to evaluate the performance of the models. Based on the comparisons, it was found that the NF computing technique could be employed successfully in modelling the evaporation process from the available climatic data. The ANN was also found to perform better than the SS method.

  2. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    PubMed

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software providing a fast automatic detection scheme for circular patterns (spots), combined with the precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. http://bigwww.epfl.ch/algorithms/spotcaliper/ zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Comparing Different Approaches of Bias Correction for Ability Estimation in IRT Models. Research Report. ETS RR-08-13

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; Zhang, Jinming

    2008-01-01

    The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…

  4. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.

  5. Smart Fluids in Hydrology: Use of Non-Newtonian Fluids for Pore Structure Characterization

    NASA Astrophysics Data System (ADS)

    Abou Najm, M. R.; Atallah, N. M.; Selker, J. S.; Roques, C.; Stewart, R. D.; Rupp, D. E.; Saad, G.; El-Fadel, M.

    2015-12-01

    Classic porous media characterization relies on typical infiltration experiments with Newtonian fluids (i.e., water) to estimate hydraulic conductivity. However, such experiments are generally not able to discern important characteristics such as pore size distribution or pore structure. We show that introducing non-Newtonian fluids provides additional unique flow signatures that can be used for improved pore structure characterization while still representing the functional hydraulic behavior of real porous media. We present a new method for experimentally estimating the pore structure of porous media using a combination of Newtonian and non-Newtonian fluids. The proposed method transforms results of N infiltration experiments using water and N-1 non-Newtonian solutions into a system of equations that yields N representative radii (R_i) and their corresponding percent contribution to flow (w_i). This method allows for estimating the soil retention curve using only saturated experiments. Experimental and numerical validation comparing the functional flow behavior of different soils to their modeled flow with N representative radii revealed the ability of the proposed method to represent the water retention and infiltration behavior of real soils. The experimental results showed the ability of such fluids to outsmart Newtonian fluids and infer pore size distribution and unsaturated behavior using simple saturated experiments. Specifically, we demonstrate using synthetic porous media that the use of different non-Newtonian fluids enables the definition of the radii and corresponding percent contribution to flow of multiple representative pores, thus improving the ability of pore-scale models to mimic the functional behavior of real porous media in terms of flow and porosity. The results advance the knowledge towards conceptualizing the complexity of porous media and can potentially impact applications in fields like irrigation efficiencies, vadose zone hydrology, soil-root-plant continuum, carbon sequestration into geologic formations, soil remediation, petroleum reservoir engineering, oil exploration and groundwater modeling.

  6. Combining four Monte Carlo estimators for radiation momentum deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbatsch, Todd J; Hykes, Joshua M

    2010-11-18

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the combined FOM is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10-20% greater than any of the solo estimators' FOM. In addition, the numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions.
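
    One simple way to combine several unbiased estimators of the same tally is an inverse-variance weighted average, with the Monte Carlo figure of merit FOM = 1/(R²t) used to compare them (R is the relative error and t the run time). The sketch below uses invented tallies and ignores correlations between estimators computed from the same histories, which a production combination would need to account for, so it illustrates the bookkeeping rather than the report's exact estimator:

```python
import numpy as np

# Hypothetical tallies from one run: mean, relative error R, and run time (s)
# for the four momentum-deposition estimators named in the abstract.
estimators = {
    "analog":       (1.02, 0.060, 10.0),
    "absorption":   (0.99, 0.045, 10.0),
    "collision":    (1.01, 0.030, 10.0),
    "track_length": (1.00, 0.012, 10.0),
}

def fom(rel_err, time_s):
    """Monte Carlo figure of merit: FOM = 1 / (R^2 * t)."""
    return 1.0 / (rel_err**2 * time_s)

# Inverse-variance weights (ignoring correlations between the estimators).
means = np.array([m for m, r, t in estimators.values()])
variances = np.array([(m * r)**2 for m, r, t in estimators.values()])
w = (1.0 / variances) / (1.0 / variances).sum()
combined_mean = float(w @ means)
combined_var = 1.0 / (1.0 / variances).sum()
combined_rel_err = combined_var**0.5 / combined_mean

for name, (m, r, t) in estimators.items():
    print(f"{name:12s} FOM = {fom(r, t):8.1f}")
print(f"{'combined':12s} FOM = {fom(combined_rel_err, 10.0):8.1f} "
      f"(mean = {combined_mean:.3f})")
```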

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stavridis, Adamantios; Arun, K. G.; Will, Clifford M.

    Spin induced precessional modulations of gravitational wave signals from supermassive black hole binaries can improve the estimation of luminosity distance to the source by space based gravitational wave missions like the Laser Interferometer Space Antenna (LISA). We study how this impacts the ability of LISA to do cosmology, specifically, to measure the dark energy equation of state (EOS) parameter w. Using the ΛCDM model of cosmology, we show that observations of precessing binaries with mass ratio 10:1 by LISA, combined with a redshift measurement, can improve the determination of w up to an order of magnitude with respect to the nonprecessing case depending on the total mass and the redshift.

  8. Why are they late? Timing abilities and executive control among students with learning disabilities.

    PubMed

    Grinblat, Nufar; Rosenblum, Sara

    2016-12-01

    While a deficient ability to perform daily tasks on time has been reported among students with learning disabilities (LD), the underlying mechanism behind their 'being late' is still unclear. This study aimed to evaluate the organization in time, time estimation abilities, actual performance time pertaining to specific daily activities, as well as the executive functions of students with LD in comparison to those of controls, and to assess the relationships between these domains among each group. The participants were 27 students with LD, aged 20-30, and 32 gender and age-matched controls who completed the Time Organization and Participation Scale (TOPS) and the Behavioral Rating Inventory of Executive Function-Adult version (BRIEF-A). In addition, their ability to estimate the time needed to complete the task of preparing a cup of coffee as well as their actual performance time were evaluated. The results indicated that in comparison to controls, students with LD showed significantly inferior organization in time (TOPS) and executive function abilities (BRIEF-A). Furthermore, their time estimation abilities were significantly inferior and they required significantly more time to prepare a cup of coffee. Regression analysis identified the variables that predicted organization in time and task performance time among each group. The significance of the results for both theoretical and clinical implications is discussed. What this paper adds? This study examines the underlying mechanism of the phenomenon of being late among students with LD. Following a recent call for using ecologically valid assessments, the functional daily ability of students with LD to prepare a cup of coffee and to organize time was investigated. Furthermore, their time estimation and executive control abilities were examined as a possible underlying mechanism for their lateness. Although previous studies have indicated executive control deficits among students with LD, to our knowledge, this is the first analysis of the relationships between their executive control and time estimation deficits and their influence upon their daily function and organization in time abilities. Our findings demonstrate that students with LD need more time in order to execute simple daily activities, such as preparing a cup of coffee. Deficient working memory, retrospective time estimation ability and inhibition predicted their performance time and organization in time abilities. Therefore, this paper sheds light on the mechanism behind daily performance in time among students with LD and emphasizes the need for future development of focused intervention programs to meet their unique needs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Application of diffusion kurtosis imaging to odontogenic lesions: Analysis of the cystic component.

    PubMed

    Sakamoto, Junichiro; Kuribayashi, Ami; Kotaki, Shinya; Fujikura, Mamiko; Nakamura, Shin; Kurabayashi, Tohru

    2016-12-01

    To assess the feasibility of applying diffusion kurtosis imaging (DKI) to common odontogenic lesions and to compare its diagnostic ability versus that of the apparent diffusion coefficient (ADC) for differentiating keratocystic odontogenic tumors (KCOTs) from odontogenic cysts. Altogether, 35 odontogenic lesions were studied: 24 odontogenic cysts, six KCOTs, and five ameloblastomas. The diffusion coefficient (D) and excess kurtosis (K) were obtained from diffusion-weighted images at b-values of 0, 500, 1000, and 1500 s/mm² on 3T magnetic resonance imaging (MRI). The combination of D and K values showing the maximum density of the probability density function was estimated. The ADC was obtained (0 and 1000 s/mm²). Values for odontogenic cysts, KCOTs, and ameloblastomas were compared. Multivariate logistic regression modeling was performed to assess the combination of D and K model versus ADC for differentiating KCOTs from odontogenic cysts. The mean D and ADC were significantly higher for ameloblastomas than for odontogenic cysts or KCOTs (P < 0.05). The mean K was significantly lower for ameloblastomas than for odontogenic cysts or KCOTs (P < 0.05). The mean values of all parameters for odontogenic cysts and KCOTs showed no significant differences (P = 0.369 for ADC, 0.133 for D, and 0.874 for K). The accuracy of the combination of D and K model (76.7%) was superior to that of ADC (66.7%). Use of DKI may be feasible for common odontogenic lesions. A combination of DKI parameters can be expected to increase the accuracy of its diagnostic ability compared with ADC. J. Magn. Reson. Imaging 2016;44:1565-1571. © 2016 International Society for Magnetic Resonance in Medicine.
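
    The per-voxel model behind D and K is S(b) = S0·exp(−bD + (bD)²K/6), fitted over the acquired b-values, while the conventional ADC is the mono-exponential slope between b = 0 and b = 1000 s/mm². A sketch of that fit on a synthetic signal (the noise level, starting values, and parameter values are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0.0, 500.0, 1000.0, 1500.0])      # s/mm^2, as in the study

def dki_signal(b, s0, d, k):
    """Diffusion kurtosis model: S(b) = S0 exp(-b*D + (b*D)^2 * K / 6)."""
    return s0 * np.exp(-b * d + (b * d) ** 2 * k / 6.0)

# Synthetic voxel with D = 1.2e-3 mm^2/s and K = 0.9, plus a little noise.
rng = np.random.default_rng(4)
signal = dki_signal(b, 1000.0, 1.2e-3, 0.9) * (1 + rng.normal(0, 0.01, b.size))

popt, _ = curve_fit(dki_signal, b, signal, p0=[signal[0], 1e-3, 1.0])
s0, d_fit, k_fit = popt

# Conventional ADC from the two b-values (0 and 1000) only.
adc = np.log(signal[0] / signal[2]) / 1000.0

print(f"D = {d_fit:.2e} mm^2/s, K = {k_fit:.2f}, ADC = {adc:.2e} mm^2/s")
```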

  10. Evaluating the impacts of different measurement and model configurations on top-down estimates of UK methane emissions

    NASA Astrophysics Data System (ADS)

    Lunt, Mark; Rigby, Matt; Manning, Alistair; O'Doherty, Simon; Stavert, Ann; Stanley, Kieran; Young, Dickon; Pitt, Joseph; Bauguitte, Stephane; Allen, Grant; Helfter, Carole; Palmer, Paul

    2017-04-01

    The Greenhouse gAs Uk and Global Emissions (GAUGE) project aims to quantify the magnitude and uncertainty of key UK greenhouse gas emissions more robustly than previously achieved. Measurements of methane have been taken from a number of tall-tower and surface sites as well as mobile measurement platforms such as a research aircraft and a ferry providing regular transects off the east coast of the UK. Using the UK Met Office's atmospheric transport model, NAME, and a novel Bayesian inversion technique we present estimates of methane emissions from the UK from a number of different combinations of sites to show the robustness of the UK total emissions to network configuration. The impact on uncertainties will be discussed, focusing on the usefulness of the various measurement platforms for constraining UK emissions. We will examine the effects of observation selection and how a priori assumptions about model uncertainty can affect the emission estimates, even within a data-driven hierarchical inversion framework. Finally, we will show the impact of the resolution of the meteorology used to drive the NAME model on emissions estimates, and how to rationalise our understanding of the ability of transport models to represent reality.

  11. 2-D Myocardial Deformation Imaging Based on RF-Based Nonrigid Image Registration.

    PubMed

    Chakraborty, Bidisha; Liu, Zhi; Heyde, Brecht; Luo, Jianwen; D'hooge, Jan

    2018-06-01

    Myocardial deformation imaging is a well-established echocardiographic technique for the assessment of myocardial function. Although some solutions make use of speckle tracking of the reconstructed B-mode images, others apply block matching (BM) on the underlying radio frequency (RF) data in order to increase sensitivity to small interframe motion and deformation. However, for both approaches, lateral motion estimation remains a challenge due to the relatively poor lateral resolution of the ultrasound image in combination with the lack of phase information in this direction. Hereto, nonrigid image registration (NRIR) of B-mode images has previously been proposed as an attractive solution. However, hereby, the advantages of RF-based tracking were lost. The aim of this paper was, therefore, to develop an NRIR motion estimator adapted to RF data sets. The accuracy of this estimator was quantified using synthetic data and was contrasted against a state-of-the-art BM solution. The results show that RF-based NRIR outperforms BM in terms of tracking accuracy, particularly, as hypothesized, in the lateral direction. Finally, this RF-based NRIR algorithm was applied clinically, illustrating its ability to estimate both in-plane velocity components in vivo.

  12. Population dynamics of Greater Scaup breeding on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Flint, Paul L.; Grand, J. Barry; Fondell, Thomas F.; Morse, Julie A.

    2006-01-01

    Using a stochastic model, we estimated that, on average, breeding females produced 0.57 young females/nesting season. We combined this estimate of productivity with our annual estimates of adult survival and an assumed population growth rate of 1.0, then solved for an estimate of first-year survival (0.40). Under these conditions the predicted stable age distribution of breeding females (i.e., the nesting population) was 15.1% 1-year-old, 4.1% 2-year-old first-time breeders, and 80.8% 2-year-old and older, experienced breeders. We subjected this stochastic model to perturbation analyses to examine the relative effects of demographic parameters on λ. The relative effects of productivity and adult survival on the population growth rate were 0.26 and 0.72, respectively. Thus, compared to productivity, proportionally equivalent changes in annual survival would have 2.8 times the effect on λ. However, when we examined annual variation in predicted population size using standardized regression coefficients, productivity explained twice as much variation as annual survival. Thus, management actions focused on changes in survival or productivity have the ability to influence population size; however, substantially larger changes in productivity are required to influence population trends.
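
    The back-calculation described above can be reproduced with a deliberately simplified two-age-class female model in which λ = S_ad + F·S_juv, so that first-year survival follows from the reported fecundity, an adult survival value, and λ = 1. The adult survival used below is a hypothetical placeholder and the authors' stochastic model is considerably more detailed; numerical elasticities of λ to each rate can then be compared with the relative effects quoted in the abstract.

```python
# Reported value from the abstract; adult survival here is a hypothetical
# placeholder (the study estimated it annually from mark-recapture data).
fecundity = 0.57        # young females produced per breeding female per year
adult_survival = 0.77   # assumed for illustration only
lam_target = 1.0        # assumed stationary population

# Simple two-age-class female model: lambda = S_ad + F * S_juv,
# so first-year survival can be solved for directly.
juv_survival = (lam_target - adult_survival) / fecundity
print(f"implied first-year survival: {juv_survival:.2f}")

def lam(f, s_juv, s_ad):
    return s_ad + f * s_juv

# Numerical elasticities (proportional sensitivities) of lambda, loosely
# comparable to the 0.26 / 0.72 relative effects quoted in the abstract.
eps = 1e-6
base = lam(fecundity, juv_survival, adult_survival)
sens_f = (lam(fecundity + eps, juv_survival, adult_survival) - base) / eps
sens_sad = (lam(fecundity, juv_survival, adult_survival + eps) - base) / eps
print(f"elasticity of lambda to fecundity:      {sens_f * fecundity / base:.2f}")
print(f"elasticity of lambda to adult survival: {sens_sad * adult_survival / base:.2f}")
```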

  13. Validation of Yoon's Critical Thinking Disposition Instrument.

    PubMed

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated and then a group invariance test using multigroup confirmatory factor analysis was performed to confirm measurement compatibility across groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  14. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    PubMed

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, along with tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. In particular, being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be the subject of future investigations on decision making under risk.

  15. Direct estimation of evoked hemoglobin changes by multimodality fusion imaging

    PubMed Central

    Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.

    2009-01-01

    In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, these two modalities are based on distinct physical and biophysical principles, leading to both limitations and strengths for each method. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to more robustly estimate hemoglobin changes in comparison to DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411

  16. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under the mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  17. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under the mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  18. Rotation otolith tilt-translation reinterpretation (ROTTR) hypothesis: a new hypothesis to explain neurovestibular spaceflight adaptation.

    PubMed

    Merfeld, Daniel M

    2003-01-01

    Normally, the nervous system must process ambiguous graviceptor (e.g., otolith) cues to estimate tilt and translation. The neural processes that help perform these estimation processes must adapt upon exposure to weightlessness and readapt upon return to Earth. In this paper we present a review of evidence supporting a new hypothesis that explains some aspects of these adaptive processes. This hypothesis, which we label the rotation otolith tilt-translation reinterpretation (ROTTR) hypothesis, suggests that the neural processes resulting in spaceflight adaptation include deterioration in the ability of the nervous system to use rotational cues to help accurately estimate the relative orientation of gravity ("tilt"). Changes in the ability to estimate gravity then also influence the ability of the nervous system to estimate linear acceleration ("translation"). We explicitly hypothesize that such changes in the ability to estimate "tilt" and "translation" will be measurable upon return to Earth and will, at least partially, explain the disorientation experienced when astronauts return to Earth. In this paper, we present the details and implications of ROTTR, review data related to ROTTR, and discuss the relationship of ROTTR to the influential otolith tilt-translation reinterpretation (OTTR) hypothesis as well as discuss the distinct differences between ROTTR and OTTR.

  19. Estimating distribution of hidden objects with drones: from tennis balls to manatees.

    PubMed

    Martin, Julien; Edwards, Holly H; Burgess, Matthew A; Percival, H Franklin; Fagan, Daniel E; Gardner, Beth E; Ortega-Ortiz, Joel G; Ifju, Peter G; Evers, Brandon S; Rambo, Thomas J

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.
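
    A basic, non-spatial occupancy model conveys the statistical idea: with repeat surveys of each grid cell, the likelihood separates the probability that a cell contains an object (psi) from the probability of detecting it when present (p), so cells with all-zero detection histories still contribute information. The simulation settings below are made up, and the study itself used spatially explicit models rather than this single-season version:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulate 3 repeat surveys (as in the experiment) of 200 cells.
n_cells, n_surveys, psi_true, p_true = 200, 3, 0.4, 0.6
occupied = rng.random(n_cells) < psi_true
detections = (rng.random((n_cells, n_surveys)) < p_true) & occupied[:, None]

def neg_log_lik(params, y):
    """Negative log-likelihood of the single-season occupancy model."""
    psi, p = 1 / (1 + np.exp(-params))          # logit -> probability
    det = y.sum(axis=1)
    J = y.shape[1]
    # Cells with at least one detection: occupied, detected det of J times.
    ll_pos = np.log(psi) + det * np.log(p) + (J - det) * np.log(1 - p)
    # All-zero cells: either occupied but missed every time, or unoccupied.
    ll_zero = np.log(psi * (1 - p) ** J + (1 - psi))
    return -np.where(det > 0, ll_pos, ll_zero).sum()

fit = minimize(neg_log_lik, x0=np.zeros(2), args=(detections,), method="BFGS")
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"psi = {psi_hat:.2f} (true {psi_true}), p = {p_hat:.2f} (true {p_true})")
print(f"estimated number of occupied cells: {psi_hat * n_cells:.0f}")
```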

  20. Using Uncertainty Quantification to Guide Development and Improvements of a Regional-Scale Model of the Coastal Lowlands Aquifer System Spanning Texas, Louisiana, Mississippi, Alabama and Florida

    NASA Astrophysics Data System (ADS)

    Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.

    2017-12-01

    Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.

  1. Combining high-resolution gross domestic product data with home and personal care product market research data to generate a subnational emission inventory for Asia.

    PubMed

    Hodges, Juliet Elizabeth Natasha; Vamshi, Raghu; Holmes, Christopher; Rowson, Matthew; Miah, Taqmina; Price, Oliver Richard

    2014-04-01

    Environmental risk assessment of chemicals is reliant on good estimates of product usage information and robust exposure models. Over the past 20 to 30 years, much progress has been made with the development of exposure models that simulate the transport and distribution of chemicals in the environment. However, little progress has been made in our ability to estimate chemical emissions of home and personal care (HPC) products. In this project, we have developed an approach to estimate a subnational emission inventory of chemical ingredients used in HPC products for 12 Asian countries including Bangladesh, Cambodia, China, India, Indonesia, Laos, Malaysia, Pakistan, Philippines, Sri Lanka, Thailand, and Vietnam (Asia-12). To develop this inventory, we have coupled a 1 km grid of per capita gross domestic product (GDP) estimates with market research data of HPC product sales. We explore the necessity of accounting for a population's ability to purchase HPC products in determining their subnational distribution in regions where wealth is not uniform. The implications of using high resolution data on inter- and intracountry subnational emission estimates for a range of hypothetical and actual HPC product types were explored. It was demonstrated that for low value products (<500 US$ per capita/annum required to purchase product) the maximum deviation from baseline (emission distributed via population) is less than a factor of 3 and would not result in significant differences in chemical risk assessments. However, for other product types (>500 US$ per capita/annum required to purchase product) the emissions assigned to subnational regions can vary by several orders of magnitude. The implications of this on conducting national or regional level risk assessments may be significant. Further work is needed to explore the implications of this variability in HPC emissions to enable the HPC industry and/or governments to advance risk-based chemical management policies in emerging markets. © 2013 SETAC.

  2. Estimating Distribution of Hidden Objects with Drones: From Tennis Balls to Manatees

    PubMed Central

    Martin, Julien; Edwards, Holly H.; Burgess, Matthew A.; Percival, H. Franklin; Fagan, Daniel E.; Gardner, Beth E.; Ortega-Ortiz, Joel G.; Ifju, Peter G.; Evers, Brandon S.; Rambo, Thomas J.

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants. PMID:22761712
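
    The spatially explicit occupancy approach described above can be illustrated with a minimal sketch: a single-season occupancy likelihood with a covariate on occupancy probability and imperfect detection, fitted by maximum likelihood on simulated data. This is an illustrative simplification, not the authors' model or code; all names and values below are made up.

```python
# Minimal sketch (not the authors' code): single-season occupancy model with a
# covariate on occupancy probability, fitted by maximum likelihood. Each "site"
# is a grid cell surveyed K times; y[i] is the number of surveys that detected
# an object in cell i, x[i] is a simulated environmental gradient value.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit

rng = np.random.default_rng(42)
n_sites, K = 200, 3
x = np.linspace(-2, 2, n_sites)                 # environmental gradient
psi_true = expit(-0.5 + 1.2 * x)                # occupancy probability
p_true = 0.6                                    # per-survey detection probability
z = rng.binomial(1, psi_true)                   # latent presence/absence
y = rng.binomial(K, p_true * z)                 # detections over K surveys

def neg_log_lik(params):
    b0, b1, logit_p = params
    psi = expit(b0 + b1 * x)
    p = expit(logit_p)
    # Detected at least once -> occupied; never detected -> either occupied but
    # missed in every survey, or truly unoccupied.
    lik_detected = psi * (p ** y) * ((1 - p) ** (K - y))
    lik_all_zero = psi * (1 - p) ** K + (1 - psi)
    lik = np.where(y > 0, lik_detected, lik_all_zero)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, logit_p = fit.x
psi_hat = expit(b0 + b1 * x)
p_hat = expit(logit_p)
# Expected total: detected cells count as occupied; never-detected cells contribute
# their conditional occupancy probability.
cond_psi = psi_hat * (1 - p_hat) ** K / (psi_hat * (1 - p_hat) ** K + (1 - psi_hat))
total_hat = np.sum(np.where(y > 0, 1.0, cond_psi))
print(f"detection p: {p_hat:.2f}, estimated total: {total_hat:.0f}, true total: {z.sum()}")
```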

  3. Ability Self-Estimates and Self-Efficacy: Meaningfully Distinct?

    ERIC Educational Resources Information Center

    Bubany, Shawn T.; Hansen, Jo-Ida C.

    2010-01-01

    Conceptual differences between self-efficacy and ability self-estimate scores, used in vocational psychology and career counseling, were examined with confirmatory factor analysis, discriminant relations, and reliability analysis. Results suggest that empirical differences may be due to measurement error or scale content, rather than due to the…

  4. Occupational stress perception and its potential impact on work ability.

    PubMed

    Yong, Mei; Nasterlack, Michael; Pluto, Rolf-Peter; Lang, Stefan; Oberlinner, Christoph

    2013-01-01

    To examine perceived stress across employees with different occupational status, to investigate the impact of stress on work ability and to derive conclusions regarding health promotion activities. A comprehensive survey combining questionnaire and medical examination was offered in one division in BASF Ludwigshafen. Among 867 voluntary participants, 653 returned complete questionnaires. The questions were directed at perception of safety at the workplace, self-rated health status, frequency of stress symptoms, unrealistic job demands, time pressure and maladjustment of work life balance. The outcome of interest was self-estimated health measured by the Work Ability Index (WAI). Occupational stressors were perceived differently across occupational status groups. Frontline operators had more health concerns due to workplace conditions, while professional and managerial staff reported higher frequencies of perceived tension, time pressure, and maladjustment of work life balance. After adjustment for occupational status, demographic and lifestyle factors, perceived stress was associated with a modest to strong decline in WAI scores. While perceived occupational stress had an apparent impact on WAI, and WAI has been demonstrated to be predictive of early retirement, more intensive and employee group-specific stress management interventions are being implemented beyond traditional strategies of routine occupational medical surveillance.

  5. Economic analysis of the intangible impacts of informal care for people with Alzheimer's disease and other mental disorders.

    PubMed

    Gervès, Chloé; Bellanger, Martine Marie; Ankri, Joël

    2013-01-01

    Valuation of the intangible impacts of informal care remains a great challenge for economic evaluation, especially in the framework of care recipients with cognitive impairment. Our main objective was to explore the influence of intangible impacts of caring on both informal caregivers' ability to estimate their willingness to pay (WTP) to be replaced and their WTP value. We mapped characteristics that influence ability or inability to estimate WTP by using a multiple correspondence analysis. We ran a bivariate probit model with sample selection to further analyze the caregivers' WTP value conditional on their ability to estimate their WTP. A distinction exists between the opportunity costs of the caring dimension and those of the intangible costs and benefits of caring. Informal caregivers' ability to estimate WTP is negatively influenced by both intangible benefits from caring (P < 0.001) and negative intangible impacts of caring (P < 0.05). Caregivers' WTP value is negatively associated with positive intangible impacts of informal care (P < 0.01). Informal caregivers' WTP and their ability to estimate WTP are both influenced by intangible burden and benefit of caring. These results call into question the relevance of a hypothetical generalized financial compensation system as the optimal way to motivate caregivers to continue providing care. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Being on sick leave due to heart failure: Encounters with social insurance officers and associations with sociodemographic factors and self-estimated ability to return to work.

    PubMed

    Nordgren, Lena; Söderlund, Anne

    2016-04-01

    Little is known about sick leave and the ability to return to work (RTW) for people with heart failure (HF). Previous research findings raise questions about the significance of encounters with social insurance officers (SIOs) and sociodemographics in people sick-listed due to HF. To investigate how people on sick leave due to HF experience encounters with SIOs and associations between sociodemographic factors, experiences of positive/negative encounters with SIOs, and self-estimated ability to RTW. This was a population-based study with a cross-sectional design. The sample consisted of 590 sick-listed people with HF in Sweden. A register-based investigation supplemented with a postal survey questionnaire was conducted. Bivariate correlations and logistic regression analyses were used to test associations between sociodemographic factors, positive and negative encounters, and self-estimated ability to RTW. People with low income were more likely to receive sickness compensation. A majority of the responders experienced encounters with SIOs as positive. Being married was significantly associated with positive encounters. Having a low income was related to negative encounters. More than a third of the responders agreed that positive encounters with SIOs facilitated self-estimated ability to RTW. High income was strongly associated with the impact of positive encounters on self-estimated ability to RTW. Encounters between SIOs and people on sick leave due to HF need to be characterized by a person-centred approach including confidence and trust. People with low income need special attention. © The European Society of Cardiology 2015.

  7. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is chosen so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the noise resistance of the retrieval. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distribution. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
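
    A hedged sketch of the kind of retrieval described above: a discretized size distribution is recovered from simulated multi-wavelength extinction with scipy's LSQR solver. The ADA-style kernel, refractive index, and all values are illustrative assumptions, not the paper's setup.

```python
# Illustrative LSQR retrieval of a discretized size distribution f from simulated
# spectral extinction tau = K f + noise. Not the paper's configuration.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
radii = np.linspace(0.1, 2.0, 60)            # particle radius grid (micrometres)
wavelengths = np.linspace(0.4, 1.0, 20)      # measurement wavelengths (micrometres)
m_refr = 1.33                                # assumed (real) refractive index

# ADA extinction efficiency Q(rho) = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)),
# with rho = 2 x (m - 1) and size parameter x = 2 pi r / lambda.
x = 2 * np.pi * radii[None, :] / wavelengths[:, None]
rho = 2 * x * (m_refr - 1.0)
Q = 2 - (4 / rho) * np.sin(rho) + (4 / rho ** 2) * (1 - np.cos(rho))
K = Q * np.pi * radii[None, :] ** 2          # kernel mapping f(r) to spectral extinction

# "True" bimodal distribution and noisy synthetic measurements.
f_true = np.exp(-((np.log(radii) - np.log(0.3)) / 0.3) ** 2) \
       + 0.5 * np.exp(-((np.log(radii) - np.log(1.2)) / 0.2) ** 2)
tau = K @ f_true
tau_noisy = tau * (1 + 0.01 * rng.standard_normal(tau.shape))

# LSQR solves the ill-conditioned least-squares problem; a small damping term acts
# as mild regularization against noise amplification.
f_hat = lsqr(K, tau_noisy, damp=1e-3)[0]
rel_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(f"relative retrieval error: {rel_err:.2%}")
```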

  8. Association Between Connecticut’s Permit-to-Purchase Handgun Law and Homicides

    PubMed Central

    Rudolph, Kara E.; Stuart, Elizabeth A.; Vernick, Jon S.

    2015-01-01

    Objectives. We sought to estimate the effect of Connecticut’s implementation of a handgun permit-to-purchase law in October 1995 on subsequent homicides. Methods. Using the synthetic control method, we compared Connecticut’s homicide rates after the law’s implementation to rates we would have expected had the law not been implemented. To estimate the counterfactual, we used longitudinal data from a weighted combination of comparison states identified based on the ability of their prelaw homicide trends and covariates to predict prelaw homicide trends in Connecticut. Results. We estimated that the law was associated with a 40% reduction in Connecticut’s firearm homicide rates during the first 10 years that the law was in place. By contrast, there was no evidence for a reduction in nonfirearm homicides. Conclusions. Consistent with prior research, this study demonstrated that Connecticut’s handgun permit-to-purchase law was associated with a subsequent reduction in homicide rates. As would be expected if the law drove the reduction, the policy’s effects were only evident for homicides committed with firearms. PMID:26066959
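
    The synthetic control step described above can be sketched as a constrained least-squares problem: choose nonnegative donor-state weights summing to one so the weighted pre-intervention trajectory matches the treated state's. The sketch below uses simulated series, not the study's homicide data.

```python
# Minimal sketch of the synthetic control weighting step (simulated data): find
# nonnegative donor weights summing to one that best reproduce the treated unit's
# pre-intervention trajectory, then compare post-intervention outcomes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_pre, n_post, n_donors = 10, 10, 8
trend_pre = 0.2 * np.arange(n_pre)[:, None]
donor_pre = 5.0 + trend_pre + rng.normal(0, 0.5, size=(n_pre, n_donors))

# Treated unit built as a noisy mixture of two donors, so a good synthetic exists.
true_w = np.zeros(n_donors); true_w[[1, 4]] = [0.6, 0.4]
treated_pre = donor_pre @ true_w + rng.normal(0, 0.05, n_pre)

def pre_period_mse(w):
    return np.mean((treated_pre - donor_pre @ w) ** 2)

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
bounds = [(0.0, 1.0)] * n_donors
res = minimize(pre_period_mse, x0=np.full(n_donors, 1.0 / n_donors),
               bounds=bounds, constraints=cons, method="SLSQP")
w_hat = res.x
print("estimated weights:", np.round(w_hat, 2))

# Post-period effect estimate: treated outcome minus the synthetic (weighted donors).
trend_post = 0.2 * (n_pre + np.arange(n_post))[:, None]
donor_post = 5.0 + trend_post + rng.normal(0, 0.5, size=(n_post, n_donors))
treated_post = donor_post @ true_w - 1.0          # built-in "effect" of -1
effect = treated_post - donor_post @ w_hat
print("average estimated post-period effect:", round(effect.mean(), 2))
```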

  9. Towards Personal Exposures: How Technology Is Changing Air Pollution and Health Research.

    PubMed

    Larkin, A; Hystad, P

    2017-12-01

    We present a review of emerging technologies and how these can transform personal air pollution exposure assessment and subsequent health research. Estimating personal air pollution exposures is currently split broadly into methods for modeling exposures for large populations versus measuring exposures for small populations. Air pollution sensors, smartphones, and air pollution models capitalizing on big/new data sources offer tremendous opportunity for unifying these approaches and improving long-term personal exposure prediction at scales needed for population-based research. A multi-disciplinary approach is needed to combine these technologies to not only estimate personal exposures for epidemiological research but also determine drivers of these exposures and new prevention opportunities. While available technologies can revolutionize air pollution exposure research, ethical, privacy, logistical, and data science challenges must be met before widespread implementations occur. Available technologies and related advances in data science can improve long-term personal air pollution exposure estimates at scales needed for population-based research. This will advance our ability to evaluate the impacts of air pollution on human health and develop effective prevention strategies.

  10. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
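
    A minimal sketch of the dimensionality reduction at the heart of this approach: represent a rainfall series by a small number of discrete wavelet coefficients (here using PyWavelets, an assumed dependency) and reconstruct it from them. This is illustrative only, not the authors' calibration code.

```python
# Minimal sketch: represent a daily rainfall series with a small set of discrete
# wavelet coefficients, the kind of reduction that makes joint inference of the
# rainfall series and model parameters tractable.
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(7)
n_days = 512
rain = np.maximum(0.0, rng.gamma(0.3, 8.0, n_days) - 2.0)   # spiky synthetic rainfall

coeffs = pywt.wavedec(rain, "db4", level=5)                  # multi-level DWT
flat, slices = pywt.coeffs_to_array(coeffs)

# Keep only the largest 5% of coefficients (by magnitude); zero the rest.
k = int(0.05 * flat.size)
threshold = np.sort(np.abs(flat))[-k]
flat_reduced = np.where(np.abs(flat) >= threshold, flat, 0.0)

coeffs_reduced = pywt.array_to_coeffs(flat_reduced, slices, output_format="wavedec")
rain_hat = pywt.waverec(coeffs_reduced, "db4")[:n_days]
rain_hat = np.maximum(rain_hat, 0.0)                         # rainfall cannot be negative
print(f"retained {k} of {flat.size} coefficients; "
      f"relative error {np.linalg.norm(rain_hat - rain) / np.linalg.norm(rain):.2%}")
```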

  11. Robust Estimation of Latent Ability in Item Response Models

    ERIC Educational Resources Information Center

    Schuster, Christof; Yuan, Ke-Hai

    2011-01-01

    Because of response disturbances such as guessing, cheating, or carelessness, item response models often can only approximate the "true" individual response probabilities. As a consequence, maximum-likelihood estimates of ability will be biased. Typically, the nature and extent to which response disturbances are present is unknown, and, therefore,…

  12. Evaluating Carbonate System Algorithms in a Nearshore System: Does Total Alkalinity Matter?

    PubMed Central

    Sweet, Julia; Brzezinski, Mark A.; McNair, Heather M.; Passow, Uta

    2016-01-01

    Ocean acidification is a threat to many marine organisms, especially those that use calcium carbonate to form their shells and skeletons. The ability to accurately measure the carbonate system is the first step in characterizing the drivers behind this threat. Due to logistical realities, regular carbonate system sampling is not possible in many nearshore ocean habitats, particularly in remote, difficult-to-access locations. The ability to autonomously measure the carbonate system in situ relieves many of the logistical challenges; however, it is not always possible to measure the two required carbonate parameters autonomously. Observed relationships between sea surface salinity and total alkalinity can frequently provide a second carbonate parameter thus allowing for the calculation of the entire carbonate system. Here, we assessed the rigor of estimating total alkalinity from salinity at a depth <15 m by routinely sampling water from a pier in southern California for several carbonate system parameters. Carbonate system parameters based on measured values were compared with those based on estimated TA values. Total alkalinity was not predictable from salinity or from a combination of salinity and temperature at this site. However, dissolved inorganic carbon and the calcium carbonate saturation state of these nearshore surface waters could both be estimated within on average 5% of measured values using measured pH and salinity-derived or regionally averaged total alkalinity. Thus we find that the autonomous measurement of pH and salinity can be used to monitor trends in coastal changes in DIC and saturation state and be a useful method for high-frequency, long-term monitoring of ocean acidification. PMID:27893739

  13. Evaluating Carbonate System Algorithms in a Nearshore System: Does Total Alkalinity Matter?

    PubMed

    Jones, Jonathan M; Sweet, Julia; Brzezinski, Mark A; McNair, Heather M; Passow, Uta

    2016-01-01

    Ocean acidification is a threat to many marine organisms, especially those that use calcium carbonate to form their shells and skeletons. The ability to accurately measure the carbonate system is the first step in characterizing the drivers behind this threat. Due to logistical realities, regular carbonate system sampling is not possible in many nearshore ocean habitats, particularly in remote, difficult-to-access locations. The ability to autonomously measure the carbonate system in situ relieves many of the logistical challenges; however, it is not always possible to measure the two required carbonate parameters autonomously. Observed relationships between sea surface salinity and total alkalinity can frequently provide a second carbonate parameter thus allowing for the calculation of the entire carbonate system. Here, we assessed the rigor of estimating total alkalinity from salinity at a depth <15 m by routinely sampling water from a pier in southern California for several carbonate system parameters. Carbonate system parameters based on measured values were compared with those based on estimated TA values. Total alkalinity was not predictable from salinity or from a combination of salinity and temperature at this site. However, dissolved inorganic carbon and the calcium carbonate saturation state of these nearshore surface waters could both be estimated within on average 5% of measured values using measured pH and salinity-derived or regionally averaged total alkalinity. Thus we find that the autonomous measurement of pH and salinity can be used to monitor trends in coastal changes in DIC and saturation state and be a useful method for high-frequency, long-term monitoring of ocean acidification.

  14. Dual-component video image analysis system (VIASCAN) as a predictor of beef carcass red meat yield percentage and for augmenting application of USDA yield grades.

    PubMed

    Cannell, R C; Tatum, J D; Belk, K E; Wise, J W; Clayton, R P; Smith, G C

    1999-11-01

    An improved ability to quantify differences in the fabrication yields of beef carcasses would facilitate the application of value-based marketing. This study was conducted to evaluate the ability of the Dual-Component Australian VIASCAN to 1) predict fabricated beef subprimal yields as a percentage of carcass weight at each of three fat-trim levels and 2) augment USDA yield grading, thereby improving accuracy of grade placement. Steer and heifer carcasses (n = 240) were evaluated using VIASCAN, as well as by USDA expert and online graders, before fabrication of carcasses to each of three fat-trim levels. Expert yield grade (YG), online YG, VIASCAN estimates, and VIASCAN estimated ribeye area used to augment actual and expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, and hot carcass weight), respectively, 1) accounted for 51, 37, 46, and 55% of the variation in fabricated yields of commodity-trimmed subprimals, 2) accounted for 74, 54, 66, and 75% of the variation in fabricated yields of closely trimmed subprimals, and 3) accounted for 74, 54, 71, and 75% of the variation in fabricated yields of very closely trimmed subprimals. The VIASCAN system predicted fabrication yields more accurately than current online yield grading and, when certain VIASCAN-measured traits were combined with some USDA yield grade factors in an augmentation system, the accuracy of cutability prediction was improved, at packing plant line speeds, to a level matching that of expert graders applying grades at a comfortable rate.

  15. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating the probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.

  16. Multiple Drosophila Tracking System with Heading Direction

    PubMed Central

    Sirigrivatanawong, Pudith; Arai, Shogo; Thoma, Vladimiros; Hashimoto, Koichi

    2017-01-01

    Machine vision systems have been widely used for image analysis, especially that which is beyond human ability. In biology, studies of behavior help scientists to understand the relationship between sensory stimuli and animal responses. This typically requires the analysis and quantification of animal locomotion. In our work, we focus on the analysis of the locomotion of the fruit fly Drosophila melanogaster, a widely used model organism in biological research. Our system consists of two components: fly detection and tracking. Our system provides the ability to extract a group of flies as the objects of concern and furthermore determines the heading direction of each fly. As each fly moves, the system states are refined with a Kalman filter to obtain an optimal estimate. For the tracking step, combining information such as position and heading direction with assignment algorithms gives a successful tracking result. The use of heading direction increases the system efficiency when dealing with identity loss and fly-swapping situations. The system can also operate with a variety of videos with different light intensities. PMID:28067800
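
    The data-association step can be sketched as an assignment problem: match predicted track states to new detections by minimizing a cost that combines distance and heading mismatch. The cost weights and values below are illustrative, and the Kalman prediction/update step is omitted for brevity.

```python
# Minimal sketch of the data-association step: assign detected flies to existing
# tracks by minimizing a combined distance-plus-heading cost (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

# Predicted track states and new detections: (x, y, heading in radians).
tracks = np.array([[10.0, 12.0, 0.1],
                   [40.0, 35.0, 1.6],
                   [70.0, 20.0, -2.9]])
detections = np.array([[41.0, 34.0, 1.5],
                       [69.0, 21.0, 3.0],
                       [11.0, 11.0, 0.2]])

def heading_diff(a, b):
    """Smallest absolute angular difference in radians."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

dist = np.linalg.norm(tracks[:, None, :2] - detections[None, :, :2], axis=-1)
angle = heading_diff(tracks[:, None, 2], detections[None, :, 2])
cost = dist + 5.0 * angle        # the weight on heading mismatch is a tuning choice

row, col = linear_sum_assignment(cost)
for t, d in zip(row, col):
    print(f"track {t} -> detection {d} (cost {cost[t, d]:.2f})")
```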

  17. Nonconservative force model parameter estimation strategy for TOPEX/Poseidon precision orbit determination

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Marshall, J. A.

    1992-01-01

    The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model evaluated.
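
    A hedged sketch of the flat-plate accounting behind a 'box-wing' model: each illuminated plate contributes a radiation-pressure acceleration that depends on its area, orientation, and optical properties, and the contributions are summed. The expression used is the common textbook flat-plate formula, and all plate properties below are made-up illustrations rather than TOPEX/Poseidon values.

```python
# Hedged sketch of a "box-wing" solar radiation pressure model: sum flat-plate
# accelerations for six box faces and a Sun-tracking solar array. Illustrative only.
import numpy as np

P_SUN = 4.56e-6          # solar radiation pressure at 1 AU, N/m^2
MASS = 2400.0            # spacecraft mass, kg (illustrative)

# Plates: outward unit normal, area (m^2), specular and diffuse reflectivity.
plates = [
    {"normal": np.array([1.0, 0.0, 0.0]),  "area": 4.0,  "spec": 0.2,  "diff": 0.3},
    {"normal": np.array([-1.0, 0.0, 0.0]), "area": 4.0,  "spec": 0.2,  "diff": 0.3},
    {"normal": np.array([0.0, 1.0, 0.0]),  "area": 6.0,  "spec": 0.1,  "diff": 0.4},
    {"normal": np.array([0.0, -1.0, 0.0]), "area": 6.0,  "spec": 0.1,  "diff": 0.4},
    {"normal": np.array([0.0, 0.0, 1.0]),  "area": 3.0,  "spec": 0.3,  "diff": 0.2},
    {"normal": np.array([0.0, 0.0, -1.0]), "area": 3.0,  "spec": 0.3,  "diff": 0.2},
    # "wing": the solar array is assumed to track the Sun, so its normal points sunward
    {"normal": None, "area": 25.0, "spec": 0.05, "diff": 0.1},
]

def srp_acceleration(sun_dir):
    """Sum flat-plate SRP accelerations; sun_dir is the unit vector to the Sun."""
    total = np.zeros(3)
    for p in plates:
        n = sun_dir if p["normal"] is None else p["normal"]
        cos_theta = float(np.dot(n, sun_dir))
        if cos_theta <= 0.0:          # plate faces away from the Sun: not illuminated
            continue
        cs, cd = p["spec"], p["diff"]
        # Standard flat-plate force: absorbed/diffuse photons push along the Sun line,
        # specular reflection and diffuse re-emission push along the plate normal.
        force = -P_SUN * p["area"] * cos_theta * (
            (1.0 - cs) * sun_dir + 2.0 * (cs * cos_theta + cd / 3.0) * n)
        total += force / MASS
    return total

a = srp_acceleration(np.array([1.0, 0.0, 0.0]))
print("SRP acceleration (m/s^2):", a)
```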

  18. Estimating Premorbid Cognitive Abilities in Low-Educated Populations

    PubMed Central

    Apolinario, Daniel; Brucki, Sonia Maria Dozzi; Ferretti, Renata Eloah de Lucena; Farfel, José Marcelo; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Jacob-Filho, Wilson

    2013-01-01

    Objective To develop an informant-based instrument that would provide a valid estimate of premorbid cognitive abilities in low-educated populations. Methods A questionnaire was drafted by focusing on the premorbid period with a 10-year time frame. The initial pool of items was submitted to classical test theory and a factorial analysis. The resulting instrument, named the Premorbid Cognitive Abilities Scale (PCAS), is composed of questions addressing educational attainment, major lifetime occupation, reading abilities, reading habits, writing abilities, calculation abilities, use of widely available technology, and the ability to search for specific information. The validation sample was composed of 132 older Brazilian adults from the following three demographically matched groups: normal cognitive aging (n = 72), mild cognitive impairment (n = 33), and mild dementia (n = 27). The scores of a reading test and a neuropsychological battery were adopted as construct criteria. Post-mortem inter-informant reliability was tested in a sub-study with two relatives from each deceased individual. Results All items presented good discriminative power, with corrected item-total correlation varying from 0.35 to 0.74. The summed score of the instrument presented high correlation coefficients with global cognitive function (r = 0.73) and reading skills (r = 0.82). Cronbach's alpha was 0.90, showing optimal internal consistency without redundancy. The scores did not decrease across the progressive levels of cognitive impairment, suggesting that the goal of evaluating the premorbid state was achieved. The intraclass correlation coefficient was 0.96, indicating excellent inter-informant reliability. Conclusion The instrument developed in this study has shown good properties and can be used as a valid estimate of premorbid cognitive abilities in low-educated populations. The applicability of the PCAS, both as an estimate of premorbid intelligence and cognitive reserve, is discussed. PMID:23555894

  19. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models

    PubMed Central

    Whittington, Jesse; Sawaya, Michael A.

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal’s home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786–1.071) for females, 0.844 (0.703–0.975) for males, and 0.882 (0.779–0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758–1.024) for females, 0.825 (0.700–0.948) for males, and 0.863 (0.771–0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park’s population of grizzly bears requires continued conservation-oriented management actions. PMID:26230262
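
    The spatial ingredient that distinguishes SCR models can be sketched with a half-normal detection function linking detection probability to the distance between an animal's activity centre and each trap; the simulation below is illustrative only, not the grizzly bear analysis.

```python
# Minimal sketch of the spatial ingredient of SCR models: detection probability
# declines with distance between an activity centre and a trap via a half-normal
# function p = p0 * exp(-d^2 / (2 sigma^2)). Simulated, illustrative values.
import numpy as np

rng = np.random.default_rng(3)
p0, sigma = 0.3, 2.0                      # baseline detection and spatial scale (km)

traps = np.array([[x, y] for x in range(0, 10, 2) for y in range(0, 10, 2)], float)
centres = rng.uniform(0, 10, size=(25, 2))       # activity centres of 25 animals

d = np.linalg.norm(centres[:, None, :] - traps[None, :, :], axis=-1)
p = p0 * np.exp(-d ** 2 / (2 * sigma ** 2))      # animal-by-trap detection probability

n_occasions = 3
detections = rng.binomial(1, p[None, :, :].repeat(n_occasions, axis=0))
detected_animals = (detections.sum(axis=(0, 2)) > 0).sum()
print(f"{detected_animals} of {centres.shape[0]} simulated animals detected at least once")
```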

  20. Quantifying time-varying ground-water discharge and recharge in wetlands of the northern Florida Everglades

    USGS Publications Warehouse

    Choi, J.; Harvey, J.W.

    2000-01-01

    Developing a more thorough understanding of water and chemical budgets in wetlands depends in part on our ability to quantify time-varying interactions between ground water and surface water. We used a combined water and solute mass balance approach to estimate time-varying ground-water discharge and recharge in the Everglades Nutrient Removal project (ENR), a relatively large constructed wetland (1544 hectares) built for removing nutrients from agricultural drainage in the northern Everglades in South Florida, USA. Over a 4-year period (1994 through 1998), ground-water recharge averaged 13.4 hectare-meters per day (ha-m/day) or 0.9 cm/day, which is approximately 31% of surface water pumped into the ENR for treatment. In contrast, ground-water discharge was much smaller (1.4 ha-m/day, or 0.09 cm/day, or 2.8% of water input to ENR for treatment). Using a water-balance approach alone only allowed net ground-water exchange (discharge minus recharge) to be estimated (-12 ± 2.4 ha-m/day). Discharge and recharge were individually determined by combining a chloride mass balance with the water balance. For a variety of reasons, the ground-water discharge estimated by the combined mass balance approach was not reliable (1.4 ± 37 ha-m/day). As a result, ground-water interactions could only be reliably estimated by comparing the mass-balance results with other independent approaches, including direct seepage-meter measurements and previous estimates using ground-water modeling. All three independent approaches provided similar estimates of average ground-water recharge, ranging from 13 to 14 ha-m/day. There was also relatively good agreement between ground-water discharge estimates for the mass balance and seepage meter methods, 1.4 and 0.9 ha-m/day, respectively. However, ground-water-flow modeling provided an average discharge estimate that was approximately a factor of four higher (5.4 ha-m/day) than the other two methods. Our study developed an initial understanding of how the design and operation of the ENR increase interactions between ground water and surface water. A considerable portion of recharged ground water (73%) was collected and returned to the ENR by a seepage canal. Additional recharge that was not captured by the seepage canal only occurred when pumped inflow rates to ENR (and ENR water levels) were relatively high. Management of surface water in the northern Everglades therefore clearly has the potential to increase interactions with ground water.
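
    A hedged sketch of the combined mass-balance idea: with ground-water discharge and recharge as the two unknowns, the water balance and chloride balance form a 2x2 linear system that can be solved directly. The numbers below are illustrative, chosen only to be of the same order as the reported estimates.

```python
# Hedged sketch (illustrative numbers, not the ENR data): solve the combined water
# and chloride mass balances for the two unknowns, ground-water discharge D and
# recharge R, over one time step. Rain and ET are assumed chloride-free.
import numpy as np

# Known terms, in hectare-metres per day (water) and mg/L (chloride).
Q_in, Q_out = 43.0, 28.0        # surface-water inflow and outflow
precip, et = 6.0, 7.5           # rainfall and evapotranspiration
dS = 1.5                        # change in surface-water storage per day
c_surface, c_in, c_gw = 120.0, 150.0, 90.0   # chloride in surface water, inflow, ground water
dM = 1608.0                     # change in chloride mass storage (ha-m x mg/L per day)

# Water balance:    D - R = dS - (Q_in - Q_out + precip - et)
# Chloride balance: c_gw*D - c_surface*R = dM - (Q_in*c_in - Q_out*c_surface)
A = np.array([[1.0, -1.0],
              [c_gw, -c_surface]])
b = np.array([dS - (Q_in - Q_out + precip - et),
              dM - (Q_in * c_in - Q_out * c_surface)])
discharge, recharge = np.linalg.solve(A, b)
print(f"ground-water discharge ~ {discharge:.1f} ha-m/day, recharge ~ {recharge:.1f} ha-m/day")
```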

  1. Study on Hyperspectral Characteristics and Estimation Model of Soil Mercury Content

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Dong, Zhenyu; Sun, Zenghui; Ma, Hongchao; Shi, Lei

    2017-12-01

    In this study, the mercury content of 44 soil samples from the Guan Zhong area of Shaanxi Province was used as the data source, and soil reflectance spectra were obtained with an ASD Field Spec HR spectrometer (350-2500 nm). The reflection characteristics of different mercury contents and the effect of different pre-treatment methods on the soil heavy-metal spectral inversion model were compared. First-order differential, second-order differential, and logarithm-of-reflectance transformations were carried out after pre-treatment with NOR, MSC, and SNV, and the bands sensitive to mercury content under the different mathematical transformations were selected. A hyperspectral estimation model was then established by regression. The chemical analysis shows serious Hg pollution in the study area. The results show that: (1) reflectance decreases with increasing mercury content, and the spectral regions sensitive to mercury are located at 392-455 nm, 923-1040 nm, and 1806-1969 nm; (2) combining the NOR, MSC, and SNV transformations with differential transformations can enhance the heavy-metal information in the soil spectra, and combining highly correlated bands can improve the stability and predictive ability of the model; (3) the partial least squares regression model based on the logarithm of the original reflectance performs best and has the highest precision (Rc2 = 0.9912, RMSEC = 0.665; Rv2 = 0.9506, RMSEP = 1.93), enabling rapid prediction of the mercury content in this region.
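
    A minimal sketch of the reported modelling step: a partial least squares regression relating log-transformed reflectance spectra to a metal concentration, here using simulated spectra rather than the study's measurements (scikit-learn's PLSRegression).

```python
# Minimal sketch of a PLS model relating log-transformed reflectance spectra to a
# heavy-metal concentration; spectra are simulated, not the study's measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(11)
n_samples, n_bands = 44, 200
wavelengths = np.linspace(350, 2500, n_bands)

hg = rng.uniform(0.1, 3.0, n_samples)                       # "mercury content", arbitrary units
# Simulated reflectance: a baseline minus absorption features that deepen with Hg.
feature = np.exp(-((wavelengths - 420) / 40) ** 2) + np.exp(-((wavelengths - 1900) / 80) ** 2)
reflectance = 0.6 - 0.05 * hg[:, None] * feature[None, :] \
              + 0.01 * rng.standard_normal((n_samples, n_bands))
X = np.log(reflectance)                                     # log-of-reflectance transform

X_tr, X_te, y_tr, y_te = train_test_split(X, hg, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
print(f"R^2 = {r2_score(y_te, y_hat):.3f}, RMSEP = {mean_squared_error(y_te, y_hat) ** 0.5:.3f}")
```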

  2. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  3. Is Approximate Number Precision a Stable Predictor of Math Ability?

    ERIC Educational Resources Information Center

    Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    Previous research shows that children's ability to estimate numbers of items using their Approximate Number System (ANS) predicts later math ability. To more closely examine the predictive role of early ANS acuity on later abilities, we assessed the ANS acuity, math ability, and expressive vocabulary of preschoolers twice, six months apart. We…

  4. Relations between basic and specific motor abilities and player quality of young basketball players.

    PubMed

    Marić, Kristijan; Katić, Ratko; Jelicić, Mario

    2013-05-01

    Subjects from 5 first league clubs from Herzegovina were tested with the purpose of determining the relations of basic and specific motor abilities, as well as the effect of specific abilities on player efficiency in young basketball players (cadets). A battery of 12 tests assessing basic motor abilities and 5 specific tests assessing basketball efficiency were used on a sample of 83 basketball players. Two significant canonical correlations, i.e. linear combinations explained the relation between the set of twelve variables of basic motor space and five variables of situational motor abilities. Underlying the first canonical linear combination is the positive effect of the general motor factor, predominantly defined by jumping explosive power, movement speed of the arms, static strength of the arms and coordination, on specific basketball abilities: movement efficiency, the power of the overarm throw, shooting and passing precision, and the skill of handling the ball. The impact of basic motor abilities of precision and balance on specific abilities of passing and shooting precision and ball handling is underlying the second linear combination. The results of regression correlation analysis between the variable set of specific motor abilities and game efficiency have shown that the ability of ball handling has the largest impact on player quality in basketball cadets, followed by shooting precision and passing precision, and the power of the overarm throw.

  5. Heterosis and combining ability: a diallel cross of three geographically isolated populations of Pacific abalone Haliotis discus hannai Ino

    NASA Astrophysics Data System (ADS)

    Deng, Yuewen; Liu, Xiao; Zhang, Guofan; Wu, Fucun

    2010-11-01

    We conducted a complete diallel cross among three geographically isolated populations of Pacific abalone Haliotis discus hannai Ino to determine the heterosis and the combining ability of growth traits at the spat stage. The three populations were collected from Qingdao (Q) and Dalian (D) in China, and Miyagi (M) in Japan. We measured the shell length, shell width, and total weight. The magnitude of the general combining ability (GCA) variance was more pronounced than the specific combining ability (SCA) variance, as evidenced by both the ratio of the genetic component to total variation and the GCA/SCA values. The component variances of GCA and SCA were significant for all three traits (P < 0.05), indicating the importance of additive and non-additive genetic effects in determining the expression of these traits. The reciprocal maternal effects (RE) were also significant for these traits (P < 0.05). Our results suggest that population D was the best general combiner in breeding programs to improve growth traits. The DM cross had the highest heterosis values for all three traits.
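
    A simplified sketch of how a full diallel table can be decomposed into general and specific combining ability effects: reciprocals are averaged, GCA is taken as the deviation of each parental array mean from the grand mean, and SCA is the remaining cell residual. This is a textbook-style approximation for illustration, not the exact Griffing analysis used in the paper, and the cross means are made up.

```python
# Simplified sketch (not the paper's exact analysis): decompose a full diallel
# table of cross means into general (GCA) and specific (SCA) combining ability
# effects, plus reciprocal (maternal) effects, using array-mean deviations.
import numpy as np

parents = ["Q", "D", "M"]
# Cross means for a growth trait (rows = dam, columns = sire); illustrative values.
table = np.array([[10.0, 12.5, 11.0],
                  [12.8, 13.5, 14.2],
                  [11.2, 14.0, 12.0]])

# Average reciprocals so the table is symmetric; the row-column half-difference is
# the reciprocal (maternal) effect.
sym = (table + table.T) / 2.0
reciprocal = (table - table.T) / 2.0

grand_mean = sym.mean()
gca = sym.mean(axis=1) - grand_mean                   # array mean minus grand mean
sca = sym - grand_mean - gca[:, None] - gca[None, :]  # cell residual after GCA effects

for name, g in zip(parents, gca):
    print(f"GCA({name}) = {g:+.2f}")
print("SCA matrix:\n", np.round(sca, 2))
print("Reciprocal effects:\n", np.round(reciprocal, 2))
```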

  6. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  7. Tools of Robustness for Item Response Theory.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    This paper briefly demonstrates a few of the possibilities of a systematic application of robustness theory, concentrating on the estimation of ability when the true item response model does and does not fit the data. The definition of the maximum likelihood estimator (MLE) of ability is briefly reviewed. After introducing the notion of…

  8. A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis

    PubMed Central

    Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.

    2015-01-01

    The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
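
    For reference, the classical Harrell's c-index mentioned above can be computed by counting concordant comparable pairs; the toy example below is illustrative only.

```python
# Minimal sketch of Harrell's c-index for right-censored survival data: among
# comparable pairs, count how often the subject with the higher marker value has
# the shorter survival time. Toy data, higher marker = higher assumed risk.
import numpy as np

time = np.array([5.0, 8.0, 3.0, 12.0, 7.0, 9.0])
event = np.array([1, 1, 0, 1, 0, 1])                 # 1 = event observed, 0 = censored
marker = np.array([2.1, 1.5, 2.8, 0.7, 1.9, 1.1])

concordant, tied, comparable = 0, 0, 0
n = len(time)
for i in range(n):
    for j in range(n):
        # A pair is comparable only if subject i is observed to fail before subject j.
        if event[i] == 1 and time[i] < time[j]:
            comparable += 1
            if marker[i] > marker[j]:
                concordant += 1
            elif marker[i] == marker[j]:
                tied += 1

c_index = (concordant + 0.5 * tied) / comparable
print(f"Harrell's c-index: {c_index:.3f} from {comparable} comparable pairs")
```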

  9. Predicting the impact of blocking human immunodeficiency virus type 1 Nef in vivo.

    PubMed

    Wick, W David; Gilbert, Peter B; Yang, Otto O

    2009-03-01

    Human immunodeficiency virus type 1 (HIV-1) Nef is a multifunctional protein that confers an ability to evade killing by cytotoxic T lymphocytes (CTLs) as well as other advantages to the virus in vivo. Here we exploited mathematical modeling and related statistical methods to estimate the impact of Nef activity on viral replication in vivo in relation to CTLs. Our results indicate that downregulation of major histocompatibility complex class I (MHC-I) A and B by wild-type Nef confers an advantage to the virus of about 82% in decreased CTL killing efficiency on average, meaning that abolishing the MHC-I downregulation function of Nef would increase killing by more than fivefold. We incorporated this estimate, as well as prior estimates of replicative enhancement by Nef, into a previously published model of HIV-1 and CTLs in vivo (W. D. Wick, O. O. Yang, L. Corey, and S. G. Self, J. Virol. 79:13579-13586, 2005), generalized to permit CTL recognition of multiple epitopes. A sequence database analysis revealed that 92.9% of HIV-1 epitopes are A or B restricted, and a previous study found an average of about 19 epitopes recognized (M. M. Addo et al., J. Virol. 77:2081-2092, 2003). We combined these estimates in the model in order to predict the impact of inhibiting Nef function in the general (chronically infected) population by a drug. The predicted impact on viral load ranged from negligible to 2.4 orders of magnitude, depending on the effects of the drug and the CTL dynamical scenario assumed. We conclude that inhibiting Nef could make a substantial reduction in disease burden, lengthening the time before the necessity of undertaking combination therapy with other antiretroviral drugs.

  10. Model calibration criteria for estimating ecological flow characteristics

    USGS Publications Warehouse

    Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William J.; Seibert, Jan; Breuer, Lutz; Kraft, Philipp

    2016-01-01

    Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.

  11. Modeling the Biodegradability of Chemical Compounds Using the Online CHEmical Modeling Environment (OCHEM).

    PubMed

    Vorberg, Susann; Tetko, Igor V

    2014-01-01

    Biodegradability describes the capacity of substances to be mineralized by free-living bacteria. It is a crucial property in estimating a compound's long-term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660. © 2014 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  12. Association between different combination of measures for obesity and new-onset gallstone disease.

    PubMed

    Liu, Tong; Wang, Wanchao; Ji, Yannan; Wang, Yiming; Liu, Xining; Cao, Liying; Liu, Siqing

    2018-01-01

    Body mass index (BMI) is a calculated index of general obesity. Waist circumference (WC) is a measure of body-fat distribution and is widely used to estimate abdominal obesity. An important trait of general obesity and abdominal obesity is their propensity to coexist. Using a single measure of obesity cannot precisely identify persons at risk for GSD. This study aimed to compare the predictive values of various combinations of measures for obesity (BMI, WC, waist to hip ratio) for new-onset GSD. We prospectively studied the predictive values of various combinations of measures for obesity for new-onset GSD in a cohort of 88,947 participants who were free of prior gallstone disease; demographic characteristics and biochemical parameters were recorded. 4,329 participants were identified as having GSD among the 88,947 participants during 713,345 person-years of follow-up. Higher BMI, WC and waist to hip ratio (WHtR) were significantly associated with higher risks of GSD in both genders even after adjustment for potential confounders. In males, the hazard ratios for the highest versus lowest BMI, WC and WHtR were 1.63 (1.47-1.79), 1.53 (1.40-1.68) and 1.44 (1.31-1.58), respectively. In females, the hazard ratios for the highest versus lowest BMI, WC and WHtR were 2.11 (1.79-2.49), 1.85 (1.55-2.22) and 1.84 (1.55-2.19), respectively. In the male group, the combination of BMI+WC improved the predictive ability of the model more clearly than the other combinations after adding them to the multivariate model in turn, while for females the best predictive combination was BMI+WHtR. Elevated BMI, WC and WHtR were independent risk factors for new-onset GSD in both sexes after additional adjustment for potential confounders. In males, the combination of BMI+WC appeared to be the most predictive model for evaluating the effect of obesity on new-onset GSD, while the best combination in females was BMI+WHtR.

  13. A model of the endogenous glucose balance incorporating the characteristics of glucose transporters.

    PubMed

    Arleth, T; Andreassen, S; Federici, M O; Benedetti, M M

    2000-07-01

    This paper describes the development and preliminary test of a model of the endogenous glucose balance that incorporates the characteristics of the glucose transporters GLUT1, GLUT3 and GLUT4. The model is parameterized with nine parameters that are subsequently estimated from data in the literature on the hepatic and endogenous balances at various combinations of blood glucose and insulin levels. The ability of the resulting endogenous balance model to fit blood glucose measurements from patients was tested on 20 patients. The fit obtained with this model compared favorably with the fit obtained with the endogenous balance currently incorporated in the DIAS system.

  14. A decision-making tool for exchange transfusions in infants with severe hyperbilirubinemia in resource-limited settings.

    PubMed

    Olusanya, B O; Iskander, I F; Slusher, T M; Wennberg, R P

    2016-05-01

    Late presentation and ineffective phototherapy account for excessive rates of avoidable exchange transfusions (ETs) in many low- and middle-income countries. Several system-based constraints sometimes limit the ability to provide timely ETs for all infants at risk of kernicterus, thus necessitating a treatment triage to optimize available resources. This article proposes a practical priority-setting model for term and near-term infants requiring ET after the first 48 h of life. The proposed model combines plasma/serum bilirubin estimation, clinical signs of acute bilirubin encephalopathy and neurotoxicity risk factors for predicting the risk of kernicterus based on available evidence in the literature.

  15. Macro-organizational factors, the incidence of work disability, and work ability among the total workforce of home care workers in Sweden.

    PubMed

    Dellve, Lotta; Karlberg, Catarina; Allebeck, Peter; Herloff, Birgitta; Hagberg, Mats

    2006-01-01

    To investigate the importance of macro-organizational factors, i.e. organizational sociodemographic and socioeconomic preconditions, for the municipal incidence of long-term sick leave, disability pension, and the prevalence of workers with long-term work ability among home care workers. In an ecological study design, data from national databases were combined by record linkage. Descriptive and analytical statistics were used to estimate and interpret macro-organizational factors (economic resources, region, unemployment, employment, occupational rehabilitation, return to work, age structures of inhabitants and home care workers). The incidence of long-term sick leave among female home care workers was twice as high as that of male home care workers, and the incidence of disability pension was about four times as high for the women. A great variation in municipal incidence of long-term sick leave, disability pension, and long-term work ability (101-264, 0.6-19.6, and 913-1,279 per 1,000 full-time-equivalent workers per year) was also found. The strongest single factor for long-term work ability was a high proportion of part-time or hourly paid employees, which explained 35% of the municipal variation. Macro-organizational factors explained long-term work ability (47-62% explained variance) better than long-term sick leave (33% explained variance). Rehabilitation activity was low; only 2% received occupational rehabilitation and 5% of those on sick leave longer than 2 weeks returned to work within 30 days. The differences in the municipal proportion of work ability incidence indicate a preventive potential, especially related to employment and return to work after sick leave.

  16. Effects of Differential Item Functioning on Examinees' Test Performance and Reliability of Test

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; Zhang, Jinming

    2017-01-01

    Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…

  17. Effects of Content Balancing and Item Selection Method on Ability Estimation in Computerized Adaptive Tests

    ERIC Educational Resources Information Center

    Sahin, Alper; Ozbasi, Durmus

    2017-01-01

    Purpose: This study aims to reveal effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
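
    Fisher maximum-information selection can be sketched for dichotomous 2PL items (a simplification of the polytomous models used in these studies): compute each unused item's information at the provisional ability estimate and administer the most informative item.

```python
# Minimal sketch of Fisher maximum-information item selection, shown for
# dichotomous 2PL items for brevity. For a 2PL item,
# I(theta) = a^2 * p(theta) * (1 - p(theta)).
import numpy as np

rng = np.random.default_rng(5)
n_items = 500
a = rng.uniform(0.5, 2.0, n_items)       # discrimination parameters
b = rng.normal(0.0, 1.0, n_items)        # difficulty parameters

def item_information(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, administered):
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf       # never re-administer an item
    return int(np.argmax(info))

administered = set()
theta_hat = 0.0                              # provisional ability estimate
for step in range(5):
    item = select_next_item(theta_hat, administered)
    administered.add(item)
    print(f"step {step}: item {item} (a={a[item]:.2f}, b={b[item]:.2f})")
    # In a real CAT, theta_hat would be re-estimated here from the responses so far.
```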

  18. Contact angles and wettability of ionic liquids on polar and non-polar surfaces

    PubMed Central

    Sousa, Filipa L.; Silva, Nuno J. O.; Lopes-da-Silva, José A.; Coutinho, João A. P.; Freire, Mara G.

    2016-01-01

    Many applications involving ionic liquids (ILs) require the knowledge of their interfacial behaviour, such as wettability and adhesion. In this context, herein, two approaches were combined aiming at understanding the impact of the IL chemical structures on their wettability on both polar and non-polar surfaces, namely: (i) the experimental determination of the contact angles of a broad range of ILs (covering a wide number of anions of variable polarity, cations, and cation alkyl side chain lengths) on polar and non-polar solid substrates (glass, Al-plate, and poly-(tetrafluoroethylene) (PTFE)); and (ii) the correlation of the experimental contact angles with the cation–anion pair interaction energies generated by the Conductor-like Screening Model for Real Solvents (COSMO-RS). The combined results reveal that the hydrogen-bond basicity of ILs, and thus the IL anion, plays a major role through their wettability on both polar and non-polar surfaces. The increase of the IL hydrogen-bond accepting ability leads to an improved wettability of more polar surfaces (lower contact angles) while the opposite trend is observed on non-polar surfaces. The cation nature and alkyl side chain lengths have however a smaller impact on the wetting ability of ILs. Linear correlations were found between the experimental contact angles and the cation–anion hydrogen-bonding and cation ring energies, estimated using COSMO-RS, suggesting that these features primarily control the wetting ability of ILs. Furthermore, two-descriptor correlations are proposed here to predict the contact angles of a wide variety of ILs on glass, Al-plate, and PTFE surfaces. A new extended list is provided for the contact angles of ILs on three surfaces, which can be used as a priori information to choose appropriate ILs before a given application. PMID:26554705

  19. Salivary mRNA markers having the potential to detect oral squamous cell carcinoma segregated from oral leukoplakia with dysplasia.

    PubMed

    Michailidou, Evangelia; Tzimagiorgis, Georgios; Chatzopoulou, Fani; Vahtsevanos, Konstantinos; Antoniadis, Konstantinos; Kouidou, Sofia; Markopoulos, Anastasios; Antoniades, Dimitrios

    2016-08-01

    In the current study, the presence of extracellular IL-1B, IL-8, OAZ and SAT mRNAs in the saliva was evaluated as a tool in the early detection of oral squamous cell carcinoma. Thirty-four patients with primary oral squamous cell carcinoma stage T1N0M0/T2N0M0, 20 patients with oral leukoplakia and dysplasia (15 with mild dysplasia and 5 with severe dysplasia/in situ carcinoma), and 31 matched healthy control subjects were included in the study. The presence of IL-1B, IL-8, OAZ and SAT mRNA was evaluated in extracellular RNA isolated from saliva samples using sequence-specific primers and real-time RT-PCR. ROC curve analysis was used to estimate the ability of the biomarkers to detect oral squamous cell carcinoma patients. The data reveal that the combination of these four biomarkers provides a good predictive probability of up to 80% (AUC=0.799, p=0.002) for patients with oral squamous cell carcinoma but not for patients suffering from oral leukoplakia with dysplasia. Moreover, the combination of only two biomarkers (SAT and IL-8) also yields a high predictive ability of 75.5% (AUC=0.755, p=0.007), approximately equal to that of the four biomarkers, suggesting that the two-biomarker prediction model could be used for oral squamous cell carcinoma patients, roughly halving the economic and health costs. SAT and IL-8 mRNAs are present in the saliva in high quality and quantity, with good discriminatory ability for oral squamous cell carcinoma patients but not for patients with oral leukoplakia and dysplasia, an oral potentially malignant disorder. Copyright © 2016. Published by Elsevier Ltd.
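
    As a rough illustration of the analysis style described above, the sketch below fits a logistic-regression combination of four markers and scores the panel by ROC AUC, then repeats the exercise for a two-marker panel. It is a minimal sketch using scikit-learn; the marker values and effect sizes are simulated placeholders, not the authors' data.

      # Combining marker levels with logistic regression and scoring the panel by ROC AUC.
      # All data below are simulated placeholders.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      y = np.concatenate([np.ones(34), np.zeros(31)])                    # 1 = OSCC, 0 = control
      X = rng.normal(size=(65, 4)) + y[:, None] * [0.8, 0.6, 0.2, 0.7]   # IL-1B, IL-8, OAZ, SAT (simulated)

      model = LogisticRegression()
      p4 = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
      print("AUC, four-marker panel:", round(roc_auc_score(y, p4), 3))

      # Two-marker panel (columns 1 and 3 stand in for IL-8 and SAT)
      p2 = cross_val_predict(model, X[:, [1, 3]], y, cv=5, method="predict_proba")[:, 1]
      print("AUC, two-marker panel:", round(roc_auc_score(y, p2), 3))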

  20. Contact angles and wettability of ionic liquids on polar and non-polar surfaces.

    PubMed

    Pereira, Matheus M; Kurnia, Kiki A; Sousa, Filipa L; Silva, Nuno J O; Lopes-da-Silva, José A; Coutinho, João A P; Freire, Mara G

    2015-12-21

    Many applications involving ionic liquids (ILs) require the knowledge of their interfacial behaviour, such as wettability and adhesion. In this context, herein, two approaches were combined aiming at understanding the impact of the IL chemical structures on their wettability on both polar and non-polar surfaces, namely: (i) the experimental determination of the contact angles of a broad range of ILs (covering a wide number of anions of variable polarity, cations, and cation alkyl side chain lengths) on polar and non-polar solid substrates (glass, Al-plate, and poly-(tetrafluoroethylene) (PTFE)); and (ii) the correlation of the experimental contact angles with the cation-anion pair interaction energies generated by the Conductor-like Screening Model for Real Solvents (COSMO-RS). The combined results reveal that the hydrogen-bond basicity of ILs, and thus the IL anion, plays a major role through their wettability on both polar and non-polar surfaces. The increase of the IL hydrogen-bond accepting ability leads to an improved wettability of more polar surfaces (lower contact angles) while the opposite trend is observed on non-polar surfaces. The cation nature and alkyl side chain lengths have however a smaller impact on the wetting ability of ILs. Linear correlations were found between the experimental contact angles and the cation-anion hydrogen-bonding and cation ring energies, estimated using COSMO-RS, suggesting that these features primarily control the wetting ability of ILs. Furthermore, two-descriptor correlations are proposed here to predict the contact angles of a wide variety of ILs on glass, Al-plate, and PTFE surfaces. A new extended list is provided for the contact angles of ILs on three surfaces, which can be used as a priori information to choose appropriate ILs before a given application.

  1. Aerial population estimates of wild horses (Equus caballus) in the adobe town and salt wells creek herd management areas using an integrated simultaneous double-count and sightability bias correction technique

    USGS Publications Warehouse

    Lubow, Bruce C.; Ransom, Jason I.

    2007-01-01

    An aerial survey technique combining simultaneous double-count and sightability bias correction methodologies was used to estimate the population of wild horses inhabiting Adobe Town and Salt Wells Creek Herd Management Areas, Wyoming. Based on 5 surveys over 4 years, we conclude that the technique produced estimates consistent with the known number of horses removed between surveys and an annual population growth rate of 16.2 percent. Therefore, evidence from this series of surveys supports the validity of this survey method. Our results also indicate that the ability of aerial observers to see horse groups is very strongly dependent on the skill of the individual observer, the size of the horse group, and vegetation cover. It is also more modestly dependent on the ruggedness of the terrain and the position of the sun relative to the observer. We further conclude that censuses, or uncorrected raw counts, are inadequate estimates of population size for this herd. Such uncorrected counts were all undercounts in our trials, and varied in magnitude from year to year and observer to observer. As of April 2007, we estimate that the population of the Adobe Town/Salt Wells Creek complex is 906 horses, with a 95 percent confidence interval ranging from 857 to 981 horses.
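
    The double-count idea can be illustrated with the simplest two-observer correction, the Chapman-modified Lincoln-Petersen estimator. This is only a schematic sketch, not the integrated double-count/sightability model fitted in the study (which also models observer skill, group size, vegetation, terrain and sun angle), and the counts are hypothetical.

      # Simplified illustration of the simultaneous double-count idea: a Chapman-corrected
      # Lincoln-Petersen estimate of the number of horse groups from two independent observers.
      def chapman_estimate(n1, n2, m):
          """n1, n2: groups seen by observer 1 / observer 2; m: groups seen by both."""
          n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
          var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
          return n_hat, var

      n_hat, var = chapman_estimate(n1=120, n2=110, m=95)   # hypothetical group counts
      se = var ** 0.5
      print(f"estimated groups: {n_hat:.0f} (approx. 95% CI {n_hat - 1.96 * se:.0f}-{n_hat + 1.96 * se:.0f})")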

  2. Estimation of rate constants of PCB dechlorination reactions using an anaerobic dehalogenation model.

    PubMed

    Karakas, Filiz; Imamoglu, Ipek

    2017-02-15

    This study aims to estimate anaerobic dechlorination rate constants (k_m) of reactions of individual PCB congeners using data from four laboratory microcosms set up using sediment from Baltimore Harbor. Pathway k_m values are estimated by modifying a previously developed model into the Anaerobic Dehalogenation Model (ADM), which can be applied to any halogenated hydrophobic organic compound (HOC). Improvements such as handling multiple dechlorination activities (DAs) and co-elution of congeners, incorporating constraints, and using a new goodness-of-fit evaluation increased the accuracy, speed and flexibility of ADM. DAs published in the literature in terms of chlorine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The best fit explaining the congener pattern changes was found for pathways of Phylotype DEH10, which has the ability to remove doubly flanked chlorines in the meta and para positions and para-flanked chlorines in the meta position. The estimated k_m values range between 0.0001 and 0.133 d^-1, the median of which is found to be comparable to the few available published biologically confirmed rate constants. Compound-specific modelling studies such as those performed with ADM can enable monitoring and prediction of concentration changes as well as toxicity during bioremediation. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  4. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping aboveground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because it reduced forest AGB sampling errors by 15-38%. Furthermore, spaceborne global-scale accuracy requirements were achieved. At least 80% of the grid cells at the 100m, 250m, 500m, and 1km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR, accuracy requirements were met at the 500m and 250m grid level, respectively.

  5. The integration of probabilistic information during sensorimotor estimation is unimpaired in children with Cerebral Palsy

    PubMed Central

    Sokhey, Taegh; Gaebler-Spira, Deborah; Kording, Konrad P.

    2017-01-01

    Background It is important to understand the motor deficits of children with Cerebral Palsy (CP). Our understanding of this motor disorder can be enriched by computational models of motor control. One crucial stage in generating movement involves combining uncertain information from different sources, and deficits in this process could contribute to reduced motor function in children with CP. Healthy adults can integrate previously-learned information (prior) with incoming sensory information (likelihood) in a close-to-optimal way when estimating object location, consistent with the use of Bayesian statistics. However, there are few studies investigating how children with CP perform sensorimotor integration. We compare sensorimotor estimation in children with CP and age-matched controls using a model-based analysis to understand the process. Methods and findings We examined Bayesian sensorimotor integration in children with CP, aged between 5 and 12 years old, with Gross Motor Function Classification System (GMFCS) levels 1–3 and compared their estimation behavior with that of age-matched typically-developing (TD) children. We used a simple sensorimotor estimation task which requires participants to combine probabilistic information from different sources: a likelihood distribution (current sensory information) with a prior distribution (learned target information). In order to examine sensorimotor integration, we quantified how participants weighted statistical information from the two sources (prior and likelihood) and compared this to the statistically optimal weighting. We found that the weighting of statistical information in children with CP was as statistically efficient as that of TD children. Conclusions We conclude that Bayesian sensorimotor integration is not impaired in children with CP and therefore does not contribute to their motor deficits. Future research has the potential to enrich our understanding of motor disorders by investigating the stages of motor processing set out by computational models. Therapeutic interventions should exploit the ability of children with CP to use statistical information. PMID:29186196
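
    The optimal benchmark in tasks of this kind is inverse-variance weighting of a Gaussian prior and a Gaussian likelihood. The short sketch below computes that optimum for illustrative numbers (all values hypothetical); a participant's empirical weight on the sensory evidence can then be compared with this reference.

      # Bayesian combination of a Gaussian prior (learned target information) with a
      # Gaussian likelihood (current sensory feedback). Values are illustrative only.
      def posterior_estimate(mu_prior, var_prior, x_likelihood, var_likelihood):
          w = var_prior / (var_prior + var_likelihood)    # optimal weight on the sensory evidence
          mean = w * x_likelihood + (1 - w) * mu_prior    # inverse-variance-weighted mean
          var = 1.0 / (1.0 / var_prior + 1.0 / var_likelihood)
          return mean, var, w

      mean, var, w = posterior_estimate(mu_prior=0.0, var_prior=1.0,
                                        x_likelihood=2.0, var_likelihood=0.5)
      print(f"optimal weight on likelihood = {w:.2f}, posterior mean = {mean:.2f}")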

  6. Estimation of biomedical optical properties by simultaneous use of diffuse reflectometry and photothermal radiometry: investigation of light propagation models

    NASA Astrophysics Data System (ADS)

    Fonseca, E. S. R.; de Jesus, M. E. P.

    2007-07-01

    The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency-domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can considerably improve the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions better close to collimated sources and boundaries than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering-to-absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.

  7. Can Nonexperimental Estimates Replicate Estimates Based on Random Assignment in Evaluations of School Choice? A Within-Study Comparison

    ERIC Educational Resources Information Center

    Bifulco, Robert

    2012-01-01

    The ability of nonexperimental estimators to match impact estimates derived from random assignment is examined using data from the evaluation of two interdistrict magnet schools. As in previous within-study comparisons, nonexperimental estimates differ from estimates based on random assignment when nonexperimental estimators are implemented…

  8. Landsat 8 and ICESat-2: Performance and potential synergies for quantifying dryland ecosystem vegetation cover and biomass

    USGS Publications Warehouse

    Glenn, Nancy F.; Neuenschwander, Amy; Vierling, Lee A.; Spaete, Lucas; Li, Aihua; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan

    2016-01-01

    To estimate the potential synergies of OLI and ICESat-2 we used simulated ICESat-2 photon data to predict vegetation structure. In a shrubland environment with a vegetation mean height of 1 m and mean vegetation cover of 33%, vegetation photons are able to explain nearly 50% of the variance in vegetation height. These results, and those from a comparison site, suggest that a lower detection threshold of ICESat-2 may be in the range of 30% canopy cover and roughly 1 m height in comparable dryland environments and these detection thresholds could be used to combine future ICESat-2 photon data with OLI spectral data for improved vegetation structure. Overall, the synergistic use of Landsat 8 and ICESat-2 may improve estimates of above-ground biomass and carbon storage in drylands that meet these minimum thresholds, increasing our ability to monitor drylands for fuel loading and the potential to sequester carbon.

  9. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    PubMed

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  10. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features used to assess the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted for different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality and consistency. The proposed system extracts basic features and generates next-level features that are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method can perform better than previous methods in terms of accuracy. In addition, the proposed method is able to eliminate residual images from optical and capacitive sensors and coarse images from thermal sensors. PMID:22163632
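
    A minimal sketch of the final accept/reject stage, assuming an SVM trained on a few quality features (orientation certainty, local orientation quality, consistency); the feature values below are simulated, not taken from the paper.

      # Accept/reject decision on image quality with an RBF-kernel SVM. Simulated features.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(1)
      n = 400
      quality_ok = rng.integers(0, 2, n)                        # 1 = acceptable image
      features = rng.normal(size=(n, 3)) + quality_ok[:, None]  # three quality features (simulated)

      X_tr, X_te, y_tr, y_te = train_test_split(features, quality_ok, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      print("held-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))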

  11. A CRITICAL ASSESSMENT OF BIODOSIMETRY METHODS FOR LARGE-SCALE INCIDENTS

    PubMed Central

    Swartz, Harold M.; Flood, Ann Barry; Gougelet, Robert M.; Rea, Michael E.; Nicolalde, Roberto J.; Williams, Benjamin B.

    2014-01-01

    Recognition is growing regarding the possibility that terrorism or large-scale accidents could result in potential radiation exposure of hundreds of thousands of people and that the present guidelines for evaluation after such an event are seriously deficient. Therefore, there is a great and urgent need for after-the-fact biodosimetric methods to estimate radiation dose. To accomplish this goal, the dose estimates must be at the individual level, timely, accurate, and plausibly obtained in large-scale disasters. This paper evaluates current biodosimetry methods, focusing on their strengths and weaknesses in estimating human radiation exposure in large-scale disasters at three stages. First, the authors evaluate biodosimetry’s ability to determine which individuals did not receive a significant exposure so they can be removed from the acute response system. Second, biodosimetry’s capacity to classify those initially assessed as needing further evaluation into treatment-level categories is assessed. Third, biodosimetry’s ability to guide treatment, both short- and long-term, is reviewed. The authors compare biodosimetric methods that are based on physical vs. biological parameters and evaluate the features of current dosimeters (capacity, speed and ease of getting information, and accuracy) to determine which are most useful in meeting patients’ needs at each of the different stages. Results indicate that the biodosimetry methods differ in their applicability to the three different stages, and that combining physical and biological techniques may sometimes be most effective. In conclusion, biodosimetry techniques have different properties, and knowledge of their properties for meeting the different needs of the different stages will result in their most effective use in a nuclear disaster mass-casualty event. PMID:20065671

  12. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  13. The Effect of Schooling and Ability on Achievement Test Scores. NBER Working Paper Series.

    ERIC Educational Resources Information Center

    Hansen, Karsten; Heckman, James J.; Mullen, Kathleen J.

    This study developed two methods for estimating the effect of schooling on achievement test scores that control for the endogeneity of schooling by postulating that both schooling and test scores are generated by a common unobserved latent ability. The methods were applied to data on schooling and test scores. Estimates from the two methods are in…

  14. Brief Report: Use of DQ for Estimating Cognitive Ability in Young Children with Autism

    ERIC Educational Resources Information Center

    Delmolino, Lara M.

    2006-01-01

    The utility of Developmental Quotients (DQ) from the Psychoeducational Profile--Revised (PEP-R) to estimate cognitive ability in young children with autism was assessed. DQ scores were compared to scores from the Stanford-Binet Intelligence Scales--Fourth Edition (SB-FE) for 27 preschool students with autism. Overall and domain DQ's on the PEP-R…

  15. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    PubMed

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
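
    Because the two errors are uncorrelated, the combined estimate is the inverse-variance-weighted average of the two methods, and its error variance is the harmonic combination of the individual variances. The sketch below reproduces the reported combined standard deviation of about 0.79 years from the published SDs of 0.97 (hand) and 1.35 (third molar) years; the individual age estimates themselves are made up for illustration.

      # Inverse-variance weighting of two uncorrelated age estimates.
      # Only the two standard deviations come from the abstract; the ages are hypothetical.
      import math

      def combine(est1, sd1, est2, sd2):
          w1, w2 = 1 / sd1**2, 1 / sd2**2
          combined = (w1 * est1 + w2 * est2) / (w1 + w2)   # weighted average of the estimates
          combined_sd = math.sqrt(1.0 / (w1 + w2))         # SD of the combined error
          return combined, combined_sd

      age, sd = combine(est1=17.2, sd1=0.97, est2=16.6, sd2=1.35)   # ages are made up
      print(f"combined estimate: {age:.1f} y, SD of error: {sd:.2f} y")  # SD comes out near 0.79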

  16. Estimating spatiotemporal distribution of PM1 concentrations in China with satellite remote sensing, meteorology, and land use information.

    PubMed

    Chen, Gongbo; Knibbs, Luke D; Zhang, Wenyi; Li, Shanshan; Cao, Wei; Guo, Jianping; Ren, Hongyan; Wang, Boguang; Wang, Hao; Williams, Gail; Hamm, N A S; Guo, Yuming

    2018-02-01

    PM1 might be more hazardous than PM2.5 (particulate matter with an aerodynamic diameter ≤1 μm and ≤2.5 μm, respectively). However, studies on PM1 concentrations and their health effects are limited due to a lack of PM1 monitoring data. To estimate spatial and temporal variations of PM1 concentrations in China during 2005-2014 using satellite remote sensing, meteorology, and land use information. Two types of Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 aerosol optical depth (AOD) data, Dark Target (DT) and Deep Blue (DB), were combined. A generalised additive model (GAM) was developed to link ground-monitored PM1 data with AOD data and other spatial and temporal predictors (e.g., urban cover, forest cover and calendar month). A 10-fold cross-validation was performed to assess the predictive ability. The results of 10-fold cross-validation showed that the R² and Root Mean Squared Error (RMSE) for monthly prediction were 71% and 13.0 μg/m³, respectively. For seasonal prediction, the R² and RMSE were 77% and 11.4 μg/m³, respectively. The predicted annual mean concentration of PM1 across China was 26.9 μg/m³. The PM1 level was highest in winter and lowest in summer. Generally, PM1 levels across China did not change substantially during the past decade. Regarding heavily polluted regions, PM1 levels increased substantially in south-western Hebei and the Beijing-Tianjin region. A GAM with satellite-retrieved AOD, meteorology, and land use information has high predictive ability for estimating ground-level PM1. Ambient PM1 reached high levels in China during the past decade. The estimated results can be applied to evaluate the health effects of PM1. Copyright © 2017 Elsevier Ltd. All rights reserved.
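
    A minimal sketch of such a GAM, written with the pyGAM package (an assumption; the software used in the original study is not specified here) and with simulated stand-ins for the AOD, meteorological and land-use predictors.

      # GAM linking ground PM1 to AOD, temperature, urban cover and calendar month.
      # All predictors and the "ground truth" are simulated placeholders.
      import numpy as np
      from pygam import LinearGAM, s, f

      rng = np.random.default_rng(2)
      n = 1000
      aod = rng.uniform(0.1, 1.5, n)
      temp = rng.uniform(-5, 35, n)
      urban = rng.uniform(0, 1, n)
      month = rng.integers(1, 13, n)
      pm1 = (20 * aod + 0.3 * temp + 10 * urban
             + 3 * np.sin(month / 12 * 2 * np.pi) + rng.normal(0, 5, n))

      X = np.column_stack([aod, temp, urban, month])
      gam = LinearGAM(s(0) + s(1) + s(2) + f(3)).fit(X, pm1)   # f(3): calendar month as a factor

      pred = gam.predict(X)
      r2 = 1 - np.sum((pm1 - pred) ** 2) / np.sum((pm1 - pm1.mean()) ** 2)
      print("in-sample R2:", round(r2, 3))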

  17. Biopsychosocial Characteristics of Community-dwelling Older Adults with Limited Ability to Walk ¼ Mile

    PubMed Central

    Hardy, Susan E.; McGurl, David J.; Studenski, Stephanie A.; Degenholtz, Howard B.

    2010-01-01

    Objectives To establish nationally representative estimates of the prevalence of self-reported difficulty and inability to walk ¼ mile among older adults and to identify the characteristics independently associated with difficulty or inability to walk ¼ mile. Design Cross-sectional analysis of data from the 2003 Cost and Use Medicare Current Beneficiary Survey. Setting Community. Participants 9563 community-dwelling Medicare beneficiaries aged 65 years or older, representing an estimated total population of 34.2 million older adults. Measurements Self-reported ability to walk ¼ mile, sociodemographics, chronic conditions, body mass index, smoking, and functional status. Results In 2003, an estimated 9.5 million aged Medicare beneficiaries had difficulty walking ¼ mile and 5.9 million were unable. Among the 20.2 million older adults with no difficulty in basic or instrumental activities of daily living (ADL), an estimated 4.3 million (21%) had limited ability to walk ¼ mile. Having difficulty or being unable to walk ¼ mile was independently associated with older age, female sex, non-Hispanic ethnicity, lower educational level, Medicaid entitlement, most chronic medical conditions, current smoking, and being overweight or obese. Conclusion Almost half of older adults, and 20% of those reporting no ADL limitations, report limited ability to walk ¼ mile. Among functionally independent older adults, reported ability to walk ¼ mile can identify vulnerable older adults with greater medical problems and fewer resources, and may be a valuable clinical marker in planning their care. Future work is needed to determine the association between ¼ mile walk ability and subsequent functional decline and healthcare utilization. PMID:20210817

  18. Absolute probability estimates of lethal vessel strikes to North Atlantic right whales in Roseway Basin, Scotian Shelf.

    PubMed

    van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T

    2012-10-01

    Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.

  19. CYGNSS Surface Wind Observations and Surface Flux Estimates within Low-Latitude Extratropical Cyclones

    NASA Astrophysics Data System (ADS)

    Crespo, J.; Posselt, D. J.

    2017-12-01

    The Cyclone Global Navigation Satellite System (CYGNSS), launched in December 2016, aims to improve estimates of surface wind speeds over the tropical oceans. While CYGNSS's core mission is to provide better estimates of surface winds within the core of tropical cyclones, previous research has shown that the constellation, with its orbital inclination of 35°, also has the ability to observe numerous extratropical cyclones that form in the lower latitudes. Along with its high spatial and temporal resolution, CYGNSS can provide new insights into how extratropical cyclones develop and evolve, especially in the presence of thick clouds and precipitation. We will demonstrate this by presenting case studies of multiple extratropical cyclones observed by CYGNSS early in its mission in both the Northern and Southern Hemispheres. By using the improved estimates of surface wind speeds from CYGNSS, we can obtain better estimates of surface latent and sensible heat fluxes within and around extratropical cyclones. Surface heat fluxes, driven by surface winds and strong vertical gradients of water vapor and temperature, play a key role in marine cyclogenesis as they increase instability within the boundary layer and may contribute to extreme marine cyclogenesis. In the past, it has been difficult to estimate surface heat fluxes from spaceborne instruments, as these fluxes cannot be observed directly from space, and deficiencies in spatial coverage and attenuation from clouds and precipitation lead to inaccurate estimates of surface flux components, such as surface wind speeds. While CYGNSS only contributes estimates of surface wind speeds, we can combine these data with reanalysis and other satellite data to provide improved estimates of surface sensible and latent heat fluxes within and around extratropical cyclones and throughout the entire CYGNSS mission.

  20. Estimating drought risk across Europe from reported drought impacts, hazard indicators and vulnerability factors

    NASA Astrophysics Data System (ADS)

    Blauhut, V.; Stahl, K.; Stagge, J. H.; Tallaksen, L. M.; De Stefano, L.; Vogt, J.

    2015-12-01

    Drought is one of the most costly natural hazards in Europe. Due to its complexity, drought risk, the combination of the natural hazard and societal vulnerability, is difficult to define and challenging to detect and predict, as the impacts of drought are very diverse, covering the breadth of socioeconomic and environmental systems. Pan-European maps of drought risk could inform the elaboration of guidelines and policies to address its documented severity and impact across borders. This work (1) tests the capability of commonly applied hazard indicators and vulnerability factors to predict annual drought impact occurrence for different sectors and macro regions in Europe and (2) combines information on past drought impacts, drought hazard indicators, and vulnerability factors into estimates of drought risk at the pan-European scale. This "hybrid approach" bridges the gap between traditional vulnerability assessment and probabilistic impact forecast in a statistical modelling framework. Multivariable logistic regression was applied to predict the likelihood of impact occurrence on an annual basis for particular impact categories and European macro regions. The results indicate sector- and macro-region-specific sensitivities of hazard indicators, with the Standardised Precipitation Evapotranspiration Index for a twelve-month aggregation period (SPEI-12) as the overall best hazard predictor. Vulnerability factors have only limited ability to predict drought impacts as single predictors, with information about land use and water resources as the best vulnerability-based predictors. The application of the "hybrid approach" revealed strong regional (NUTS combo level) and sector-specific differences in drought risk across Europe. The majority of the best predictor combinations rely on a combination of SPEI for shorter and longer aggregation periods, and a combination of information on land use and water resources. The added value of integrating regional vulnerability information with drought risk prediction could be proven. Thus, the study contributes to the overall understanding of drivers of drought impacts, current practice of drought indicator selection for specific applications, and drought risk assessment.

  1. Estimating drought risk across Europe from reported drought impacts, drought indices, and vulnerability factors

    NASA Astrophysics Data System (ADS)

    Blauhut, Veit; Stahl, Kerstin; Stagge, James Howard; Tallaksen, Lena M.; De Stefano, Lucia; Vogt, Jürgen

    2016-07-01

    Drought is one of the most costly natural hazards in Europe. Due to its complexity, drought risk, meant as the combination of the natural hazard and societal vulnerability, is difficult to define and challenging to detect and predict, as the impacts of drought are very diverse, covering the breadth of socioeconomic and environmental systems. Pan-European maps of drought risk could inform the elaboration of guidelines and policies to address its documented severity and impact across borders. This work tests the capability of commonly applied drought indices and vulnerability factors to predict annual drought impact occurrence for different sectors and macro regions in Europe and combines information on past drought impacts, drought indices, and vulnerability factors into estimates of drought risk at the pan-European scale. This hybrid approach bridges the gap between traditional vulnerability assessment and probabilistic impact prediction in a statistical modelling framework. Multivariable logistic regression was applied to predict the likelihood of impact occurrence on an annual basis for particular impact categories and European macro regions. The results indicate sector- and macro-region-specific sensitivities of drought indices, with the Standardized Precipitation Evapotranspiration Index (SPEI) for a 12-month accumulation period as the overall best hazard predictor. Vulnerability factors have only limited ability to predict drought impacts as single predictors, with information about land use and water resources being the best vulnerability-based predictors. The application of the hybrid approach revealed strong regional and sector-specific differences in drought risk across Europe. The majority of the best predictor combinations rely on a combination of SPEI for shorter and longer accumulation periods, and a combination of information on land use and water resources. The added value of integrating regional vulnerability information with drought risk prediction could be proven. Thus, the study contributes to the overall understanding of drivers of drought impacts, appropriateness of drought indices selection for specific applications, and drought risk assessment.
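
    The core of the hybrid approach is a multivariable logistic regression of annual impact occurrence on a drought index plus vulnerability factors. The sketch below shows that structure with scikit-learn on simulated data; the predictor names, coefficients and sample size are hypothetical and only illustrate the model form, not the study's fitted model.

      # Multivariable logistic regression: annual drought-impact occurrence ~ SPEI-12 + vulnerability.
      # All data below are simulated placeholders.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n = 300                                     # region-year combinations (made up)
      spei12 = rng.normal(0, 1, n)                # negative = drier than normal
      land_use = rng.uniform(0, 1, n)             # e.g. share of drought-sensitive land use (made up)
      water_res = rng.uniform(0, 1, n)            # water-resource availability index (made up)
      logit = -1.5 * spei12 + 1.0 * land_use - 1.2 * water_res
      impact = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = impact reported in that year

      X = np.column_stack([spei12, land_use, water_res])
      model = LogisticRegression().fit(X, impact)
      print("AUC:", round(roc_auc_score(impact, model.predict_proba(X)[:, 1]), 3))
      print("coefficients (SPEI-12, land use, water resources):", model.coef_.round(2))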

  2. Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics

    NASA Technical Reports Server (NTRS)

    Kawa, S. Randolph; Stolarski, Richard S.; Douglass, Anne R.; Newman, Paul A.

    2008-01-01

    Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding of this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluation are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.
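
    The uncertainty-propagation strategy can be sketched as a simple Monte Carlo loop: each rate constant is perturbed log-normally within its evaluated uncertainty factor and the model is re-run. The box model below is a stand-in function with illustrative rate values and weights, not the actual Antarctic chemistry scheme.

      # Monte Carlo propagation of rate-constant uncertainty through a toy "box model".
      # Rate values, uncertainty factors and the loss function are illustrative only.
      import numpy as np

      rng = np.random.default_rng(4)
      rates = {"ClO+ClO+M": 2.0e-32, "ClO+BrO": 2.9e-12, "Cl+O3": 1.2e-11}   # nominal values (illustrative)
      unc_factor = {"ClO+ClO+M": 1.4, "ClO+BrO": 1.25, "Cl+O3": 1.15}        # 1-sigma factors (illustrative)

      def ozone_loss(k):
          # stand-in for the box model: a weighted sum of the catalytic-cycle rates
          return 3e31 * k["ClO+ClO+M"] + 2e11 * k["ClO+BrO"] + 1e10 * k["Cl+O3"]

      samples = []
      for _ in range(5000):
          perturbed = {name: k * unc_factor[name] ** rng.normal() for name, k in rates.items()}
          samples.append(ozone_loss(perturbed))

      samples = np.array(samples)
      print(f"ozone-loss spread (2.5th-97.5th percentile): {np.percentile(samples, 2.5):.3g} "
            f"to {np.percentile(samples, 97.5):.3g} (arbitrary units)")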

  3. A combined LS-SVM & MLR QSAR workflow for predicting the inhibition of CXCR3 receptor by quinazolinone analogs.

    PubMed

    Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Igglessi-Markopoulou, Olga; Kollias, George

    2010-05-01

    A novel QSAR workflow is constructed that combines MLR with LS-SVM classification techniques for the identification of quinazolinone analogs as "active" or "non-active" CXCR3 antagonists. The accuracy of the LS-SVM classification technique for the training set and the test set was 100% and 90%, respectively. For the "active" analogs, a validated MLR QSAR model accurately estimates their IP-10 IC50 inhibition values. The accuracy of the QSAR model (R² = 0.80) is illustrated using various evaluation techniques, such as the leave-one-out procedure (R²_LOO = 0.67) and validation through an external test set (R²_pred = 0.78). The key conclusion of this study is that the selected molecular descriptors, the Highest Occupied Molecular Orbital energy (HOMO), the principal moments of inertia PMIX and PMIZ, the Polar Surface Area (PSA), the presence of a triple bond (PTrplBnd), and the first-order Kier shape descriptor (¹κ), demonstrate discriminatory and pharmacophore abilities.

  4. Monitoring hand, foot and mouth disease by combining search engine query data and meteorological factors.

    PubMed

    Huang, Da-Cang; Wang, Jin-Feng

    2018-01-15

    Hand, foot and mouth disease (HFMD) has been recognized as a significant public health threat and poses a tremendous challenge to disease control departments. To date, the relationship between meteorological factors and HFMD has been documented, and public interest in a disease has been shown to be trackable via the Internet. However, no study has explored the combination of these two factors in the monitoring of HFMD. Therefore, the main aim of this study was to develop an effective monitoring model of HFMD in Guangzhou, China, by utilizing historical HFMD cases, Internet-based search engine query data and meteorological factors. To this end, a case study was conducted in Guangzhou, using a network-based generalized additive model (GAM) including all factors related to HFMD. Three other models were also constructed using some of the variables for comparison. The results suggested that the model considering all of the related factors showed the best estimation ability. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A necessarily complex model to explain the biogeography of the amphibians and reptiles of Madagascar.

    PubMed

    Brown, Jason L; Cameron, Alison; Yoder, Anne D; Vences, Miguel

    2014-10-09

    Pattern and process are inextricably linked in biogeographic analyses: though we can observe pattern, we must infer process. Inferences of process are often based on ad hoc comparisons using a single spatial predictor. Here, we present an alternative approach that uses mixed-spatial models to measure the predictive potential of combinations of hypotheses. Biodiversity patterns are estimated from 8,362 occurrence records from 745 species of Malagasy amphibians and reptiles. By incorporating 18 spatially explicit predictions of 12 major biogeographic hypotheses, we show that mixed models greatly improve our ability to explain the observed biodiversity patterns. We conclude that patterns are influenced by a combination of diversification processes rather than by a single predominant mechanism. A 'one-size-fits-all' model does not exist. By developing a novel method for examining and synthesizing spatial parameters such as species richness, endemism and community similarity, we demonstrate the potential of these analyses for understanding the diversification history of Madagascar's biota.

  6. Mathematical Ability and Socio-Economic Background: IRT Modeling to Estimate Genotype by Environment Interaction.

    PubMed

    Schwabe, Inga; Boomsma, Dorret I; van den Berg, Stéphanie M

    2017-12-01

    Genotype by environment interaction in behavioral traits may be assessed by estimating the proportion of variance that is explained by genetic and environmental influences conditional on a measured moderating variable, such as a known environmental exposure. Behavioral traits of interest are often measured by questionnaires and analyzed as sum scores on the items. However, statistical results on genotype by environment interaction based on sum scores can be biased due to the properties of a scale. This article presents a method that makes it possible to analyze the actually observed (phenotypic) item data rather than a sum score by simultaneously estimating the genetic model and an item response theory (IRT) model. In the proposed model, the estimation of genotype by environment interaction is based on an alternative parametrization that is uniquely identified and therefore to be preferred over standard parametrizations. A simulation study shows good performance of our method compared to analyzing sum scores in terms of bias. Next, we analyzed data of 2,110 12-year-old Dutch twin pairs on mathematical ability. Genetic models were evaluated and genetic and environmental variance components estimated as a function of a family's socio-economic status (SES). Results suggested that common environmental influences are less important in creating individual differences in mathematical ability in families with a high SES than in creating individual differences in mathematical ability in twin pairs with a low or average SES.

  7. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contribute toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high- but not low-contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid-level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround-masking results may aid understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  8. Quantifying enzymatic lysis: estimating the combined effects of chemistry, physiology and physics.

    PubMed

    Mitchell, Gabriel J; Nelson, Daniel C; Weitz, Joshua S

    2010-10-04

    The number of microbial pathogens resistant to antibiotics continues to increase even as the rate of discovery and approval of new antibiotic therapeutics steadily decreases. Many researchers have begun to investigate the therapeutic potential of naturally occurring lytic enzymes as an alternative to traditional antibiotics. However, direct characterization of lytic enzymes using techniques based on synthetic substrates is often difficult because lytic enzymes bind to the complex superstructure of intact cell walls. Here we present a new standard for the analysis of lytic enzymes based on turbidity assays which allow us to probe the dynamics of lysis without preparing a synthetic substrate. The challenge in the analysis of these assays is to infer the microscopic details of lysis from macroscopic turbidity data. We propose a model of enzymatic lysis that integrates the chemistry responsible for bond cleavage with the physical mechanisms leading to cell wall failure. We then present a solution to an inverse problem in which we estimate reaction rate constants and the heterogeneous susceptibility to lysis among target cells. We validate our model given simulated and experimental turbidity assays. The ability to estimate reaction rate constants for lytic enzymes will facilitate their biochemical characterization and development as antimicrobial therapeutics.
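
    A toy version of the inverse problem is to fit a lysis model to optical-density (turbidity) data by nonlinear least squares. The sketch below uses a first-order lysis rate plus a lysis-resistant fraction; this is an illustrative simplification of the authors' mechanistic model, fitted to simulated data.

      # Fit a simple lysis model to simulated turbidity data to recover an effective lysis
      # rate and a resistant fraction. Model form and data are illustrative only.
      import numpy as np
      from scipy.optimize import curve_fit

      def turbidity(t, od0, k, resistant_frac):
          # susceptible cells lyse at first-order rate k; a fraction never lyses
          return od0 * (resistant_frac + (1 - resistant_frac) * np.exp(-k * t))

      t = np.linspace(0, 60, 31)                                   # minutes
      rng = np.random.default_rng(5)
      od_obs = turbidity(t, 0.8, 0.12, 0.15) + rng.normal(0, 0.01, t.size)

      (od0, k, frac), _ = curve_fit(turbidity, t, od_obs, p0=[1.0, 0.05, 0.1],
                                    bounds=([0, 0, 0], [2, 1, 1]))
      print(f"fitted lysis rate k = {k:.3f} /min, resistant fraction = {frac:.2f}")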

  9. Effects of the combination of metyrapone and oxazepam on cocaine craving and cocaine taking: a double-blind, randomized, placebo-controlled pilot study.

    PubMed

    Kablinger, Anita S; Lindner, Marie A; Casso, Stephanie; Hefti, Franz; DeMuth, George; Fox, Barbara S; McNair, Lindsay A; McCarthy, Bruce G; Goeders, Nicholas E

    2012-07-01

    Although cocaine dependence affects an estimated 1.6 million people in the USA, there are currently no medications approved for the treatment of this disorder. Experiments performed in animal models have demonstrated that inhibitors of the stress response effectively reduce intravenous cocaine self-administration. This exploratory, double-blind, placebo-controlled study was designed to assess the safety and efficacy of combinations of the cortisol synthesis inhibitor metyrapone, and the benzodiazepine oxazepam, in 45 cocaine-dependent individuals. The subjects were randomized to a total daily dose of 500 mg metyrapone/20 mg oxazepam (low dose), a total daily dose of 1500 mg metyrapone/20 mg oxazepam (high dose), or placebo for 6 weeks of treatment. The outcome measures were a reduction in cocaine craving and associated cocaine use as determined by quantitative measurements of the cocaine metabolite benzoylecgonine (BE) in urine at all visits. Of the randomized subjects, 49% completed the study. The combination of metyrapone and oxazepam was well tolerated and tended to reduce cocaine craving and cocaine use, with significant reductions at several time points when controlling for baseline scores. These data suggest that further assessments of the ability of the metyrapone and oxazepam combination to support cocaine abstinence in cocaine-dependent subjects are warranted.

  10. Estimation and comparison of potential runoff-contributing areas in Kansas using topographic, soil, and land-use information

    USGS Publications Warehouse

    Juracek, Kyle E.

    2000-01-01

    Digital topographic, soil, and land-use information was used to estimate potential runoff-contributing areas in Kansas. The results were used to compare 91 selected subbasins representing slope, soil, land-use, and runoff variability across the State. Potential runoff-contributing areas were estimated collectively for the processes of infiltration-excess and saturation-excess overland flow using a set of environmental conditions that represented, in relative terms, very high, high, moderate, low, very low, and extremely low potential for runoff. Various rainfall-intensity and soil-permeability values were used to represent the threshold conditions at which infiltration-excess overland flow may occur. Antecedent soil-moisture conditions and a topographic wetness index (TWI) were used to represent the threshold conditions at which saturation-excess overland flow may occur. Land-use patterns were superimposed over the potential runoff-contributing areas for each set of environmental conditions. Results indicated that the very low potential-runoff conditions (soil permeability less than or equal to 1.14 inches per hour and TWI greater than or equal to 14.4) provided the best statewide ability to quantitatively distinguish subbasins as having relatively high, moderate, or low potential for runoff on the basis of the percentage of potential runoff-contributing areas within each subbasin. The very low and (or) extremely low potential-runoff conditions (soil permeability less than or equal to 0.57 inch per hour and TWI greater than or equal to 16.3) provided the best ability to qualitatively compare potential for runoff among areas within individual subbasins. The majority of subbasins with relatively high potential for runoff are located in the eastern half of the State where soil permeability is generally less and precipitation is typically greater. The ability to distinguish subbasins as having relatively high, moderate, or low potential for runoff was possible mostly due to the variability of soil permeability across the State. The spatial distribution of potential contributing areas, in combination with the superimposed land-use patterns, may be used to help identify and prioritize subbasin areas for the implementation of best-management practices to manage runoff and meet Federally mandated total maximum daily load requirements.
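
    The classification step can be sketched as a simple raster operation: compute the topographic wetness index, TWI = ln(a / tan(beta)), and flag cells meeting the quoted "very low potential-runoff" thresholds (soil permeability <= 1.14 in/hr for infiltration excess, TWI >= 14.4 for saturation excess). The grids below are random placeholders rather than the statewide digital data used in the study.

      # Flag potential runoff-contributing cells from permeability and TWI grids.
      # The grids are synthetic stand-ins for the statewide topographic and soil layers.
      import numpy as np

      rng = np.random.default_rng(6)
      permeability = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 100))      # inches per hour
      upslope_area = rng.lognormal(mean=6.0, sigma=2.0, size=(100, 100))      # specific upslope area
      slope = np.clip(rng.normal(0.05, 0.03, size=(100, 100)), 0.001, None)   # tan(beta)

      twi = np.log(upslope_area / slope)                   # TWI = ln(a / tan(beta))
      contributing = (permeability <= 1.14) | (twi >= 14.4)
      print(f"potential runoff-contributing area: {100 * contributing.mean():.1f}% of cells")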

  11. An estimate of the U.S. government's undercount of nonfatal occupational injuries and illnesses in agriculture.

    PubMed

    Leigh, J Paul; Du, Juan; McCurdy, Stephen A

    2014-04-01

    Debate surrounds the accuracy of the U.S. government's estimates of job-related injuries and illnesses in agriculture. Whereas studies have attempted to estimate the undercount for all industries combined, none have specifically addressed agriculture. Data were drawn from the U.S. government's premier sources for workplace injuries and illnesses and employment: the Bureau of Labor Statistics databanks for the Survey of Occupational Injuries and Illnesses (SOII), the Quarterly Census of Employment and Wages, and the Current Population Survey. Estimates were constructed using transparent assumptions; for example, that the rate (cases per employee) of injuries and illnesses on small farms was the same as on large farms (an assumption we altered in sensitivity analysis). We estimated 74,932 injuries and illnesses for crop farms and 68,504 for animal farms, totaling 143,436 cases in 2011. We estimated that SOII missed 73.7% of crop farm cases and 81.9% of animal farm cases, for an average of 77.6% for all agriculture. Sensitivity analyses suggested that the percent missed ranged from 61.5% to 88.3% for all agriculture. We estimate considerable undercounting of nonfatal injuries and illnesses in agriculture and believe the undercounting is larger than in any other industry. Reasons include SOII's explicit exclusion of employees on small farms and of farmers and family members, and the Quarterly Census of Employment and Wages' undercount of employment. Undercounting limits our ability to identify and address occupational health problems in agriculture, affecting both workers and society. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. An improved null model for assessing the net effects of multiple stressors on communities.

    PubMed

    Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D

    2018-01-01

    Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
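
    The difference between the two kinds of null model can be seen in a toy simulation in which two stressors act purely additively on every species: a compositional null aggregates the per-species additive predictions and therefore matches the observed community response, whereas a community-property additive null can deviate once abundances are floored at zero. The species counts and effect sizes below are illustrative, not taken from the paper's simulations.

      # Toy comparison of a community-property additive null with a compositional null
      # for two stressors whose per-species effects are purely additive. Illustrative values.
      import numpy as np

      rng = np.random.default_rng(7)
      n_species = 20
      control = rng.uniform(2, 30, n_species)          # per-species biomass in the control
      effect_a = rng.normal(-6, 5, n_species)          # additive effect of stressor A on each species
      effect_b = rng.normal(-5, 5, n_species)          # additive effect of stressor B on each species

      def community_biomass(abundances):
          return np.clip(abundances, 0, None).sum()    # abundances cannot fall below zero

      observed = community_biomass(control + effect_a + effect_b)       # truly additive outcome

      # community-property null: add the changes in total biomass caused by each stressor alone
      property_null = (community_biomass(control)
                       + (community_biomass(control + effect_a) - community_biomass(control))
                       + (community_biomass(control + effect_b) - community_biomass(control)))

      # compositional null: add effects species by species first, then aggregate
      compositional_null = community_biomass(control + effect_a + effect_b)

      print(f"observed {observed:.1f}  property null {property_null:.1f}  compositional null {compositional_null:.1f}")
      # The property-based null can differ from the observed additive outcome because of the
      # zero floor on abundances; the compositional null matches it by construction.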

  13. Using digital soil maps to infer edaphic affinities of plant species in Amazonia: Problems and prospects.

    PubMed

    Moulatlet, Gabriel Massaine; Zuquim, Gabriela; Figueiredo, Fernando Oliveira Gouvêa; Lehtonen, Samuli; Emilio, Thaise; Ruokolainen, Kalle; Tuomisto, Hanna

    2017-10-01

    Amazonia combines semi-continental size with difficult access, so both current ranges of species and their ability to cope with environmental change have to be inferred from sparse field data. Although efficient techniques for modeling species distributions on the basis of a small number of species occurrences exist, their success depends on the availability of relevant environmental data layers. Soil data are important in this context, because soil properties have been found to determine plant occurrence patterns in Amazonian lowlands at all spatial scales. Here we evaluate the potential for this purpose of three digital soil maps that are freely available online: SOTERLAC, HWSD, and SoilGrids. We first tested how well they reflect local soil cation concentration as documented with 1,500 widely distributed soil samples. We found that measured soil cation concentration differed by up to two orders of magnitude between sites mapped into the same soil class. The best map-based predictor of local soil cation concentration was obtained with a regression model combining soil classes from HWSD with cation exchange capacity (CEC) from SoilGrids. Next, we evaluated to what degree the known edaphic affinities of thirteen plant species (as documented with field data from 1,200 of the soil sample sites) can be inferred from the soil maps. The species segregated clearly along the soil cation concentration gradient in the field, but only partially along the model-estimated cation concentration gradient, and hardly at all along the mapped CEC gradient. The main problems reducing the predictive ability of the soil maps were insufficient spatial resolution and/or georeferencing errors combined with thematic inaccuracy and absence of the most relevant edaphic variables. Addressing these problems would provide better models of the edaphic environment for ecological studies in Amazonia.
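
    As a rough illustration of the best-performing map-based predictor described above (a regression combining mapped soil class with mapped CEC), the sketch below fits an ordinary least-squares model on a handful of invented samples; the class labels, CEC values, and cation concentrations are placeholders, not data from the study.

    ```python
    import numpy as np
    import pandas as pd

    # Invented stand-in for the field samples: mapped soil class (HWSD-like) and mapped
    # CEC (SoilGrids-like) used to predict measured log cation concentration.
    df = pd.DataFrame({
        "soil_class": ["Ferralsol", "Acrisol", "Gleysol", "Ferralsol", "Acrisol", "Gleysol"],
        "cec":        [4.1, 7.5, 12.0, 3.8, 8.2, 11.1],
        "log_cation": [-0.9, 0.1, 0.8, -1.1, 0.3, 0.7],
    })

    dummies = pd.get_dummies(df["soil_class"], drop_first=True)   # categorical soil class
    X = np.column_stack([np.ones(len(df)), dummies.to_numpy(float), df["cec"].to_numpy()])
    y = df["log_cation"].to_numpy()

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # ordinary least squares
    predicted = X @ beta                            # map-based prediction of local cations
    ```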

  14. A Monte Carlo Simulation Investigating the Validity and Reliability of Ability Estimation in Item Response Theory with Speeded Computer Adaptive Tests

    ERIC Educational Resources Information Center

    Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.

    2010-01-01

    Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…

  15. Human papillomavirus DNA testing as an adjunct to cytology in cervical screening programs.

    PubMed

    Lörincz, Attila T; Richart, Ralph M

    2003-08-01

    Our objective was to review current large studies of human papillomavirus (HPV) DNA testing as an adjunct to the Papanicolaou test for cervical cancer screening programs. We analyzed 10 large screening studies that used the Hybrid Capture 2 test and 3 studies that used the polymerase chain reaction test in a manner that enabled reliable estimates of accuracy for detecting or predicting high-grade cervical intraepithelial neoplasia (CIN). Most studies allowed comparison of HPV DNA and Papanicolaou testing and estimates of the performance of Papanicolaou and HPV DNA as combined tests. The studies were selected on the basis of a sufficient number of cases of high-grade CIN and cancer to provide meaningful statistical values. Investigators had to demonstrate the ability to generate reasonably reliable Hybrid Capture 2 or polymerase chain reaction data that were either minimally biased by nature of study design or that permitted analytical techniques for addressing issues of study bias to be applied. Studies had to provide data for the calculation of test sensitivity, specificity, predictive values, odds ratios, relative risks, confidence intervals, and other relevant measures. Final data were abstracted directly from published articles or estimated from descriptive statistics presented in the articles. In some studies, new analyses were performed from raw data supplied by the principal investigators. We concluded that HPV DNA testing was a more sensitive indicator for prevalent high-grade CIN than either conventional or liquid cytology. A combination of HPV DNA and Papanicolaou testing had almost 100% sensitivity and negative predictive value. The specificity of the combined tests was slightly lower than the specificity of the Papanicolaou test alone, but this decrease could potentially be offset by greater protection from neoplastic progression and cost savings available from extended screening intervals. One "double-negative" HPV DNA and Papanicolaou test indicated better prognostic assurance against risk of future CIN 3 than 3 subsequent negative conventional Papanicolaou tests and may safely allow 3-year screening intervals for such low-risk women.
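
    For readers unfamiliar with the screening metrics discussed above, the sketch below computes sensitivity, specificity, PPV, and NPV for single tests and for an "either-test-positive" combination; all counts are invented and are not taken from the reviewed studies.

    ```python
    # Illustrative (invented) counts for cervical screening tests; the combined test is
    # called positive if either the HPV DNA test or the Papanicolaou test is positive.
    def test_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)           # positive predictive value
        npv = tn / (tn + fn)           # negative predictive value
        return sensitivity, specificity, ppv, npv

    # Single tests vs. the combination on the same hypothetical cohort of 1,000 women.
    print(test_metrics(tp=90, fp=40, fn=10, tn=860))    # HPV DNA alone
    print(test_metrics(tp=75, fp=20, fn=25, tn=880))    # Papanicolaou alone
    print(test_metrics(tp=99, fp=55, fn=1,  tn=845))    # combined: higher sensitivity/NPV,
                                                        # slightly lower specificity
    ```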

  16. Cognitive factors affecting children's nonsymbolic and symbolic magnitude judgment abilities: A latent profile analysis.

    PubMed

    Chew, Cindy S; Forte, Jason D; Reeve, Robert A

    2016-12-01

    Early math abilities are claimed to be linked to magnitude representation ability. Some claim that nonsymbolic magnitude abilities scaffold the acquisition of symbolic (Arabic number) magnitude abilities and influence math ability. Others claim that symbolic magnitude abilities, and ipso facto math abilities, are independent of nonsymbolic abilities and instead depend on the ability to process number symbols (e.g., 2, 7). Currently, the issue of whether symbolic abilities are or are not related to nonsymbolic abilities, and the cognitive factors associated with nonsymbolic-symbolic relationships, remains unresolved. We suggest that different nonsymbolic-symbolic relationships reside within the general magnitude ability distribution and that different cognitive abilities are likely associated with these different relationships. We further suggest that the different nonsymbolic-symbolic relationships and cognitive abilities in combination differentially predict math abilities. To test these claims, we used latent profile analysis to identify nonsymbolic-symbolic judgment patterns of 124 children aged 5 to 7 years. We also assessed four cognitive factors (visuospatial working memory [VSWM], naming numbers, nonverbal IQ, and basic reaction time [RT]) and two math abilities (number transcoding and single-digit addition abilities). Four nonsymbolic-symbolic ability profiles were identified. Naming numbers, VSWM, and basic RT abilities were differentially associated with the different ability profiles and in combination differentially predicted math abilities. Findings show that different patterns of nonsymbolic-symbolic magnitude abilities can be identified and suggest that an adequate account of math development should specify the inter-relationship between cognitive factors and nonsymbolic-symbolic ability patterns. Copyright © 2016 Elsevier Inc. All rights reserved.
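
    The abstract does not specify how the latent profile analysis was run; as a rough analogue, the sketch below fits Gaussian mixtures to two standardized judgment scores and picks the number of profiles by BIC. The data, the two-score representation, and the use of scikit-learn's GaussianMixture are all assumptions made for illustration.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical standardized nonsymbolic and symbolic judgment scores for 124 children.
    rng = np.random.default_rng(0)
    scores = np.vstack([
        rng.normal([0.5, 0.5], 0.3, size=(40, 2)),     # invented profile 1
        rng.normal([-0.5, 0.6], 0.3, size=(40, 2)),    # invented profile 2
        rng.normal([-0.4, -0.6], 0.3, size=(44, 2)),   # invented profile 3
    ])

    # Choose the number of profiles by BIC, as is common practice in latent profile analysis.
    models = [GaussianMixture(n_components=k, random_state=0).fit(scores) for k in range(1, 6)]
    best = min(models, key=lambda m: m.bic(scores))
    profiles = best.predict(scores)                    # profile membership per child
    ```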

  17. Evaluation of the information content of long-term wastewater characteristics data in relation to activated sludge model parameters.

    PubMed

    Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash

    2017-03-01

    A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to obtain reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.

  18. Does feeling respected influence return to work? Cross-sectional study on sick-listed patients' experiences of encounters with social insurance office staff.

    PubMed

    Lynöe, Niels; Wessel, Maja; Olsson, Daniel; Alexanderson, Kristina; Helgesson, Gert

    2013-03-23

    Previous research shows that how patients perceive encounters with healthcare staff may affect their health and self-estimated ability to return to work. The aim of the present study was to explore long-term sick-listed patients' encounters with social insurance office staff and the impact of these encounters on self-estimated ability to return to work. A random sample of long-term sick-listed patients (n = 10,042) received a questionnaire containing questions about their experiences of positive and negative encounters and item lists specifying such experiences. Respondents were also asked whether the encounters made them feel respected or wronged and how they estimated the effect of these encounters on their ability to return to work. Statistical analysis was conducted using 95% confidence intervals (CI) for proportions, and attributable risk (AR) with 95% CI. The response rate was 58%. Encounter items strongly associated with feeling respected were, among others: listened to me, believed me, and answered my questions. Encounter items strongly associated with feeling wronged were, among others: did not believe me, doubted my condition, and questioned my motivation to work. Positive encounters facilitated patients' self-estimated ability to return to work [26.9% (CI: 22.1-31.7)]. This effect was significantly increased if the patients also felt respected [49.3% (CI: 47.5-51.1)]. Negative encounters impeded self-estimated ability to return to work [29.1% (CI: 24.6-33.6)]; when also feeling wronged return to work was significantly further impeded [51.3% (CI: 47.1-55.5)]. Long-term sick-listed patients find that their self-reported ability to return to work is affected by positive and negative encounters with social insurance office staff. This effect is further enhanced by feeling respected or wronged, respectively.
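
    The confidence intervals and attributable-risk figures quoted above can be reproduced in form (not in value) with standard Wald formulas; the sketch below uses invented counts and the common risk-difference definition of attributable risk, which may differ from the exact estimator used in the study.

    ```python
    import math

    def proportion_ci(successes, n, z=1.96):
        """Wald 95% CI for a proportion, as a rough analogue of the CIs reported."""
        p = successes / n
        se = math.sqrt(p * (1 - p) / n)
        return p, (p - z * se, p + z * se)

    def attributable_risk(p_exposed, n_exposed, p_unexposed, n_unexposed, z=1.96):
        """Risk difference with a Wald 95% CI (one common definition of AR)."""
        ar = p_exposed - p_unexposed
        se = math.sqrt(p_exposed * (1 - p_exposed) / n_exposed +
                       p_unexposed * (1 - p_unexposed) / n_unexposed)
        return ar, (ar - z * se, ar + z * se)

    # Hypothetical counts: return to work facilitated among respondents reporting
    # positive encounters, with vs. without also feeling respected.
    print(proportion_ci(269, 1000))
    print(attributable_risk(0.493, 3000, 0.269, 350))
    ```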

  19. Optical Communications Channel Combiner

    NASA Technical Reports Server (NTRS)

    Quirk, Kevin J.; Nguyen, Danh H.; Nguyen, Huy

    2012-01-01

    NASA has identified deep-space optical communications links as an integral part of a unified space communication network in order to provide data rates in excess of 100 Mb/s. The distances and limited power inherent in a deep-space optical downlink necessitate the use of photon-counting detectors and a power-efficient modulation such as pulse position modulation (PPM). For the output of each photodetector, whether from a separate telescope or a portion of the detection area, a communication receiver estimates a log-likelihood ratio for each PPM slot. To realize the full effective aperture of these receivers, their outputs must be combined prior to information decoding. A channel combiner was developed to synchronize the log-likelihood ratio (LLR) sequences of multiple receivers and then combine them into a single LLR sequence for information decoding. The channel combiner synchronizes the LLR sequences of up to three receivers and then combines these into a single LLR sequence for output. The channel combiner has three channel inputs, each of which takes as input a sequence of four-bit LLRs for each PPM slot in a codeword via a XAUI 10 Gb/s quad optical fiber interface. The cross-correlations between the channels' LLR time series are calculated and used to synchronize the sequences prior to combining. The output of the channel combiner is a sequence of four-bit LLRs for each PPM slot in a codeword via a XAUI 10 Gb/s quad optical fiber interface. The unit is controlled through a 1 Gb/s Ethernet UDP/IP interface. A deep-space optical communication link has not yet been demonstrated. This ground-station channel combiner was developed to demonstrate this capability and is unique in its ability to process such a signal.
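
    The synchronize-then-sum idea can be sketched in a few lines. The code below is an illustrative reconstruction, not the flight or ground-station implementation: it assumes equal-weight summation of real-valued LLRs and a simple brute-force cross-correlation search for the relative delay between two receivers.

    ```python
    import numpy as np

    def align_and_combine(llr_a, llr_b, max_lag=64):
        """Find the relative delay between two LLR streams by cross-correlation,
        align them, and sum slot by slot (toy sketch, two channels only)."""
        lags = range(-max_lag, max_lag + 1)

        def xcorr(lag):
            if lag >= 0:
                return np.dot(llr_a[lag:], llr_b[:len(llr_b) - lag])
            return np.dot(llr_a[:len(llr_a) + lag], llr_b[-lag:])

        best = max(lags, key=xcorr)                  # delay with maximum correlation
        if best >= 0:
            a, b = llr_a[best:], llr_b[:len(llr_b) - best]
        else:
            a, b = llr_a[:len(llr_a) + best], llr_b[-best:]
        return a + b                                 # combined LLR sequence

    # Synthetic test: the second receiver sees a delayed, noisier copy of the same signal.
    rng = np.random.default_rng(3)
    truth = rng.normal(size=4096)
    stream_a = truth + rng.normal(scale=1.0, size=4096)
    stream_b = np.roll(truth, 17) + rng.normal(scale=1.0, size=4096)
    combined = align_and_combine(stream_a, stream_b)
    ```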

  20. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations have been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes and operation of large-scale public works projects, and the volume of the published literature on this topic, clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.) to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e., model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the outputs of the SVM (i.e., the parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we have developed a problem formulation that avoids the linear increase in the number of the constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers for our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as the International Linear Collider (ILC), as well as for free electron lasers, such as the Linac Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters.
We strongly believe that, beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 has altered the small-business status of Pavilion and it no longer qualifies for Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable model-based fault estimation and correction system for particle accelerators and industrial plants feasible.

  1. Estimation of genetic parameters and breeding values across challenged environments to select for robust pigs.

    PubMed

    Herrero-Medrano, J M; Mathur, P K; ten Napel, J; Rashidi, H; Alexandri, P; Knol, E F; Mulder, H A

    2015-04-01

    Robustness is an important issue in the pig production industry. Since pigs from international breeding organizations have to withstand a variety of environmental challenges, selection of pigs with the inherent ability to sustain their productivity in diverse environments may be an economically feasible approach in the livestock industry. The objective of this study was to estimate genetic parameters and breeding values across different levels of environmental challenge load. The challenge load (CL) was estimated as the reduction in reproductive performance during different weeks of a year using 925,711 farrowing records from farms distributed worldwide. A wide range of levels of challenge, from favorable to unfavorable environments, was observed among farms with high CL values being associated with confirmed situations of unfavorable environment. Genetic parameters and breeding values were estimated in high- and low-challenge environments using a bivariate analysis, as well as across increasing levels of challenge with a random regression model using Legendre polynomials. Although heritability estimates of number of pigs born alive were slightly higher in environments with extreme CL than in those with intermediate levels of CL, the heritabilities of number of piglet losses increased progressively as CL increased. Genetic correlations among environments with different levels of CL suggest that selection in environments with extremes of low or high CL would result in low response to selection. Therefore, selection programs of breeding organizations that are commonly conducted under favorable environments could have low response to selection in commercial farms that have unfavorable environmental conditions. Sows that had experienced high levels of challenge at least once during their productive life were ranked according to their EBV. The selection of pigs using EBV ignoring environmental challenges or on the basis of records from only favorable environments resulted in a sharp decline in productivity as the level of challenge increased. In contrast, selection using the random regression approach resulted in limited change in productivity with increasing levels of challenge. Hence, we demonstrate that the use of a quantitative measure of environmental CL and a random regression approach can be comprehensively combined for genetic selection of pigs with enhanced ability to maintain high productivity in harsh environments.
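
    The random-regression (reaction-norm) part of the analysis can be illustrated through its covariate structure alone. The sketch below builds a Legendre polynomial basis over a standardized challenge-load scale and fits an ordinary least-squares reaction norm for one invented sow; the study itself estimates variance components and breeding values with a mixed model, which is not reproduced here.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # Hypothetical records for one sow: challenge load (CL) per farrowing and the
    # number of piglets born alive.
    cl = np.array([0.2, 0.5, 0.9, 1.4, 2.0, 2.6])
    born_alive = np.array([14.1, 13.8, 13.0, 12.1, 11.0, 9.8])

    x = 2 * (cl - cl.min()) / (cl.max() - cl.min()) - 1    # rescale CL to [-1, 1]
    order = 2                                              # quadratic Legendre basis
    basis = legendre.legvander(x, order)                   # columns: P0(x), P1(x), P2(x)

    coef, *_ = np.linalg.lstsq(basis, born_alive, rcond=None)
    reaction_norm = basis @ coef                           # expected performance across CL
    ```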

  2. The Effect of a Combined High-Intensity Plyometric and Speed Training Program on the Running and Jumping Ability of Male Handball Players

    PubMed Central

    Cherif, Monsef; Said, Mohamed; Chaatani, Sana; Nejlaoui, Olfa; Gomri, Daghbaji; Abdallah, Aouidet

    2012-01-01

    Purpose The aim of this study was to investigate the effect of a combined program including sprint repetitions and drop jump training in the same session on male handball players. Methods Twenty-two male handball players aged more than 20 years were assigned to 2 groups: an experimental group (n=11) and a control group (n=11). Selection was based on the variables “axis” and “lines”; goalkeepers were not included. The experimental group was subjected to 2 testing periods (test and retest) separated by 12 weeks of an additional combined plyometric and running speed training program. The control group performed the usual handball training. The testing period comprised, on the first day, a medical check, anthropometric measurements, and an incremental exercise test (the yo-yo intermittent recovery test). Two days later, participants performed the Repeated Sprint Ability test (RSA) and assessed jumping performance using 3 different events: squat jump (SJ), countermovement jump without arms (CMJ) and with arms (CMJA), and drop jump (DJ). At the end of the training period, participants again performed the repeated sprint ability test and the jumping performance tests. Results The conventional combined program improved the explosive force ability of handball players in CMJ (P=0.01), CMJA (P=0.01) and DJR (P=0.03); the changes were 2.78%, 2.42%, and 2.62%, respectively. No significant changes were noted in the experimental group's performance in the squat jump test or the drop jump with the left leg test. The training intervention also improved the running speed ability of the experimental group (P=0.003). No statistical differences were observed between lines or axes. Conclusion An additional training program combining sprint repetitions and vertical jumps in the same training session positively influences the jumping ability and the sprint ability of handball players. PMID:22461962

  3. The effect of a combined high-intensity plyometric and speed training program on the running and jumping ability of male handball players.

    PubMed

    Cherif, Monsef; Said, Mohamed; Chaatani, Sana; Nejlaoui, Olfa; Gomri, Daghbaji; Abdallah, Aouidet

    2012-03-01

    The aim of this study was to investigate the effect of a combined program including sprint repetitions and drop jump training in the same session on male handball players. Twenty-two male handball players aged more than 20 years were assigned to 2 groups: an experimental group (n=11) and a control group (n=11). Selection was based on the variables "axis" and "lines"; goalkeepers were not included. The experimental group was subjected to 2 testing periods (test and retest) separated by 12 weeks of an additional combined plyometric and running speed training program. The control group performed the usual handball training. The testing period comprised, on the first day, a medical check, anthropometric measurements, and an incremental exercise test (the yo-yo intermittent recovery test). Two days later, participants performed the Repeated Sprint Ability test (RSA) and assessed jumping performance using 3 different events: squat jump (SJ), countermovement jump without arms (CMJ) and with arms (CMJA), and drop jump (DJ). At the end of the training period, participants again performed the repeated sprint ability test and the jumping performance tests. The conventional combined program improved the explosive force ability of handball players in CMJ (P=0.01), CMJA (P=0.01) and DJR (P=0.03); the changes were 2.78%, 2.42%, and 2.62%, respectively. No significant changes were noted in the experimental group's performance in the squat jump test or the drop jump with the left leg test. The training intervention also improved the running speed ability of the experimental group (P=0.003). No statistical differences were observed between lines or axes. An additional training program combining sprint repetitions and vertical jumps in the same training session positively influences the jumping ability and the sprint ability of handball players.

  4. Large-Scale and Global Hydrology. Chapter 92

    NASA Technical Reports Server (NTRS)

    Rodell, Matthew; Beaudoing, Hiroko Kato; Koster, Randal; Peters-Lidard, Christa D.; Famiglietti, James S.; Lakshmi, Venkat

    2016-01-01

    Powered by the sun, water moves continuously between and through Earth's oceanic, atmospheric, and terrestrial reservoirs. It enables life, shapes Earth's surface, and responds to and influences climate change. Scientists measure various features of the water cycle using a combination of ground, airborne, and space-based observations, and seek to characterize it at multiple scales with the aid of numerical models. Over time our understanding of the water cycle and ability to quantify it have improved, owing to advances in observational capabilities, the extension of the data record, and increases in computing power and storage. Here we present some of the most recent estimates of global and continental/ocean-basin-scale water cycle stocks and fluxes and provide examples of modern numerical modeling systems and reanalyses. Further, we discuss prospects for predicting water cycle variability at seasonal and longer scales, which is complicated by a changing climate and direct human impacts related to water management and agriculture. Changes to the water cycle will be among the most obvious and important facets of climate change; thus, it is crucial that we continue to invest in our ability to monitor it.

  5. Practice Guidelines for Operative Performance Assessments.

    PubMed

    Williams, Reed G; Kim, Michael J; Dunnington, Gary L

    2016-12-01

    To provide recommended practice guidelines for assessing single operative performances and for combining results of operative performance assessments into estimates of overall operative performance ability. Operative performance is one defining characteristic of surgeons. Assessment of operative performance is needed to provide feedback with learning benefits to surgical residents in training and to assist in making progress decisions for residents. Operative performance assessment has been a focus of investigation over the past 20 years. This review is designed to integrate findings of this research into a set of recommended operative performance practices. Literature from surgery and from other pertinent research areas (psychology, education, business) was reviewed looking for evidence to inform practice guideline development. Guidelines were created along with a conceptual and scientific foundation for each guideline. Ten guidelines are provided for assessing individual operative performances and ten are provided for combining data from individual operative performances into overall judgments of operative performance ability. The practice guidelines organize available information to be immediately useful to program directors, to support surgical training, and to provide a conceptual framework upon which to build as the base of pertinent knowledge expands through future research and development efforts.

  6. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. All of these rainfall estimates simulated streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow superior to that of a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
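
    The dimensionality-reduction step can be illustrated on its own. The sketch below assumes the PyWavelets package and an arbitrary wavelet and decomposition level; it only shows how a rainfall series is reduced to a small set of approximation coefficients, not the DREAM(ZS) inference that would estimate them jointly with model parameters.

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed here as a convenient DWT implementation

    # Hypothetical hourly rainfall series (gamma-distributed noise as a stand-in).
    rng = np.random.default_rng(7)
    rainfall = rng.gamma(shape=0.3, scale=8.0, size=512)

    coeffs = pywt.wavedec(rainfall, "db4", level=4)
    approx = coeffs[0]                                   # low-order approximation coefficients
    print(len(rainfall), "values reduced to", len(approx), "coefficients to infer")

    # Reconstruct a smoothed series from the approximation alone by zeroing the details.
    reduced = [approx] + [np.zeros_like(c) for c in coeffs[1:]]
    rainfall_hat = pywt.waverec(reduced, "db4")[: len(rainfall)]
    ```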

  7. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.

  8. Availability of new drugs and Americans' ability to work.

    PubMed

    Lichtenberg, Frank R

    2005-04-01

    The objective of this work was the investigation of the extent to which the introduction of new drugs has increased society's ability to produce goods and services by increasing the number of hours worked per member of the working-age population. Econometric models of ability-to-work measures from data on approximately 200,000 individuals with 47 major chronic conditions observed throughout a 15-year period (1982-1996) were estimated. Under very conservative assumptions, the estimates indicate that the value of the increase in ability to work attributable to new drugs is 2.5 times as great as expenditure on new drugs. The potential of drugs to increase employee productivity should be considered in the design of drug-reimbursement policies. Conversely, policies that broadly reduce the development and utilization of new drugs may ultimately reduce our ability to produce other goods and services.

  9. Mapping Dependence Between Extreme Rainfall and Storm Surge

    NASA Astrophysics Data System (ADS)

    Wu, Wenyan; McInnes, Kathleen; O'Grady, Julian; Hoeke, Ron; Leonard, Michael; Westra, Seth

    2018-04-01

    Dependence between extreme storm surge and rainfall can have significant implications for flood risk in coastal and estuarine regions. To supplement limited observational records, we use reanalysis surge data from a hydrodynamic model as the basis for dependence mapping, providing information at a resolution of approximately 30 km along the Australian coastline. We evaluated this approach by comparing the dependence estimates from modeled surge to that calculated using historical surge records from 79 tide gauges around Australia. The results show reasonable agreement between the two sets of dependence values, with the exception of lower seasonal variation in the modeled dependence values compared to the observed data, especially at locations where there are multiple processes driving extreme storm surge. This is due to the combined impact of local bathymetry as well as the resolution of the hydrodynamic model and its meteorological inputs. Meteorological drivers were also investigated for different combinations of extreme rainfall and surge—namely rain-only, surge-only, and coincident extremes—finding that different synoptic patterns are responsible for each combination. The ability to supplement observational records with high-resolution modeled surge data enables a much more precise quantification of dependence along the coastline, strengthening the physical basis for assessments of flood risk in coastal regions.

  10. Advanced algorithms for radiographic material discrimination and inspection system design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Andrew J.; McDonald, Benjamin S.; Deinert, Mark R.

    X-ray and neutron radiography are powerful tools for non-invasively inspecting the interior of objects. Materials can be discriminated by noting how the radiographic signal changes with variations in the input spectrum or inspection mode. However, current methods are limited in their ability to differentiate when multiple materials are present, especially within large and complex objects. With X-ray radiography, the inability to distinguish materials of a similar atomic number is especially problematic. To overcome these critical limitations, we augmented our existing inverse problem framework with two important expansions: 1) adapting the previous methodology for use with multi-modal radiography and energy-integrating detectors, and 2) applying the Cramer-Rao lower bound to select an optimal set of inspection modes for a given application a priori. Adding these expanded capabilities to our algorithmic framework with adaptive regularization, we observed improved discrimination between high-Z materials, specifically plutonium and tungsten. The combined system can estimate plutonium mass within our simulated system to within 1%. Three types of inspection modes were modeled: multi-endpoint X-ray radiography alone; in combination with neutron radiography using deuterium-deuterium (DD); or in combination with neutron radiography using deuterium-tritium (DT) sources.

  11. The ionospheric eclipse factor method (IEFM) and its application to determining the ionospheric delay for GPS

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Tscherning, C. C.; Knudsen, P.; Xu, G.; Ou, J.

    2008-01-01

    A new method for modeling the ionospheric delay using global positioning system (GPS) data is proposed, called the ionospheric eclipse factor method (IEFM). It is based on establishing a concept referred to as the ionospheric eclipse factor (IEF) λ of the ionospheric pierce point (IPP) and the IEF's influence factor (IFF) λ̄. The IEF can be used to make a relatively precise distinction between ionospheric daytime and nighttime, whereas the IFF is advantageous for describing the IEF's variations with day, month, season and year, associated with seasonal variations of the total electron content (TEC) of the ionosphere. By combining λ and λ̄ with the local time t of the IPP, the IEFM has the ability to precisely distinguish between ionospheric daytime and nighttime, as well as efficiently combine them during different seasons or months over a year at the IPP. The IEFM-based ionospheric delay estimates are validated by combining an absolute positioning mode with several ionospheric delay correction models or algorithms, using GPS data at an International Global Navigation Satellite System (GNSS) Service (IGS) station (WTZR). Our results indicate that the IEFM may further improve ionospheric delay modeling using GPS data.

  12. Biomotor structures in elite female handball players.

    PubMed

    Katić, Ratko; Cavala, Marijana; Srhoj, Vatromir

    2007-09-01

    In order to identify biomotor structures in elite female handball players, factor structures of morphological characteristics and basic motor abilities of elite female handball players (N = 53) were determined first, followed by determination of relations between the morphological-motor space factors obtained and the set of criterion variables evaluating situation motor abilities in handball. Factor analysis of 14 morphological measures produced three morphological factors, i.e., a factor of absolute voluminosity (mesoendomorph), a factor of longitudinal skeleton dimensionality, and a factor of transverse hand dimensionality. Factor analysis of 15 motor variables yielded five basic motor dimensions, i.e., factors of agility, jumping explosive strength, throwing explosive strength, movement frequency rate, and running explosive strength (sprint). Four significant canonical correlations, i.e., linear combinations, explained the correlation between the set of eight latent variables of the morphological and basic motor space and five variables of situation motoricity. The first canonical linear combination is based on the positive effect of the agility/coordination factors on the ability of fast movement without the ball. The second linear combination is based on the effect of jumping explosive strength and transverse hand dimensionality on ball manipulation, throw precision, and speed of movement with the ball. The third linear combination is based on the determination of running explosive strength by the speed of movement with the ball, whereas the fourth combination is determined by throwing and jumping explosive strength, and agility, on ball passing. The results obtained were consistent with the model of selection in female handball proposed (Srhoj et al., 2006), showing the speed of movement without the ball and the ability of ball manipulation to be the predominant specific abilities, as indicated by the first and second linear combinations.

  13. Why do we differ in number sense? Evidence from a genetically sensitive investigation

    PubMed Central

    Tosto, M.G.; Petrill, S.A.; Halberda, J.; Trzaskowski, M.; Tikhomirova, T.N.; Bogdanova, O.Y.; Ly, R.; Wilmer, J.B.; Naiman, D.Q.; Germine, L.; Plomin, R.; Kovas, Y.

    2014-01-01

    Basic intellectual abilities of quantity and numerosity estimation have been detected across animal species. Such abilities are referred to as ‘number sense’. For human species, individual differences in number sense are detectable early in life, persist in later development, and relate to general intelligence. The origins of these individual differences are unknown. To address this question, we conducted the first large-scale genetically sensitive investigation of number sense, assessing numerosity discrimination abilities in 837 pairs of monozygotic and 1422 pairs of dizygotic 16-year-old twin pairs. Univariate genetic analysis of the twin data revealed that number sense is modestly heritable (32%), with individual differences being largely explained by non-shared environmental influences (68%) and no contribution from shared environmental factors. Sex-Limitation model fitting revealed no differences between males and females in the etiology of individual differences in number sense abilities. We also carried out Genome-wide Complex Trait Analysis (GCTA) that estimates the population variance explained by additive effects of DNA differences among unrelated individuals. For 1118 unrelated individuals in our sample with genotyping information on 1.7 million DNA markers, GCTA estimated zero heritability for number sense, unlike other cognitive abilities in the same twin study where the GCTA heritability estimates were about 25%. The low heritability of number sense, observed in this study, is consistent with the directional selection explanation whereby additive genetic variance for evolutionary important traits is reduced. PMID:24696527
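
    As a back-of-the-envelope illustration of the twin design described above, the sketch below applies Falconer's classical decomposition to hypothetical MZ and DZ correlations chosen to mimic the reported 32% heritability and 68% non-shared environment; the study itself fits full ACE and sex-limitation models rather than using this shortcut.

    ```python
    # Falconer-style variance decomposition from twin correlations (illustrative values,
    # not the study's fitted estimates).
    r_mz, r_dz = 0.32, 0.16            # hypothetical MZ and DZ twin correlations

    h2 = 2 * (r_mz - r_dz)             # additive genetic variance (A)
    c2 = 2 * r_dz - r_mz               # shared environment (C)
    e2 = 1 - r_mz                      # non-shared environment + error (E)
    print(f"A={h2:.2f}, C={c2:.2f}, E={e2:.2f}")   # A=0.32, C=0.00, E=0.68
    ```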

  14. Preserved, deteriorated, and premorbidly impaired patterns of intellectual ability in schizophrenia.

    PubMed

    Ammari, Narmeen; Heinrichs, R Walter; Pinnock, Farena; Miles, Ashley A; Muharib, Eva; McDermid Vaz, Stephanie

    2014-05-01

    The main purpose of this investigation was to identify patterns of intellectual performance in schizophrenia patients suggesting preserved, deteriorated, and premorbidly impaired ability, and to determine clinical, cognitive, and functional correlates of these patterns. We assessed 101 patients with schizophrenia or schizoaffective disorder and 80 non-psychiatric control participants. The "preserved" performance pattern was defined by average-range estimated premorbid and current IQ with no evidence of decline (premorbid-current IQ difference <10 points). The "deteriorated" pattern was defined by a difference between estimated premorbid and current IQ estimates of 10 points or more. The premorbidly "impaired" pattern was defined by below average estimated premorbid and current IQ and no evidence of decline greater than 10 points. Preserved and deteriorated patterns in healthy controls were also identified and studied in comparison to patient findings. The groups were compared on demographic, neurocognitive, clinical and functionality variables. Patients with the preserved pattern outperformed those meeting criteria for deteriorated and compromised intellectual ability on a composite measure of neurocognitive ability as well as in terms of functional competence. Patients demonstrating the deteriorated and compromised patterns were equivalent across all measures. However, "preserved" patients failed to show any advantage in terms of community functioning and demonstrated cognitive impairments relative to control participants. Our results suggest that proposed patterns of intellectual decline and stability exist in both the schizophrenia and general populations, but may not hold true across other cognitive abilities and do not translate into differential functional outcome.
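
    The three patterns are defined operationally in the abstract, so they can be written directly as a small classification rule. In the sketch below, the 10-point decline threshold comes from the text, while the numeric cutoff used for the "average range" (an IQ of 90) is our own assumption for illustration.

    ```python
    def iq_pattern(premorbid_iq, current_iq, average_cutoff=90):
        """Classify an estimated premorbid/current IQ pair into the patterns described."""
        decline = premorbid_iq - current_iq
        if decline >= 10:
            return "deteriorated"
        if premorbid_iq >= average_cutoff and current_iq >= average_cutoff:
            return "preserved"
        if premorbid_iq < average_cutoff and current_iq < average_cutoff:
            return "premorbidly impaired"
        return "unclassified"

    print(iq_pattern(105, 102))   # preserved
    print(iq_pattern(104, 91))    # deteriorated
    print(iq_pattern(85, 83))     # premorbidly impaired
    ```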

  15. Strategies for Estimating Discrete Quantities.

    ERIC Educational Resources Information Center

    Crites, Terry W.

    1993-01-01

    Describes the benchmark and decomposition-recomposition estimation strategies and presents five techniques to develop students' estimation ability. Suggests situations involving quantities of candy and popcorn in which the teacher can model those strategies for the students. (MDH)

  16. Head pose estimation in computer vision: a survey.

    PubMed

    Murphy-Chutorian, Erik; Trivedi, Mohan Manubhai

    2009-04-01

    The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.

  17. Behavioural evidence for a visual and proprioceptive control of head roll in hoverflies (Episyrphus balteatus).

    PubMed

    Goulard, Roman; Julien-Laferriere, Alice; Fleuriet, Jérome; Vercher, Jean-Louis; Viollet, Stéphane

    2015-12-01

    The ability of hoverflies to control their head orientation with respect to their body contributes importantly to their agility and their autonomous navigation abilities. Many tasks performed by this insect during flight, especially while hovering, involve a head stabilization reflex. This reflex, which is mediated by multisensory channels, prevents the visual processing from being disturbed by motion blur and maintains a consistent perception of the visual environment. The so-called dorsal light response (DLR) is another head control reflex, which makes insects sensitive to the brightest part of the visual field. In this study, we experimentally validate and quantify the control loop driving the head roll with respect to the horizon in hoverflies. The new approach developed here consisted of using an upside-down horizon in a body roll paradigm. In this unusual configuration, tethered flying hoverflies surprisingly no longer use purely vision-based control for head stabilization. These results shed new light on the role of neck proprioceptor organs in head and body stabilization with respect to the horizon. Based on the responses obtained with male and female hoverflies, an improved model was then developed in which the output signals delivered by the neck proprioceptor organs are combined with the visual error in the estimated position of the body roll. An internal estimation of the body roll angle with respect to the horizon might explain the extremely accurate flight performances achieved by some hovering insects. © 2015. Published by The Company of Biologists Ltd.

  18. Ice nucleation by particles immersed in supercooled cloud droplets.

    PubMed

    Murray, B J; O'Sullivan, D; Atkinson, J D; Webb, M E

    2012-10-07

    The formation of ice particles in the Earth's atmosphere strongly affects the properties of clouds and their impact on climate. Despite the importance of ice formation in determining the properties of clouds, the Intergovernmental Panel on Climate Change (IPCC, 2007) was unable to assess the impact of atmospheric ice formation in their most recent report because our basic knowledge is insufficient. Part of the problem is the paucity of quantitative information on the ability of various atmospheric aerosol species to initiate ice formation. Here we review and assess the existing quantitative knowledge of ice nucleation by particles immersed within supercooled water droplets. We introduce aerosol species which have been identified in the past as potentially important ice nuclei and address their ice-nucleating ability when immersed in a supercooled droplet. We focus on mineral dusts, biological species (pollen, bacteria, fungal spores and plankton), carbonaceous combustion products and volcanic ash. In order to make a quantitative comparison we first introduce several ways of describing ice nucleation and then summarise the existing information according to the time-independent (singular) approximation. Using this approximation in combination with typical atmospheric loadings, we estimate the importance of ice nucleation by different aerosol types. According to these estimates we find that ice nucleation below about -15 °C is dominated by soot and mineral dusts. Above this temperature the only materials known to nucleate ice are biological, with quantitative data for other materials absent from the literature. We conclude with a summary of the challenges our community faces.
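
    The time-independent (singular) description summarized above is usually expressed through an ice-nucleation active site density n_s(T), with the frozen fraction of droplets given by f_ice(T) = 1 - exp(-n_s(T) * A) for particle surface area A per droplet. The sketch below evaluates that expression with an invented exponential form and parameters for n_s(T); the values are not taken from the review.

    ```python
    import numpy as np

    def frozen_fraction(temp_c, area_cm2, a=0.5, b=-8.0):
        """Singular-approximation frozen fraction; n_s(T) here is a hypothetical
        exponential parameterization (per cm^2), purely for illustration."""
        n_s = 10 ** (a * (-temp_c) + b)
        return 1.0 - np.exp(-n_s * area_cm2)

    # Frozen fraction for droplets each containing 1e-6 cm^2 of immersed surface.
    temps = np.arange(-10, -31, -5)
    print({int(t): round(frozen_fraction(t, area_cm2=1e-6), 4) for t in temps})
    ```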

  19. Women's ability to self-screen for contraindications to combined oral contraceptive pills in Tanzanian drug shops.

    PubMed

    Chin-Quee, Dawn; Ngadaya, Esther; Kahwa, Amos; Mwinyiheri, Thomas; Otterness, Conrad; Mfinanga, Sayoki; Nanda, Kavita

    2013-10-01

    To estimate the accuracy of self-screening for contraindications to combined oral contraceptive pills (COCs) and to estimate the proportion of women with contraindications to hormonal methods among those using drug shops in Tanzania. Trained nurses interviewed 1651 women aged 18-39 years who self-screened for contraindications to COCs with the help of a poster at drug shops in Tanzania. Nurse assessment of the women served as the gold standard for comparison with self-assessment. Blood pressure was also measured onsite. Nurses reported that 437 (26.5%) women were not eligible to use COCs, compared with 485 (29.4%) according to self-report. Overall, 133 (8.1%) women who said that they were eligible were deemed ineligible by nurses. The rate of ineligibility was artificially high owing to participant and nurse assessments that were incorrectly based on adverse effects of pill use and cultural reasons, and because of the sampling procedure, which intercepted women regardless of their reasons for visiting the drug shop. Adjusted rates of ineligibility were 8.6% and 12.7%, respectively, according to nurse and participant assessment. Both nurses and women underestimated the prevalence of hypertension in the present group. Self-screening among women in rural and peri-urban Tanzania with regard to contraindications to COC use was comparable to assessment by trained nurses. © 2013.

  20. The assessment of chronic health conditions on work performance, absence, and total economic impact for employers.

    PubMed

    Collins, James J; Baase, Catherine M; Sharda, Claire E; Ozminkowski, Ronald J; Nicholson, Sean; Billotti, Gary M; Turpin, Robin S; Olson, Michael; Berger, Marc L

    2005-06-01

    The objective of this study was to determine the prevalence and estimate total costs for chronic health conditions in the U.S. workforce for the Dow Chemical Company (Dow). Using the Stanford Presenteeism Scale, information was collected from workers at five locations on work impairment and absenteeism based on self-reported "primary" chronic health conditions. Survey data were merged with employee demographics, medical and pharmaceutical claims, smoking status, biometric health risk factors, payroll records, and job type. Almost 65% of respondents reported having one or more of the surveyed chronic conditions. The most common were allergies, arthritis/joint pain or stiffness, and back or neck disorders. The associated absenteeism by chronic condition ranged from 0.9 to 5.9 hours in a 4-week period, and on-the-job work impairment ranged from a 17.8% to 36.4% decrement in ability to function at work. The presence of a chronic condition was the most important determinant of the reported levels of work impairment and absence after adjusting for other factors (P < 0.000). The total cost of chronic conditions was estimated to be 10.7% of the total labor costs for Dow in the United States; 6.8% was attributable to work impairment alone. For all chronic conditions studied, the cost associated with performance-based work loss or "presenteeism" greatly exceeded the costs of absenteeism and medical treatment combined.

  1. Biopsychosocial characteristics of community-dwelling older adults with limited ability to walk one-quarter of a mile.

    PubMed

    Hardy, Susan E; McGurl, David J; Studenski, Stephanie A; Degenholtz, Howard B

    2010-03-01

    To establish nationally representative estimates of the prevalence of self-reported difficulty and inability of older adults to walk one-quarter of a mile and to identify the characteristics independently associated with difficulty or inability to walk one-quarter of a mile. Cross-sectional analysis of data from the 2003 Cost and Use Medicare Current Beneficiary Survey. Community. Nine thousand five hundred sixty-three community-dwelling Medicare beneficiaries aged 65 and older, representing an estimated total population of 34.2 million older adults. Self-reported ability to walk one-quarter of a mile, sociodemographics, chronic conditions, body mass index, smoking, functional status. In 2003, an estimated 9.5 million older Medicare beneficiaries had difficulty walking one-quarter of a mile, and 5.9 million were unable to do so. Of the 20.2 million older adults with no difficulty in activities of daily living (ADLs) or instrumental activities of daily living (IADLs), an estimated 4.3 million (21%) had limited ability to walk one-quarter of a mile. Having difficulty or being unable to walk one-quarter of a mile was independently associated with older age, female sex, non-Hispanic ethnicity, lower educational level, Medicaid entitlement, most chronic medical conditions, current smoking, and being overweight or obese. Almost half of older adults and 20% of those reporting no ADL or IADL limitations report limited ability to walk one-quarter of a mile. For functionally independent older adults, reported ability to walk one-quarter of a mile can identify vulnerable older adults with greater medical problems and fewer resources and may be a valuable clinical marker in planning their care. Future work is needed to determine the association between the ability to walk one-quarter of a mile and subsequent functional decline and healthcare use.

  2. The Confounding Effects of Ability, Item Difficulty, and Content Balance within Multiple Dimensions on the Estimation of Unidimensional Thetas

    ERIC Educational Resources Information Center

    Matlock, Ki Lynn

    2013-01-01

    When test forms that have equal total test difficulty and number of items vary in difficulty and length within sub-content areas, an examinee's estimated score may vary across equivalent forms, depending on how well his or her true ability in each sub-content area aligns with the difficulty of items and number of items within these areas.…

  3. The Utility of Selection for Military and Civilian Jobs

    DTIC Science & Technology

    1989-07-01

    parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small... ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not

  4. Global precipitation estimates based on a technique for combining satellite-based estimates, rain gauge analysis, and NWP model precipitation information

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.

    1995-01-01

    The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
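
    The weighting step described above (each field weighted by its inverse relative error variance) can be sketched on a tiny grid; all grid values and error estimates below are invented, and the subsequent model-based void filling is only noted in a comment.

    ```python
    import numpy as np

    # Invented 2x2 monthly precipitation fields (mm/day) and their relative error variances.
    multisatellite = np.array([[3.2, 1.1], [0.4, 5.0]])
    gauge_analysis = np.array([[2.8, 1.4], [0.6, 4.2]])
    var_satellite  = np.array([[1.0, 0.8], [1.5, 2.0]])
    var_gauge      = np.array([[0.3, 0.9], [0.7, 0.5]])

    # Inverse-error-variance weighting of the two observational fields.
    w_sat, w_gau = 1.0 / var_satellite, 1.0 / var_gauge
    satellite_gauge = (w_sat * multisatellite + w_gau * gauge_analysis) / (w_sat + w_gau)

    # Remaining data voids (e.g., NaN cells) would then be filled with the NWP model
    # field to produce the full SGM estimate.
    print(satellite_gauge)
    ```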

  5. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
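
    One of the estimators named above can be sketched directly. The code below computes an expected a posteriori (EAP) ability estimate under a two-parameter logistic IRT model with a standard-normal prior on a quadrature grid; the item parameters and response pattern are invented and the study's simulation design is not reproduced.

    ```python
    import numpy as np

    # Invented 2PL item parameters and a single examinee's 0/1 responses.
    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])        # discriminations
    b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])      # difficulties
    u = np.array([1, 1, 1, 0, 0])                  # observed responses

    theta = np.linspace(-4, 4, 81)                 # quadrature points
    prior = np.exp(-0.5 * theta**2)                # unnormalized N(0, 1) prior

    # P(correct | theta) for each item at each quadrature point.
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
    likelihood = np.prod(np.where(u[:, None] == 1, p, 1.0 - p), axis=0)
    posterior = prior * likelihood

    eap = np.sum(theta * posterior) / np.sum(posterior)
    print(round(eap, 3))
    ```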

  6. Why does self-reported emotional intelligence predict job performance? A meta-analytic investigation of mixed EI.

    PubMed

    Joseph, Dana L; Jin, Jing; Newman, Daniel A; O'Boyle, Ernest H

    2015-03-01

    Recent empirical reviews have claimed a surprisingly strong relationship between job performance and self-reported emotional intelligence (also commonly called trait EI or mixed EI), suggesting self-reported/mixed EI is one of the best known predictors of job performance (e.g., ρ = .47; Joseph & Newman, 2010b). Results further suggest mixed EI can robustly predict job performance beyond cognitive ability and Big Five personality traits (Joseph & Newman, 2010b; O'Boyle, Humphrey, Pollack, Hawver, & Story, 2011). These criterion-related validity results are problematic, given the paucity of evidence and the questionable construct validity of mixed EI measures themselves. In the current research, we update and reevaluate existing evidence for mixed EI, in light of prior work regarding the content of mixed EI measures. Results of the current meta-analysis demonstrate that (a) the content of mixed EI measures strongly overlaps with a set of well-known psychological constructs (i.e., ability EI, self-efficacy, and self-rated performance, in addition to Conscientiousness, Emotional Stability, Extraversion, and general mental ability; multiple R = .79), (b) an updated estimate of the meta-analytic correlation between mixed EI and supervisor-rated job performance is ρ = .29, and (c) the mixed EI-job performance relationship becomes nil (β = -.02) after controlling for the set of covariates listed above. Findings help to establish the construct validity of mixed EI measures and further support an intuitive theoretical explanation for the uncommonly high association between mixed EI and job performance--mixed EI instruments assess a combination of ability EI and self-perceptions, in addition to personality and cognitive ability. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  7. Time estimation predicts mathematical intelligence.

    PubMed

    Kramer, Peter; Bressan, Paola; Grassi, Massimo

    2011-01-01

    Performing mental subtractions affects time (duration) estimates, and making time estimates disrupts mental subtractions. This interaction has been attributed to the concurrent involvement of time estimation and arithmetic with general intelligence and working memory. Given the extant evidence of a relationship between time and number, here we test the stronger hypothesis that time estimation correlates specifically with mathematical intelligence, and not with general intelligence or working-memory capacity. Participants performed a (prospective) time estimation experiment, completed several subtests of the WAIS intelligence test, and self-rated their mathematical skill. For five different durations, we found that time estimation correlated with both arithmetic ability and self-rated mathematical skill. Controlling for non-mathematical intelligence (including working memory capacity) did not change the results. Conversely, correlations between time estimation and non-mathematical intelligence either were nonsignificant, or disappeared after controlling for mathematical intelligence. We conclude that time estimation specifically predicts mathematical intelligence. On the basis of the relevant literature, we furthermore conclude that the relationship between time estimation and mathematical intelligence is likely due to a common reliance on spatial ability.

  8. Genome-wide estimates of inbreeding in unrelated individuals and their association with cognitive ability.

    PubMed

    Power, Robert A; Nagoshi, Craig; DeFries, John C; Plomin, Robert

    2014-03-01

    The consequence of reduced cognitive ability from inbreeding has long been investigated, mainly restricted to cousin-cousin marriages. Molecular genetic techniques now allow us to test the relationship between increased ancestral inbreeding and cognitive ability in a population of traditionally unrelated individuals. In a representative UK sample of 2329 individuals, we used genome-wide SNP data to estimate the percentage of the genome covered by runs of homozygous SNPs (ROH). This was tested for association with general cognitive ability, as well as measures of verbal and non-verbal ability. Further, association was tested between these traits and specific ROH. Burden of ROH was not associated with cognitive ability after correction for multiple testing, although burden of ROH was nominally associated with increased non-verbal cognitive ability (P=0.03). Moreover, although no individual ROH was significantly associated with cognitive ability, there was a significant bias towards increased cognitive ability in carriers of ROH (P=0.002). A potential explanation for these results is increased positive assortative mating in spouses with higher cognitive ability, although we found no evidence in support of this hypothesis in a separate sample. Reduced minor allele frequency across the genome was associated with higher cognitive ability, which could contribute to an apparent increase in ROH. This may reflect minor alleles being more likely to be deleterious.

  9. Genome-wide estimates of inbreeding in unrelated individuals and their association with cognitive ability

    PubMed Central

    Power, Robert A; Nagoshi, Craig; DeFries, John C; Donnelly, Peter; Barroso, Ines; Blackwell, Jenefer M; Bramon, Elvira; Brown, Matthew A; Casas, Juan P; Corvin, Aiden; Deloukas, Panos; Duncanson, Audrey; Jankowski, Janusz; Markus, Hugh S; Mathew, Christopher G; Palmer, Colin NA; Plomin, Robert; Rautanen, Anna; Sawcer, Stephen J; Trembath, Richard C; Viswanathan, Ananth C; Wood, Nicholas W; Spencer, Chris C A; Band, Gavin; Bellenguez, Céline; Freeman, Colin; Hellenthal, Garrett; Giannoulatou, Eleni; Pirinen, Matti; Pearson, Richard; Strange, Amy; Su, Zhan; Vukcevic, Damjan; Donnelly, Peter; Langford, Cordelia; Hunt, Sarah E; Edkins, Sarah; Gwilliam, Rhian; Blackburn, Hannah; Bumpstead, Suzannah J; Dronov, Serge; Gillman, Matthew; Gray, Emma; Hammond, Naomi; Jayakumar, Alagurevathi; McCann, Owen T; Liddle, Jennifer; Potter, Simon C; Ravindrarajah, Radhi; Ricketts, Michelle; Waller, Matthew; Weston, Paul; Widaa, Sara; Whittaker, Pamela; Barroso, Ines; Deloukas, Panos; Mathew, Christopher G; Blackwell, Jenefer M; Brown, Matthew A; Corvin, Aiden; Spencer, Chris C A; Plomin, Robert

    2014-01-01

    The consequence of reduced cognitive ability from inbreeding has long been investigated, mainly restricted to cousin–cousin marriages. Molecular genetic techniques now allow us to test the relationship between increased ancestral inbreeding and cognitive ability in a population of traditionally unrelated individuals. In a representative UK sample of 2329 individuals, we used genome-wide SNP data to estimate the percentage of the genome covered by runs of homozygous SNPs (ROH). This was tested for association with general cognitive ability, as well as measures of verbal and non-verbal ability. Further, association was tested between these traits and specific ROH. Burden of ROH was not associated with cognitive ability after correction for multiple testing, although burden of ROH was nominally associated with increased non-verbal cognitive ability (P=0.03). Moreover, although no individual ROH was significantly associated with cognitive ability, there was a significant bias towards increased cognitive ability in carriers of ROH (P=0.002). A potential explanation for these results is increased positive assortative mating in spouses with higher cognitive ability, although we found no evidence in support of this hypothesis in a separate sample. Reduced minor allele frequency across the genome was associated with higher cognitive ability, which could contribute to an apparent increase in ROH. This may reflect minor alleles being more likely to be deleterious. PMID:23860046

  10. Improvement of Speaking Ability through Interrelated Skills

    ERIC Educational Resources Information Center

    Liao, Guoqiang

    2009-01-01

    How can students' ability to speak English be improved? That is the key question addressed here. This paper discusses the possibility and necessity of improving students' speaking ability by combining the four skills of speaking, listening, reading and writing.

  11. Investigation of speed estimation using single loop detectors.

    DOT National Transportation Integrated Search

    2008-05-15

    The ability to collect or estimate accurate speed information is of great importance to a large number of Intelligent Transportation Systems (ITS) applications. Estimating speeds from the widely used single inductive loop sensor has been a diffic...

  12. Design with limited anthropometric data: A method of interpreting sums of percentiles in anthropometric design.

    PubMed

    Albin, Thomas J

    2017-07-01

    Occasionally practitioners must work with single dimensions defined as combinations (sums or differences) of percentile values, but lack information (e.g. variances) to estimate the accommodation achieved. This paper describes methods to predict accommodation proportions for such combinations of percentile values, e.g. two 90th percentile values. Kreifeldt and Nah z-score multipliers were used to estimate the proportions accommodated by combinations of percentile values of 2-15 variables; two simplified versions required less information about variance and/or correlation. The estimates were compared to actual observed proportions; for combinations of 2-15 percentile values the average absolute differences ranged between 0.5 and 1.5 percentage points. The multipliers were also used to estimate adjusted percentile values that, when combined, accommodate a desired proportion of the combined measurements. For combinations of two and three adjusted variables, the average absolute difference between predicted and observed proportions ranged between 0.5 and 3.0 percentage points. Copyright © 2017 Elsevier Ltd. All rights reserved.
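
    To make the underlying issue concrete, the sketch below (assuming normality, with invented means, standard deviations, and correlation) shows the proportion actually accommodated when two 90th-percentile values are summed, and the adjusted component percentile whose sum would accommodate exactly 90 per cent. It illustrates the principle behind the z-score multipliers rather than reproducing the Kreifeldt and Nah procedure itself.

```python
from math import sqrt
from scipy.stats import norm

# Invented component statistics (e.g. two body dimensions in mm).
mu = [600.0, 450.0]
sd = [30.0, 25.0]
r = 0.4                       # assumed correlation between the two dimensions
z90 = norm.ppf(0.90)

# Summing two 90th-percentile values:
design_value = mu[0] + mu[1] + z90 * (sd[0] + sd[1])
sd_sum = sqrt(sd[0] ** 2 + sd[1] ** 2 + 2 * r * sd[0] * sd[1])
accommodated = norm.cdf((design_value - (mu[0] + mu[1])) / sd_sum)
print(f"Proportion accommodated by summing two P90 values: {accommodated:.3f}")

# Adjusted (lower) percentile for each component so the sum accommodates 90%.
z_adj = z90 * sd_sum / (sd[0] + sd[1])
print(f"Adjusted component percentile: {norm.cdf(z_adj):.3f}")
```

    With perfectly correlated components the sum of two P90 values accommodates exactly 90 per cent; the lower the correlation, the larger the overshoot, which is what the multipliers correct for.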

  13. Effect of Mixing Ability Groups on Ability Levels Attained.

    ERIC Educational Resources Information Center

    de Jong, John H. A. L.

    In 1983, in the Netherlands' highly differentiated school system, two types of curriculum representing different ability levels were combined as a first step towards a more heterogeneous grouping of student abilities. A study of one aspect of the results of this change compared over 1000 samples of English and German second language listening…

  14. On the usage of divergence nudging in the DMI nowcasting system

    NASA Astrophysics Data System (ADS)

    Korsholm, Ulrik; Petersen, Claus; Hansen Sass, Bent; Woetmann Nielsen, Niels; Getreuer Jensen, David; Olsen, Bjarke Tobias; Vedel, Henrik

    2014-05-01

    DMI has recently proposed a new method for nudging radar reflectivity CAPPI products into their operational nowcasting system. The system is based on rapid update cycles (with hourly frequency) with the High Resolution Limited Area Model combined with surface and upper air analysis at each initial time. During the first 1.5 hours of a simulation the model dynamical state is nudged in accordance with the CAPPI product, after which a free forecast is produced with a forecast length of 12 hours. The nudging method is based on the assumption that precipitation is forced by low-level moisture convergence and that an enhanced moisture source will lead to convective triggering of the model cloud scheme. If the model under-predicts precipitation before cut-off, horizontal low-level divergence is nudged towards an estimated value. These pseudo observations are calculated from the CAPPI product by assuming a specific vertical profile of the change in the divergence field. The strength of the nudging is proportional to the difference between observed and modelled precipitation. When over-predicting, the low-level moisture source is reduced, and in-cloud moisture is nudged towards environmental values. Results have been analysed in terms of the fractions skill score, and the ability of the nudging method to position the precipitation cells correctly is discussed. The ability of the model to retain memory of the precipitation systems in the free forecast has also been investigated, and examples of combining the nudging method with extrapolated reflectivity fields are shown.

  15. A multiplexed magnetic tweezer with precision particle tracking and bi-directional force control.

    PubMed

    Johnson, Keith C; Clemmens, Emilie; Mahmoud, Hani; Kirkpatrick, Robin; Vizcarra, Juan C; Thomas, Wendy E

    2017-01-01

    In the past two decades, methods have been developed to measure the mechanical properties of single biomolecules. One of these methods, magnetic tweezers, is amenable to acquisition of data on many single molecules simultaneously, but to take full advantage of this "multiplexing" ability, it is necessary to simultaneously incorporate many capabilities that have only been demonstrated separately. Our custom-built magnetic tweezer combines high multiplexing, precision bead tracking, and bi-directional force control into a flexible and stable platform for examining single molecule behavior. This was accomplished using electromagnets, which provide high temporal control of force while achieving force levels similar to permanent magnets via large paramagnetic beads. Here we describe the instrument and its ability to apply 2-260 pN of force on up to 120 beads simultaneously, with a maximum spatial precision of 12 nm, using a variety of bead sizes and experimental techniques. We also demonstrate a novel method for increasing the precision of force estimations on heterogeneous paramagnetic beads using a combination of density separation and bi-directional force correlation, which reduces the coefficient of variation of force from 27% to 6%. We then use the instrument to examine the force dependence of uncoiling and recoiling velocity of type 1 fimbriae from Escherichia coli (E. coli) bacteria, and see similar results to previous studies. This platform provides a simple, effective, and flexible method for efficiently gathering single molecule force spectroscopy measurements.

  16. Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction

    NASA Astrophysics Data System (ADS)

    Sun, He; Kasdin, N. Jeremy

    2018-01-01

    Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving our Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
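
    For readers unfamiliar with the estimation machinery, the sketch below shows a generic linear Kalman filter predict/update cycle of the kind extended here to jointly track the stellar speckle field and a quasi-static planet signal; the state layout, matrices, and noise levels are placeholders rather than the actual focal-plane model.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement (e.g. focal-plane pixel intensities)
    F, H : state-transition and observation models
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: track a constant signal observed with noise.
x, P = np.zeros(1), np.eye(1)
F = H = np.eye(1)
Q, R = np.eye(1) * 1e-4, np.eye(1) * 0.1
for z in [1.1, 0.9, 1.05]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)
```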

  17. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    PubMed

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

    Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
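
    The product-of-coefficients logic behind these mediation analyses can be illustrated with a bare-bones parametric AFT setup: with uncensored log-normal survival times, the AFT model reduces to a linear regression on log survival time, and the mediated effect is the product of the treatment-to-mediator and mediator-to-log-time coefficients. The simulated data, the absence of censoring, and the use of plain least squares are simplifications; the paper's analyses use SAS LIFEREG on censored Weibull data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.binomial(1, 0.5, n)              # treatment indicator
m = 0.5 * x + rng.normal(size=n)         # mediator: a-path coefficient = 0.5
# Log-normal AFT outcome: log survival time is linear in mediator and treatment.
log_t = 1.0 + 0.4 * m + 0.2 * x + rng.normal(scale=0.8, size=n)

def ols(y, predictors):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a_path = ols(m, [x])[1]                  # treatment -> mediator
b_path = ols(log_t, [m, x])[1]           # mediator -> log survival time (given treatment)
print(f"a = {a_path:.3f}, b = {b_path:.3f}, mediated effect a*b = {a_path * b_path:.3f}")
```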

  18. Remote sensing of sagebrush canopy nitrogen

    USGS Publications Warehouse

    Mitchell, Jessica J.; Glenn, Nancy F.; Sankey, Temuulen T.; Derryberry, DeWayne R.; Germino, Matthew J.

    2012-01-01

    This paper presents a combination of techniques suitable for remotely sensing foliar Nitrogen (N) in semiarid shrublands – a capability that would significantly improve our limited understanding of vegetation functionality in dryland ecosystems. The ability to estimate foliar N distributions across arid and semi-arid environments could help answer process-driven questions related to topics such as controls on canopy photosynthesis, the influence of N on carbon cycling behavior, nutrient pulse dynamics, and post-fire recovery. Our study determined that further exploration into estimating sagebrush canopy N concentrations from an airborne platform is warranted, despite remote sensing challenges inherent to open canopy systems. Hyperspectral data transformed using standard derivative analysis were capable of quantifying sagebrush canopy N concentrations using partial least squares (PLS) regression with an R2 value of 0.72 and an R2 predicted value of 0.42 (n = 35). Subsetting the dataset to minimize the influence of bare ground (n = 19) increased R2 to 0.95 (R2 predicted = 0.56). Ground-based estimates of canopy N using leaf mass per unit area measurements (LMA) yielded consistently better model fits than ground-based estimates of canopy N using cover and height measurements. The LMA approach is likely a method that could be extended to other semiarid shrublands. Overall, the results of this study are encouraging for future landscape scale N estimates and represent an important step in addressing the confounding influence of bare ground, which we found to be a major influence on predictions of sagebrush canopy N from an airborne platform.
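
    A minimal version of the PLS workflow described above, using scikit-learn with synthetic predictors standing in for the derivative-transformed hyperspectral bands, might look like the following; the sample and band counts loosely mirror the study, but the data are random.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_bands = 35, 120                     # sizes loosely mirroring the study
X = rng.normal(size=(n_samples, n_bands))        # stand-in for transformed spectra
true_w = rng.normal(size=n_bands)
y = X @ true_w * 0.05 + rng.normal(scale=0.5, size=n_samples)   # canopy N stand-in

pls = PLSRegression(n_components=5)
pls.fit(X, y)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_fit = 1 - np.sum((y - pls.predict(X).ravel()) ** 2) / ss_tot      # like R^2
y_cv = cross_val_predict(pls, X, y, cv=5)                            # like R^2 predicted
r2_pred = 1 - np.sum((y - y_cv.ravel()) ** 2) / ss_tot
print(f"R2 (fit) = {r2_fit:.2f}, R2 (cross-validated) = {r2_pred:.2f}")
```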

  19. Estimating verbal fluency and naming ability from the test of premorbid functioning and demographic variables: Regression equations derived from a regional UK sample.

    PubMed

    Jenkinson, Toni-Marie; Muncer, Steven; Wheeler, Miranda; Brechin, Don; Evans, Stephen

    2018-06-01

    Neuropsychological assessment requires accurate estimation of an individual's premorbid cognitive abilities. Oral word reading tests, such as the test of premorbid functioning (TOPF), and demographic variables, such as age, sex, and level of education, provide a reasonable indication of premorbid intelligence, but their ability to predict other related cognitive abilities is less well understood. This study aimed to develop regression equations, based on the TOPF and demographic variables, to predict scores on tests of verbal fluency and naming ability. A sample of 119 healthy adults provided demographic information and were tested using the TOPF, FAS, animal naming test (ANT), and graded naming test (GNT). Multiple regression analyses, using the TOPF and demographics as predictor variables, were used to estimate verbal fluency and naming ability test scores. Change scores and cases of significant impairment were calculated for two clinical samples with diagnosed neurological conditions (TBI and meningioma) using the method in Knight, McMahon, Green, and Skeaff (). Demographic variables provided a significant contribution to the prediction of all verbal fluency and naming ability test scores; however, adding TOPF score to the equation considerably improved prediction beyond that afforded by demographic variables alone. The percentage of variance accounted for by demographic variables and/or TOPF score was 19 per cent (FAS), 28 per cent (ANT), and 41 per cent (GNT). Change scores revealed significant differences in performance in the clinical groups, particularly the TBI group. Demographic variables, particularly education level, and scores on the TOPF should be taken into consideration when interpreting performance on tests of verbal fluency and naming ability. © 2017 The British Psychological Society.

  20. Shared Mechanisms in the Estimation of Self-Generated Actions and the Prediction of Other's Actions by Humans.

    PubMed

    Ikegami, Tsuyoshi; Ganesh, Gowrishankar

    2017-01-01

    The question of how humans predict outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in an understanding learning paradigm and utilized computational modeling to examine how outcome prediction of observed actions affected the participants' ability to estimate their own actions. We recruited darts experts because sports experts are known to have an accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert's abilities to both produce his own darts actions and estimate the outcome of his own throws (or self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in the darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert's self-estimation is explained only by considering a change in the individual's forward model, showing that an improvement in an expert's ability to predict outcomes of observed actions affects the individual's forward model. These results suggest that parts of the same forward model are utilized in humans to both estimate outcomes of self-generated actions and predict outcomes of observed actions.

  1. Deriving Global Discharge Records from SWOT Observations

    NASA Astrophysics Data System (ADS)

    Pan, M.; Fisher, C. K.; Wood, E. F.

    2017-12-01

    River flows are poorly monitored in many regions of the world, hindering our ability to accurately estimate global water usage, and thus global water and energy budgets or the variability in the global water cycle. Recent developments in satellite remote sensing, such as water surface elevations from radar altimetry or surface water extents from visible/infrared imagery, aim to fill this void; however, the streamflow estimates derived from these are inherently intermittent in both space and time. There is then a need for new methods that are able to derive spatially and temporally continuous records of discharge from the many available data sources. One particular application of this will be the Surface Water and Ocean Topography (SWOT) mission, which is designed to provide global observations of water surface elevation and slope from which river discharge can be estimated. Within the 21-day repeat cycle, a river reach will be observed 2-4 times on average. Due to the relationship between the basin orientation and the orbit, these observations are not evenly distributed in time or space. In this study, we investigate how SWOT will observe global river basins and how the temporal and spatial sampling impacts our ability to reconstruct discharge records. River flows can be estimated throughout a basin by assimilating SWOT observations using the Inverse Streamflow Routing (ISR) model of Pan and Wood [2013]. This method is applied to 32 global basins with different geometries and crossing patterns for the future orbit, assimilating theoretical SWOT-retrieved "gauges". Results show that the model is able to reconstruct basin-wide discharge from SWOT observations alone; however, the performance varies significantly across basins and is driven by the orientation, flow distance, and travel time in each, as well as the sensitivity of the reconstruction method to errors in the satellite retrieval. These properties are combined to estimate the "observability" of each basin. We then apply this metric globally and relate it to the discharge reconstruction performance to gain a better understanding of the impact that spatially and temporally sparse observations, such as those from SWOT, may have in basins with limited in-situ observations. Pan, M., and Wood, E. F. (2013), Inverse streamflow routing, HESS, 17(11), 4577-4588.

  2. Estimation of cardiac motion in cine-MRI sequences by correlation transform optical flow of monogenic features distance

    NASA Astrophysics Data System (ADS)

    Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick

    2016-12-01

    Cine-MRI is widely used for the analysis of cardiac function in clinical routine, because of its high soft tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray level distribution in cardiac cine-MRI is relatively homogenous within the myocardium, and can therefore make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude, of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure, and implemented a bilateral filter for denoising and edge-preservation. The monogenic features distance is used in lieu of the color space distance in the bilateral filter. Results obtained from four realistic simulated sequences outperformed two other state of the art methods even in the presence of noise. The motion estimation errors (end point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunctions and one healthy volunteer. The derived strain fields were analyzed favorably in their ability to identify myocardial regions with impaired motion.
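
    The matching measure used here, zero-mean normalized cross-correlation (ZNCC), is compact enough to state directly; the patches below are placeholders, and in the paper the measure is applied to the stacked monogenic features rather than to raw gray levels.

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

# Identical patches score 1.0; a global contrast/brightness change leaves the score unchanged.
p = np.arange(25, dtype=float).reshape(5, 5)
print(zncc(p, p), zncc(p, 2.0 * p + 10.0))
```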

  3. Estimation of premorbid general fluid intelligence using traditional Chinese reading performance in Taiwanese samples.

    PubMed

    Chen, Ying-Jen; Ho, Meng-Yang; Chen, Kwan-Ju; Hsu, Chia-Fen; Ryu, Shan-Jin

    2009-08-01

    The aims of the present study were to (i) investigate if traditional Chinese word reading ability can be used for estimating premorbid general intelligence; and (ii) to provide multiple regression equations for estimating premorbid performance on Raven's Standard Progressive Matrices (RSPM), using age, years of education and Chinese Graded Word Reading Test (CGWRT) scores as predictor variables. Four hundred and twenty-six healthy volunteers (201 male, 225 female), aged 16-93 years (mean +/- SD, 41.92 +/- 18.19 years) undertook the tests individually under supervised conditions. Seventy percent of subjects were randomly allocated to the derivation group (n = 296), and the rest to the validation group (n = 130). RSPM score was positively correlated with CGWRT score and years of education. RSPM and CGWRT scores and years of education were also inversely correlated with age, but the declining trend for RSPM performance against age was steeper than that for CGWRT performance. Separate multiple regression equations were derived for estimating RSPM scores using different combinations of age, years of education, and CGWRT score for both groups. The multiple regression coefficient of each equation ranged from 0.71 to 0.80 with the standard error of estimate between 7 and 8 RSPM points. When fitting the data of one group to the equations derived from its counterpart group, the cross-validation multiple regression coefficients ranged from 0.71 to 0.79. There were no significant differences in the 'predicted-obtained' RSPM discrepancies between any equations. The regression equations derived in the present study may provide a basis for estimating premorbid RSPM performance.

  4. Modelling ranging behaviour of female orang-utans: a case study in Tuanan, Central Kalimantan, Indonesia.

    PubMed

    Wartmann, Flurina M; Purves, Ross S; van Schaik, Carel P

    2010-04-01

    Quantification of the spatial needs of individuals and populations is vitally important for management and conservation. Geographic information systems (GIS) have recently become important analytical tools in wildlife biology, improving our ability to understand animal movement patterns, especially when very large data sets are collected. This study aims at combining the field of GIS with primatology to model and analyse space-use patterns of wild orang-utans. Home ranges of female orang-utans in the Tuanan Mawas forest reserve in Central Kalimantan, Indonesia were modelled with kernel density estimation methods. Kernel results were compared with minimum convex polygon estimates, and were found to perform better, because they were less sensitive to sample size and produced more reliable estimates. Furthermore, daily travel paths were calculated from 970 complete follow days. Annual ranges for the resident females were approximately 200 ha and remained stable over several years; total home range size was estimated to be 275 ha. On average, each female shared a third of her home range with each neighbouring female. Orang-utan females in Tuanan built their night nest on average 414 m away from the morning nest, whereas average daily travel path length was 777 m. A significant effect of fruit availability on day path length was found. Sexually active females covered longer distances per day and may also temporarily expand their ranges.
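
    A kernel home-range estimate of the kind compared here can be sketched as a Gaussian KDE evaluated on a grid, with the range taken as the area inside the 95% probability isopleth; the GPS fixes below are simulated and the bandwidth is left at the SciPy default, whereas real analyses choose it carefully.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Simulated GPS fixes (metres) for one female, with two loose activity centres.
fixes = np.vstack([rng.normal([0, 0], 300, (400, 2)),
                   rng.normal([800, 400], 250, (200, 2))]).T   # shape (2, n)

kde = gaussian_kde(fixes)
xs = np.linspace(fixes[0].min() - 500, fixes[0].max() + 500, 200)
ys = np.linspace(fixes[1].min() - 500, fixes[1].max() + 500, 200)
gx, gy = np.meshgrid(xs, ys)
dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# 95% isopleth: smallest set of grid cells containing 95% of the probability mass.
cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
order = np.sort(dens.ravel())[::-1]
cum = np.cumsum(order) * cell_area
idx = min(np.searchsorted(cum, 0.95), len(order) - 1)
threshold = order[idx]
home_range_ha = (dens >= threshold).sum() * cell_area / 10_000
print(f"95% kernel home range: {home_range_ha:.0f} ha")
```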

  5. Independent and combined influence of the components of physical fitness on academic performance in youth.

    PubMed

    Esteban-Cornejo, Irene; Tejero-González, Carlos Ma; Martinez-Gomez, David; del-Campo, Juan; González-Galo, Ana; Padilla-Moledo, Carmen; Sallis, James F; Veiga, Oscar L

    2014-08-01

    To examine the independent and combined associations of the components of physical fitness with academic performance among youths. This cross-sectional study included a total of 2038 youths (989 girls) aged 6-18 years. Cardiorespiratory capacity was measured using the 20-m shuttle run test. Motor ability was assessed with the 4×10-m shuttle run test of speed of movement, agility, and coordination. A muscular strength z-score was computed based on handgrip strength and standing long jump distance. Academic performance was assessed through school records using 4 indicators: Mathematics, Language, an average of Mathematics and Language, and grade point average score. Cardiorespiratory capacity and motor ability were independently associated with all academic variables in youth, even after adjustment for fitness and fatness indicators (all P≤.001), whereas muscular strength was not associated with academic performance independent of the other 2 physical fitness components. In addition, the combined adverse effects of low cardiorespiratory capacity and motor ability on academic performance were observed across the risk groups (P for trend<.001). Cardiorespiratory capacity and motor ability, both independently and combined, may have a beneficial influence on academic performance in youth. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    USGS Publications Warehouse

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  7. Cues of upper body strength account for most of the variance in men's bodily attractiveness.

    PubMed

    Sell, Aaron; Lukazsweski, Aaron W; Townsley, Michael

    2017-12-20

    Evolution equips sexually reproducing species with mate choice mechanisms that function to evaluate the reproductive consequences of mating with different individuals. Indeed, evolutionary psychologists have shown that women's mate choice mechanisms track many cues of men's genetic quality and ability to invest resources in the woman and her offspring. One variable that predicted both a man's genetic quality and his ability to invest is the man's formidability (i.e. fighting ability or resource holding power/potential). Modern women, therefore, should have mate choice mechanisms that respond to ancestral cues of a man's fighting ability. One crucial component of a man's ability to fight is his upper body strength. Here, we test how important physical strength is to men's bodily attractiveness. Three sets of photographs of men's bodies were shown to raters who estimated either their physical strength or their attractiveness. Estimates of physical strength determined over 70% of men's bodily attractiveness. Additional analyses showed that tallness and leanness were also favoured, and, along with estimates of physical strength, accounted for 80% of men's bodily attractiveness. Contrary to popular theories of men's physical attractiveness, there was no evidence of a nonlinear effect; the strongest men were the most attractive in all samples. © 2017 The Author(s).

  8. [Estimating heavy metal concentrations in topsoil from vegetation reflectance spectra of Hyperion images: A case study of Yushu County, Qinghai, China].

    PubMed

    Yang, Ling Yu; Gao, Xiao Hong; Zhang, Wei; Shi, Fei Fei; He, Lin Hua; Jia, Wei

    2016-06-01

    In this study, we explored the feasibility of estimating soil heavy metal concentrations from hyperspectral satellite imagery. The concentrations of As, Pb, Zn and Cd in 48 topsoil samples collected in the field in Yushu County of the Sanjiangyuan region were measured in the laboratory. We then extracted 176 vegetation spectral reflectance bands for the 48 soil samples, as well as five vegetation indices, from two Hyperion images. Partial least squares regression (PLSR) was used to relate the soil heavy metal concentrations to these two independent sets of Hyperion-derived variables: one model used the 176 vegetation spectral reflectance bands as predictors (called the vegetation spectral reflectance-based estimation model) and the other used the five vegetation indices (called the synthetic vegetation index-based estimation model). Using RPD (the ratio of the standard deviation of the measured values of the validation samples to the RMSE) as the validation criterion, the RPDs for As and Pb from the two models were both below 1.4, indicating that neither model could even roughly estimate As and Pb concentrations, whereas the RPDs for Zn (1.53 and 1.46) and Cd (1.46 and 1.42) indicated that both models could roughly estimate Zn and Cd concentrations. Based on these results, the vegetation spectral reflectance-based estimation model was combined with the Hyperion image to map the spatial distribution of Zn concentration. The estimated Zn map showed that zones with high Zn concentrations were distributed near provincial road 308, national road 214 and towns, and could therefore be influenced by human activities. Our study demonstrates that the spectral reflectance of Hyperion imagery is useful for estimating the soil concentrations of Zn and Cd.
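
    The RPD criterion used above is just the standard deviation of the measured values in the validation set divided by the RMSE of the predictions; the helper below (with invented numbers) makes the 1.4 threshold concrete.

```python
import numpy as np

def rpd(measured, predicted):
    """Ratio of performance to deviation: SD of measured values / RMSE of predictions."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return measured.std(ddof=1) / rmse

measured  = [62.0, 75.0, 58.0, 90.0, 81.0, 66.0]    # e.g. validation Zn values, mg/kg
predicted = [65.0, 70.0, 61.0, 84.0, 78.0, 70.0]
print(f"RPD = {rpd(measured, predicted):.2f}")       # values above ~1.4 suggest rough estimation is possible
```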

  9. Is Bayesian Estimation Proper for Estimating the Individual's Ability? Research Report 80-3.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    The effect of prior information in Bayesian estimation is considered, mainly from the standpoint of objective testing. In the estimation of a parameter belonging to an individual, the prior information is, in most cases, the density function of the population to which the individual belongs. Bayesian estimation was compared with maximum likelihood…

  10. Impacts of Perinatal Dioxin Exposure on Motor Coordination and Higher Cognitive Development in Vietnamese Preschool Children: A Five-Year Follow-Up

    PubMed Central

    Tran, Nghi Ngoc; Pham, Tai The; Ozawa, Kyoko; Nishijo, Muneko; Nguyen, Anh Thi Nguyet; Tran, Tuong Quy; Hoang, Luong Van; Tran, Anh Hai; Phan, Vu Huy Anh; Nakai, Akio; Nishino, Yoshikazu; Nishijo, Hisao

    2016-01-01

    Dioxin concentrations remain elevated in the environment and in humans residing near former US Air Force bases in South Vietnam. Our previous epidemiological studies showed adverse effects of dioxin exposure on neurodevelopment for the first 3 years of life. Subsequently, we extended the follow-up period and investigated the influence of perinatal dioxin exposure on neurodevelopment, including motor coordination and higher cognitive ability, in preschool children. Presently, we investigated 176 children in a hot spot of dioxin contamination who were followed up from birth until 5 years old. Perinatal dioxin exposure levels were estimated by measuring dioxin levels in maternal breast milk. Dioxin toxicity was evaluated using two indices: toxic equivalent (TEQ)-polychlorinated dibenzo-p-dioxins/furans (PCDDs/Fs) and concentration of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Coordinated movements, including manual dexterity, aiming and catching, and balance, were assessed using the Movement Assessment Battery for Children, Second Edition (Movement ABC-2). Cognitive ability was assessed using the nonverbal index (NVI) of the Kaufman Assessment Battery for Children, Second Edition (KABC-II). In boys, total test and balance scores of Movement ABC-2 were significantly lower in the high TEQ-PCDDs/Fs group compared with the moderate and low exposure groups. NVI scores and the pattern reasoning subscale of the KABC-II indicating planning ability were also significantly lower in the high TCDD exposure group compared with the low exposure group of boys. However, in girls, no significant differences in Movement ABC-2 and KABC-II scores were found among the different TEQ-PCDDs/Fs and TCDD exposure groups. Furthermore, in high-risk cases, five boys and one girl highly exposed to TEQ-PCDDs/Fs and TCDD had double the risk for difficulties in both neurodevelopmental skills. These results suggest differential impacts of TEQ-PCDDs/Fs and TCDD exposure on motor coordination and higher cognitive ability, respectively. Moreover, high TEQ-PCDDs/Fs exposure combined with high TCDD exposure may increase autistic traits combined with developmental coordination disorder. PMID:26824471

  11. Impacts of Perinatal Dioxin Exposure on Motor Coordination and Higher Cognitive Development in Vietnamese Preschool Children: A Five-Year Follow-Up.

    PubMed

    Tran, Nghi Ngoc; Pham, Tai The; Ozawa, Kyoko; Nishijo, Muneko; Nguyen, Anh Thi Nguyet; Tran, Tuong Quy; Hoang, Luong Van; Tran, Anh Hai; Phan, Vu Huy Anh; Nakai, Akio; Nishino, Yoshikazu; Nishijo, Hisao

    2016-01-01

    Dioxin concentrations remain elevated in the environment and in humans residing near former US Air Force bases in South Vietnam. Our previous epidemiological studies showed adverse effects of dioxin exposure on neurodevelopment for the first 3 years of life. Subsequently, we extended the follow-up period and investigated the influence of perinatal dioxin exposure on neurodevelopment, including motor coordination and higher cognitive ability, in preschool children. Presently, we investigated 176 children in a hot spot of dioxin contamination who were followed up from birth until 5 years old. Perinatal dioxin exposure levels were estimated by measuring dioxin levels in maternal breast milk. Dioxin toxicity was evaluated using two indices: toxic equivalent (TEQ)-polychlorinated dibenzo-p-dioxins/furans (PCDDs/Fs) and concentration of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Coordinated movements, including manual dexterity, aiming and catching, and balance, were assessed using the Movement Assessment Battery for Children, Second Edition (Movement ABC-2). Cognitive ability was assessed using the nonverbal index (NVI) of the Kaufman Assessment Battery for Children, Second Edition (KABC-II). In boys, total test and balance scores of Movement ABC-2 were significantly lower in the high TEQ-PCDDs/Fs group compared with the moderate and low exposure groups. NVI scores and the pattern reasoning subscale of the KABC-II indicating planning ability were also significantly lower in the high TCDD exposure group compared with the low exposure group of boys. However, in girls, no significant differences in Movement ABC-2 and KABC-II scores were found among the different TEQ-PCDDs/Fs and TCDD exposure groups. Furthermore, in high-risk cases, five boys and one girl highly exposed to TEQ-PCDDs/Fs and TCDD had double the risk for difficulties in both neurodevelopmental skills. These results suggest differential impacts of TEQ-PCDDs/Fs and TCDD exposure on motor coordination and higher cognitive ability, respectively. Moreover, high TEQ-PCDDs/Fs exposure combined with high TCDD exposure may increase autistic traits combined with developmental coordination disorder.

  12. Bayesian Estimation of Combined Accuracy for Tests with Verification Bias

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

    This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not verified by the gold standard. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. Accuracy of two combined binary tests is estimated employing either “believe the positive” or “believe the negative” rule; the true and false positive fractions for each rule are then computed for the two tests. In order to perform the analysis, the missing at random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests. PMID:26859487
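
    Setting verification bias aside, the two combination rules themselves are easy to state: under "believe the positive" the combined result is positive if either test is positive, and under "believe the negative" it is positive only if both are. The sketch below computes the combined true and false positive fractions under an added assumption of conditional independence between the tests given disease status; the accuracies are invented.

```python
def believe_the_positive(se1, sp1, se2, sp2):
    """Combined sensitivity/specificity when a positive on either test counts."""
    se = 1 - (1 - se1) * (1 - se2)      # true positive fraction
    sp = sp1 * sp2                      # 1 minus the false positive fraction
    return se, sp

def believe_the_negative(se1, sp1, se2, sp2):
    """Combined sensitivity/specificity when both tests must be positive."""
    se = se1 * se2
    sp = 1 - (1 - sp1) * (1 - sp2)
    return se, sp

# Illustrative accuracies for two imaging tests (invented values).
print(believe_the_positive(0.80, 0.90, 0.70, 0.85))   # higher sensitivity, lower specificity
print(believe_the_negative(0.80, 0.90, 0.70, 0.85))   # lower sensitivity, higher specificity
```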

  13. What do parents know about their children's comprehension of emotions? accuracy of parental estimates in a community sample of pre-schoolers.

    PubMed

    Kårstad, S B; Kvello, O; Wichstrøm, L; Berg-Nielsen, T S

    2014-05-01

    Parents' ability to correctly perceive their child's skills has implications for how the child develops. In some studies, parents have been shown to overestimate their child's abilities in areas such as IQ, memory and language. Emotion Comprehension (EC) is a skill central to children's emotion regulation, initially learned from their parents. In this cross-sectional study we first tested children's EC and then asked parents to estimate the child's performance. Thus, a measure of accuracy between child performance and parents' estimates was obtained. Subsequently, we obtained information on child and parent factors that might predict parents' accuracy in estimating their child's EC. Child EC and parental accuracy of estimation were tested by studying a community sample of 882 4-year-olds who completed the Test of Emotion Comprehension (TEC). The parents were instructed to guess their children's responses on the TEC. Predictors of parental accuracy of estimation were child actual performance on the TEC, child language comprehension, observed parent-child interaction, the education level of the parent, and child mental health. Ninety-one per cent of the parents overestimated their children's EC. On average, parents estimated that their 4-year-old children would display the level of EC corresponding to a 7-year-old. Accuracy of parental estimation was predicted by high child performance on the TEC, advanced child language comprehension, and more optimal parent-child interaction. Parents' ability to estimate the level of their child's EC was characterized by a substantial overestimation. The more competent the child, and the more sensitive and structuring the parent was when interacting with the child, the more accurate the parent was in the estimation of their child's EC. © 2013 John Wiley & Sons Ltd.

  14. Shrinkage estimation of effect sizes as an alternative to hypothesis testing followed by estimation in high-dimensional biology: applications to differential gene expression.

    PubMed

    Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R

    2010-01-01

    Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
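
    As a toy illustration of why shrinkage can beat raw or hard-thresholded estimates, the sketch below applies a simple empirical-Bayes (normal prior) shrinkage to per-gene log fold-change estimates; the simulated data, the common known standard error, and the estimator itself are simplifications of the local-FDR-based approach evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_genes = 5000
# Most genes are not differentially expressed; a few have real log2 fold changes.
true_lfc = np.where(rng.random(n_genes) < 0.05, rng.normal(0, 1.5, n_genes), 0.0)
se = 0.4                                          # assumed common standard error
observed = true_lfc + rng.normal(0, se, n_genes)  # per-gene estimates (like MLEs)

# Empirical-Bayes shrinkage toward 0: posterior mean under a N(0, tau^2) prior,
# with tau^2 estimated by moments from the observed estimates.
tau2 = max(observed.var() - se ** 2, 1e-6)
shrunk = observed * tau2 / (tau2 + se ** 2)

mse = lambda est: np.mean((est - true_lfc) ** 2)
print(f"MSE of raw estimates:      {mse(observed):.4f}")
print(f"MSE of shrunken estimates: {mse(shrunk):.4f}")
```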

  15. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static-depth cues are needed to provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
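
    The bias and precision figures quoted above come from psychometric curves; a schematic of that step is to fit a cumulative Gaussian to the proportion of 'rightward' heading judgments as a function of simulated heading angle and read off the mean (bias) and semi-interquartile range (precision, 0.6745 sigma for a Gaussian). The data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(heading, mu, sigma):
    """Probability of responding 'right of gaze' for a given simulated heading (deg)."""
    return norm.cdf(heading, loc=mu, scale=sigma)

# Synthetic data: headings (deg, negative = left) and proportion of 'right' responses.
headings = np.array([-12, -8, -4, -2, 0, 2, 4, 8, 12], dtype=float)
p_right = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 1.00])

(mu, sigma), _ = curve_fit(psychometric, headings, p_right, p0=[0.0, 3.0])
bias = mu                          # point of subjective equality
precision = 0.6745 * sigma         # semi-interquartile range of the fitted Gaussian
print(f"bias = {bias:.2f} deg, precision = {precision:.2f} deg")
```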

  16. Historical (1850-2000) gridded anthropogenic and biomass burning emissions of reactive gases and aerosols:methodology and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamarque, J. F.; Bond, Tami C.; Eyring, Veronika

    2010-08-11

    We present and discuss a new dataset of gridded emissions covering the historical period (1850-2000) in decadal increments at a horizontal resolution of 0.5° in latitude and longitude. The primary purpose of this inventory is to provide consistent gridded emissions of reactive gases and aerosols for use in chemistry model simulations needed by climate models for the Climate Model Intercomparison Program #5 (CMIP5) in support of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment report. Our best estimate for the year 2000 inventory represents a combination of existing regional and global inventories to capture the best information available at this point; 40 regions and 12 sectors were used to combine the various sources. The historical reconstruction of each emitted compound, for each region and sector, was then forced to agree with our 2000 estimate, ensuring continuity between past and 2000 emissions. Application of these emissions into two chemistry-climate models is used to test their ability to capture long-term changes in atmospheric ozone, carbon monoxide and aerosols distributions. The simulated long-term change in the Northern mid-latitudes surface and mid-troposphere ozone is not quite as rapid as observed. However, stations outside this latitude band show much better agreement in both present-day and long-term trend. The model simulations consistently underestimate the carbon monoxide trend, while capturing the long-term trend at the Mace Head station. The simulated sulfate and black carbon deposition over Greenland is in very good agreement with the ice-core observations spanning the simulation period. Finally, aerosol optical depth and additional aerosol diagnostics are shown to be in good agreement with previously published estimates.

  17. Multidate, multisensor remote sensing reveals high density of carbon-rich mountain peatlands in the páramo of Ecuador.

    PubMed

    Hribljan, John A; Suarez, Esteban; Bourgeau-Chavez, Laura; Endres, Sarah; Lilleskov, Erik A; Chimbolema, Segundo; Wayson, Craig; Serocki, Eleanor; Chimner, Rodney A

    2017-12-01

    Tropical peatlands store a significant portion of the global soil carbon (C) pool. However, tropical mountain peatlands contain extensive peat soils that have yet to be mapped or included in global C estimates. This lack of data hinders our ability to inform policy and apply sustainable management practices to these peatlands that are experiencing unprecedented high rates of land use and land cover change. Rapid large-scale mapping activities are urgently needed to quantify tropical wetland extent and rate of degradation. We tested a combination of multidate, multisensor radar and optical imagery (Landsat TM/PALSAR/RADARSAT-1/TPI image stack) for detecting peatlands in a 2715 km² area in the high elevation mountains of the Ecuadorian páramo. The map was combined with an extensive soil coring data set to produce the first estimate of regional peatland soil C storage in the páramo. Our map displayed a high coverage of peatlands (614 km²) containing an estimated 128.2 ± 9.1 Tg of peatland belowground soil C within the mapping area. Scaling-up to the country level, páramo peatlands likely represent less than 1% of the total land area of Ecuador but could contain as much as ~23% of the above- and belowground vegetation C stocks in Ecuadorian forests. These mapping approaches provide an essential methodological improvement applicable to mountain peatlands across the globe, facilitating mapping efforts in support of effective policy and sustainable management, including national and global C accounting and C management efforts. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  18. Global snowfall: A combined CloudSat, GPM, and reanalysis perspective.

    NASA Astrophysics Data System (ADS)

    Milani, Lisa; Kulie, Mark S.; Skofronick-Jackson, Gail; Munchak, S. Joseph; Wood, Norman B.; Levizzani, Vincenzo

    2017-04-01

    Quantitative global snowfall estimates derived from multi-year data records will be presented to highlight recent advances in high latitude precipitation retrievals using spaceborne observations. More specifically, the analysis features the 2006-2016 CloudSat Cloud Profiling Radar (CPR) and the 2014-2016 Global Precipitation Measurement (GPM) Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR) observational datasets and derived products. The ERA-Interim reanalysis dataset is also used to define the meteorological context and to provide an independent combined modeling/observational evaluation dataset. An overview is first provided of CloudSat CPR-derived results that have stimulated significant recent research regarding global snowfall, including seasonal analyses of unique snowfall modes. GMI and DPR global annual snowfall retrievals are then evaluated against the CloudSat estimates to highlight regions where the datasets provide both consistent and diverging snowfall estimates. A hemispheric seasonal analysis for both datasets will also be provided. These comparisons aim at providing a unified global snowfall characterization that leverages the respective instruments' strengths. Attention will also be devoted to regions around the globe that experience unique snowfall modes. For instance, CloudSat has demonstrated an ability to effectively discern snowfall produced by shallow cumuliform cloud structures (e.g., lake/ocean-induced convective snow produced by air/water interactions associated with seasonal cold air outbreaks). The CloudSat snowfall database also reveals prevalent seasonal shallow cumuliform snowfall trends over climate-sensitive regions like the Greenland Ice Sheet. Other regions with unique snowfall modes, such as the US East Coast winter storm track zone that experiences intense snowfall rates directly associated with strong low pressure systems, will also be highlighted to demonstrate GPM's observational effectiveness. Linkages between CloudSat and GPM global snowfall analyses and independent ERA-Interim datasets will also be presented as a final evaluation exercise.

  19. Experiences in multiyear combined state-parameter estimation with an ecosystem model of the North Atlantic and Arctic Oceans using the Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Simon, Ehouarn; Samuelsen, Annette; Bertino, Laurent; Mouysset, Sandrine

    2015-12-01

    A sequence of one-year combined state-parameter estimation experiments has been conducted in a North Atlantic and Arctic Ocean configuration of the coupled physical-biogeochemical model HYCOM-NORWECOM over the period 2007-2010. The aim is to evaluate the ability of an ensemble-based data assimilation method to calibrate ecosystem model parameters in a pre-operational setting, namely the production of the MyOcean pilot reanalysis of the Arctic biology. For that purpose, four biological parameters (two phyto- and two zooplankton mortality rates) are estimated by assimilating weekly data, such as satellite-derived Sea Surface Temperature, along-track Sea Level Anomalies, ice concentrations and chlorophyll-a concentrations, with an Ensemble Kalman Filter. The set of optimized parameters locally exhibits seasonal variations, suggesting that time-dependent parameters should be used in ocean ecosystem models. A clustering analysis of the optimized parameters is performed in order to identify consistent ecosystem regions. In the northern part of the domain, where the ecosystem model is the most reliable, most of them can be associated with Longhurst provinces and new provinces emerge in the Arctic Ocean. However, the clusters no longer coincide with the Longhurst provinces in the Tropics due to large model errors. Regarding the ecosystem state variables, the assimilation of satellite-derived chlorophyll concentration leads to a significant reduction of the RMS errors in the observed variables during the first year, i.e. 2008, compared to a free run simulation. However, local filter divergences of the parameter component occur in 2009 and result in an increase in the RMS error at the time of the spring bloom.
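
    A minimal sketch of the state-augmentation idea behind combined state-parameter estimation with a stochastic Ensemble Kalman Filter: parameters are appended to the state vector and updated through their ensemble cross-covariance with the observed variables. The toy state vector, observation indices, error level, and ensemble size below are placeholders and bear no relation to the HYCOM-NORWECOM configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ens, y, obs_idx, obs_err_std):
    """One stochastic (perturbed-observation) EnKF analysis step on an augmented ensemble.

    ens         : (n_aug, N) ensemble; rows are state variables followed by parameters
    y           : (m,) observation vector
    obs_idx     : indices of the augmented state that are directly observed
    obs_err_std : observation error standard deviation (scalar)
    """
    n_aug, N = ens.shape
    m = len(obs_idx)

    Hx = ens[obs_idx, :]                              # observed part of each member
    A = ens - ens.mean(axis=1, keepdims=True)         # state anomalies
    HA = Hx - Hx.mean(axis=1, keepdims=True)          # observation-space anomalies

    P_xy = A @ HA.T / (N - 1)                         # cross covariance (n_aug x m)
    P_yy = HA @ HA.T / (N - 1) + obs_err_std**2 * np.eye(m)
    K = P_xy @ np.linalg.inv(P_yy)                    # Kalman gain

    # Perturbed observations, one realization per ensemble member.
    Y = y[:, None] + obs_err_std * rng.standard_normal((m, N))
    return ens + K @ (Y - Hx)

# Toy example: 5 "biogeochemical" state variables plus 2 unknown mortality-rate-like
# parameters, 50 ensemble members, with 3 of the state variables observed.
n_state, n_param, N = 5, 2, 50
ens = np.vstack([rng.normal(1.0, 0.3, (n_state, N)),     # state
                 rng.normal(0.1, 0.05, (n_param, N))])   # parameters (augmented)
y = np.array([1.2, 0.8, 1.1])
ens = enkf_analysis(ens, y, obs_idx=[0, 1, 2], obs_err_std=0.1)
print("posterior parameter means:", ens[n_state:, :].mean(axis=1))
```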

  20. Stand-volume estimation from multi-source data for coppiced and high forest Eucalyptus spp. silvicultural systems in KwaZulu-Natal, South Africa

    NASA Astrophysics Data System (ADS)

    Dube, Timothy; Sibanda, Mbulisi; Shoko, Cletah; Mutanga, Onisimo

    2017-10-01

    Forest stand volume is one of the crucial stand parameters, which influences the ability of these forests to provide ecosystem goods and services. This study thus aimed at examining the potential of integrating multispectral SPOT 5 imagery with ancillary data (forest age and rainfall metrics) in estimating stand volume between coppiced and planted Eucalyptus spp. in KwaZulu-Natal, South Africa. To achieve this objective, the Partial Least Squares Regression (PLSR) algorithm was used. The PLSR algorithm was implemented in three analysis stages: stage I, using ancillary data as an independent dataset; stage II, SPOT 5 spectral bands as an independent dataset; and stage III, combined SPOT 5 spectral bands and ancillary data. The results of the study showed that the use of an independent ancillary dataset better explained the volume of Eucalyptus spp. growing from coppices (R2Adj = 0.54, RMSEP = 44.08 m3/ha), when compared with those that were planted (R2Adj = 0.43, RMSEP = 53.29 m3/ha). Similar results were also observed when SPOT 5 spectral bands were applied as an independent dataset, whereas improved volume estimates were produced when using the combined dataset. For instance, planted Eucalyptus spp. were better predicted (R2 = 0.77, R2Adj = 0.59, RMSEP = 36.02 m3/ha) when compared with those growing from coppices (R2 = 0.76, R2Adj = 0.46, RMSEP = 40.63 m3/ha). Overall, the findings of this study demonstrated the relevance of multi-source data in ecosystems modelling.
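
    A minimal sketch of the three-tier PLSR comparison described above, using scikit-learn; the spectral bands, ancillary variables, and stand volumes are synthetic placeholders, and the cross-validated R2/RMSEP computed here only mimic the reporting style of the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Synthetic placeholders: 4 SPOT 5-like spectral bands, 2 ancillary variables
# (stand age, rainfall), and a stand-volume response (m^3/ha).
n = 120
bands = rng.normal(0.2, 0.05, (n, 4))
ancillary = np.column_stack([rng.uniform(2, 10, n),       # age (years)
                             rng.normal(800, 120, n)])    # rainfall (mm)
volume = 30 * ancillary[:, 0] + 40 * bands[:, 3] + rng.normal(0, 15, n)

def cv_performance(X, y, n_components=2, cv=10):
    """Cross-validated R2 and RMSEP for a PLSR model."""
    pred = cross_val_predict(PLSRegression(n_components=n_components), X, y, cv=cv).ravel()
    resid = y - pred
    r2 = 1 - resid.var() / y.var()
    rmsep = np.sqrt(np.mean(resid**2))
    return r2, rmsep

# Three-tier comparison: ancillary only, spectral bands only, and combined predictors.
for label, X in [("ancillary only", ancillary),
                 ("bands only", bands),
                 ("combined", np.hstack([bands, ancillary]))]:
    r2, rmsep = cv_performance(X, volume)
    print(f"{label:15s}  R2={r2:.2f}  RMSEP={rmsep:.1f} m3/ha")
```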

  1. Speech recognition in one- and two-talker maskers in school-age children and adults: Development of perceptual masking and glimpsing

    PubMed Central

    Buss, Emily; Leibold, Lori J.; Porter, Heather L.; Grose, John H.

    2017-01-01

    Children perform more poorly than adults on a wide range of masked speech perception paradigms, but this effect is particularly pronounced when the masker itself is also composed of speech. The present study evaluated two factors that might contribute to this effect: the ability to perceptually isolate the target from masker speech, and the ability to recognize target speech based on sparse cues (glimpsing). Speech reception thresholds (SRTs) were estimated for closed-set, disyllabic word recognition in children (5–16 years) and adults in a one- or two-talker masker. Speech maskers were 60 dB sound pressure level (SPL), and they were either presented alone or in combination with a 50-dB-SPL speech-shaped noise masker. There was an age effect overall, but performance was adult-like at a younger age for the one-talker than the two-talker masker. Noise tended to elevate SRTs, particularly for older children and adults, and when summed with the one-talker masker. Removing time-frequency epochs associated with a poor target-to-masker ratio markedly improved SRTs, with larger effects for younger listeners; the age effect was not eliminated, however. Results were interpreted as indicating that development of speech-in-speech recognition is likely impacted by development of both perceptual masking and the ability to recognize speech based on sparse cues. PMID:28464682
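
    A minimal sketch of the "glimpsing" manipulation described above, implemented as an ideal binary mask that keeps only time-frequency cells with a favourable target-to-masker ratio; the signals and the 0 dB criterion are placeholders, not the study's stimuli or processing chain.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(2)
fs = 16000
t = np.arange(fs) / fs

# Placeholder "target" and "masker" signals (1 s each); in the study these would be
# the recorded target words and the one- or two-talker masker.
target = np.sin(2 * np.pi * 500 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
masker = rng.normal(0, 0.5, fs)

f, frames, T = stft(target, fs=fs, nperseg=512)
_, _, M = stft(masker, fs=fs, nperseg=512)

# Ideal binary mask: keep only time-frequency cells where the local
# target-to-masker ratio exceeds a criterion (here 0 dB).
tmr_db = 20 * np.log10(np.abs(T) + 1e-12) - 20 * np.log10(np.abs(M) + 1e-12)
mask = tmr_db > 0.0

mixture_tf = T + M
_, glimpsed = istft(mixture_tf * mask, fs=fs, nperseg=512)
print(f"retained {mask.mean():.0%} of time-frequency cells")
```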

  2. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15cm(2)-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm(2)-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  4. Methodological Challenges in Collecting Social and Behavioural Data Regarding the HIV Epidemic among Gay and Other Men Who Have Sex with Men in Australia

    PubMed Central

    Holt, Martin; de Wit, John; Brown, Graham; Maycock, Bruce; Fairley, Christopher; Prestage, Garrett

    2014-01-01

    Background Behavioural surveillance and research among gay and other men who have sex with men (GMSM) commonly relies on non-random recruitment approaches. Methodological challenges limit their ability to accurately represent the population of adult GMSM. We compared the social and behavioural profiles of GMSM recruited via venue-based, online, and respondent-driven sampling (RDS) and discussed their utility for behavioural surveillance. Methods Data from four studies were selected to reflect each recruitment method. We compared demographic characteristics and the prevalence of key indicators including sexual and HIV testing practices obtained from samples recruited through different methods, and population estimates from respondent-driven sampling partition analysis. Results Overall, the socio-demographic profile of GMSM was similar across samples, with some differences observed in age and sexual identification. Men recruited through time-location sampling appeared more connected to the gay community, reported a greater number of sexual partners, but engaged in less unprotected anal intercourse with regular (UAIR) or casual partners (UAIC). The RDS sample overestimated the proportion of HIV-positive men and appeared to recruit men with an overall higher number of sexual partners. A single-website survey recruited a sample with characteristics which differed considerably from the population estimates with regards to age, ethnic diversity and behaviour. Data acquired through time-location sampling underestimated the rates of UAIR and UAIC, while RDS and online sampling both generated samples that underestimated UAIR. Simulated composite samples combining recruits from time-location and multi-website online sampling may produce characteristics more consistent with the population estimates, particularly with regards to sexual practices. Conclusion Respondent-driven sampling produced the sample that was most consistent with population estimates, but this methodology is complex and logistically demanding. Time-location and online recruitment are more cost-effective and easier to implement; using these approaches in combination may offer the potential to recruit a more representative sample of GMSM. PMID:25409440

  5. Methodological challenges in collecting social and behavioural data regarding the HIV epidemic among gay and other men who have sex with men in Australia.

    PubMed

    Zablotska, Iryna B; Frankland, Andrew; Holt, Martin; de Wit, John; Brown, Graham; Maycock, Bruce; Fairley, Christopher; Prestage, Garrett

    2014-01-01

    Behavioural surveillance and research among gay and other men who have sex with men (GMSM) commonly relies on non-random recruitment approaches. Methodological challenges limit their ability to accurately represent the population of adult GMSM. We compared the social and behavioural profiles of GMSM recruited via venue-based, online, and respondent-driven sampling (RDS) and discussed their utility for behavioural surveillance. Data from four studies were selected to reflect each recruitment method. We compared demographic characteristics and the prevalence of key indicators including sexual and HIV testing practices obtained from samples recruited through different methods, and population estimates from respondent-driven sampling partition analysis. Overall, the socio-demographic profile of GMSM was similar across samples, with some differences observed in age and sexual identification. Men recruited through time-location sampling appeared more connected to the gay community, reported a greater number of sexual partners, but engaged in less unprotected anal intercourse with regular (UAIR) or casual partners (UAIC). The RDS sample overestimated the proportion of HIV-positive men and appeared to recruit men with an overall higher number of sexual partners. A single-website survey recruited a sample with characteristics which differed considerably from the population estimates with regards to age, ethnic diversity and behaviour. Data acquired through time-location sampling underestimated the rates of UAIR and UAIC, while RDS and online sampling both generated samples that underestimated UAIR. Simulated composite samples combining recruits from time-location and multi-website online sampling may produce characteristics more consistent with the population estimates, particularly with regards to sexual practices. Respondent-driven sampling produced the sample that was most consistent with population estimates, but this methodology is complex and logistically demanding. Time-location and online recruitment are more cost-effective and easier to implement; using these approaches in combination may offer the potential to recruit a more representative sample of GMSM.

  6. Inheritance of Resistance to Sorghum Shoot Fly, Atherigona soccata in Sorghum, Sorghum bicolor (L.) Moench

    PubMed Central

    Mohammed, Riyazaddin; Are, Ashok Kumar; Munghate, Rajendra Sudhakar; Bhavanasi, Ramaiah; Polavarapu, Kavi Kishor B.; Sharma, Hari Chand

    2016-01-01

    Sorghum production is affected by a wide array of biotic constraints, of which sorghum shoot fly, Atherigona soccata is the most important pest, which severely damages the sorghum crop during the seedling stage. Host plant resistance is one of the major components to control sorghum shoot fly, A. soccata. To understand the nature of gene action for inheritance of shoot fly resistance, we evaluated 10 parents, 45 F1's and their reciprocals in replicated trials during the rainy and postrainy seasons. The genotypes ICSV 700, Phule Anuradha, ICSV 25019, PS 35805, IS 2123, IS 2146, and IS 18551 exhibited resistance to shoot fly damage across seasons. Crosses between susceptible parents were preferred for egg laying by the shoot fly females, resulting in a susceptible reaction. ICSV 700, ICSV 25019, PS 35805, IS 2123, IS 2146, and IS 18551 exhibited significant and negative general combining ability (gca) effects for oviposition, deadheart incidence, and overall resistance score. The plant morphological traits associated with expression of resistance/susceptibility to shoot fly damage such as leaf glossiness, plant vigor, and leaf sheath pigmentation also showed significant gca effects by these genotypes, suggesting the potential for use as a selection criterion to breed for resistance to shoot fly, A. soccata. ICSV 700, Phule Anuradha, IS 2146 and IS 18551 with significant positive gca effects for trichome density can also be utilized in improving sorghums for shoot fly resistance. The parents involved in hybrids with negative specific combining ability (sca) effects for shoot fly resistance traits can be used in developing sorghum hybrids with adaptation to the postrainy season. The significant reciprocal effects of combining abilities for oviposition, leaf glossy score and trichome density suggested the influence of cytoplasmic factors in inheritance of shoot fly resistance. Higher values of variance due to specific combining ability (σ²s), dominance variance (σ²d), and lower predictability ratios than the variance due to general combining ability (σ²g) and additive variance (σ²a) for shoot fly resistance traits indicated the predominance of dominance type of gene action, whereas trichome density, leaf glossy score, and plant vigor score with high σ²g, additive variance, predictability ratio, and the ratio of general combining ability to the specific combining ability showed predominance of additive type of gene action, indicating the importance of heterosis breeding followed by simple selection in breeding shoot fly-resistant sorghums. Most of the traits exhibited high broad-sense heritability, indicating high inheritance of shoot fly resistance traits. PMID:27200020

  7. Career Adapt-Abilities Scale--Netherlands Form: Psychometric Properties and Relationships to Ability, Personality, and Regulatory Focus

    ERIC Educational Resources Information Center

    van Vianen, Annelies E. M.; Klehe, Ute-Christine; Koen, Jessie; Dries, Nicky

    2012-01-01

    The Career Adapt-Abilities Scale (CAAS)--Netherlands Form consists of four scales, each with six items, which measure concern, control, curiosity, and confidence as psychosocial resources for managing occupational transitions, developmental tasks, and work traumas. Internal consistency estimates for the subscale and total scores ranged from…

  8. The Consequence of Combined Pain and Stress on Work Ability in Female Laboratory Technicians: A Cross-Sectional Study.

    PubMed

    Jay, Kenneth; Friborg, Maria Kristine; Sjøgaard, Gisela; Jakobsen, Markus Due; Sundstrup, Emil; Brandt, Mikkel; Andersen, Lars Louis

    2015-12-11

    Musculoskeletal pain and stress-related disorders are leading causes of impaired work ability, sickness absences and disability pensions. However, knowledge about the combined detrimental effect of pain and stress on work ability is lacking. This study investigates the association between pain in the neck-shoulders, perceived stress, and work ability. In a cross-sectional survey at a large pharmaceutical company in Denmark, 473 female laboratory technicians replied to questions about stress (Perceived Stress Scale), musculoskeletal pain intensity (scale 0-10) of the neck and shoulders, and work ability (Work Ability Index). General linear models tested the association between variables. In the multi-adjusted model, stress (p < 0.001) and pain (p < 0.001) had independent main effects on the work ability index score, and there was no significant stress-by-pain interaction (p = 0.32). Work ability decreased gradually with both increased stress and pain. Workers with low stress and low pain had the highest Work Ability Index score (44.6 (95% CI 43.9-45.3)) and workers with high stress and high pain had the lowest score (32.7 (95% CI 30.6-34.9)). This cross-sectional study indicates that increased stress and musculoskeletal pain are independently associated with lower work ability in female laboratory technicians.

  9. Capillary pressure-saturation relationships for porous granular materials: Pore morphology method vs. pore unit assembly method

    NASA Astrophysics Data System (ADS)

    Sweijen, Thomas; Aslannejad, Hamed; Hassanizadeh, S. Majid

    2017-09-01

    In studies of two-phase flow in complex porous media it is often desirable to have an estimation of the capillary pressure-saturation curve prior to measurements. Therefore, we compare in this research the capability of three pore-scale approaches in reproducing experimentally measured capillary pressure-saturation curves. To do so, we have generated 12 packings of spheres that are representative of four different glass-bead packings and eight different sand packings, for which we have found experimental data on the capillary pressure-saturation curve in the literature. In generating the packings, we matched the particle size distributions and porosity values of the granular materials. We have used three different pore-scale approaches for generating the capillary pressure-saturation curves of each packing: i) the Pore Unit Assembly (PUA) method in combination with the Mayer and Stowe-Princen (MS-P) approximation for estimating the entry pressures of pore throats, ii) the PUA method in combination with the hemisphere approximation, and iii) the Pore Morphology Method (PMM) in combination with the hemisphere approximation. The three approaches were also used to produce capillary pressure-saturation curves for the coating layer of paper, used in inkjet printing. Curves for such layers are extremely difficult to determine experimentally, due to their very small thickness and the presence of extremely small pores (less than one micrometer in size). Results indicate that the PMM and PUA-hemisphere method give similar capillary pressure-saturation curves, because both methods rely on a hemisphere to represent the air-water interface. The ability of the hemisphere approximation and the MS-P approximation to reproduce correct capillary pressure seems to depend on the type of particle size distribution, with the hemisphere approximation working well for narrowly distributed granular materials.
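
    A minimal sketch of the hemisphere approximation used by two of the three approaches: the capillary entry pressure of a pore throat follows the Young-Laplace relation Pc = 2σcosθ/r. The MS-P approximation additionally accounts for corner menisci and is not reproduced here; the throat radii below are placeholders.

```python
import numpy as np

def hemisphere_entry_pressure(r_throat, sigma=0.072, theta_deg=0.0):
    """Capillary entry pressure (Pa) of a pore throat under the hemisphere
    approximation of the air-water interface: Pc = 2*sigma*cos(theta)/r.

    r_throat  : throat radius in metres
    sigma     : air-water surface tension (N/m), ~0.072 at room temperature
    theta_deg : contact angle in degrees (0 = perfectly wetting)
    """
    return 2.0 * sigma * np.cos(np.radians(theta_deg)) / r_throat

# Placeholder throat radii spanning sand-sized pores down to the sub-micrometre
# pores mentioned for paper coating layers.
radii = np.array([50e-6, 10e-6, 1e-6, 0.5e-6])
for r, pc in zip(radii, hemisphere_entry_pressure(radii)):
    print(f"r = {r * 1e6:6.1f} um  ->  Pc = {pc / 1000:8.1f} kPa")
```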

  10. Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.

    2015-07-01

    The technological evolution in terms of computational capacity, data acquisition systems, numerical modelling and operational oceanography is supplying opportunities for designing and building holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology to dynamically estimate time- and space-varying shoreline risk levels from ships has been developed, integrating numerical metocean forecasts and oil spill simulations with vessel tracking automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area (the Portuguese continental shelf) with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the amount of virtual spilled oil reaching the shoreline and its environmental and socio-economic vulnerabilities. The oil reaching the shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels over time. Shoreline risks can be computed in real-time or from previously obtained data. Results show the ability of the proposed methodology to estimate risk levels that are properly sensitive to dynamic metocean conditions and to oil transport behaviour. The integration of meteo-oceanic and oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances the maritime situational awareness and the decision support model, providing a more realistic approach in the assessment of shoreline impacts. The risk assessment from historical data can help identify typical risk patterns and "hot spots" or support sensitivity analyses for specific conditions, whereas real-time risk levels can be used in the prioritization of individual ships, geographical areas, strategic tug positioning and implementation of dynamic risk-based vessel traffic monitoring.
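
    A minimal conceptual sketch of combining a per-vessel spill likelihood with a shoreline consequence score into a risk rating; the functional forms, thresholds, and numbers are invented placeholders, not the calibration used in the study.

```python
def spill_likelihood(base_rate, wave_height_m, vessel_factor=1.0):
    """Placeholder likelihood model: a base accident rate scaled upward in rough
    seas and by a vessel-specific factor (age, cargo type, ...)."""
    weather_multiplier = 1.0 + 0.5 * max(wave_height_m - 2.0, 0.0)
    return min(base_rate * weather_multiplier * vessel_factor, 1.0)

def shoreline_consequence(oil_ashore_tonnes, vulnerability_index):
    """Placeholder consequence score: virtual oil reaching the shore (from the
    spill model) weighted by an environmental/socio-economic vulnerability index."""
    return oil_ashore_tonnes * vulnerability_index

def risk_level(likelihood, consequence, bins=(0.05, 0.5)):
    """Combine likelihood and consequence into a low/medium/high rating."""
    score = likelihood * consequence
    if score < bins[0]:
        return "low"
    return "medium" if score < bins[1] else "high"

# One vessel at one forecast time step.
lik = spill_likelihood(base_rate=1e-3, wave_height_m=4.5, vessel_factor=2.0)
con = shoreline_consequence(oil_ashore_tonnes=120.0, vulnerability_index=0.8)
print(f"likelihood={lik:.2e}, consequence={con:.1f}, risk={risk_level(lik, con)}")
```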

  11. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  12. Combining state-and-transition simulations and species distribution models to anticipate the effects of climate change

    USGS Publications Warehouse

    Miller, Brian W.; Frid, Leonardo; Chang, Tony; Piekielek, N. B.; Hansen, Andrew J.; Morisette, Jeffrey T.

    2015-01-01

    State-and-transition simulation models (STSMs) are known for their ability to explore the combined effects of multiple disturbances, ecological dynamics, and management actions on vegetation. However, integrating the additional impacts of climate change into STSMs remains a challenge. We address this challenge by combining an STSM with species distribution modeling (SDM). SDMs estimate the probability of occurrence of a given species based on observed presence and absence locations as well as environmental and climatic covariates. Thus, in order to account for changes in habitat suitability due to climate change, we used SDM to generate continuous surfaces of species occurrence probabilities. These data were imported into ST-Sim, an STSM platform, where they dictated the probability of each cell transitioning between alternate potential vegetation types at each time step. The STSM was parameterized to capture additional processes of vegetation growth and disturbance that are relevant to a keystone species in the Greater Yellowstone Ecosystem—whitebark pine (Pinus albicaulis). We compared historical model runs against historical observations of whitebark pine and a key disturbance agent (mountain pine beetle, Dendroctonus ponderosae), and then projected the simulation into the future. Using this combination of correlative and stochastic simulation models, we were able to reproduce historical observations and identify key data gaps. Results indicated that SDMs and STSMs are complementary tools, and combining them is an effective way to account for the anticipated impacts of climate change, biotic interactions, and disturbances, while also allowing for the exploration of management options.
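
    A minimal sketch of letting an SDM-derived occurrence-probability surface drive per-cell stochastic transitions in a state-and-transition simulation; the two-state grid, establishment and die-off rates, and the synthetic suitability surface are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic SDM output: probability that suitable conditions occur in each grid cell
# (in practice this would come from a fitted species distribution model).
suitability = rng.uniform(0.0, 1.0, size=(50, 50))

# State grid: 0 = other vegetation, 1 = whitebark pine.
state = (rng.uniform(size=suitability.shape) < 0.3).astype(int)

def step(state, suitability, establish_scale=0.05, die_off=0.02):
    """One annual time step: establishment probability scales with SDM suitability,
    while occupied cells face a background die-off (e.g. beetle or fire losses)."""
    establish = (state == 0) & (rng.uniform(size=state.shape) < establish_scale * suitability)
    loss = (state == 1) & (rng.uniform(size=state.shape) < die_off)
    new_state = state.copy()
    new_state[establish] = 1
    new_state[loss] = 0
    return new_state

for year in range(2020, 2031):
    state = step(state, suitability)
print(f"occupied fraction after simulation: {state.mean():.2f}")
```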

  13. Combinations of Epoch Durations and Cut-Points to Estimate Sedentary Time and Physical Activity among Adolescents

    ERIC Educational Resources Information Center

    Fröberg, Andreas; Berg, Christina; Larsson, Christel; Boldemann, Cecilia; Raustorp, Anders

    2017-01-01

    The purpose of the current study was to investigate how combinations of different epoch durations and cut-points affect the estimations of sedentary time and physical activity in adolescents. Accelerometer data from 101 adolescents were derived and 30 combinations were used to estimate sedentary time, light, moderate, vigorous, and combined…

  14. Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan T.

    2012-01-01

    Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…

  15. Evaluation of microarray data normalization procedures using spike-in experiments

    PubMed Central

    Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders

    2006-01-01

    Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA-concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
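
    A minimal sketch of the two evaluation quantities described above (sensitivity at a low false-positive rate and bias of the estimated log-ratios), computed on synthetic spike-in-like data rather than the EDMA-characterized arrays.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic spike-in experiment: 10,000 genes, 300 truly regulated at log2-ratio +1;
# estimated log-ratios are attenuated and noisy, mimicking the abstract's observation
# that analyses underestimate the true regulation.
n_genes, n_reg = 10_000, 300
true_log2 = np.zeros(n_genes)
true_log2[:n_reg] = 1.0
estimated = 0.45 * true_log2 + rng.normal(0, 0.25, n_genes)

def sensitivity_at_fpr(scores, is_regulated, fpr=0.01):
    """Fraction of truly regulated genes detected when the cutoff is set so that
    only `fpr` of unregulated genes exceed it."""
    cutoff = np.quantile(scores[~is_regulated], 1 - fpr)
    return np.mean(scores[is_regulated] > cutoff)

is_reg = true_log2 > 0
print("sensitivity at 1% FPR:", round(sensitivity_at_fpr(np.abs(estimated), is_reg), 2))
print("bias of regulated genes:",
      round(float(np.mean(estimated[is_reg] - true_log2[is_reg])), 2))
```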

  16. Statistical Indexes for Monitoring Item Behavior under Computer Adaptive Testing Environment.

    ERIC Educational Resources Information Center

    Zhu, Renbang; Yu, Feng; Liu, Su

    A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…

  17. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

    ERIC Educational Resources Information Center

    Andersson, Björn; Xin, Tao

    2018-01-01

    In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

  18. A comparison of sap flux-based evapotranspiration estimates with catchment-scale water balance

    Treesearch

    Chelcy R. Ford; Robert M. Hubbard; Brian D. Kloeppel; James M. Vose

    2007-01-01

    Many researchers are using sap flux to estimate tree-level transpiration, and to scale to stand- and catchment-level transpiration; yet studies evaluating the comparability of sap flux-based estimates of transpiration (E) with alternative methods for estimating Et at this spatial scale are rare. Our ability to...

  19. The National Football League (NFL) combine: does normalized data better predict performance in the NFL draft?

    PubMed

    Robbins, Daniel W

    2010-11-01

    The objective of this study was to investigate the predictive ability of National Football League (NFL) combine physical test data to predict draft order over the years 2005-2009. The NFL combine provides a setting in which NFL personnel can evaluate top draft prospects. The predictive ability of combine data in its raw form and when normalized in both a ratio and allometric manner was examined for 17 positions. Data from 8 combine physical performance tests were correlated with draft order to determine the direction and strength of relationship between the various combine measures and draft order. Players invited to the combine and subsequently drafted in the same year (n = 1,155) were included in the study. The primary finding was that performance in the combine physical test battery, whether normalized or not, has little association with draft success. In terms of predicting draft order from outcomes of the 8 tests making up the combine battery, normalized data provided no advantage over raw data. Of the 8 performance measures investigated, straight sprint time and jumping ability seem to hold the most weight with NFL personnel responsible for draft decisions. The NFL should consider revising the combine test battery to reflect the physical characteristics it deems important. It may be that NFL teams are more interested in attributes other than the purely physical traits reflected in the combine test battery. Players with aspirations of entering the NFL may be well advised to develop mental and technical skills in addition to developing the physical characteristics necessary to optimize performance.
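
    A minimal sketch of correlating raw, ratio-normalized, and allometrically normalized test scores with draft order; the player data, the noise model, and the scaling exponent are placeholders, not the NFL combine data analysed in the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)

# Placeholder combine-style data: body mass (kg), 40-yard sprint time (s), and overall
# draft position (1 = first pick); heavier players are made slightly slower.
n = 200
mass = rng.normal(110, 15, n)
sprint = rng.normal(4.8, 0.2, n) + 0.002 * (mass - 110)
draft_order = np.argsort(sprint + rng.normal(0, 0.3, n)).argsort() + 1

def correlations(score, draft_order, mass, allometric_b=0.67):
    """Spearman correlations of raw, ratio-scaled, and allometrically scaled scores
    with draft order; allometric_b is an assumed scaling exponent."""
    return {
        "raw": spearmanr(score, draft_order).correlation,
        "ratio (score/mass)": spearmanr(score / mass, draft_order).correlation,
        "allometric (score/mass^b)": spearmanr(score / mass**allometric_b,
                                               draft_order).correlation,
    }

for name, rho in correlations(sprint, draft_order, mass).items():
    print(f"sprint time, {name:26s} rho = {rho:+.2f}")
```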

  20. Coherent beam combining in atmospheric channels using gated backscatter.

    PubMed

    Naeh, Itay; Katzir, Abraham

    2016-02-01

    This paper introduces the concept of atmospheric channels and describes a possible approach for the coherent beam combining of lasers of an optical phased array (OPA) in a turbulent atmosphere. By using the recently introduced sparse spectrum harmonic augmentation method, a comprehensive simulative investigation was performed and the exceptional properties of the atmospheric channels were numerically demonstrated. Among the interesting properties are the ability to guide light in a confined manner in a refractive channel, the ability to gather different sources into the same channel, and the ability to maintain a constant relative phase within the channel between several sources. The newly introduced guiding properties, combined with a suggested method for channel probing and phase measurement by aerosol backscattered radiation, allow coherence improvement of the phased array's elements and energy refocusing at the location of the channel in order to increase power in the bucket without feedback from the target. The method relies on the electronic focusing, electronic scanning, and time gating of the OPA, combined with elements of the relative phase measurements.

  1. Effects of prolonged wakefulness combined with alcohol and hands-free cell phone divided attention tasks on simulated driving.

    PubMed

    Iudice, A; Bonanni, E; Gelli, A; Frittelli, C; Iudice, G; Cignoni, F; Ghicopulos, I; Murri, L

    2005-03-01

    Simulated driving ability was assessed following administration of alcohol, at an estimated blood level of 0.05%, and combined prolonged wakefulness, while participants were undertaking divided attention tasks over a hands-free mobile phone. Divided attention tasks were structured to provide a sustained cognitive workload to the subjects. Twenty-three young healthy individuals completed 10 km of simulated driving under four conditions in a counterbalanced, within-subject design: alcohol, alcohol and 19 h wakefulness, alcohol and 24 h wakefulness, and while sober. Study measures were: simulated driving, self-reported sleepiness, critical flicker fusion threshold (CFFT), Stroop word-colour interference test (Stroop) and simple visual reaction times (SVRT). As expected, subjective sleepiness was highly correlated with both sleep restriction and alcohol consumption. The combination of alcohol and 24 h sustained wakefulness produced the highest driving impairment, significantly beyond the alcohol effect itself. Concurrent alcohol and 19 h wakefulness significantly affected only driving time-to-collision. No significant changes of study measures occurred following alcohol intake in unrestricted sleep conditions. CFFT, SVRT and Stroop results showed a similar trend in the four study conditions. Thus, apparently 'safe' blood alcohol levels in combination with prolonged wakefulness resulted in significant driving impairments. In normal sleep conditions, alcohol effects on driving were partially counteracted by the concomitant hands-free phone-based psychometric tasks. 2005 John Wiley & Sons, Ltd.

  2. A General Model for Estimating and Correcting the Effects of Nonindependence in Meta-Analysis.

    ERIC Educational Resources Information Center

    Strube, Michael J.

    A general model is described which can be used to represent the four common types of meta-analysis: (1) estimation of effect size by combining study outcomes; (2) estimation of effect size by contrasting study outcomes; (3) estimation of statistical significance by combining study outcomes; and (4) estimation of statistical significance by…

  3. A gender approach to work ability and its relationship to professional and domestic work hours among nursing personnel.

    PubMed

    Rotenberg, Lúcia; Portela, Luciana Fernandes; Banks, Bahby; Griep, Rosane Harter; Fischer, Frida Marina; Landsbergis, Paul

    2008-09-01

    The association between working hours and work ability was examined in a cross-sectional study of male (N=156) and female (N=1092) nurses in three public hospitals. Working hours were considered in terms of their professional and domestic hours per week and their combined impact: total work load. Logistic regression analysis showed a significant association between total work load and inadequate work ability index (WAI) for females only. Females reported a higher proportion of inadequate WAI, fewer professional work hours but longer domestic work hours. There were no significant differences in total work load by gender. The combination of professional and domestic work hours in females seemed to best explain their lower work ability. The findings suggest that investigations into female well-being need to consider their total work load. Our male sample may have lacked sufficient power to detect a relationship between working hours and work ability.

  4. An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.

    2015-01-01

    Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…
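
    A minimal sketch of the classic empirical Bayes shrinkage of noisy value-added estimates toward the grand mean, with the shrinkage weight given by the reliability ratio; the simulated teacher effects and standard errors are placeholders, and the article's estimators may differ.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate "true" teacher effects and noisy value-added estimates whose standard
# errors vary with (small) class sizes.
n_teachers = 500
true_effect = rng.normal(0.0, 0.15, n_teachers)
se = 0.25 / np.sqrt(rng.integers(10, 40, n_teachers))
raw_estimate = true_effect + rng.normal(0.0, se)

def eb_shrink(raw, se):
    """Shrink each raw estimate toward the overall mean by the reliability ratio
    lambda_j = var_true / (var_true + se_j^2), with var_true estimated by
    subtracting the average sampling variance from the raw variance."""
    var_true = max(raw.var() - np.mean(se**2), 1e-6)
    lam = var_true / (var_true + se**2)
    return raw.mean() + lam * (raw - raw.mean())

eb_estimate = eb_shrink(raw_estimate, se)

def rank_corr(a, b):
    """Correlation between the rank orderings of two estimators."""
    return np.corrcoef(np.argsort(np.argsort(a)), np.argsort(np.argsort(b)))[0, 1]

print("rank corr with truth, raw:", round(rank_corr(raw_estimate, true_effect), 3))
print("rank corr with truth, EB :", round(rank_corr(eb_estimate, true_effect), 3))
```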

  5. Measurement of operator workload in an information processing task

    NASA Technical Reports Server (NTRS)

    Jenney, L. L.; Older, H. J.; Cameron, B. J.

    1972-01-01

    This was an experimental study to develop an improved methodology for measuring workload in an information processing task and to assess the effects of shift length and communication density (rate of information flow) on the ability to process and classify verbal messages. Each of twelve subjects was exposed to combinations of three shift lengths and two communication densities in a counterbalanced, repeated measurements experimental design. Results indicated no systematic variation in task performance measures or in other dependent measures as a function of shift length or communication density. This is attributed to the absence of a secondary loading task, an insufficiently taxing work schedule, and the lack of psychological stress. Subjective magnitude estimates of workload showed fatigue (and to a lesser degree, tension) to be a power function of shift length. Estimates of task difficulty and fatigue were initially lower but increased more sharply over time under low density than under high density conditions. An interpretation of findings and recommendations for future research are included. This research has major implications for human workload problems in information processing of air traffic control verbal data.

  6. A semi-empirical model for the estimation of maximum horizontal displacement due to liquefaction-induced lateral spreading

    USGS Publications Warehouse

    Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer

    2006-01-01

    During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and resultant ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of strain potential index, based primarily on the correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used as an index to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take full advantage of the data available and to deal with the inherent uncertainties intrinsic to the back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is utilized to demonstrate the ability of the resultant semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.

  7. Measuring the critical band for speech.

    PubMed

    Healy, Eric W; Bacon, Sid P

    2006-02-01

    The current experiments were designed to measure the frequency resolution employed by listeners during the perception of everyday sentences. Speech bands having nearly vertical filter slopes and narrow bandwidths were sharply partitioned into various numbers of equal log- or ERBN-width subbands. The temporal envelope from each partition was used to amplitude modulate a corresponding band of low-noise noise, and the modulated carriers were combined and presented to normal-hearing listeners. Intelligibility increased and reached asymptote as the number of partitions increased. In the mid- and high-frequency regions of the speech spectrum, the partition bandwidth corresponding to asymptotic performance matched current estimates of psychophysical tuning across a number of conditions. These results indicate that, in these regions, the critical band for speech matches the critical band measured using traditional psychoacoustic methods and nonspeech stimuli. However, in the low-frequency region, partition bandwidths at asymptote were somewhat narrower than would be predicted based upon psychophysical tuning. It is concluded that, overall, current estimates of psychophysical tuning represent reasonably well the ability of listeners to extract spectral detail from running speech.
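
    A minimal sketch of the processing described above: partition a band into equal log-width subbands, extract the temporal envelope in each partition, and use it to modulate a band-limited noise carrier. The "speech" signal here is a synthetic placeholder, and an ordinary Gaussian noise carrier stands in for the low-noise noise used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

rng = np.random.default_rng(9)
fs = 16000
t = np.arange(fs) / fs

# Placeholder "speech": noise with a slow amplitude modulation standing in for a sentence.
speech = rng.normal(0, 1, fs) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))

def partition_edges(f_lo, f_hi, n_bands):
    """Equal log-width partition edges between f_lo and f_hi (Hz)."""
    return np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)

def envelope_vocoder(x, fs, edges):
    """Extract the temporal envelope in each partition and use it to amplitude-
    modulate a band-limited noise carrier, then sum the bands."""
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))             # temporal envelope
        carrier = sosfiltfilt(sos, rng.normal(0, 1, len(x)))   # band-limited noise
        out += env * carrier
    return out

processed = envelope_vocoder(speech, fs, partition_edges(200, 7000, n_bands=8))
print("RMS of processed signal:", round(float(np.std(processed)), 3))
```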

  8. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  9. Use of near infrared spectroscopy for estimating meat chemical composition, quality traits and fatty acid content from cattle fed sunflower or flaxseed.

    PubMed

    Prieto, N; López-Campos, O; Aalhus, J L; Dugan, M E R; Juárez, M; Uttaro, B

    2014-10-01

    This study tested the ability of near infrared reflectance spectroscopy (NIRS) to predict meat chemical composition, quality traits and fatty acid (FA) composition from 63 steers fed sunflower or flaxseed in combination with high forage diets. NIRS calibrations, tested by cross-validation, were successful for predicting crude protein, moisture and fat content with coefficients of determination (R(2)) (RMSECV, g·100g(-1) wet matter) of 0.85 (0.48), 0.90 (0.60) and 0.86 (1.08), respectively, but were not reliable for meat quality attributes. This technology accurately predicted saturated, monounsaturated and branched FA and conjugated linoleic acid content (R(2): 0.83-0.97; RMSECV: 0.04-1.15 mg·g(-1) tissue) and might be suitable for screening meat based on the content of FAs beneficial to human health, such as rumenic and vaccenic acids. Further research applying NIRS to estimate meat quality attributes will require the on-line use of a fibre-optic probe on intact samples. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Purification and properties of aryl acylamidase from Pseudomonas fluorescens ATCC 39004.

    PubMed

    Hammond, P M; Price, C P; Scawen, M D

    1983-05-16

    Aryl acylamidase has been purified from a strain of Pseudomonas fluorescens ATCC 39004, selected from soil on the basis of its ability to utilise acylanilide compounds as a sole source of carbon. The enzyme was purified to homogeneity by a combination of ion-exchange, hydrophobic and gel-permeation chromatography. A relative molecular mass of about 52 500 was estimated by gel filtration. The native enzyme was shown to be a monomeric protein by sodium dodecyl sulphate/polyacrylamide gel electrophoresis. The enzyme was maximally active at a pH of 8.6 and at a temperature of 45 degrees C. The enzyme shows Michaelis-Menten kinetics; Km values for nitroacetanilide (69 microM) and hydroxyacetanilide (6.1 microM) were low, indicating that the enzyme has a very high affinity for both substrates.
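
    A short numerical illustration of the Michaelis-Menten kinetics reported above, using the two Km values from the abstract; the Vmax value is an arbitrary placeholder since it is not reported here.

```python
import numpy as np

def michaelis_menten(s_um, km_um, vmax=1.0):
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S]); concentrations in uM."""
    return vmax * s_um / (km_um + s_um)

substrates = {
    "nitroacetanilide": 69.0,     # Km reported in the abstract (uM)
    "hydroxyacetanilide": 6.1,    # Km reported in the abstract (uM)
}

s = np.array([1.0, 10.0, 100.0])  # example substrate concentrations (uM)
for name, km in substrates.items():
    v = michaelis_menten(s, km)
    print(f"{name:20s} v/Vmax at 1/10/100 uM: {np.round(v, 2)}")
```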

  11. The AMIDAS Website: An Online Tool for Direct Dark Matter Detection Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Chung-Lin

    2010-02-10

    Following our long-term work on the development of model-independent data analysis methods for reconstructing the one-dimensional velocity distribution function of halo WIMPs, as well as for determining their mass and couplings on nucleons by using data from direct Dark Matter detection experiments directly, we combined the simulation programs into a compact system: AMIDAS (A Model-Independent Data Analysis System). For users' convenience, an online system has also been established at the same time. AMIDAS has the ability to do full Monte Carlo simulations, faster theoretical estimations, as well as to analyze (real) data sets recorded in direct detection experiments without modifying the source code. In this article, I give an overview of the functions of the AMIDAS code based on the use of its website.

  12. Shared Mechanisms in the Estimation of Self-Generated Actions and the Prediction of Other’s Actions by Humans

    PubMed Central

    Ganesh, Gowrishankar

    2017-01-01

    Abstract The question of how humans predict outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in an understanding learning paradigm and utilized computational modeling to examine how outcome prediction of observed actions affected the participants’ ability to estimate their own actions. We recruited darts experts because sports experts are known to have an accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert’s abilities to both produce his own darts actions and estimate the outcome of his own throws (or self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in the darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert’s self-estimation is explained only by considering a change in the individual’s forward model, showing that an improvement in an expert’s ability to predict outcomes of observed actions affects the individual’s forward model. These results suggest that parts of the same forward model are utilized in humans to both estimate outcomes of self-generated actions and predict outcomes of observed actions. PMID:29340300

  13. Comparing the net cost of CSP-TES to PV deployed with battery storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jorgenson, Jennie; Mehos, Mark; Denholm, Paul

    Concentrated solar power with thermal energy storage (CSP-TES) is a unique source of renewable energy in that its energy can be shifted over time and it can provide the electricity system with dependable generation capacity. In this study, we provide a framework to determine if the benefits of CSP-TES (shiftable energy and the ability to provide firm capacity) exceed the benefits of PV and firm capacity sources such as long-duration battery storage or conventional natural gas combustion turbines (CTs). The results of this study using current capital cost estimates indicate that a combination of PV and conventional gas CTs provides a lower net cost compared to CSP-TES and PV with batteries. Some configurations of CSP-TES have a lower net cost than PV with batteries for even the lowest battery cost estimate. Using projected capital cost targets, however, some configurations of CSP-TES have a lower net cost than PV with either option for even the lowest battery cost estimate. The net cost of CSP-TES varies with configuration, and lower solar multiples coupled with less storage are more attractive at current cost levels, due to high component costs. However, higher solar multiples show a lower net cost using projected future costs for heliostats and thermal storage materials.

  14. Cryptosporidiosis susceptibility and risk: a case study.

    PubMed

    Makri, Anna; Modarres, Reza; Parkin, Rebecca

    2004-02-01

    Regional estimates of cryptosporidiosis risks from drinking water exposure were developed and validated, accounting for AIDS status and age. We constructed a model with probability distributions and point estimates representing Cryptosporidium in tap water, tap water consumed per day (exposure characterization); dose response, illness given infection, prolonged illness given illness; and three conditional probabilities describing the likelihood of case detection by active surveillance (health effects characterization). The model predictions were combined with population data to derive expected case numbers and incidence rates per 100,000 population, by age and AIDS status, borough-specific and for New York City overall in 2000 (risk characterization). They were compared with same-year surveillance data to evaluate predictive ability, assumed to represent true incidence of waterborne cryptosporidiosis. The predicted mean risks, similar to previously published estimates for this region, overpredicted observed incidence, most extensively when accounting for AIDS status. The results suggest that overprediction may be due to conservative parameters applied to both non-AIDS and AIDS populations, and that biological differences for children need to be incorporated. Interpretations are limited by the unknown accuracy of available surveillance data, in addition to variability and uncertainty of model predictions. The model appears sensitive to geographical differences in AIDS prevalence. The use of surveillance data for validation and model parameters pertinent to susceptibility are discussed.
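
    A minimal Monte Carlo sketch of chaining exposure, dose-response, and conditional illness/detection probabilities into an expected case count; the distributions, the exponential dose-response form, and all parameter values are illustrative assumptions, not the study's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sim = 100_000

# Illustrative placeholder distributions (NOT the study's calibrated inputs).
oocysts_per_l = rng.lognormal(mean=np.log(0.01), sigma=1.0, size=n_sim)  # tap water conc.
litres_per_day = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n_sim)  # consumption
r = 0.004                       # assumed exponential dose-response slope
p_ill_given_inf = 0.4           # assumed P(illness | infection)
p_detect = 0.05                 # assumed P(case detected by active surveillance)

daily_dose = oocysts_per_l * litres_per_day
p_inf_daily = 1.0 - np.exp(-r * daily_dose)        # exponential dose-response model
p_inf_annual = 1.0 - (1.0 - p_inf_daily) ** 365

population = 8_000_000                             # placeholder city population
expected_detected = np.mean(p_inf_annual * p_ill_given_inf * p_detect) * population
print(f"expected detected cases per year: {expected_detected:,.0f}")
print(f"incidence per 100,000: {expected_detected / population * 1e5:.1f}")
```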

  15. BAYESIAN PROTEIN STRUCTURE ALIGNMENT.

    PubMed

    Rodriguez, Abel; Schmidler, Scott C

    The analysis of the three-dimensional structure of proteins is an important topic in molecular biochemistry. Structure plays a critical role in defining the function of proteins and is more strongly conserved than amino acid sequence over evolutionary timescales. A key challenge is the identification and evaluation of structural similarity between proteins; such analysis can aid in understanding the role of newly discovered proteins and help elucidate evolutionary relationships between organisms. Computational biologists have developed many clever algorithmic techniques for comparing protein structures; however, all are based on heuristic optimization criteria, making statistical interpretation somewhat difficult. Here we present a fully probabilistic framework for pairwise structural alignment of proteins. Our approach has several advantages, including the ability to capture alignment uncertainty and to estimate key "gap" parameters which critically affect the quality of the alignment. We show that several existing alignment methods arise as maximum a posteriori estimates under specific choices of prior distributions and error models. Our probabilistic framework is also easily extended to incorporate additional information, which we demonstrate by including primary sequence information to generate simultaneous sequence-structure alignments that can resolve ambiguities obtained using structure alone. This combined model also provides a natural approach for the difficult task of estimating evolutionary distance based on structural alignments. The model is illustrated by comparison with well-established methods on several challenging protein alignment examples.

  16. Comparing the net cost of CSP-TES to PV deployed with battery storage

    NASA Astrophysics Data System (ADS)

    Jorgenson, Jennie; Mehos, Mark; Denholm, Paul

    2016-05-01

    Concentrated solar power with thermal energy storage (CSP-TES) is a unique source of renewable energy in that its energy can be shifted over time and it can provide the electricity system with dependable generation capacity. In this study, we provide a framework to determine if the benefits of CSP-TES (shiftable energy and the ability to provide firm capacity) exceed the benefits of PV and firm capacity sources such as long-duration battery storage or conventional natural gas combustion turbines (CTs). The results of this study using current capital cost estimates indicate that a combination of PV and conventional gas CTs provides a lower net cost compared to CSP-TES and PV with batteries. Some configurations of CSP-TES have a lower net cost than PV with batteries for even the lowest battery cost estimate. Using projected capital cost targets, however, some configurations of CSP-TES have a lower net cost than PV with either option for even the lowest battery cost estimate. The net cost of CSP-TES varies with configuration, and lower solar multiples coupled with less storage are more attractive at current cost levels, due to high component costs. However, higher solar multiples show a lower net cost using projected future costs for heliostats and thermal storage materials.

  17. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    NASA Astrophysics Data System (ADS)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

    A microarray places an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. Its widespread adoption is largely due to its ability to analyze thousands of genes simultaneously, in a massively parallel manner, within one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) measured on just dozens of samples due to various constraints. As a result, the sample covariance matrix in Hotelling's T2 statistic is singular rather than positive definite and cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative way to estimate the covariance matrix and detect significant gene sets. The shrinkage covariance matrix overcomes the singularity problem by converting the unbiased estimator into an improved, biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs, and the results are expected to outperform existing techniques in many of the tested conditions.
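
    As a rough illustration of the idea, not the authors' exact estimator, the sketch below combines a trimmed mean with a simple shrinkage-toward-diagonal covariance estimate and plugs both into a two-sample Hotelling's T2 statistic; the shrinkage intensity, trimming proportion, and toy data are arbitrary placeholders.

        import numpy as np
        from scipy.stats import trim_mean

        def shrinkage_cov(X, alpha=0.2):
            """Shrink the sample covariance toward its diagonal (fixed placeholder intensity)."""
            S = np.cov(X, rowvar=False)
            return (1 - alpha) * S + alpha * np.diag(np.diag(S))

        def hotelling_t2(X1, X2, trim=0.1, alpha=0.2):
            """Two-sample Hotelling's T2 using trimmed means and a shrunken pooled covariance."""
            n1, n2 = len(X1), len(X2)
            m1 = trim_mean(X1, trim, axis=0)
            m2 = trim_mean(X2, trim, axis=0)
            Sp = ((n1 - 1) * shrinkage_cov(X1, alpha) +
                  (n2 - 1) * shrinkage_cov(X2, alpha)) / (n1 + n2 - 2)
            diff = m1 - m2
            return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(Sp, diff)

        # Toy gene set: 20 genes, 15 samples per group (many variables, few samples)
        rng = np.random.default_rng(1)
        group1 = rng.normal(0.0, 1.0, size=(15, 20))
        group2 = rng.normal(0.3, 1.0, size=(15, 20))
        print("T2 =", round(hotelling_t2(group1, group2), 2))

    In practice the shrinkage intensity would be estimated from the data rather than fixed; a constant simply keeps the sketch short.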

  18. Contracted time and expanded space: The impact of circumnavigation on judgements of space and time.

    PubMed

    Brunec, Iva K; Javadi, Amir-Homayoun; Zisch, Fiona E L; Spiers, Hugo J

    2017-09-01

    The ability to estimate distance and time to spatial goals is fundamental for survival. In cases where a region of space must be navigated around to reach a location (circumnavigation), the distance along the path is greater than the straight-line Euclidean distance. To explore how such circumnavigation impacts on estimates of distance and time, we tested participants on their ability to estimate travel time and Euclidean distance to learned destinations in a virtual town. Estimates for approximately linear routes were compared with estimates for routes requiring circumnavigation. For all routes, travel times were significantly underestimated, and Euclidean distances overestimated. For routes requiring circumnavigation, travel time was further underestimated and the Euclidean distance further overestimated. Thus, circumnavigation appears to enhance existing biases in representations of travel time and distance. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  19. Loss of ability to work and ability to live independently in Parkinson's disease.

    PubMed

    Jasinska-Myga, Barbara; Heckman, Michael G; Wider, Christian; Putzke, John D; Wszolek, Zbigniew K; Uitti, Ryan J

    2012-02-01

    Ability to work and live independently is of particular concern for patients with Parkinson's disease (PD). We studied a series of PD patients able to work or live independently at baseline, and evaluated potential risk factors for two separate outcomes: loss of ability to work and loss of ability to live independently. The series comprised 495 PD patients followed prospectively. Ability to work and ability to live independently were based on clinical interview and examination. Cox regression models adjusted for age and disease duration were used to evaluate associations of baseline characteristics with loss of ability to work and loss of ability to live independently. Higher UPDRS dyskinesia score, UPDRS instability score, UPDRS total score, Hoehn and Yahr stage, and presence of intellectual impairment at baseline were all associated with increased risk of future loss of ability to work and loss of ability to live independently (P ≤ 0.0033). Five years after initial visit, for patients ≤70 years of age with a disease duration ≤4 years at initial visit, 88% were still able to work and 90% to live independently. These estimates worsened as age and disease duration at initial visit increased; for patients >70 years of age with a disease duration >4 years, estimates at 5 years were 43% able to work and 57% able to live independently. The information provided in this study can help PD patients prepare for their future ability to perform activities of daily living. Copyright © 2011 Elsevier Ltd. All rights reserved.
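
    For readers unfamiliar with the modeling approach, the sketch below fits an age- and duration-adjusted Cox model of the same general kind with the lifelines package; the synthetic data, column names, and effect sizes are hypothetical and unrelated to the study's cohort.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(2)
        n = 200
        age      = rng.normal(65, 8, n)
        duration = rng.gamma(2.0, 2.0, n)          # years since diagnosis (made up)
        updrs    = rng.normal(35, 12, n)

        # Hypothetical event times: risk rises with age, disease duration, and UPDRS total
        risk  = 0.03 * (age - 65) + 0.10 * duration + 0.04 * (updrs - 35)
        time  = rng.exponential(5.0 * np.exp(-risk))
        event = (time < 8.0).astype(int)           # administrative censoring at 8 years
        time  = np.minimum(time, 8.0)

        df = pd.DataFrame({"time": time, "lost_work_ability": event,
                           "age": age, "disease_duration": duration, "updrs_total": updrs})

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="lost_work_ability")
        cph.print_summary()   # hazard ratios adjusted for age and disease duration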

  20. Latest NASA Instrument Cost Model (NICM): Version VI

    NASA Technical Reports Server (NTRS)

    Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary

    2014-01-01

    The NASA Instrument Cost Model, NICM, is a suite of tools that allows probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable to instruments flying on Explorer-like class missions; 2) a new cluster analysis ability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.

  1. Building and Retaining the Career Force: New Procedures for Accessing and Assigning Army Enlisted Personnel: Annual Report 1990 Fiscal Year

    DTIC Science & Technology

    1992-05-01

    researched, valid measure of general cognitive abilities. However, many critical Army tasks appear to require psychomotor and perceptual skills for their...temperament (achievement, discipline, stress tolerance), psychomotor ability (e.g., eye-hand coordination), and spatial ability to job performance...answered: (1) What combinations of aptitude, temperament, psychomotor ability, and spatial ability, measured at or before entry into the Army, best

  2. Usefulness of combining gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging and contrast-enhanced ultrasound for diagnosing the macroscopic classification of small hepatocellular carcinoma.

    PubMed

    Kobayashi, Tomoki; Aikata, Hiroshi; Hatooka, Masahiro; Morio, Kei; Morio, Reona; Kan, Hiromi; Fujino, Hatsue; Fukuhara, Takayuki; Masaki, Keiichi; Ohno, Atsushi; Naeshiro, Noriaki; Nakahara, Takashi; Honda, Yohji; Murakami, Eisuke; Kawaoka, Tomokazu; Tsuge, Masataka; Hiramatsu, Akira; Imamura, Michio; Kawakami, Yoshiiku; Hyogo, Hideyuki; Takahashi, Shoichi; Chayama, Kazuaki

    2015-11-01

    Non-simple nodules in hepatocellular carcinoma (HCC) correlate with poor prognosis. Therefore, we examined the ability of gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (EOB-MRI) and contrast-enhanced ultrasound (CEUS) to diagnose the macroscopic classification of small HCCs. A total of 85 surgically resected nodules (≤30 mm) were analyzed. HCCs were pathologically classified as simple nodular (SN) and non-SN. By evaluating the hepatobiliary phase (HBP) of EOB-MRI and the Kupffer phase of CEUS, the abilities of the two modalities to correctly distinguish between SN and non-SN were compared. Forty-six nodules were diagnosed as SN and the remaining 39 nodules as non-SN. The areas under the ROC curve (AUROC, with 95% confidence intervals) for the diagnosis of non-SN were 0.786 (0.682-0.890) for EOB-MRI, 0.784 (0.679-0.889) for CEUS, and 0.876 (0.792-0.959) for the combination. The sensitivity, specificity, and accuracy were 64.1%, 95.7%, and 81.2% for EOB-MRI; 56.4%, 97.8%, and 78.8% for CEUS; and 84.6%, 95.7%, and 90.6% for the combination, respectively. The highest diagnostic ability was obtained when both modalities were combined, and the gain in sensitivity over CEUS alone was statistically significant. Combined diagnosis by EOB-MRI and CEUS can provide high-quality imaging assessment for determining non-SN in small HCCs. • Non-SN has a higher frequency of MVI and intrahepatic metastasis than SN. • Macroscopic classification is useful to choose the treatment strategy for small HCCs. • The diagnostic abilities of EOB-MRI and CEUS for macroscopic findings were statistically equal. • The diagnosis of macroscopic findings by an individual modality has limitations. • Combined diagnosis by EOB-MRI and CEUS provides high diagnostic ability.
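
    As a generic illustration of how such a modality comparison can be computed (not the study's data or analysis code), the sketch below derives AUROC, sensitivity, specificity, and accuracy for two readings and for their combination, where the combined call is positive if either modality calls non-SN; all calls below are made up.

        import numpy as np
        from sklearn.metrics import roc_auc_score, confusion_matrix

        # Hypothetical reads for 12 nodules: 1 = non-simple nodular (non-SN), 0 = SN
        truth    = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
        eob_mri  = np.array([1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0])
        ceus     = np.array([1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
        combined = np.maximum(eob_mri, ceus)   # positive if either modality calls non-SN

        def summarize(name, call):
            tn, fp, fn, tp = confusion_matrix(truth, call).ravel()
            print(f"{name:9s} AUROC={roc_auc_score(truth, call):.3f} "
                  f"sens={tp / (tp + fn):.2f} spec={tn / (tn + fp):.2f} "
                  f"acc={(tp + tn) / truth.size:.2f}")

        for name, call in [("EOB-MRI", eob_mri), ("CEUS", ceus), ("combined", combined)]:
            summarize(name, call)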

  3. Early Word Decoding Ability as a Longitudinal Predictor of Academic Performance

    ERIC Educational Resources Information Center

    Nordström, Thomas; Jacobson, Christer; Söderberg, Pernilla

    2016-01-01

    This study, using a longitudinal design with a Swedish cohort of young readers, investigates if children's early word decoding ability in second grade can predict later academic performance. In an effort to estimate the unique effect of early word decoding (grade 2) with academic performance (grade 9), gender and non-verbal cognitive ability were…

  4. Competitive Abilities in Experimental Microcosms Are Accurately Predicted by a Demographic Index for R*

    PubMed Central

    Murrell, Ebony G.; Juliano, Steven A.

    2012-01-01

    Resource competition theory predicts that R*, the equilibrium resource amount yielding zero growth of a consumer population, should predict species' competitive abilities for that resource. This concept has been supported for unicellular organisms, but has not been well-tested for metazoans, probably due to the difficulty of raising experimental populations to equilibrium and measuring population growth rates for species with long or complex life cycles. We developed an index (Rindex) of R* based on demography of one insect cohort, growing from egg to adult in a non-equilibrium setting, and tested whether Rindex yielded accurate predictions of competitive abilities using mosquitoes as a model system. We estimated finite rate of increase (λ′) from demographic data for cohorts of three mosquito species raised with different detritus amounts, and estimated each species' Rindex using nonlinear regressions of λ′ vs. initial detritus amount. All three species' Rindex differed significantly, and accurately predicted competitive hierarchy of the species determined in simultaneous pairwise competition experiments. Our Rindex could provide estimates and rigorous statistical comparisons of competitive ability for organisms for which typical chemostat methods and equilibrium population conditions are impractical. PMID:22970128
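
    A minimal sketch of the general idea: fit a saturating function of λ′ against initial detritus amount and solve for the resource level at which λ′ = 1 (zero population growth). The functional form, data, and starting values are placeholders, not the authors' regression.

        import numpy as np
        from scipy.optimize import curve_fit, brentq

        # Hypothetical cohort data: initial detritus (g) vs. estimated finite rate of increase
        detritus = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
        lam      = np.array([0.55, 0.80, 1.05, 1.30, 1.55, 1.70, 1.80])

        def saturating(R, lam_max, k):
            """Monod-like response of lambda' to initial resource amount (placeholder form)."""
            return lam_max * R / (k + R)

        (lam_max, k), _ = curve_fit(saturating, detritus, lam, p0=[2.0, 1.0])

        # R_index: resource amount at which lambda' crosses 1 (population replacement)
        r_index = brentq(lambda R: saturating(R, lam_max, k) - 1.0, 1e-6, 100.0)
        print(f"lam_max = {lam_max:.2f}, k = {k:.2f}, R_index = {r_index:.2f} g detritus")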

  5. Unraveling the barriers to reconceptualization of the problem in chronic pain: the actual and perceived ability of patients and health professionals to understand the neurophysiology.

    PubMed

    Moseley, Lorimer

    2003-05-01

    To identify why reconceptualization of the problem is difficult in chronic pain, this study aimed to evaluate whether (1) health professionals and patients can understand currently accurate information about the neurophysiology of pain and (2) health professionals accurately estimate the ability of patients to understand the neurophysiology of pain. Knowledge tests were completed by 276 patients with chronic pain and 288 professionals either before (untrained) or after (trained) education about the neurophysiology of pain. Professionals estimated typical patient performance on the test. Untrained participants performed poorly (mean +/- standard deviation, 55% +/- 19% and 29% +/- 12% for professionals and patients, respectively), compared to their trained counterparts (78% +/- 21% and 61% +/- 19%, respectively). The estimated patient score (46% +/- 18%) was less than the actual patient score (P <.005). The results suggest that professionals and patients can understand the neurophysiology of pain but professionals underestimate patients' ability to understand. The implications are that (1) a poor knowledge of currently accurate information about pain and (2) the underestimation of patients' ability to understand currently accurate information about pain represent barriers to reconceptualization of the problem in chronic pain within the clinical and lay arenas.

  6. A compatible exon-exon junction database for the identification of exon skipping events using tandem mass spectrum data.

    PubMed

    Mo, Fan; Hong, Xu; Gao, Feng; Du, Lin; Wang, Jun; Omenn, Gilbert S; Lin, Biaoyang

    2008-12-16

    Alternative splicing is an important gene regulation mechanism. It is estimated that about 74% of multi-exon human genes have alternative splicing. High throughput tandem (MS/MS) mass spectrometry provides valuable information for rapidly identifying potentially novel alternatively-spliced protein products from experimental datasets. However, the ability to identify alternative splicing events through tandem mass spectrometry depends on the database against which the spectra are searched. We wrote scripts using Perl, BioPerl, MySQL, and the Ensembl API and built, from the Ensembl Core Database, a theoretical exon-exon junction protein database that accounts for all possible combinations of exons for a gene while keeping the frame of translation (i.e., keeping only in-phase exon-exon combinations). Using our liver cancer MS/MS dataset, we identified a total of 488 non-redundant peptides that represent putative exon skipping events. Our exon-exon junction database provides the scientific community with an efficient means to identify novel alternatively spliced (exon skipping) protein isoforms using mass spectrometry data. This database will be useful in annotating genome structures using rapidly accumulating proteomics data.
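
    A toy sketch of the underlying bookkeeping (enumerating exon pairs and keeping only junctions that preserve the reading frame), using made-up exon sequences rather than Ensembl data; the real pipeline used Perl/BioPerl and the Ensembl API, and the flank width here is arbitrary.

        from itertools import combinations

        # Hypothetical coding exons of one gene, in genomic order (nucleotide sequences)
        exons = [
            ("E1", "ATGGCTGCA"),       # 9 nt
            ("E2", "GGAATTC"),         # 7 nt
            ("E3", "CTGAAGCTTGG"),     # 11 nt -> skipping E2+E3 (18 nt) keeps the frame
            ("E4", "ATCGATCGA"),       # 9 nt
        ]
        flank = 12  # nucleotides kept on each side of the junction (placeholder width)

        junctions = []
        for i, j in combinations(range(len(exons)), 2):
            skipped = sum(len(seq) for _, seq in exons[i + 1 : j])
            if skipped % 3 == 0:       # in-phase: translation frame preserved
                (ln, ls), (rn, rs) = exons[i], exons[j]
                junctions.append((f"{ln}-{rn}", ls[-flank:] + rs[:flank]))

        for name, seq in junctions:
            print(name, seq)
        # E1-E2, E2-E3, E3-E4 are adjacent (canonical); E1-E4 is an in-frame skipping junction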

  7. Performance Analysis of Physical Layer Security of Opportunistic Scheduling in Multiuser Multirelay Cooperative Networks

    PubMed Central

    Shim, Kyusung; Do, Nhu Tri; An, Beongku

    2017-01-01

    In this paper, we study the physical layer security (PLS) of opportunistic scheduling for uplink scenarios of multiuser multirelay cooperative networks. To this end, we propose a low-complexity source relay selection scheme with comparable secrecy performance, called the proposed source relay selection (PSRS) scheme. Specifically, the PSRS scheme first selects the least vulnerable source and then selects the relay that maximizes the system secrecy capacity for the given selected source. Additionally, both the maximal ratio combining (MRC) technique and the selection combining (SC) technique are considered at the eavesdropper. To investigate the system performance in terms of secrecy outage probability (SOP), closed-form expressions for the SOP are derived. The developed analysis is corroborated through Monte Carlo simulation. Numerical results show that the PSRS scheme significantly improves the secrecy performance of the system compared to the random source relay selection scheme, but does not outperform the optimal joint source relay selection (OJSRS) scheme. However, the PSRS scheme drastically reduces the required amount of channel state information (CSI) estimations compared to that required by the OJSRS scheme, especially in dense cooperative networks. PMID:28212286
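
    A rough numerical sketch of the selection logic described (least vulnerable source first, then the relay maximizing secrecy capacity for that source). The channel gains are random placeholders, the vulnerability criterion is simplified to the source-to-eavesdropper channel strength, and the eavesdropper's combining of both transmission phases is omitted, so this is not the paper's exact formulation.

        import numpy as np

        rng = np.random.default_rng(3)
        n_sources, n_relays = 4, 5

        # Hypothetical instantaneous SNRs (exponential, i.e., Rayleigh-fading power gains)
        snr_sr = rng.exponential(10.0, size=(n_sources, n_relays))   # source -> relay
        snr_rd = rng.exponential(10.0, size=n_relays)                # relay  -> destination
        snr_se = rng.exponential(2.0, size=n_sources)                # source -> eavesdropper
        snr_re = rng.exponential(2.0, size=n_relays)                 # relay  -> eavesdropper

        cap = lambda snr: np.log2(1.0 + snr)

        # Step 1: least vulnerable source = weakest source-to-eavesdropper link (simplified)
        s = int(np.argmin(snr_se))

        # Step 2: for that source, pick the relay giving the largest secrecy capacity,
        # approximating the two-hop main channel by its weaker hop (decode-and-forward)
        secrecy = np.maximum(0.0, 0.5 * (cap(np.minimum(snr_sr[s], snr_rd)) - cap(snr_re)))
        r = int(np.argmax(secrecy))

        print(f"selected source S{s}, relay R{r}, secrecy capacity = {secrecy[r]:.2f} bit/s/Hz")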

  8. A Multiple-star Combined Solution Program - Application to the Population II Binary μ Cas

    NASA Astrophysics Data System (ADS)

    Gudehus, D. H.

    2001-05-01

    A multiple-star combined-solution computer program which can simultaneously fit astrometric, speckle, and spectroscopic data, and solve for the orbital parameters, parallax, proper motion, and masses has been written and is now publicly available. Some features of the program are the ability to scale the weights at run time, hold selected parameters constant, handle up to five spectroscopic subcomponents for the primary and the secondary each, account for the light travel time across the system, account for apsidal motion, plot the results, and write the residuals in position to a standard file for further analysis. The spectroscopic subcomponent data can be represented by reflex velocities and/or by independent measurements. A companion editing program which can manage the data files is included in the package. The program has been applied to the Population II binary μ Cas to derive improved masses and an estimate of the primordial helium abundance. The source code, executables, sample data files, and documentation for OpenVMS and Unix, including Linux, are available at http://www.chara.gsu.edu/~gudehus/binary.html.

  9. Application of random seismic inversion method based on tectonic model in thin sand body research

    NASA Astrophysics Data System (ADS)

    Dianju, W.; Jianghai, L.; Qingkai, F.

    2017-12-01

    Oil and gas exploitation in the Songliao Basin, Northeast China, has already progressed to a period of high water production. Previous detailed reservoir descriptions based on seismic images, sediment cores, and borehole logging have great limitations for small-scale structural interpretation and thin sand body characterization. Thus, precise guidance for petroleum exploration badly needs a more advanced method. To this end, we derived a method of random seismic inversion constrained by a tectonic model. Combined with numerical simulation techniques, it can effectively improve the depiction of thin sand bodies and credibly reduce the blindness of reservoir analysis, from the whole to the local and from the macroscopic to the microscopic. At the same time, it can reduce the limitations of the study under the constraints of different geological conditions of the reservoir and arrive at a more reliable estimation of the effective reservoir. Based on this research, this paper has optimized the regional effective reservoir evaluation and the adjustment of productive locations, combined with practical exploration and development in the Aonan oil field.

  10. Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient

    NASA Astrophysics Data System (ADS)

    Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.

    2018-04-01

    A methodology for estimating the combined standard uncertainty in determining the sandy soil filtration coefficient has been developed. Laboratory studies were carried out, resulting in the determination of the filtration coefficient and an estimate of its combined uncertainty.
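
    The abstract is brief, so as general background the sketch below shows the usual GUM-style propagation: sensitivity-weighted component uncertainties combined in root-sum-square form, then multiplied by a coverage factor to give an expanded uncertainty. The constant-head test model and all numerical values are illustrative assumptions, not the paper's measurements.

        import numpy as np

        # Constant-head test model (illustrative): k = Q * L / (A * dh)
        Q, u_Q   = 2.0e-6, 0.05e-6   # flow rate, m^3/s, and its standard uncertainty
        L, u_L   = 0.10, 0.001       # specimen length, m
        A, u_A   = 8.0e-3, 0.1e-3    # cross-sectional area, m^2
        dh, u_dh = 0.25, 0.005       # head difference, m

        k = Q * L / (A * dh)

        # Sensitivity coefficients (partial derivatives of k with respect to each input)
        c = np.array([L / (A * dh),            # dk/dQ
                      Q / (A * dh),            # dk/dL
                      -Q * L / (A**2 * dh),    # dk/dA
                      -Q * L / (A * dh**2)])   # dk/d(dh)
        u = np.array([u_Q, u_L, u_A, u_dh])

        u_c = np.sqrt(np.sum((c * u) ** 2))    # combined standard uncertainty
        U = 2.0 * u_c                          # expanded uncertainty, coverage factor 2 (~95%)
        print(f"k = {k:.3e} m/s, u_c = {u_c:.1e} m/s, U = {U:.1e} m/s")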

  11. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies.

    PubMed

    Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare efficiency of the proposed shrinkage estimation procedure to the unrestricted and combined estimators through extensive simulation studies. Our results indicate that performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
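
    As a loose illustration of the shrinkage idea only (not the authors' estimator), the sketch below computes a Kaplan-Meier median per site and shrinks each toward the pooled median with a fixed weight; the data, the censoring mechanism, and the constant weight are all placeholders, whereas the paper derives the amount of shrinkage from the data.

        import numpy as np
        from lifelines import KaplanMeierFitter

        rng = np.random.default_rng(4)

        # Hypothetical right-censored times-to-transfusion at three sites (hours)
        sites = []
        for scale in (2.0, 2.5, 3.5):                 # mildly heterogeneous true medians
            t = rng.exponential(scale, size=80)
            c = rng.exponential(6.0, size=80)         # independent censoring times
            sites.append((np.minimum(t, c), (t <= c).astype(int)))

        kmf = KaplanMeierFitter()
        site_medians = []
        for T, E in sites:
            kmf.fit(T, event_observed=E)
            site_medians.append(kmf.median_survival_time_)

        all_T = np.concatenate([T for T, _ in sites])
        all_E = np.concatenate([E for _, E in sites])
        pooled = KaplanMeierFitter().fit(all_T, event_observed=all_E).median_survival_time_

        w = 0.5   # fixed shrinkage weight; the proposed method estimates this instead
        shrunken = [w * pooled + (1 - w) * m for m in site_medians]
        print("site KM medians: ", np.round(site_medians, 2))
        print("shrunken medians:", np.round(shrunken, 2))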

  12. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies

    PubMed Central

    Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C.

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare efficiency of the proposed shrinkage estimation procedure to the unrestricted and combined estimators through extensive simulation studies. Our results indicate that performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study. PMID:29772007

  13. An Analysis of Content Knowledge and Cognitive Abilities as Factors That Are Associated with Algebra Performance

    ERIC Educational Resources Information Center

    McLean, Tamika Ann

    2017-01-01

    The current study investigated college students' content knowledge and cognitive abilities as factors associated with their algebra performance, and examined how combinations of content knowledge and cognitive abilities related to their algebra performance. Specifically, the investigation examined the content knowledge factors of computational…

  14. Soil and nutrient retention in winter-flooded ricefields with implications for watershed management

    USGS Publications Warehouse

    Manley, S.W.; Kaminski, R.M.; Rodrigue, P.B.; Dewey, J.C.; Schoenholtz, S.H.; Gerard, P.D.; Reinecke, K.J.

    2009-01-01

    The ability of water resources to support aquatic life and human needs depends, in part, on reducing nonpoint source pollution amid contemporary agricultural practices. Winter retention of shallow water on rice and other agricultural fields is an accepted management practice for wildlife conservation; however, soil and water conservation benefits are not well documented. We evaluated the ability of four post-harvest ricefield treatment combinations (stubble-flooded, stubble-open, disked-flooded and disked-open) to abate nonpoint source exports into watersheds of the Mississippi Alluvial Valley. Total suspended solid exports were 1,121 kg ha-1 (1,000 lb ac-1) from disked-open fields where rice stubble was disked after harvest and fields were allowed to drain, compared with 35 kg ha-1 (31 lb ac-1) from stubble-flooded fields where stubble was left standing after harvest and fields captured rainfall from November 1 to March 1. Estimates of total suspended solid exports from ricefields based on Landsat imagery and USDA crop data are 0.43 and 0.40 Mg km-2 day-1 in the Big Sunflower and L'Anguille watersheds, respectively. Estimated reductions in total suspended solid exports from ricefields into the Big Sunflower and L'Anguille watersheds range from 26% to 64% under hypothetical scenarios in which 65% to 100% of the rice production area is managed to capture winter rainfall. Winter ricefield management reduced nonpoint source export by decreasing concentrations of solids and nutrients in, and reducing runoff volume from, ricefields in the Mississippi Alluvial Valley.

  15. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  16. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
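
    As general background on the transmissibility quantity referred to here, the sketch below estimates a PSD-based transmissibility between two measured responses using Welch auto- and cross-spectra; the synthetic signals, the reference channel choice, and the exact link to the paper's proposed definition (PSDs of mixed signals versus single-source PSDs) are illustrative assumptions.

        import numpy as np
        from scipy.signal import welch, csd, butter, lfilter

        fs = 1024.0
        rng = np.random.default_rng(5)
        excitation = rng.normal(size=20 * int(fs))    # shared broadband excitation

        # Two synthetic responses: same band-pass "path", different gains, plus sensor noise
        b, a = butter(2, [8 / (fs / 2), 30 / (fs / 2)], btype="band")
        x_i = 1.0 * lfilter(b, a, excitation) + 0.05 * rng.normal(size=excitation.size)
        x_j = 0.6 * lfilter(b, a, excitation) + 0.05 * rng.normal(size=excitation.size)

        # Welch PSD/CSD estimates and two common transmissibility estimators
        f, S_ii = welch(x_i, fs=fs, nperseg=2048)
        _, S_jj = welch(x_j, fs=fs, nperseg=2048)
        _, S_ij = csd(x_i, x_j, fs=fs, nperseg=2048)

        T_psd = np.sqrt(S_ii / S_jj)   # magnitude transmissibility from the auto-PSD ratio
        T_csd = S_ij / S_jj            # cross-spectrum estimate that also retains phase

        k = int(np.argmax(S_jj))
        print(f"around {f[k]:.1f} Hz: |T_psd| = {T_psd[k]:.2f}, |T_csd| = {abs(T_csd[k]):.2f}")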

  17. Evaluation of self-reported work ability and usefulness of interventions among sick-listed patients.

    PubMed

    Wåhlin, Charlotte; Ekberg, Kerstin; Persson, Jan; Bernfort, Lars; Öberg, Birgitta

    2013-03-01

    To describe the types of intervention offered, to investigate the relationship between the type of intervention given, patient-reported usefulness of interventions and the effect on self-reported work ability in a cohort of sick-listed patients with musculoskeletal disorders (MSD) or mental disorders (MD). A prospective cohort study was performed including 810 newly sick-listed patients (MSD 62 % and MD 38 %). The baseline questionnaire included sociodemographic characteristics and measures of work ability. The 3-month follow-up questionnaire included measures of work ability, type of intervention received, and judgment of usefulness. Twenty-five percent received medical intervention modalities (MI) only, 45 % received a combination of medical and rehabilitative intervention modalities (CRI) and 31 % received work-related interventions combined with medical or rehabilitative intervention modalities (WI). Behavioural treatments were more common for patients with MD compared with MSD and exercise therapy were more common for patients with MSD. The most prevalent workplace interventions were adjustment of work tasks or the work environment. Among patients with MD, WI was found to be useful and improved work ability significantly more compared with only MI or CRI. For patients with MSD, no significant differences in improved work ability were found between interventions. Patients with MD who received a combination of work-related and clinical interventions reported best usefulness and best improvement in work ability. There was no difference in improvements in work ability between rehabilitation methods in the MSD group. There seems to be a gap between scientific evidence and praxis behaviour in the rehabilitation process. Unimodal rehabilitation was widely applied in the early rehabilitation process, a multimodal treatment approach was rare and only one-third received work-related interventions. It remains a challenge to understand who needs what type of intervention.

  18. Combining Genome-Wide Information with a Functional Structural Plant Model to Simulate 1-Year-Old Apple Tree Architecture.

    PubMed

    Migault, Vincent; Pallas, Benoît; Costes, Evelyne

    2016-01-01

    In crops, optimizing target traits in breeding programs can be fostered by selecting appropriate combinations of architectural traits which determine light interception and carbon acquisition. In apple tree, architectural traits were observed to be under genetic control. However, architectural traits also result from many organogenetic and morphological processes interacting with the environment. The present study aimed at combining a FSPM built for apple tree, MAppleT, with genetic determinisms of architectural traits, previously described in a bi-parental population. We focused on parameters related to organogenesis (phyllochron and immediate branching) and morphogenesis processes (internode length and leaf area) during the first year of tree growth. Two independent datasets collected in 2004 and 2007 on 116 genotypes, issued from a 'Starkrimson' × 'Granny Smith' cross, were used. The phyllochron was estimated as a function of thermal time and sylleptic branching was modeled subsequently depending on phyllochron. From a genetic map built with SNPs, marker effects were estimated on four MAppleT parameters with rrBLUP, using 2007 data. These effects were then considered in MAppleT to simulate tree development in the two climatic conditions. The genome wide prediction model gave consistent estimations of parameter values with correlation coefficients between observed values and estimated values from SNP markers ranging from 0.79 to 0.96. However, the accuracy of the prediction model following cross-validation schemes was lower. Three integrative traits (the number of leaves, trunk length, and number of sylleptic laterals) were considered for validating MAppleT simulations. In 2007 climatic conditions, simulated values were close to observations, highlighting the correct simulation of genetic variability. However, under the 2004 conditions, which were not used for model calibration, the simulations differed from observations. This study demonstrates the possibility of integrating genome-based information in a FSPM for a perennial fruit tree. It also showed that further improvements are required to increase the prediction ability; in particular, the temperature effect should be extended and other factors taken into account for modeling GxE interactions. Improvements could also be expected by considering larger populations and by testing other genome wide prediction models. Despite these limitations, this study opens new possibilities for supporting plant breeding by in silico evaluations of the impact of genotypic polymorphisms on plant integrative phenotypes.
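
    As a minimal stand-in for the marker-effect step described (estimating SNP effects on a MAppleT parameter and predicting values for genotypes), the sketch below uses ridge regression, the penalized-regression analogue of rrBLUP; the SNP matrix, trait values, and penalty are made up, and sklearn's Ridge replaces the rrBLUP package used in the study.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(6)
        n_genotypes, n_snps = 116, 500

        # Hypothetical biallelic SNPs coded -1/0/1 and a polygenic trait (e.g., phyllochron)
        X = rng.integers(0, 3, size=(n_genotypes, n_snps)) - 1
        true_effects = np.zeros(n_snps)
        true_effects[rng.choice(n_snps, 30, replace=False)] = rng.normal(0, 0.3, 30)
        y = X @ true_effects + rng.normal(0, 0.5, n_genotypes)

        # Ridge regression ~ rrBLUP: every marker effect is shrunk toward zero equally
        model = Ridge(alpha=50.0, fit_intercept=True).fit(X, y)
        marker_effects = model.coef_

        # Predicted parameter values would then be passed to the FSPM as genotype inputs
        y_hat = model.predict(X)
        print("correlation(observed, predicted):", round(np.corrcoef(y, y_hat)[0, 1], 2))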

  19. Combination of rapamycin and garlic-derived S-allylmercaptocysteine induces colon cancer cell apoptosis and suppresses tumor growth in xenograft nude mice through autophagy/p62/Nrf2 pathway.

    PubMed

    Li, Siying; Yang, Guang; Zhu, Xiaosong; Cheng, Lin; Sun, Yueyue; Zhao, Zhongxi

    2017-09-01

    The natural plant-derived product S-allylmercaptocysteine (SAMC) has been studied in cancer therapy as a single and combination chemotherapeutic agent. The present study examined the combined use of SAMC and rapamycin, an mTOR inhibitor with anticancer activity whose efficacy is limited by drug resistance, and explored the underlying mechanisms. We combined rapamycin and SAMC for colorectal cancer treatment in HCT-116 cancer cells and a xenograft murine model. The in vivo study was established by xenografting HCT-116 cells into BALB/c nude mice. The combination therapy showed enhanced tumor-suppressing ability, with upregulation of the Bax/Bcl-2 ratio as a consequence of activated apoptosis, inhibition of autophagic activity, and prevention of Akt phosphorylation. The rapamycin and SAMC combination activated the antioxidant transcription factor Nrf2 and the expression of its downstream gene NQO1. Concomitantly, the autophagosome cargo protein p62 was downregulated, indicating that p62 plays a negative-regulatory role between Nrf2 and autophagy. Our results show that the combination of SAMC and rapamycin enhanced the anticancer ability and could be used for the treatment of colorectal cancer. The underlying autophagy/p62/Nrf2 mechanism discovered may provide a new direction for drug development, especially for traditional Chinese medicines.

  20. On the relationship between ecosystem-scale hyperspectral reflectance and CO2 exchange in European mountain grasslands

    NASA Astrophysics Data System (ADS)

    Balzarolo, M.; Vescovo, L.; Hammerle, A.; Gianelle, D.; Papale, D.; Tomelleri, E.; Wohlfahrt, G.

    2015-05-01

    In this paper we explore the skill of hyperspectral reflectance measurements and vegetation indices (VIs) derived from these in estimating carbon dioxide (CO2) fluxes of grasslands. Hyperspectral reflectance data, CO2 fluxes and biophysical parameters were measured at three grassland sites located in European mountain regions using standardized protocols. The relationships between CO2 fluxes, ecophysiological variables, traditional VIs and VIs derived using all two-band combinations of wavelengths available from the whole hyperspectral data space were analysed. We found that VIs derived from hyperspectral data generally explained a large fraction of the variability in the investigated dependent variables but differed in their ability to estimate midday and daily average CO2 fluxes and various derived ecophysiological parameters. Relationships between VIs and CO2 fluxes and ecophysiological parameters were site-specific, likely due to differences in soils, vegetation parameters and environmental conditions. Chlorophyll and water-content-related VIs explained the largest fraction of variability in most of the dependent variables. Band selection based on a combination of a genetic algorithm with random forests (GA-rF) confirmed that it is difficult to select a universal band region suitable across the investigated ecosystems. Our findings have major implications for upscaling terrestrial CO2 fluxes to larger regions and for remote- and proximal-sensing sampling and analysis strategies and call for more cross-site synthesis studies linking ground-based spectral reflectance with ecosystem-scale CO2 fluxes.
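
    A compact sketch of the exhaustive two-band index search mentioned above: a normalized difference index is computed for every band pair and correlated against the CO2 flux. The reflectance spectra, flux values, and sensor range are random placeholders.

        import numpy as np

        rng = np.random.default_rng(7)
        n_days, n_bands = 60, 120
        wavelengths = np.linspace(400, 1000, n_bands)          # nm, hypothetical range

        reflectance = rng.uniform(0.05, 0.6, size=(n_days, n_bands))
        flux = rng.normal(-5.0, 2.0, size=n_days)              # e.g., midday NEE, made up

        best = (None, 0.0)
        for i in range(n_bands):
            for j in range(i + 1, n_bands):
                ri, rj = reflectance[:, i], reflectance[:, j]
                vi = (ri - rj) / (ri + rj)                     # normalized difference index
                r = np.corrcoef(vi, flux)[0, 1]
                if abs(r) > abs(best[1]):
                    best = ((wavelengths[i], wavelengths[j]), r)

        (b1, b2), r = best
        print(f"best pair: {b1:.0f} nm / {b2:.0f} nm, r = {r:.2f}")

    With purely random inputs the "best" pair is spurious, which is exactly why the cross-validated band selection (GA-rF) described in the abstract matters.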

  1. Study on the description method of upper limb's muscle force levels during simulated in-orbit operations

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Li, DongXu; Liu, ZhiZhen; Liu, Liang

    2013-03-01

    The dexterous upper limb serves as the most important tool for astronauts to implement in-orbit experiments and operations. This study developed a simulated weightlessness experiment and invented new measuring equipment to quantitatively evaluate the muscle ability of the upper limb. Isometric maximum voluntary contractions (MVCs) and surface electromyography (sEMG) signals of right-handed pushing at the three positions were measured for eleven subjects. In order to enhance the comprehensiveness and accuracy of muscle force assessment, the study focused on signal processing techniques. We applied a combination method, which consists of time-, frequency-, and bi-frequency-domain analyses. Time- and frequency-domain analyses estimated the root mean square (RMS) and median frequency (MDF) of the sEMG signals, respectively. Higher order spectra (HOS) analysis in the bi-frequency domain evaluated the maximum bispectrum amplitude (Bmax), Gaussianity level (Sg), and linearity level (Sl) of the sEMG signals. Results showed that Bmax, Sl, and RMS values all increased as force increased, while MDF and Sg values both declined as force increased. The research demonstrated that the combination method is superior to the conventional time- and frequency-domain analyses. The method not only described the sEMG signal amplitude and power spectrum, but also characterized in greater depth the phase-coupling information and the non-Gaussianity and non-linearity levels of the sEMG, compared with the two conventional analyses. The findings from the study can help ergonomists estimate astronaut muscle performance, so as to optimize in-orbit operation efficacy and minimize musculoskeletal injuries.
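
    For the two conventional features named (RMS and median frequency), a small sketch on a synthetic sEMG-like signal is given below; the bispectrum-based features are omitted, and the sampling rate and signal are placeholders.

        import numpy as np
        from scipy.signal import welch

        fs = 2000.0                      # Hz, assumed sEMG sampling rate
        rng = np.random.default_rng(8)
        t = np.arange(0, 5, 1 / fs)

        # Placeholder sEMG: noise whose amplitude grows slowly with contraction force
        semg = (1 + 0.5 * t / t[-1]) * rng.normal(0, 0.1, size=t.size)

        # Time domain: root mean square amplitude
        rms = np.sqrt(np.mean(semg ** 2))

        # Frequency domain: median frequency of the Welch power spectrum
        f, pxx = welch(semg, fs=fs, nperseg=1024)
        cumulative = np.cumsum(pxx)
        mdf = f[np.searchsorted(cumulative, cumulative[-1] / 2)]

        print(f"RMS = {rms:.4f} (a.u.), MDF = {mdf:.1f} Hz")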

  2. Modeling Hawaiian ecosystem degradation due to invasive plants under current and future climates

    USGS Publications Warehouse

    Vorsino, Adam E.; Fortini, Lucas B.; Amidon, Fred A.; Miller, Stephen E.; Jacobi, James D.; Price, Jonathan P.; `Ohukani`ohi`a Gon, Sam; Koob, Gregory A.

    2014-01-01

    Occupation of native ecosystems by invasive plant species alters their structure and/or function. In Hawaii, a subset of introduced plants is regarded as extremely harmful due to competitive ability, ecosystem modification, and biogeochemical habitat degradation. By controlling this subset of highly invasive ecosystem modifiers, conservation managers could significantly reduce native ecosystem degradation. To assess the invasibility of vulnerable native ecosystems, we selected a proxy subset of these invasive plants and developed robust ensemble species distribution models to define their respective potential distributions. The combinations of all species models using both binary and continuous habitat suitability projections resulted in estimates of species richness and diversity that were subsequently used to define an invasibility metric. The invasibility metric was defined from species distribution models with strong per-species evaluation scores (>0.8; True Skill Statistic >0.75). Invasibility was further projected onto a 2100 Hawaii regional climate change scenario to assess the change in potential habitat degradation. The distribution defined by the invasibility metric delineates areas of known and potential invasibility under current climate conditions and, when projected into the future, estimates potential reductions in native ecosystem extent due to climate-driven invasive incursion. We have provided the code used to develop these metrics to facilitate their wider use (Code S1). This work will help determine the vulnerability of native-dominated ecosystems to the combined threats of climate change and invasive species, and thus help prioritize ecosystem and species management actions.

  3. Combining ecosystem services assessment with structured decision making to support ecological restoration planning.

    PubMed

    Martin, David M; Mazzotta, Marisa; Bousquin, Justin

    2018-04-10

    Accounting for ecosystem services in environmental decision making is an emerging research topic. Modern frameworks for ecosystem services assessment emphasize evaluating the social benefits of ecosystems, in terms of who benefits and by how much, to aid in comparing multiple courses of action. Structured methods that use decision-analytic approaches are emerging for the practice of ecological restoration. In this article, we combine ecosystem services assessment with structured decision making to estimate and evaluate measures of the potential benefits of ecological restoration with a case study in the Woonasquatucket River watershed, Rhode Island, USA. We partnered with a local watershed management organization to analyze dozens of candidate wetland restoration sites for their abilities to supply five ecosystem services: flood water retention, scenic landscapes, learning opportunities, recreational opportunities, and birds. We developed 22 benefit indicators related to the ecosystem services, as well as indicators of social equity and of the reliability that benefits will be sustained in the future. We applied conceptual modeling and spatial analysis to estimate indicator values for each candidate restoration site. Lastly, we developed a decision support tool to score and aggregate the values for the organization to screen the restoration sites. Results show that restoration sites in urban areas can provide greater social benefits than sites in less urban areas. Our research approach is general and can be used to investigate other restoration planning studies that perform ecosystem services assessment and fit into a decision-making process.

  4. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  5. Surfactant biocatalyst for remediation of recalcitrant organics and heavy metals

    DOEpatents

    Brigmon, Robin L [North Augusta, SC; Story, Sandra [Greenville, SC; Altman, Denis J [Evans, GA; Berry, Christopher J [Aiken, SC

    2011-03-15

    Novel strains of isolated and purified bacteria have been identified which have the ability to degrade petroleum hydrocarbons including a variety of PAHs. Several isolates also exhibit the ability to produce a biosurfactant. The combination of the biosurfactant-producing ability along with the ability to degrade PAHs enhances the efficiency with which PAHs may be degraded. Additionally, the biosurfactant also provides an additional ability to bind heavy metal ions for removal from a soil or aquatic environment.

  6. Surfactant biocatalyst for remediation of recalcitrant organics and heavy metals

    DOEpatents

    Brigmon, Robin L [North Augusta, SC; Story, Sandra [Greenville, SC; Altman, Denis [Evans, GA; Berry, Christopher J [Aiken, SC

    2009-01-06

    Novel strains of isolated and purified bacteria have been identified which have the ability to degrade petroleum hydrocarbons including a variety of PAHs. Several isolates also exhibit the ability to produce a biosurfactant. The combination of the biosurfactant-producing ability along with the ability to degrade PAHs enhances the efficiency with which PAHs may be degraded. Additionally, the biosurfactant also provides an additional ability to bind heavy metal ions for removal from a soil or aquatic environment.

  7. Surfactant biocatalyst for remediation of recalcitrant organics and heavy metals

    DOEpatents

    Brigmon, Robin L [North Augusta, SC; Story, Sandra [Greenville, SC; Altman, Denis J [Evans, GA; Berry, Christopher J [Aiken, SC

    2011-05-03

    Novel strains of isolated and purified bacteria have been identified which have the ability to degrade petroleum hydrocarbons including a variety of PAHs. Several isolates also exhibit the ability to produce a biosurfactant. The combination of the biosurfactant-producing ability along with the ability to degrade PAHs enhances the efficiency with which PAHs may be degraded. Additionally, the biosurfactant also provides an additional ability to bind heavy metal ions for removal from a soil or aquatic environment.

  8. Surfactant biocatalyst for remediation of recalcitrant organics and heavy metals

    DOEpatents

    Brigmon, Robin L [North Augusta, SC; Story, Sandra [Greenville, SC; Altman, Denis J [Evans, GA; Berry, Christopher J [Aiken, SC

    2011-03-29

    Novel strains of isolated and purified bacteria have been identified which have the ability to degrade petroleum hydrocarbons including a variety of PAHs. Several isolates also exhibit the ability to produce a biosurfactant. The combination of the biosurfactant-producing ability along with the ability to degrade PAHs enhances the efficiency with which PAHs may be degraded. Additionally, the biosurfactant also provides an additional ability to bind heavy metal ions for removal from a soil or aquatic environment.

  9. Combining individual participant and aggregated data in a meta-analysis with correlational studies.

    PubMed

    Pigott, Terri; Williams, Ryan; Polanin, Joshua

    2012-12-01

    This paper presents methods for combining individual participant data (IPD) with aggregated study level data (AD) in a meta-analysis of correlational studies. Although medical researchers have employed IPD in a wide range of studies, only a single example exists in the social sciences. New policies at the National Science Foundation requiring grantees to submit data archiving plans may increase social scientists' access to individual level data that could be combined with traditional meta-analysis. The methods presented here extend prior work on IPD to meta-analyses using correlational studies. The examples presented illustrate the synthesis of publicly available national datasets in education with aggregated study data from a meta-analysis examining the correlation of socioeconomic status measures and academic achievement. The major benefit of the inclusion of the individual level is that both within-study and between-study interactions among moderators of effect size can be estimated. Given the potential growth in data archives in the social sciences, we should see a corresponding increase in the ability to synthesize IPD and AD in a single meta-analysis, leading to a more complete understanding of how within-study and between-study moderators relate to effect size. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    PubMed Central

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  11. Validation of Ocean Color Satellite Data Products in Under Sampled Marine Areas. Chapter 6

    NASA Technical Reports Server (NTRS)

    Subramaniam, Ajit; Hood, Raleigh R.; Brown, Christopher W.; Carpenter, Edward J.; Capone, Douglas G.

    2001-01-01

    The planktonic marine cyanobacterium, Trichodesmium sp., is broadly distributed throughout the oligotrophic marine tropical and sub-tropical oceans. Trichodesmium, which typically occurs in macroscopic bundles or colonies, is noteworthy for its ability to form large surface aggregations and to fix dinitrogen gas. The latter is important because primary production supported by N2 fixation can result in a net export of carbon from the surface waters to deep ocean and may therefore play a significant role in the global carbon cycle. However, information on the distribution and density of Trichodesmium from shipboard measurements through the oligotrophic oceans is very sparse. Such estimates are required to quantitatively estimate total global rates of N2 fixation. As a result current global rate estimates are highly uncertain. Thus in order to understand the broader biogeochemical importance of Trichodesmium and N2 fixation in the oceans, we need better methods to estimate the global temporal and spatial variability of this organism. One approach that holds great promise is satellite remote sensing. Satellite ocean color sensors are ideal instruments for estimating global phytoplankton biomass, especially that due to episodic blooms, because they provide relatively high frequency synoptic information over large areas. Trichodesmium has a combination of specific ultrastructural and biochemical features that lend themselves to identification of this organism by remote sensing. Specifically, these features are high backscatter due to the presence of gas vesicles, and absorption and fluorescence of phycoerythrin. The resulting optical signature is relatively unique and should be detectable with satellite ocean color sensors such as the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS).

  12. The components of working memory updating: an experimental decomposition and individual differences.

    PubMed

    Ecker, Ullrich K H; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E H

    2010-01-01

    Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major component processes: retrieval, transformation, and substitution. We report a large-scale experiment that instantiated all possible combinations of those 3 component processes. Results show that the 3 components make independent contributions to updating performance. We additionally present structural equation models that link WMU task performance and working memory capacity (WMC) measures. These feature the methodological advancement of estimating interindividual covariation and experimental effects on mean updating measures simultaneously. The modeling results imply that WMC is a strong predictor of WMU skills in general, although some component processes-in particular, substitution skills-were independent of WMC. Hence, the reported predictive power of WMU measures may rely largely on common WM functions also measured in typical WMC tasks, although substitution skills may make an independent contribution to predicting higher mental abilities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  13. Development of techniques for the analysis of isoflavones in soy foods and nutraceuticals.

    PubMed

    Dentith, Susan; Lockwood, Brian

    2008-05-01

    For over 20 years, soy isoflavones have been investigated for their ability to prevent a wide range of cancers and cardiovascular problems, and numerous other disease states. This research is underpinned by the ability of researchers to analyse isoflavones in various forms in a range of raw materials and biological fluids. This review summarizes the techniques recently used in their analysis. The speed of high performance liquid chromatography analysis has been improved, allowing analysis of more samples, and increasing the sensitivity of detection techniques allows quantification of isoflavones down to nanomoles per litre levels in biological fluids. The combination of high-performance liquid chromatography with immunoassay has allowed identification and estimation of low-level soy isoflavones. The use of soy isoflavone supplements has shown an increase in their circulating levels in plasma and urine, aiding investigation of their biological effects. The significance of the metabolite equol has spurred research into new areas, and recently the specific enantiomers have been studied. High-performance liquid chromatography, capillary electrophoresis and gas chromatography are widely used with a range of detection systems. Increasingly, immunoassay is being used because of its high sensitivity and low cost.

  14. Estimation of sleep status in sleep apnea patients using a novel head actigraphy technique.

    PubMed

    Hummel, Richard; Bradley, T Douglas; Fernie, Geoff R; Chang, S J Isaac; Alshaer, Hisham

    2015-01-01

    Polysomnography is a comprehensive modality for diagnosing sleep apnea (SA), but it is expensive and not widely available. Several technologies have been developed for portable diagnosis of SA in the home, most of which lack the ability to detect sleep status. Wrist actigraphy (accelerometry) has been adopted to address this limitation. However, head actigraphy has not been systematically evaluated for this purpose. Therefore, the aim of this study was to evaluate the ability of head actigraphy to detect sleep/wake status. We obtained full overnight 3-axis head accelerometry data from 75 sleep apnea patient recordings. These were split into training and validation groups (2:1). Data were preprocessed and 5 features were extracted. Different feature combinations were fed into 3 different classifiers, namely support vector machine, logistic regression, and random forests, each of which was trained and validated on a different subgroup. The random forest algorithm yielded the highest performance, with an area under the receiver operating characteristic (ROC) curve of 0.81 for detection of sleep status. This shows that the technique performs very well in detecting sleep status in SA patients despite the specificities of this population, such as respiration-related movements.
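
    A generic sketch of that final classification step (a random forest on epoch-level movement features, evaluated by ROC AUC with a 2:1 train/validation split); the features and sleep/wake labels are synthetic stand-ins for the study's head-accelerometry data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(9)
        n_epochs, n_features = 5000, 5        # e.g., 30-s epochs x 5 movement features

        # Synthetic features: wake epochs (label 1) tend to show larger movement values
        labels = rng.integers(0, 2, n_epochs)
        features = rng.normal(0, 1, size=(n_epochs, n_features)) + 0.8 * labels[:, None]

        X_train, X_test, y_train, y_test = train_test_split(
            features, labels, test_size=0.33, random_state=0)

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        scores = clf.predict_proba(X_test)[:, 1]
        print("ROC AUC for sleep/wake detection:", round(roc_auc_score(y_test, scores), 3))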

  15. Economic evaluation in short bowel syndrome (SBS): an algorithm to estimate utility scores for a patient-reported SBS-specific quality of life scale (SBS-QoL™).

    PubMed

    Lloyd, Andrew; Kerr, Cicely; Breheny, Katie; Brazier, John; Ortiz, Aurora; Borg, Emma

    2014-03-01

    Condition-specific preference-based measures can offer utility data where they would not otherwise be available or where generic measures may lack sensitivity, although they lack comparability across conditions. This study aimed to develop an algorithm for estimating utilities from the short bowel syndrome health-related quality of life scale (SBS-QoL™). SBS-QoL™ items were selected based on factor and item performance analysis of a European SBS-QoL™ dataset and consultation with 3 SBS clinical experts. Six-dimension health states were developed using 8 SBS-QoL™ items (2 dimensions combined 2 SBS-QoL™ items). SBS health states were valued by a UK general population sample (N = 250) using the lead-time time trade-off method. Preference weights or 'utility decrements' for each severity level of each dimension were estimated by regression models and used to develop the scoring algorithm. Mean utilities for the SBS health states ranged from -0.46 (worst health state, very much affected on all dimensions) to 0.92 (best health state, not at all affected on all dimensions). The random effects model with maximum likelihood estimation regression had the best predictive ability and lowest root mean squared error and mean absolute error, and was used to develop the scoring algorithm. The preference-weighted scoring algorithm for the SBS-QoL™ developed is able to estimate a wide range of utility values from patient-level SBS-QoL™ data. This allows estimation of SBS HRQL impact for the purpose of economic evaluation of SBS treatment benefits.
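
    The mechanics of applying such a preference-weighted algorithm can be sketched as follows: the utility is 1 minus the sum of the decrements attached to the level reported on each dimension. The dimension names and decrement values below are hypothetical placeholders, not the published SBS-QoL™ tariff (whose anchoring differs, with observed utilities spanning roughly -0.46 to 0.92).

    ```python
    # Illustrative sketch of a decrement-based scoring algorithm (hypothetical values).
    HYPOTHETICAL_DECREMENTS = {
        "diarrhoea":   {"not at all": 0.00, "a little": 0.05, "moderately": 0.12, "very much": 0.25},
        "fatigue":     {"not at all": 0.00, "a little": 0.04, "moderately": 0.10, "very much": 0.22},
        "pain":        {"not at all": 0.00, "a little": 0.06, "moderately": 0.14, "very much": 0.28},
        "sleep":       {"not at all": 0.00, "a little": 0.03, "moderately": 0.08, "very much": 0.18},
        "social life": {"not at all": 0.00, "a little": 0.04, "moderately": 0.09, "very much": 0.20},
        "worry":       {"not at all": 0.00, "a little": 0.03, "moderately": 0.07, "very much": 0.16},
    }

    def utility(responses: dict) -> float:
        """Map one patient's six dimension-level responses to a utility value."""
        return 1.0 - sum(HYPOTHETICAL_DECREMENTS[d][lvl] for d, lvl in responses.items())

    best  = utility({d: "not at all" for d in HYPOTHETICAL_DECREMENTS})
    worst = utility({d: "very much"  for d in HYPOTHETICAL_DECREMENTS})   # can fall below 0
    print(best, worst)
    ```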

  16. Human behavioral complexity peaks at age 25

    PubMed Central

    Brugger, Peter

    2017-01-01

    Random Item Generation tasks (RIG) are commonly used to assess high cognitive abilities such as inhibition or sustained attention. They also draw upon our approximate sense of complexity. A detrimental effect of aging on pseudo-random productions has been demonstrated for some tasks, but little is as yet known about the developmental curve of cognitive complexity over the lifespan. We investigate the complexity trajectory across the lifespan of human responses to five common RIG tasks, using a large sample (n = 3429). Our main finding is that the developmental curve of the estimated algorithmic complexity of responses is similar to what may be expected of a measure of higher cognitive abilities, with a performance peak around 25 and a decline starting around 60, suggesting that RIG tasks yield good estimates of such cognitive abilities. Our study illustrates that very short strings of as few as 10 items are sufficient to have their complexity reliably estimated and to allow the documentation of an age-dependent decline in the approximate sense of complexity. PMID:28406953

  17. Between-Site Differences in the Scale of Dispersal and Gene Flow in Red Oak

    PubMed Central

    Moran, Emily V.; Clark, James S.

    2012-01-01

    Background Nut-bearing trees, including oaks (Quercus spp.), are considered to be highly dispersal limited, leading to concerns about their ability to colonize new sites or migrate in response to climate change. However, estimating seed dispersal is challenging in species that are secondarily dispersed by animals, and differences in disperser abundance or behavior could lead to large spatio-temporal variation in dispersal ability. Parentage and dispersal analyses combining genetic and ecological data provide accurate estimates of current dispersal, while spatial genetic structure (SGS) can shed light on past patterns of dispersal and establishment. Methodology and Principal Findings In this study, we estimate seed and pollen dispersal and parentage for two mixed-species red oak populations using a hierarchical Bayesian approach. We compare these results to those of a genetic ML parentage model. We also test whether observed patterns of SGS in three size cohorts are consistent with known site history and current dispersal patterns. We find that, while pollen dispersal is extensive at both sites, the scale of seed dispersal differs substantially. Parentage results differ between models due to additional data included in Bayesian model and differing genotyping error assumptions, but both indicate between-site dispersal differences. Patterns of SGS in large adults, small adults, and seedlings are consistent with known site history (farmed vs. selectively harvested), and with long-term differences in seed dispersal. This difference is consistent with predator/disperser satiation due to higher acorn production at the low-dispersal site. While this site-to-site variation results in substantial differences in asymptotic spread rates, dispersal for both sites is substantially lower than required to track latitudinal temperature shifts. Conclusions Animal-dispersed trees can exhibit considerable spatial variation in seed dispersal, although patterns may be surprisingly constant over time. However, even under favorable conditions, migration in heavy-seeded species is likely to lag contemporary climate change. PMID:22563504

  18. Heart Rate Variability Can Be Used to Estimate Sleepiness-related Decrements in Psychomotor Vigilance during Total Sleep Deprivation

    PubMed Central

    Chua, Eric Chern-Pin; Tan, Wen-Qi; Yeo, Sing-Chen; Lau, Pauline; Lee, Ivan; Mien, Ivan Ho; Puvanendran, Kathiravelu; Gooley, Joshua J.

    2012-01-01

    Study Objectives: To assess whether changes in psychomotor vigilance during sleep deprivation can be estimated using heart rate variability (HRV). Design: HRV, ocular, and electroencephalogram (EEG) measures were compared for their ability to predict lapses on the Psychomotor Vigilance Task (PVT). Setting: Chronobiology and Sleep Laboratory, Duke-NUS Graduate Medical School Singapore. Participants: Twenty-four healthy Chinese men (mean age ± SD = 25.9 ± 2.8 years). Interventions: Subjects were kept awake continuously for 40 hours under constant environmental conditions. Every 2 hours, subjects completed a 10-minute PVT to assess their ability to sustain visual attention. Measurements and Results: During each PVT, we examined the electrocardiogram (ECG), EEG, and percentage of time that the eyes were closed (PERCLOS). Similar to EEG power density and PERCLOS measures, the time course of ECG RR-interval power density in the 0.02- 0.08-Hz range correlated with the 40-hour profile of PVT lapses. Based on receiver operating characteristic curves, RR-interval power density performed as well as EEG power density at identifying a sleepiness-related increase in PVT lapses above threshold. RR-interval power density (0.02-0.08 Hz) also classified subject performance with sensitivity and specificity similar to that of PERCLOS. Conclusions: The ECG carries information about a person's vigilance state. Hence, HRV measures could potentially be used to predict when an individual is at increased risk of attentional failure. Our results suggest that HRV monitoring, either alone or in combination with other physiologic measures, could be incorporated into safety devices to warn drowsy operators when their performance is impaired. Citation: Chua ECP; Tan WQ; Yeo SC; Lau P; Lee I; Mien IH; Puvanendran K; Gooley JJ. Heart rate variability can be used to estimate sleepiness-related decrements in psychomotor vigilance during total sleep deprivation. SLEEP 2012;35(3):325-334. PMID:22379238
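
    A minimal sketch of the kind of HRV measure described, assuming an evenly resampled RR series and a Welch spectrum (the 4 Hz resampling rate and the synthetic data are illustrative assumptions, not the study's exact processing):

    ```python
    # Compute RR-interval power in the 0.02-0.08 Hz band from a Welch spectrum.
    import numpy as np
    from scipy.signal import welch
    from scipy.interpolate import interp1d

    def rr_band_power(rr_s, fs_resample=4.0, band=(0.02, 0.08)):
        """rr_s: successive RR intervals in seconds -> band power (s^2)."""
        t = np.cumsum(rr_s)                                  # beat times
        grid = np.arange(t[0], t[-1], 1.0 / fs_resample)     # even time grid
        rr_even = interp1d(t, rr_s, kind="linear")(grid)
        f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample, nperseg=1024)
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].sum() * (f[1] - f[0])               # integrate the band

    # Example: a "sleepy-looking" RR series with slow (0.05 Hz) oscillations.
    rng = np.random.default_rng(1)
    n = 600
    idx = np.arange(n)                                       # roughly seconds (RR ~ 1 s)
    rr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.05 * idx) + 0.01 * rng.normal(size=n)
    print("0.02-0.08 Hz RR power:", rr_band_power(rr))
    ```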

  19. Parental selection of hybrid breeding based on maternal and paternal inheritance of traits in rapeseed (Brassica napus L.).

    PubMed

    Xing, Nailin; Fan, Chuchuan; Zhou, Yongming

    2014-01-01

    Parental selection is crucial for hybrid breeding, but the methods available for such a selection are not very effective. In this study, a 6×6 incomplete diallel cross was designed using 12 rapeseed germplasms, and a total of 36 hybrids together with their parental lines were planted in 4 environments. Four yield-related traits and seed oil content (OC) were evaluated. Genetic distance (GD) was estimated with 359 simple sequence repeats (SSRs) markers. Heterosis levels, general combining ability (GCA) and specific combining ability (SCA) were evaluated. GD was found to have a significant correlation with better-parent heterosis (BPH) of thousand seed weight (TSW), SCA of seeds per silique (SS), TSW, and seed yield per plant (SY), while SCA showed a statistically significant correlation with heterosis levels of all traits at 1% significance level. Statistically significant correlations were also observed between GCA of maternal or paternal parents and heterosis levels of different traits except for SS. Interestingly, maternal (TSW, SS, and OC) and paternal (siliques per plant (SP) and SY) inheritance of traits was detected using contribution ratio of maternal and paternal GCA variance as well as correlations between GCA and heterosis levels. Phenotype and heterosis levels of all the traits except TSW of hybrids were significantly correlated with the average performance of parents. The correlations between SS and SP, SP and OC, and SY and OC were statistically significant in hybrids but not in parents. Potential applications of parental selection in hybrid breeding were discussed.
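
    For readers unfamiliar with the terms, the basic arithmetic behind GCA and SCA can be sketched with a simple means-based approximation (not Griffing's exact method, and with made-up cross means): a parent's GCA is its average cross performance expressed as a deviation from the grand mean, and a cross's SCA is what remains after removing both parental GCAs.

    ```python
    # Means-based GCA/SCA illustration for a balanced set of cross means.
    import itertools
    import numpy as np

    parents = ["P1", "P2", "P3", "P4", "P5", "P6"]
    rng = np.random.default_rng(2)
    cross_mean = {pair: float(rng.normal(30, 3))          # e.g. seed yield per plant
                  for pair in itertools.combinations(parents, 2)}

    grand = np.mean(list(cross_mean.values()))
    gca = {p: np.mean([m for pair, m in cross_mean.items() if p in pair]) - grand
           for p in parents}
    sca = {pair: m - grand - gca[pair[0]] - gca[pair[1]]
           for pair, m in cross_mean.items()}

    for p in parents:
        print(f"GCA({p}) = {gca[p]:+.2f}")
    best_cross = max(sca, key=sca.get)
    print("Largest positive SCA:", best_cross, f"{sca[best_cross]:+.2f}")
    ```

    Large positive SCA values in such a table are what breeders read as candidate heterotic combinations, which is the quantity the study above correlates with heterosis levels.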

  20. Impact of indoor surface material on perceived air quality.

    PubMed

    Senitkova, I

    2014-03-01

    This paper presents the impact of material combinations on perceived indoor air quality for various interior surface materials. Chemical analysis and sensory assessments identify health-adverse indoor air pollutants (TVOCs). In this study, emissions and odors from different common indoor surface materials were investigated in a glass test chamber under standardized conditions. Chemical measurements (TVOC concentration) and sensory assessments (odor intensity, air acceptability) were made after the building materials were exposed to standardized conditions. The results of the chemical and sensory assessment of individual materials and their combinations are compared and discussed within the paper. The possibility of exploiting the sorption ability of individual material surfaces was investigated. Knowledge of targeted sorption effects can be used in the interior design phase. The results demonstrate the differing sorption abilities of different indoor materials, as well as the differing sorption ability of the same indoor material in different combinations. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Antibacterial activity and dentin bonding ability of combined use of Clearfil SE Protect and sodium hypochlorite.

    PubMed

    Muratovska, Ilijana; Kitagawa, Haruaki; Hirose, Nanako; Kitagawa, Ranna; Imazato, Satoshi

    2018-02-08

    The aim of this study was to evaluate the antibacterial activity and dentin bonding ability of a commercial self-etch adhesive Clearfil SE Protect (Kuraray Noritake Dental, Tokyo, Japan) in combination with sodium hypochlorite (NaOCl). Agar disc diffusion tests and measurement of minimum inhibitory/bactericidal concentrations (MIC/MBC) against Streptococcus mutans were performed to evaluate antibacterial effects. The mixture solution of 5.25% NaOCl and the primer of Clearfil SE Protect demonstrated less antibacterial activity than primer only. In microtensile bond strength tests using non-carious human molars, pretreatment with 5.25% NaOCl aqueous solution had no influence on the bond strength of Clearfil SE Protect. These results indicate that pretreatment with NaOCl does not influence the bonding ability of Clearfil SE Protect, while their combined use does not enhance cavity disinfecting effects.

  2. Effects of visual cues of object density on perception and anticipatory control of dexterous manipulation.

    PubMed

    Crajé, Céline; Santello, Marco; Gordon, Andrew M

    2013-01-01

    Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.

  3. A Steady-State Kalman Predictor-Based Filtering Strategy for Non-Overlapping Sub-Band Spectral Estimation

    PubMed Central

    Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe

    2015-01-01

    This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
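
    The idea of replacing the implicit zeros with predicted samples can be sketched as follows; the paper uses a steady-state Kalman predictor, whereas this simplified example stands in a least-squares AR fit as the linear predictor, and the signal, filter, and model orders are arbitrary choices for illustration.

    ```python
    # Extend a finite-length record with linear predictions before applying a
    # high-order FIR analysis filter, instead of letting the convolution run
    # over meaningless zeros at the boundary.
    import numpy as np
    from scipy.signal import firwin, lfilter

    def ar_fit(x, p):
        """Least-squares AR(p) coefficients a with x[n] ~= sum_k a[k]*x[n-1-k]."""
        X = np.column_stack([x[p - 1 - k:len(x) - 1 - k] for k in range(p)])
        return np.linalg.lstsq(X, x[p:], rcond=None)[0]

    def extrapolate(x, a, n_extra):
        out = list(x)
        for _ in range(n_extra):
            out.append(float(np.dot(a, out[-1:-len(a) - 1:-1])))
        return np.array(out)

    rng = np.random.default_rng(3)
    n = 256
    t = np.arange(n)
    x = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.normal(size=n)   # finite record

    h = firwin(numtaps=101, cutoff=[0.08, 0.12], pass_zero=False)  # long band-pass
    a = ar_fit(x, p=8)
    x_ext = extrapolate(x, a, n_extra=len(h) - 1)

    y_zero = lfilter(h, 1.0, np.concatenate([x, np.zeros(len(h) - 1)]))
    y_pred = lfilter(h, 1.0, x_ext)
    print("output energy beyond record end, zero padded :", np.sum(y_zero[n:] ** 2))
    print("output energy beyond record end, extrapolated:", np.sum(y_pred[n:] ** 2))
    ```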

  4. Location and Navigation with Ultra-Wideband Signals

    DTIC Science & Technology

    2012-06-07

    Coherent vs. Noncoherent Combination 26 F Ranging with Multi-Band UWB Signals: Random Phase Rotation 29 F.1 MB-OFDM System Model...adopted to combine the channel information from subbands: the coherent combining and the noncoherent combining. For the coherent combining, estimates of...channel frequency response coefficients for all subbands are jointly used to estimate the time domain channel with Eq. (33). For the noncoherent

  5. A new combined approach on Hurst exponent estimate and its applications in realized volatility

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Huang, Yirong

    2018-02-01

    The purpose of this paper is to propose a new estimator of Hurst exponent based on the combined information of the conventional rescaled range methods. We demonstrate the superiority of the proposed estimator by Monte Carlo simulations, and the applications in estimating the Hurst exponent of daily volatility series in Chinese stock market. Moreover, we indicate the impact of the type of estimator and structural break on the estimating results of Hurst exponent.
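
    For context, the conventional rescaled-range calculation that such combined estimators build on can be sketched as below (this is the textbook R/S slope estimate, not the authors' combined estimator):

    ```python
    # Classic R/S Hurst exponent estimate: slope of log(R/S) vs log(window size).
    import numpy as np

    def hurst_rs(x, min_window=8):
        x = np.asarray(x, dtype=float)
        n = len(x)
        sizes = np.unique(np.floor(np.logspace(np.log10(min_window),
                                               np.log10(n // 2), 20)).astype(int))
        log_size, log_rs = [], []
        for w in sizes:
            rs_vals = []
            for start in range(0, n - w + 1, w):
                seg = x[start:start + w]
                dev = np.cumsum(seg - seg.mean())
                r = dev.max() - dev.min()              # range of cumulative deviations
                s = seg.std(ddof=1)
                if s > 0:
                    rs_vals.append(r / s)
            if rs_vals:
                log_size.append(np.log(w))
                log_rs.append(np.log(np.mean(rs_vals)))
        return np.polyfit(log_size, log_rs, 1)[0]

    rng = np.random.default_rng(4)
    white = rng.normal(size=5000)                      # H ~ 0.5 expected
    print("Estimated H (white noise):", round(hurst_rs(white), 2))
    ```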

  6. Improving Reasoning Skills in Secondary History Education by Working Memory Training

    ERIC Educational Resources Information Center

    Ariës, Roel Jacobus; Groot, Wim; van den Brink, Henriette Maassen

    2015-01-01

    Secondary school pupils underachieve in tests in which reasoning abilities are required. Brain-based training of working memory (WM) may improve reasoning abilities. In this study, we use a brain-based training programme based on historical content to enhance reasoning abilities in history courses. In the first experiment, a combined intervention…

  7. Do Interests and Cognitive Abilities Help Explain College Major Choice Equally Well for Women and Men?

    ERIC Educational Resources Information Center

    Passler, Katja; Hell, Benedikt

    2012-01-01

    The present study examines whether vocational interests, measured by Holland's RIASEC model, and objectively assessed cognitive abilities, were useful in discriminating among various major categories for a sample of 1990 German university students. Interests and specific abilities, in combination, significantly discriminated among major categories…

  8. An Evaluation of Empirical Bayes' Estimation of Value- Added Teacher Performance Measures. Working Paper #31. Revised

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M.

    2014-01-01

    Empirical Bayes' (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
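
    The shrinkage idea underlying EB value-added estimation can be illustrated with a textbook sketch (not necessarily the exact estimators evaluated in the paper); the variance components and simulated classroom data are assumptions for the example.

    ```python
    # Textbook EB shrinkage: pull each teacher's raw mean toward the overall mean
    # in proportion to how noisy that raw mean is.
    import numpy as np

    rng = np.random.default_rng(5)
    n_teachers = 50
    true_effect = rng.normal(0.0, 0.2, n_teachers)           # latent value-added
    class_size = rng.integers(10, 40, n_teachers)
    sigma2_student = 1.0                                      # within-class variance

    raw = np.array([rng.normal(mu, np.sqrt(sigma2_student / m))
                    for mu, m in zip(true_effect, class_size)])

    # Method-of-moments estimate of the between-teacher variance component.
    sampling_var = sigma2_student / class_size
    tau2 = max(raw.var(ddof=1) - sampling_var.mean(), 1e-6)

    reliability = tau2 / (tau2 + sampling_var)                # shrinkage weight
    eb = reliability * raw + (1 - reliability) * raw.mean()

    rmse = lambda est: float(np.sqrt(np.mean((est - true_effect) ** 2)))
    print("RMSE raw:", round(rmse(raw), 3), " RMSE EB:", round(rmse(eb), 3))
    ```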

  9. Asymmetry hidden in birds’ tracks reveals wind, heading, and orientation ability over the ocean

    PubMed Central

    Goto, Yusuke; Yoda, Ken; Sato, Katsufumi

    2017-01-01

    Numerous flying and swimming animals constantly need to control their heading (that is, their direction of orientation) in a flow to reach their distant destination. However, animal orientation in a flow has yet to be satisfactorily explained because it is difficult to directly measure animal heading and flow. We constructed a new animal movement model based on the asymmetric distribution of the GPS (Global Positioning System) track vector along its mean vector, which might be caused by wind flow. This statistical model enabled us to simultaneously estimate animal heading (navigational decision-making) and ocean wind information over the range traversed by free-ranging birds. We applied this method to the tracking data of homing seabirds. The wind flow estimated by the model was consistent with the spatiotemporally coarse wind information provided by an atmospheric simulation model. The estimated heading information revealed that homing seabirds could head in a direction different from that leading to the colony to offset wind effects and to enable them to eventually move in the direction they intended to take, even though they are over the open sea where visual cues are unavailable. Our results highlight the utility of combining large data sets of animal movements with the “inverse problem approach,” enabling unobservable causal factors to be estimated from the observed output data. This approach potentially initiates a new era of analyzing animal decision-making in the field. PMID:28959724

  10. Microbial risk assessment of drinking water based on hydrodynamic modelling of pathogen concentrations in source water.

    PubMed

    Sokolova, Ekaterina; Petterson, Susan R; Dienus, Olaf; Nyström, Fredrik; Lindgren, Per-Eric; Pettersson, Thomas J R

    2015-09-01

    Norovirus contamination of drinking water sources is an important cause of waterborne disease outbreaks. Knowledge on pathogen concentrations in source water is needed to assess the ability of a drinking water treatment plant (DWTP) to provide safe drinking water. However, pathogen enumeration in source water samples is often not sufficient to describe the source water quality. In this study, the norovirus concentrations were characterised at the contamination source, i.e. in sewage discharges. Then, the transport of norovirus within the water source (the river Göta älv in Sweden) under different loading conditions was simulated using a hydrodynamic model. Based on the estimated concentrations in source water, the required reduction of norovirus at the DWTP was calculated using quantitative microbial risk assessment (QMRA). The required reduction was compared with the estimated treatment performance at the DWTP. The average estimated concentration in source water varied between 4.8×10^2 and 7.5×10^3 genome equivalents L^-1; and the average required reduction by treatment was between 7.6 and 8.8 log10. The treatment performance at the DWTP was estimated to be adequate to deal with all tested loading conditions, but was heavily dependent on chlorine disinfection, with the risk of poor reduction by conventional treatment and slow sand filtration. To our knowledge, this is the first article to employ discharge-based QMRA, combined with hydrodynamic modelling, in the context of drinking water. Copyright © 2015 Elsevier B.V. All rights reserved.
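
    The logic of translating a source-water concentration into a required log10 reduction can be sketched as follows; the exponential dose-response parameter, consumption volume, and risk target are illustrative assumptions rather than the study's values.

    ```python
    # Required treatment performance from a source-water concentration and a risk target.
    import numpy as np

    def required_log10_reduction(conc_per_L, consumption_L=1.0,
                                 annual_risk_target=1e-4, r=0.1):
        """Exponential dose-response model: P_inf = 1 - exp(-r * dose)."""
        daily_risk_target = 1.0 - (1.0 - annual_risk_target) ** (1.0 / 365.0)
        allowed_dose = -np.log(1.0 - daily_risk_target) / r      # organisms per day
        ingested = conc_per_L * consumption_L                     # untreated daily dose
        return np.log10(ingested / allowed_dose)

    for c in (4.8e2, 7.5e3):                                      # GE per litre (from the study)
        print(f"source {c:g} GE/L -> ~{required_log10_reduction(c):.1f} log10 required")
    ```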

  11. Estimating methane gas production in peat soils of the Florida Everglades using hydrogeophysical methods

    NASA Astrophysics Data System (ADS)

    Wright, William; Comas, Xavier

    2016-04-01

    The spatial and temporal variability in production and release of greenhouse gases (such as methane) in peat soils remains uncertain, particularly for low-latitude peatlands like the Everglades. Ground penetrating radar (GPR) is a hydrogeophysical tool that has been successfully used in the last decade to noninvasively investigate carbon dynamics in peat soils; however, application in subtropical systems is almost non-existent. This study is based on four field sites in the Florida Everglades, where changes in gas content within the soil are monitored using time-lapse GPR measurements and gas releases are monitored using gas traps. A weekly methane gas production rate is estimated using a mass balance approach, considering gas content estimated from GPR, gas release from gas traps and incorporating rates of diffusion, and methanotrophic consumption from previous studies. Resulting production rates range between 0.02 and 0.47 g CH4 m-2 d-1, falling within the range reported in literature. This study shows the potential of combining GPR with gas traps to monitor gas dynamics in peat soils of the Everglades and estimate methane gas production. We also show the enhanced ability of certain peat soils to store gas when compared to others, suggesting that physical properties control biogenic gas storage in the Everglades peat soils. Better understanding biogenic methane gas dynamics in peat soils has implications regarding the role of wetlands in the global carbon cycle, particularly under a climate change scenario.
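
    The mass-balance bookkeeping can be sketched in a few lines; every number below is a placeholder, and the loss terms stand in for the literature-derived diffusion and methanotrophic consumption rates mentioned above.

    ```python
    # Weekly methane production inferred from storage change (GPR), surface release
    # (gas traps), and loss terms taken from the literature. Illustrative values only.
    def weekly_ch4_production(storage_change_g_m2, release_g_m2,
                              diffusive_loss_g_m2=0.05, oxidation_g_m2=0.10):
        """Production = change in storage + release + diffusion + methanotrophic loss."""
        return storage_change_g_m2 + release_g_m2 + diffusive_loss_g_m2 + oxidation_g_m2

    weekly = weekly_ch4_production(storage_change_g_m2=0.8, release_g_m2=1.2)
    print(f"~{weekly / 7:.2f} g CH4 m^-2 d^-1")   # compare with the 0.02-0.47 range reported
    ```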

  12. Application of data fusion modeling (DFM) to site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, D.W.; Gibbs, B.P.; Jones, W.F.

    1996-01-01

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss-Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.

  14. Laboratory evaluation of a field-portable sealed source X-ray fluorescence spectrometer for determination of metals in air filter samples.

    PubMed

    Lawryk, Nicholas J; Feng, H Amy; Chen, Bean T

    2009-07-01

    Recent advances in field-portable X-ray fluorescence (FP XRF) spectrometer technology have made it a potentially valuable screening tool for the industrial hygienist to estimate worker exposures to airborne metals. Although recent studies have shown that FP XRF technology may be better suited for qualitative or semiquantitative analysis of airborne lead in the workplace, these studies have not extensively addressed its ability to measure other elements. This study involved a laboratory-based evaluation of a representative model FP XRF spectrometer to measure elements commonly encountered in workplace settings that may be collected on air sample filter media, including chromium, copper, iron, manganese, nickel, lead, and zinc. The evaluation included assessments of (1) response intensity with respect to location on the probe window, (2) limits of detection for five different filter media, (3) limits of detection as a function of analysis time, and (4) bias, precision, and accuracy estimates. Teflon, polyvinyl chloride, polypropylene, and mixed cellulose ester filter media all had similarly low limits of detection for the set of elements examined. Limits of detection, bias, and precision generally improved with increasing analysis time. Bias, precision, and accuracy estimates generally improved with increasing element concentration. Accuracy estimates met the National Institute for Occupational Safety and Health criterion for nearly all the element and concentration combinations. Based on these results, FP XRF spectrometry shows potential to be useful in the assessment of worker inhalation exposures to other metals in addition to lead.

  15. Tuning in to Another Person's Action Capabilities: Perceiving Maximal Jumping-Reach Height from Walking Kinematics

    ERIC Educational Resources Information Center

    Ramenzoni, Veronica; Riley, Michael A.; Davis, Tehran; Shockley, Kevin; Armstrong, Rachel

    2008-01-01

    Three experiments investigated the ability to perceive the maximum height to which another actor could jump to reach an object. Experiment 1 determined the accuracy of estimates for another actor's maximal reach-with-jump height and compared these estimates to estimates of the actor's standing maximal reaching height and to estimates of the…

  16. Combined effects of climate change and bank stabilization on shallow water habitats of chinook salmon.

    PubMed

    Jorgensen, Jeffrey C; McClure, Michelle M; Sheer, Mindi B; Munn, Nancy L

    2013-12-01

    Significant challenges remain in the ability to estimate habitat change under the combined effects of natural variability, climate change, and human activity. We examined anticipated effects on shallow water over low-sloped beaches to these combined effects in the lower Willamette River, Oregon, an area highly altered by development. A proposal to stabilize some shoreline with large rocks (riprap) would alter shallow water areas, an important habitat for threatened Chinook salmon (Oncorhynchus tshawytscha), and would be subject to U.S. Endangered Species Act-mandated oversight. In the mainstem, subyearling Chinook salmon appear to preferentially occupy these areas, which fluctuate with river stages. We estimated effects with a geospatial model and projections of future river flows. Recent (1999-2009) median river stages during peak subyearling occupancy (April-June) maximized beach shallow water area in the lower mainstem. Upstream shallow water area was maximized at lower river stages than have occurred recently. Higher river stages in April-June, resulting from increased flows predicted for the 2080s, decreased beach shallow water area 17-32%. On the basis of projected 2080s flows, more than 15% of beach shallow water area was displaced by the riprap. Beach shallow water area lost to riprap represented up to 1.6% of the total from the mouth to 12.9 km upstream. Reductions in shallow water area could restrict salmon feeding, resting, and refuge from predators and potentially reduce opportunities for the expression of the full range of life-history strategies. Although climate change analyses provided useful information, detailed analyses are prohibitive at the project scale for the multitude of small projects reviewed annually. The benefits of our approach to resource managers include a wider geographic context for reviewing similar small projects in concert with climate change, an approach to analyze cumulative effects of similar actions, and estimation of the actions' long-term effects. Efectos Combinados del Cambio Climático y la Estabilización de Bordes de Ríos Hábitats de Aguas Poco Profundas del Salmón Chinook. Conservation Biology © 2013 Society for Conservation Biology No claim to original US government works.

  17. Developing combination immunotherapies for type 1 diabetes: recommendations from the ITN–JDRF Type 1 Diabetes Combination Therapy Assessment Group

    PubMed Central

    Matthews, J B; Staeva, T P; Bernstein, P L; Peakman, M; von Herrath, M

    2010-01-01

    Like many other complex human disorders of unknown aetiology, autoimmune-mediated type 1 diabetes may ultimately be controlled via a therapeutic approach that combines multiple agents, each with differing modes of action. The numerous advantages of such a strategy include the ability to minimize toxicities and realize synergies to enhance and prolong efficacy. The recognition that combinations might offer far-reaching benefits, at a time when few single agents have yet proved themselves in well-powered trials, represents a significant challenge to our ability to conceive and implement rational treatment designs. As a first step in this process, the Immune Tolerance Network, in collaboration with the Juvenile Diabetes Research Foundation, convened a Type 1 Diabetes Combination Therapy Assessment Group, the recommendations of which are discussed in this Perspective paper. PMID:20629979

  18. Estimation of residual stresses in railroad commuter car wheels following manufacture

    DOT National Transportation Integrated Search

    2003-06-01

    A computer simulation of the manufacturing process of railroad car wheels is described to determine the residual stresses in the wheel following fabrication. Knowledge of, and the ability to predict, these stresses is useful in assessing the ability ...

  19. A comparison of the abilities of the USLE-M, RUSLE2 and WEPP to model event erosion from bare fallow areas.

    PubMed

    Kinnell, P I A

    2017-10-15

    Traditionally, the Universal Soil Loss Equation (USLE) and the revised version of it (RUSLE) have been applied to predicting the long-term average soil loss produced by rainfall erosion in many parts of the world. Over time, it has been recognized that there is a need to predict soil losses over shorter time scales, and this has led to the development of WEPP and RUSLE2, which can be used to predict soil losses generated by individual rainfall events. Data currently exist that enable RUSLE2, WEPP and the USLE-M to estimate historic soil losses from bare fallow runoff and soil loss plots recorded in the USLE database. Comparisons of the abilities of the USLE-M and RUSLE2 to estimate event soil losses from bare fallow were undertaken under circumstances where both models produced the same total soil loss as observed for sets of erosion events on 4 different plots at 4 different locations. Likewise, comparisons of the abilities of the USLE-M and WEPP to model event soil loss from bare fallow were undertaken for sets of erosion events on 4 plots at 4 different locations. Despite being calibrated specifically for each plot, WEPP produced the worst estimates of event soil loss for all 4 plots. Generally, the USLE-M using measured runoff to calculate the product of the runoff ratio, storm kinetic energy and the maximum 30-minute rainfall intensity produced the best estimates. As expected, the ability of the USLE-M to estimate event soil loss was reduced when runoff predicted by either RUSLE2 or WEPP was used. Despite this, the USLE-M using runoff predicted by WEPP estimated event soil loss better than WEPP. RUSLE2 also outperformed WEPP. Copyright © 2017 Elsevier B.V. All rights reserved.
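
    The USLE-M event calculation referred to here follows the familiar multiplicative structure, with the event erosivity taken as the product of the runoff ratio and the storm EI30; the sketch below uses placeholder parameter values, and the event soil-loss factor is not numerically the same as the standard USLE K.

    ```python
    # USLE-M style event soil loss from the QR*EI30 erosivity index (bare fallow: C=P=1).
    def usle_m_event_soil_loss(runoff_ratio, ei30, k_um, ls, c=1.0, p=1.0):
        """Event soil loss (t/ha) = QR * EI30 * K_UM * LS * C * P."""
        return runoff_ratio * ei30 * k_um * ls * c * p

    loss = usle_m_event_soil_loss(runoff_ratio=0.35,   # observed runoff / rainfall
                                  ei30=250.0,          # MJ mm ha^-1 h^-1 for the storm
                                  k_um=0.04,           # illustrative event soil-loss factor
                                  ls=1.2)
    print(f"predicted event soil loss: {loss:.2f} t/ha")
    ```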

  20. Estimation abilities of large numerosities in Kindergartners

    PubMed Central

    Mejias, Sandrine; Schiltz, Christine

    2013-01-01

    The approximate number system (ANS) is thought to be a building block for the elaboration of formal mathematics. However, little is known about how this core system develops and if it can be influenced by external factors at a young age (before the child enters formal numeracy education). The purpose of this study was to examine numerical magnitude representations of 5–6 year old children at 2 different moments of Kindergarten considering children's early number competence as well as schools' socio-economic index (SEI). This study investigated estimation abilities of large numerosities using symbolic and non-symbolic output formats (8–64). In addition, we assessed symbolic and non-symbolic early number competence (1–12) at the end of the 2nd (N = 42) and the 3rd (N = 32) Kindergarten grade. By letting children freely produce estimates we observed surprising estimation abilities at a very young age (from 5 year on) extending far beyond children's symbolic explicit knowledge. Moreover, the time of testing has an impact on the ANS accuracy since 3rd Kindergarteners were more precise in both estimation tasks. Additionally, children who presented better exact symbolic knowledge were also those with the most refined ANS. However, this was true only for 3rd Kindergarteners who were a few months from receiving math instructions. In a similar vein, higher SEI positively impacted only the oldest children's estimation abilities whereas it played a role for exact early number competences already in 2nd and 3rd graders. Our results support the view that approximate numerical representations are linked to exact number competence in young children before the start of formal math education and might thus serve as building blocks for mathematical knowledge. Since this core number system was also sensitive to external components such as the SEI this implies that it can most probably be targeted and refined through specific educational strategies from preschool on. PMID:24009591

  1. Number Sense and Mathematics: Which, When and How?

    PubMed Central

    2017-01-01

    Individual differences in number sense correlate with mathematical ability and performance, although the presence and strength of this relationship differs across studies. Inconsistencies in the literature may stem from heterogeneity of number sense and mathematical ability constructs. Sample characteristics may also play a role as changes in the relationship between number sense and mathematics may differ across development and cultural contexts. In this study, 4,984 16-year-old students were assessed on estimation ability, one aspect of number sense. Estimation was measured using 2 different tasks: number line and dot-comparison. Using cognitive and achievement data previously collected from these students at ages 7, 9, 10, 12, and 14, the study explored for which of the measures and when in development these links are observed, and how strong these links are and how much these links are moderated by other cognitive abilities. The 2 number sense measures correlated modestly with each other (r = .22), but moderately with mathematics at age 16. Both measures were also associated with earlier mathematics; but this association was uneven across development and was moderated by other cognitive abilities. PMID:28758784

  2. [Shaping ability of two nickel-titanium rotary systems in simulated S-shaped canals].

    PubMed

    Luo, Hong-xia; Huang, Ding-ming; Zhang, Fu-hua; Tan, Hong; Zhou, Xue-dong

    2008-01-01

    To evaluate the shaping ability of two nickel-titanium rotary systems (ProTaper and Hero642) in simulated S-shaped canals. Thirty simulated S-shaped canals were randomly divided into three groups and prepared with ProTaper, Hero642, or ProTaper combined with Hero642, respectively. All the canals were scanned before and after instrumentation, and the amount of material removed in the inner and outer wall and the canal width after instrumentation were measured with a computer image analysis program. There was a significant difference in the amount of material removed at the inner side of the apical curvature and the outer side of the apex between ProTaper combined with Hero642 and ProTaper files (P < 0.05) at the same tip size. The inner and outer walls of the canals were evenly prepared by ProTaper combined with Hero642, and the taper of the canals was better than that of canals prepared by Hero642. ProTaper combined with Hero642 had better shaping ability, maintaining the original canal shape and creating well-tapered canals in the simulated S-shaped canal model.

  3. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) Code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. In the Pilot-Guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are required. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is better suited to faster-changing channels than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
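
    A simplified sketch of the first two estimators for BPSK in AWGN is given below; scale factors and conventions (for example, the 2a/sigma^2 LLR scaling) follow common practice and may differ in detail from the presentation.

    ```python
    # Pilot-guided and blind estimation of signal amplitude and noise variance.
    import numpy as np

    rng = np.random.default_rng(7)
    amp_true, sigma_true = 1.0, 0.8
    n = 4096
    bits = rng.integers(0, 2, n) * 2 - 1                    # +/-1 BPSK symbols
    r = amp_true * bits + sigma_true * rng.normal(size=n)   # received soft values
    asm_len = 64
    asm = bits[:asm_len]                                     # known attached sync marker

    # Pilot-guided: amplitude from the inner product with the known ASM,
    # variance from the second moment minus the squared amplitude.
    a_hat = np.mean(r[:asm_len] * asm)
    var_hat = np.mean(r[:asm_len] ** 2) - a_hat ** 2
    print("pilot-guided: a=%.3f  var=%.3f  LLR scale 2a/var=%.3f"
          % (a_hat, var_hat, 2 * a_hat / var_hat))

    # Blind: normalize to unit power so the amplitude lies in (0, 1), then
    # binary-search the fixed point a = mean(r * tanh(a * r / (1 - a^2))).
    rn = r / np.sqrt(np.mean(r ** 2))
    lo, hi = 1e-3, 1.0 - 1e-3
    for _ in range(40):
        a = 0.5 * (lo + hi)
        g = np.mean(rn * np.tanh(a * rn / (1.0 - a ** 2))) - a
        lo, hi = (a, hi) if g > 0 else (lo, a)
    a_blind = 0.5 * (lo + hi)
    var_blind = 1.0 - a_blind ** 2
    scale = np.sqrt(np.mean(r ** 2))                         # undo the normalization
    print("blind       : a=%.3f  var=%.3f" % (a_blind * scale, var_blind * scale ** 2))
    ```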

  4. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777

  5. Association between imagined and actual functional reach (FR): a comparison of young and older adults.

    PubMed

    Gabbard, Carl; Cordova, Alberto

    2013-01-01

    Recent studies indicate that the ability to mentally represent action using motor imagery declines with advanced age (>64 years). As the ability to represent action declines, the elderly may experience increasing difficulty with movement planning and execution. Here, we determined the association between estimation of reach via use of motor imagery and actual FR. Young adults (M=22 years) and older adults (M=66 years) estimated reach while standing with targets randomly presented in peripersonal (within actual reach) and extrapersonal (beyond reach) space. Imagined responses were compared to the individual's scaled maximum reach. FR, also while standing, was assessed using the standardized Functional Reach Test (FRT). Results for total score estimation accuracy showed that there was no difference for age; however, results for mean bias and distribution of error revealed that the older group underestimated while the younger group overestimated. In reference to FR, younger adults outperformed older adults (30 versus 14in.) and most prominent, only the younger group showed a significant relationship between estimation and FR. In addition to gaining insight to the effects of advanced age on the ability to mentally represent action and its association with movement execution, these results although preliminary, may have clinical implications based on the question of whether motor imagery training could improve movement estimations and how that might affect actual reach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. Combined statistical analyses for long-term stability data with multiple storage conditions: a simulation study.

    PubMed

    Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R

    2014-01-01

    Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
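
    Both approaches ultimately apply the same shelf-life rule, which can be sketched as follows for a single batch and storage condition: regress assay on time and take the shelf life as the last time at which the one-sided 95% lower confidence bound for the mean still meets the specification. The data and specification limit below are illustrative.

    ```python
    # ICH Q1E style shelf-life estimate from a single regression line.
    import numpy as np
    from scipy import stats

    months = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
    assay  = np.array([100.2, 99.6, 99.1, 98.7, 98.1, 97.0, 96.2])   # % label claim
    spec   = 95.0

    n = len(months)
    slope, intercept = np.polyfit(months, assay, 1)
    resid = assay - (intercept + slope * months)
    s2 = np.sum(resid ** 2) / (n - 2)                         # residual variance
    sxx = np.sum((months - months.mean()) ** 2)
    t95 = stats.t.ppf(0.95, df=n - 2)

    grid = np.linspace(0, 60, 601)
    mean_pred = intercept + slope * grid
    se_mean = np.sqrt(s2 * (1.0 / n + (grid - months.mean()) ** 2 / sxx))
    lower = mean_pred - t95 * se_mean                         # one-sided 95% lower bound

    ok = grid[lower >= spec]
    print(f"estimated shelf life: {ok[-1]:.1f} months" if ok.size else "no shelf life")
    ```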

  7. Predicting the Ability of Marine Mammal Populations to Compensate for Behavioral Disturbances

    DTIC Science & Technology

    2014-09-30

    ...determine the ability of marine mammal populations to respond to behavioral disturbances. These tools are to be generic and applicable in a wide range...scale consequences. OBJECTIVES • Develop simple, generic measures that allow the estimation of marine mammal populations and individuals to

  8. The relative nature of fertilization success: Implications for the study of post-copulatory sexual selection

    PubMed Central

    2008-01-01

    Background The determination of genetic variation in sperm competitive ability is fundamental to distinguish between post-copulatory sexual selection models based on good-genes vs compatible genes. The sexy-sperm and the good-sperm hypotheses for the evolution of polyandry require additive (intrinsic) effects of genes influencing sperm competitiveness, whereas the genetic incompatibility hypothesis invokes non-additive genetic effects. A male's sperm competitive ability is typically estimated from his fertilization success, a measure that is dependent on the ability of rival sperm competitors to fertilize the ova. It is well known that fertilization success may be conditional to genotypic interactions among males as well as between males and females. However, the consequences of effects arising from the random sampling of sperm competitors upon the estimation of genetic variance in sperm competitiveness have been overlooked. Here I perform simulations of mating trials performed in the context of sibling analysis to investigate whether the ability to detect additive genetic variance underlying the sperm competitiveness phenotype is hindered by the relative nature of fertilization success measurements. Results Fertilization success values render biased sperm competitive ability values. Furthermore, asymmetries among males in the errors committed when estimating sperm competitive abilities are likely to exist as long as males exhibit variation in sperm competitiveness. Critically, random effects arising from the relative nature of fertilization success lead to an underestimation of underlying additive genetic variance in sperm competitive ability. Conclusion The results show that, regardless of the existence of genotypic interactions affecting the output of sperm competition, fertilization success is not a perfect predictor of sperm competitive ability because of the stochasticity of the background used to obtain fertilization success measures. Random effects need to be considered in the debate over the maintenance of genetic variation in sperm competitiveness, and when testing good-genes and compatible-genes processes as explanations of polyandrous behaviour using repeatability/heritability data in sperm competitive ability. These findings support the notion that the genetic incompatibility hypothesis needs to be treated as an alternative hypothesis, rather than a null hypothesis, in studies that fail to detect intrinsic sire effects on the sperm competitiveness phenotype. PMID:18474087

  9. Effect of mechanical properties on erosion resistance of ductile materials

    NASA Astrophysics Data System (ADS)

    Levin, Boris Feliksovih

    Solid particle erosion (SPE) resistance of ductile Fe, Ni, and Co-based alloys as well as commercially pure Ni and Cu was studied. A model for SPE behavior of ductile materials is presented. The model incorporates the mechanical properties of the materials at the deformation conditions associated with the SPE process, as well as the evolution of these properties during the erosion-induced deformation. An erosion parameter was formulated based on consideration of the energy loss during erosion, and incorporates the material's hardness and toughness at high strain rates. The erosion model predicts that materials combining high hardness and toughness can exhibit good erosion resistance. To measure mechanical properties of materials, high strain rate compression tests using the Hopkinson bar technique were conducted at strain rates similar to those during erosion. From these tests, failure strength and strain during erosion were estimated and used to calculate toughness of the materials. The proposed erosion parameter shows good correlation with experimentally measured erosion rates for all tested materials. To analyze subsurface deformation during erosion, microhardness and nanoindentation tests were performed on the cross-sections of the eroded materials, and the size of the plastically deformed zone and the increase in material hardness due to erosion were determined. A nanoindentation method was developed to estimate the restitution coefficient within plastically deformed regions of the eroded samples, which provides a measure of the rebounding ability of a material during particle impact. An increase in hardness near the eroded surface led to an increase in restitution coefficient. Also, the strain rates imposed below the eroded surface were comparable to those measured during high strain-rate compression tests (10^3 to 10^4 s^-1). A new parameter, "area under the microhardness curve", was developed that represents the ability of a material to absorb impact energy. By incorporating this parameter into a new erosion model, good correlation was observed with experimentally measured erosion rates. An increase in area under the microhardness curve led to an increase in erosion resistance. It was shown that an increase in hardness below the eroded surface occurs mainly due to the strain-rate hardening effect. Strain-rate sensitivities of tested materials were estimated from the nanoindentation tests and showed a decrease with an increase in material hardness. Also, materials combining high hardness and strain-rate sensitivity may offer good erosion resistance. A methodology is presented to determine the proper mechanical properties to incorporate into the erosion parameter based on the physical model of the erosion mechanism in ductile materials.

  10. Multiyear high-resolution carbon exchange over European croplands from the integration of observed crop yields into CarbonTracker Europe

    NASA Astrophysics Data System (ADS)

    Combe, Marie; Vilà-Guerau de Arellano, Jordi; de Wit, Allard; Peters, Wouter

    2016-04-01

    Carbon exchange over croplands plays an important role in the European carbon cycle over daily-to-seasonal time scales. Not only do crops occupy one fourth of the European land area, but their photosynthesis and respiration are large and affect CO2 mole fractions at nearly every atmospheric CO2 monitoring site. A better description of this crop carbon exchange in our CarbonTracker Europe data assimilation system - which currently treats crops as unmanaged grasslands - could strongly improve its ability to constrain terrestrial carbon fluxes. Available long-term observations of crop yield, harvest, and cultivated area allow such improvements, when combined with the new crop-modeling framework we present. This framework can model the carbon fluxes of 10 major European crops at high spatial and temporal resolution, on a 12x12 km grid and 3-hourly time-step. The development of this framework is threefold: firstly, we optimize crop growth using the process-based WOrld FOod STudies (WOFOST) agricultural crop growth model. Simulated yields are downscaled to match regional crop yield observations from the Statistical Office of the European Union (EUROSTAT) by estimating a yearly regional parameter for each crop species: the yield gap factor. This step allows us to better represent crop phenology, to reproduce the observed multiannual European crop yields, and to construct realistic time series of the crop carbon fluxes (gross primary production, GPP, and autotrophic respiration, Raut) on a fine spatial and temporal resolution. Secondly, we combine these GPP and Raut fluxes with a simple soil respiration model to obtain the total ecosystem respiration (TER) and net ecosystem exchange (NEE). And thirdly, we represent the horizontal transport of carbon that follows crop harvest and its back-respiration into the atmosphere during harvest consumption. We distribute this carbon using observations of the density of human and ruminant populations from EUROSTAT. We assess the model's ability to represent the seasonal GPP, TER and NEE fluxes using observations at 6 European FluxNet winter wheat and grain maize sites and compare it with the fluxes of the current terrestrial carbon cycle model of CarbonTracker Europe: the Simple Biosphere - Carnegie-Ames-Stanford Approach (SiBCASA) model. We find that the new model framework provides a detailed, realistic, and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal, and serve as a new cropland component in CarbonTracker Europe flux estimates.
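
    The flux bookkeeping described above reduces to a few identities, sketched here with placeholder numbers rather than model output: total ecosystem respiration is the sum of autotrophic and heterotrophic respiration, net ecosystem exchange is respiration minus gross primary production, and harvested carbon is exported and later respired back to the atmosphere where it is consumed.

    ```python
    # Simple cropland carbon flux bookkeeping (illustrative seasonal totals, gC m^-2).
    def cropland_fluxes(gpp, r_aut, r_het, harvest_export):
        ter = r_aut + r_het                   # total ecosystem respiration
        nee = ter - gpp                       # negative = net on-site carbon uptake
        net_with_harvest = nee + harvest_export   # lateral export respired off-site
        return ter, nee, net_with_harvest

    ter, nee, net_total = cropland_fluxes(gpp=1200.0, r_aut=600.0, r_het=300.0,
                                          harvest_export=250.0)
    print(f"TER={ter:.0f}  NEE={nee:.0f}  net including harvest={net_total:.0f} gC m^-2")
    ```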

  11. The combining of multiple hemispheric resources in learning-disabled and skilled readers' recall of words: a test of three information-processing models.

    PubMed

    Swanson, H L

    1987-01-01

    Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions of word recall performance under simultaneous encoding conditions (dichotic listening task) were made from unilateral presentations. The maximum rule model best characterized both ability groups, in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in combining hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.
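
    For concreteness, a sketch of the predictions the three rules named above make for combined (dichotic) recall given two unilateral recall probabilities. These are the textbook forms of the rules; the exact formalisation used in the study is not given in the abstract.

    ```python
    # Predicted dichotic recall from unilateral recall probabilities p_left, p_right.

    def additive(p_left, p_right):
        # resources add: combined performance is the (capped) sum
        return min(1.0, p_left + p_right)

    def independence(p_left, p_right):
        # independent channels: an item is recalled if either hemisphere succeeds
        return 1.0 - (1.0 - p_left) * (1.0 - p_right)

    def maximum_rule(p_left, p_right):
        # performance governed by the stronger (dominant) resource only, so
        # simultaneous encoding is no better than the best unilateral condition
        return max(p_left, p_right)

    p_left, p_right = 0.45, 0.60
    for name, model in [("additive", additive),
                        ("independence", independence),
                        ("maximum rule", maximum_rule)]:
        print(f"{name:>13}: predicted dichotic recall = {model(p_left, p_right):.2f}")
    ```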

  12. Is there an advanced aging effect on the ability to mentally represent action?

    PubMed

    Gabbard, Carl; Caçola, Priscila; Cordova, Alberto

    2011-01-01

    Motor programming theory suggests that an integral component in an effective outcome is an adequate action (mental) representation of the movements; a representation reflected in the ability to use motor imagery. Recent reports show a decline with advanced age (>64 years) using a variety of motor simulation tasks. Here, we examined the possible effects of advanced age on motor imagery ability in the context of estimation of reachability--that is, estimating whether an object is within reach or out of grasp. Thirty young adults (mean age: 20) and 23 older adults (mean age: 77) were instructed to estimate, using motor imagery, whether randomly presented targets in peripersonal (within actual reach) and extrapersonal (beyond reach) space were within or out of reach of their dominant limb while seated. Results indicated that the younger group was significantly more accurate than the older adults, p < 0.001. Whereas both groups made more errors in extrapersonal space, the values were significantly higher for the older group; that is, they overestimated to a greater extent. In summary, these findings add to the general notion that there is a decline in the ability to mentally represent action with advanced age. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Scanning electron acoustic microscopy of indentation-induced cracks and residual stresses in ceramics

    NASA Technical Reports Server (NTRS)

    Cantrell, John H.; Qian, Menglu; Ravichandran, M. V.; Knowles, K. M.

    1990-01-01

    The ability of scanning electron acoustic microscopy (SEAM) to characterize ceramic materials is assessed. SEAM images of Vickers indentations in SiC whisker-reinforced alumina clearly reveal not only the radial cracks, whose length can be used to estimate the fracture toughness of the material, but also strong contrast interpreted as arising from the combined effects of lateral cracks and the residual stress field left in the material by the indenter. The strong contrast is removed after the material is heat treated at 1000 °C to relieve the residual stresses around the indentations. A comparison of these observations with SEAM and reflected polarized-light observations of Vickers indentations in soda-lime glass, both before and after heat treatment, confirms this interpretation of the strong contrast.
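
    One widely used relation for estimating fracture toughness from Vickers radial crack lengths is the Anstis equation; the abstract does not state which relation was applied, so the sketch below is only an illustration, and the load, crack length, modulus, and hardness values are assumed, not taken from the paper.

    ```python
    def anstis_toughness(load_n, crack_half_length_m, youngs_modulus_pa, hardness_pa):
        """Indentation fracture toughness (Pa*m^0.5) from Vickers radial cracks,
        using the Anstis relation K_c = 0.016 * sqrt(E/H) * P / c^1.5."""
        return 0.016 * (youngs_modulus_pa / hardness_pa) ** 0.5 * load_n / crack_half_length_m ** 1.5

    # Illustrative values loosely typical of a whisker-reinforced alumina:
    # 98.1 N Vickers load, 150 um radial crack half-length, E = 400 GPa, H = 18 GPa.
    k_c = anstis_toughness(98.1, 150e-6, 400e9, 18e9)
    print(f"estimated K_c ~ {k_c / 1e6:.1f} MPa*m^0.5")
    ```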

  14. Research requirements for emergency power to permit hover-one-engine-inoperative helicopter operation

    NASA Technical Reports Server (NTRS)

    Yost, J. H.

    1976-01-01

    The research and technology demonstration requirements to achieve emergency-power capability for a civil helicopter are documented. The goal for emergency power is the ability to hover with one engine inoperative, transition to minimum-power forward flight, and continue to a safe landing where emergency power may or may not be required. The best method to obtain emergency power is to augment the basic engine power by increasing the engine's speed and turbine-inlet temperature, combined with water-alcohol injection at the engine inlet. Other methods, including turbine boost power and flywheel energy, offer potential for obtaining emergency power for minimum time durations. Costs and schedules are estimated for a research and development program to bring emergency power through a hardware-demonstration test. Interaction of engine emergency-power capability with other helicopter systems is examined.

  15. A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz

    1990-01-01

    A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent advisor to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.

  16. FAST TRACK COMMUNICATION A DFT + DMFT approach for nanosystems

    NASA Astrophysics Data System (ADS)

    Turkowski, Volodymyr; Kabir, Alamgir; Nayyar, Neha; Rahman, Talat S.

    2010-11-01

    We propose a combined density functional theory and dynamical mean-field theory (DFT + DMFT) approach for the reliable inclusion of electron-electron correlation effects in nanosystems. Compared with the widely used DFT + U approach, this method has several advantages, the most important of which is that it takes dynamical correlation effects into account. The formalism is illustrated through calculations of the magnetic properties of a set of small iron clusters (number of atoms 2 <= N <= 5). It is shown that the inclusion of dynamical effects leads to a reduction in the cluster magnetization (as compared to results from DFT + U) and that, even for such small clusters, the magnetization values agree well with experimental estimates. These results justify confidence in the ability of the method to accurately describe the magnetic properties of clusters of interest to nanoscience.

  17. Which Observatories have the Clearest Skies? A Comparative Analysis of 2004 as Seen by the Night Sky Live Global Network of CONCAMs

    NASA Astrophysics Data System (ADS)

    Pereira, W. E.; Muzzin, V.; Merlo, M.; Shamir, L.; Nemiroff, R. J.; Night Sky Live Collaboration

    2004-12-01

    Nearly identical fisheye CONCAMs are now deployed at many major observatories as part of the Night Sky Live (NSL) global network and return real-time data to http://NightSkyLive.net . Combined, these images create a unique ability to assess and compare the relative ground-truth clarity of the skies above these observatories every few minutes. To this end, data and images from CONCAMs are used to estimate the fraction of time that stars are detectable in at least half the sky for each month of 2004. This preliminary comparison was done by visual inspection of on-line archived CONCAM images. Sites involved include Mauna Kea (Hawaii), Haleakala (Hawaii), Siding Spring (Australia), Canary Islands (Spain), Kitt Peak (Arizona), Cerro Pachon (Chile), Wise (Israel), and Sutherland (South Africa).

  18. Investigation of the Muffling Problem for Airplane Engines

    NASA Technical Reports Server (NTRS)

    Upton, G B; Gage, V R

    1920-01-01

    The experimentation presented in this report falls in two divisions: first, the determination of the relation between back pressure in the exhaust line and consequent power loss, for various combinations of speed and throttle positions of the engine; second, the construction and trial of muffler designs covering both type and size. Report deals with experiments in the development of a muffler designed on the principle which will give the maximum muffling effect with a minimum loss of power. The main body of the work has been done on a Curtiss OX eight-cylinder airplane engine, 4 by 5 inches, rated 70 horsepower at 1,200 revolutions per minute. For estimation of the muffling ability and suppression of "bark" of individual exhausts, the "Ingeco" stationary, single cylinder, 5 1/2 by 10 inch, throttling governed gasoline engine, and occasionally other engines were used.

  19. Estimating thermal performance curves from repeated field observations

    USGS Publications Warehouse

    Childress, Evan; Letcher, Benjamin H.

    2017-01-01

    Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
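
    A minimal sketch of fitting a thermal performance curve to repeated field growth observations. A Gaussian-shaped curve and a least-squares fit are used purely for illustration; the paper fits a hierarchical model with individual and environmental effects in a Bayesian (MCMC) framework, and all data below are simulated.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tpc(temp_c, perf_max, topt_c, breadth_c):
        """Performance modelled as a Gaussian function of temperature (assumed form)."""
        return perf_max * np.exp(-0.5 * ((temp_c - topt_c) / breadth_c) ** 2)

    rng = np.random.default_rng(0)
    temps = rng.uniform(4, 22, size=200)                            # field temperatures (C)
    growth = tpc(temps, 1.2, 14.0, 4.0) + rng.normal(0, 0.1, 200)   # observed growth + noise

    params, _ = curve_fit(tpc, temps, growth, p0=[1.0, 12.0, 5.0])
    print("estimated (perf_max, T_opt, breadth):", np.round(params, 2))
    ```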

  20. Correlator data analysis for the array feed compensation system

    NASA Technical Reports Server (NTRS)

    Iijima, B.; Fort, D.; Vilnrotter, V.

    1994-01-01

    The real-time array feed compensation system is currently being evaluated at DSS 13. This system recovers signal-to-noise ratio (SNR) loss due to mechanical antenna deformations by using an array of seven Ka-band (33.7-GHz) horns to collect the defocused signal fields. The received signals are downconverted and digitized, in-phase and quadrature samples are generated, and combining weights are applied before the samples are recombined. It is shown that when optimum combining weights are employed, the SNR of the combined signal approaches the sum of the channel SNRs. The optimum combining weights are estimated directly from the signals in each channel by the Real-Time Block 2 (RTB2) correlator; since it was designed for very-long-baseline interferometry (VLBI) applications, it can process broadband signals as well as tones to extract the required weight estimates. The estimation algorithms for the optimum combining weights are described for tones and broadband sources. Data recorded in correlator output files can also be used off-line to estimate combiner performance by estimating the SNR in each channel, which was done for data taken during a Jupiter track at DSS 13.
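
    A minimal sketch of SNR-optimal (maximal-ratio) combining of the kind described above: weighting each channel by the conjugate of its gain divided by its noise variance makes the combined SNR equal the sum of the per-channel SNRs. The gains, noise levels, and tone signal here are illustrative, not the DSS 13 measurements or the RTB2 estimation algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_channels = 100_000, 7
    gains = rng.normal(size=n_channels) + 1j * rng.normal(size=n_channels)   # channel gains
    noise_var = rng.uniform(0.5, 2.0, size=n_channels)                       # channel noise variances

    signal = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n_samples))            # unit-power tone
    noise = (rng.normal(size=(n_channels, n_samples)) +
             1j * rng.normal(size=(n_channels, n_samples))) * np.sqrt(noise_var[:, None] / 2)
    received = gains[:, None] * signal + noise

    weights = np.conj(gains) / noise_var          # optimum (maximal-ratio) combining weights
    combined = weights @ received                 # combined baseband samples

    channel_snrs = np.abs(gains) ** 2 / noise_var
    combined_snr = np.abs(weights @ gains) ** 2 / np.sum(np.abs(weights) ** 2 * noise_var)
    print(f"sum of channel SNRs: {channel_snrs.sum():.2f}, combined SNR: {combined_snr:.2f}")
    ```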

  1. Individual snag detection using neighborhood attribute filtered airborne lidar data

    Treesearch

    Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Michael J. Olsen

    2015-01-01

    The ability to estimate and monitor standing dead trees (snags) has been difficult due to their irregular and sparse distribution, often requiring intensive sampling methods to obtain statistically significant estimates. This study presents a new method for estimating and monitoring snags using neighborhood attribute filtered airborne discrete-return lidar data. The...

  2. Estimating effective roughness parameters of the L-MEB model for soil moisture retrieval using passive microwave observations from SMAPVEX12

    USDA-ARS?s Scientific Manuscript database

    Although there have been efforts to improve existing soil moisture retrieval algorithms, the ability to estimate soil moisture from passive microwave observations is still hampered by problems in accurately modeling the observed microwave signal. This paper focuses on the estimation of effective sur...

  3. Stroke survivors over-estimate their medication self-administration (MSA) ability, predicting memory loss.

    PubMed

    Barrett, A M; Galletta, Elizabeth E; Zhang, Jun; Masmela, Jenny R; Adler, Uri S

    2014-01-01

    Medication self-administration (MSA) may be cognitively challenging after stroke, but guidelines are currently lacking for identifying high-functioning stroke survivors who may have difficulty with this task. Complicating this matter, stroke survivors may not be aware of their cognitive problems (cognitive anosognosia) and may over-estimate their MSA competence. The authors wished to evaluate medication self-administration and MSA self-awareness in 24 consecutive acute stroke survivors undergoing inpatient rehabilitation, to determine if they would over-estimate their medication self-administration and if this predicted memory disorder. Stroke survivors were tested on the Hopkins Medication Schedule, and their memory, naming, mood and dexterity were also evaluated; their performance was compared with that of 17 matched controls. The anosognosia ratio indicated MSA over-estimation in stroke survivors compared with controls; no other over-estimation errors were noted relative to controls. A strong correlation was observed between over-estimation of MSA ability and verbal memory deficit, suggesting that formally assessing MSA and MSA self-awareness may help detect cognitive deficits. Assessing medication self-administration and MSA self-awareness may be useful in rehabilitation and successful community return after stroke.

  4. Characterizing Detrended Fluctuation Analysis of multifractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Setty, V. A.; Sharma, A. S.

    2015-02-01

    The Hurst exponent (H) is widely used to quantify long-range dependence in time series data and is estimated using several well-known techniques. Recognizing its ability to remove trends, detrended fluctuation analysis (DFA) is used extensively to estimate the Hurst exponent in non-stationary data. Multifractional Brownian motion (mBm) broadly encompasses a set of models of non-stationary data exhibiting a time-varying Hurst exponent, H(t), as opposed to a constant H. Recently, there has been growing interest in the time dependence of H(t), and sliding-window techniques have been used to estimate a local time average of the exponent. This brought to the fore the ability of DFA to estimate scaling exponents in systems with time-varying H(t), such as mBm. This paper characterizes the performance of DFA on mBm data with linearly varying H(t) and further tests the robustness of the estimated time average with respect to data- and technique-related parameters. Our results serve as a benchmark for using DFA as a sliding-window estimator to obtain H(t) from time series data.
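
    A compact sketch of standard (order-1) DFA for estimating a global scaling exponent from a time series; applying the same estimator in sliding windows, as discussed above, yields a local estimate of H(t). The scale choices and test signal are illustrative.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
        """Estimate a DFA scaling exponent from the slope of log F(s) vs log s."""
        profile = np.cumsum(x - np.mean(x))              # integrated, mean-removed series
        fluct = []
        for s in scales:
            n_seg = len(profile) // s
            segs = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            rms = []
            for seg in segs:                             # detrend each segment linearly
                coeffs = np.polyfit(t, seg, 1)
                rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
            fluct.append(np.mean(rms))
        return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

    rng = np.random.default_rng(2)
    white_noise = rng.normal(size=4096)
    print(f"DFA exponent for white noise: {dfa_exponent(white_noise):.2f}  (expected ~0.5)")
    ```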

  5. Haptic control with environment force estimation for telesurgery.

    PubMed

    Bhattacharjee, Tapomayukh; Son, Hyoung Il; Lee, Doo Yong

    2008-01-01

    The success of telesurgical operations depends on the position-tracking ability of the slave device; improved position tracking of the slave device can lead to safer and less strenuous telesurgical operations. The two-channel force-position control architecture is widely used for better position tracking, but it requires force sensors for direct force feedback. Force sensors may not be a good choice in the telesurgical environment because of their inherent noise and the limitations on where they can be deployed. Hence, environment force estimation is developed using the concept of the robot function parameter matrix and a recursive least squares method. Simulation results show the efficacy of the proposed method: the slave device successfully tracks the position of the master device, and the estimation error quickly becomes negligible.
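
    A generic recursive least squares (RLS) sketch of the kind used to estimate environment parameters from measured motion so the interaction force can be predicted without a force sensor. The linear spring-damper environment model and all numerical values are assumptions for illustration, not necessarily the paper's exact "robot function parameter matrix" formulation.

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, forgetting=0.99):
        """One RLS step for the linear model y = phi @ theta + noise."""
        K = P @ phi / (forgetting + phi @ P @ phi)    # gain vector
        theta = theta + K * (y - phi @ theta)         # parameter update
        P = (P - np.outer(K, phi) @ P) / forgetting   # covariance update
        return theta, P

    rng = np.random.default_rng(3)
    k_true, b_true = 500.0, 4.0                       # illustrative stiffness (N/m), damping (N*s/m)
    theta, P = np.zeros(2), np.eye(2) * 1e3
    for _ in range(2000):
        x, xdot = rng.uniform(0, 0.02), rng.uniform(-0.1, 0.1)
        force = k_true * x + b_true * xdot + rng.normal(0, 0.05)   # "measured" contact force
        theta, P = rls_update(theta, P, np.array([x, xdot]), force)

    print("estimated [stiffness, damping]:", np.round(theta, 2))
    ```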

  6. A Longitudinal Field Study Comparing a Multiplicative and an Additive Model of Motivation and Ability. Technical Report No. 11.

    ERIC Educational Resources Information Center

    Barrett, Gerald V.; And Others

    The relative contribution of motivation to ability measures in predicting performance criteria of sales personnel from successive fiscal periods was investigated. In this context, the merits of a multiplicative and additive combination of motivation and ability measures were examined. The relationship between satisfaction and motivation and…

  7. Memory Abilities in Children with Mathematical Difficulties: Comorbid Language Difficulties Matter

    ERIC Educational Resources Information Center

    Reimann, Giselle; Gut, Janine; Frischknecht, Marie-Claire; Grob, Alexander

    2013-01-01

    The present study investigated cognitive abilities in children with difficulties in mathematics only (n = 48, M = 8 years and 5 months), combined mathematical and language difficulty (n = 27, M = 8 years and 1 month) and controls (n = 783, M = 7 years and 11 months). Cognitive abilities were measured with seven subtests, tapping visual perception,…

  8. Exploring genotypic variations for improved oil content and healthy fatty acids composition in rapeseed (Brassica napus L.).

    PubMed

    Ishaq, Muhammad; Razi, Raziuddin; Khan, Sabaz Ali

    2017-04-01

    Development of new genotypes with high oil content and desirable fatty acid composition is a major objective of rapeseed breeding programmes. In the current study, combining ability was determined for oil, protein, glucosinolate and various fatty acid contents using an 8 × 8 full diallel in rapeseed (Brassica napus). Highly significant genotypic differences were observed for oil, protein, glucosinolate, oleic acid, linolenic acid and erucic acid content. Mean squares due to general combining ability (GCA), specific combining ability (SCA) and reciprocal combining ability (RCA) were highly significant (P ≤ 0.01) for the biochemical traits. Parental line AUP-17 (high oil content and low glucosinolates), genotype AUP-2 (high protein and oleic acid) and AUP-18 (low linolenic and erucic acid) were the best general combiners. Based on desirable SCA effects, the F1 hybrids AUP-17 × AUP-20 (oil content), AUP-2 × AUP-8 (protein content), AUP-7 × AUP-14 (glucosinolates), AUP-2 × AUP-9 (oleic acid), AUP-7 × AUP-14 (linolenic acid) and AUP-2 × AUP-9 (erucic acid) were found superior, each involving at least one best general combiner. Because the reciprocal crosses of AUP-14 with AUP-7 and AUP-8 were superior and had low × low and low × high GCA effects for glucosinolates and oleic acid, respectively, they could be exploited in future rapeseed breeding programmes to develop new lines with good quality. © 2016 Society of Chemical Industry.
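
    A simplified sketch of how GCA and SCA effects can be extracted from a diallel table of cross means using the basic two-way decomposition x_ij ≈ mu + g_i + g_j + s_ij. The study uses a full 8 × 8 diallel with reciprocals and a Griffing-type analysis, whose exact formulas differ; the small symmetric table and oil-content values below are invented purely to illustrate the idea.

    ```python
    import numpy as np

    parents = ["AUP-2", "AUP-7", "AUP-17", "AUP-18"]
    # Symmetric matrix of cross means (diagonal = selfs), percent oil, illustrative.
    x = np.array([
        [44.0, 45.5, 47.2, 44.8],
        [45.5, 43.0, 46.1, 44.0],
        [47.2, 46.1, 48.5, 47.0],
        [44.8, 44.0, 47.0, 43.5],
    ])

    mu = x.mean()
    gca = x.mean(axis=1) - mu                              # general combining ability (simplified)
    sca = x - mu - gca[:, None] - gca[None, :]             # specific combining ability (residual)

    for name, g in zip(parents, gca):
        print(f"GCA({name}) = {g:+.2f}")
    print("SCA(AUP-17 x AUP-2) =", round(sca[2, 0], 2))
    ```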

  9. Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation

    NASA Astrophysics Data System (ADS)

    Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads

    2016-03-01

    Brain atrophy measured from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited by the large degree of variance and consequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy, which we combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.
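
    A standard sketch of how the variance of an atrophy estimator translates into trial sample size: subjects per arm needed to detect a given reduction in the mean annual atrophy rate. The atrophy rates, variances, and effect size below are illustrative, not values from the paper; the point is only that a lower-variance estimator (such as a combined measure) yields a smaller n.

    ```python
    from scipy.stats import norm

    def n_per_arm(mean_atrophy, sd_atrophy, reduction=0.25, alpha=0.05, power=0.80):
        """Two-arm sample size for detecting a fractional reduction in mean atrophy rate."""
        effect = reduction * mean_atrophy
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd_atrophy / effect) ** 2

    # Illustrative annual atrophy rates (%/year): same mean, different estimator noise.
    print(f"high-variance estimator:  n = {n_per_arm(1.0, 1.2):.0f} per arm")
    print(f"lower-variance estimator: n = {n_per_arm(1.0, 0.8):.0f} per arm")
    ```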

  10. Predicting Bobsled Pushing Ability from Various Combine Testing Events.

    PubMed

    Tomasevicz, Curtis L; Ransone, Jack W; Bach, Christopher W

    2018-03-12

    The requisite combination of speed, power, and strength necessary for a bobsled push athlete, coupled with the difficulty in directly measuring pushing ability, makes selecting effective push crews challenging. Current practices by USA Bobsled and Skeleton (USABS) utilize field combine testing to assess and identify specifically selected performance variables in an attempt to best predict push performance abilities. Combine data consisting of 11 physical performance variables were collected from 75 subjects across two winter Olympic qualification years (2009 and 2013). These variables were 15-, 30-, and 60-m sprints, a flying 30-m sprint, a standing broad jump, a shot toss, squat, power clean, body mass, and dry-land brake and side bobsled pushes. Discriminant analysis (DA), in addition to principal component analysis (PCA), was used to investigate two cases (Case 1: Olympians vs. non-Olympians; Case 2: National Team vs. non-National Team). Using these 11 variables, DA led to a classification rule capable of identifying Olympians from non-Olympians and National Team members from non-National Team members with 9.33% and 14.67% misclassification rates, respectively. PCA was used to find similar test variables within the combine that provided redundant or uninformative data. After eliminating the unnecessary variables, DA on the new combinations showed that 8 (Case 1) and 20 (Case 2) other combinations with fewer performance variables yielded misclassification rates as low as 6.67% and 13.33%, respectively. Utilizing fewer performance variables can allow governing bodies in many other sports to create more appropriate combine testing that maximizes accuracy while minimizing irrelevant and redundant testing.
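
    A minimal sketch of the kind of analysis described: reduce correlated combine variables with PCA, classify athletes with discriminant analysis, and report a misclassification rate. The data here are randomly generated stand-ins for the 11 USABS combine variables, and the number of retained components is an arbitrary choice for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    X = rng.normal(size=(75, 11))        # 75 athletes x 11 combine variables (synthetic)
    y = rng.integers(0, 2, size=75)      # 1 = Olympian (illustrative labels)

    model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
    model.fit(X, y)
    misclassification = 1.0 - model.score(X, y)
    print(f"in-sample misclassification rate: {misclassification:.2%}")
    ```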

  11. Standard cost elements for technology programs

    NASA Technical Reports Server (NTRS)

    Christensen, Carisa B.; Wagenfuehrer, Carl

    1992-01-01

    The suitable structure for an effective and accurate cost estimate for general purposes is discussed in the context of a NASA technology program. Cost elements are defined for the research, management, and facility-construction portions of technology programs. Attention is given to mechanisms for ensuring the viability of spending programs, and the need for program managers to effect timely fund disbursement is established. Formal, structured, and intuitive techniques for cost-estimate development are discussed, and cost-estimate defensibility can be improved with increased documentation. NASA policies for cash management are examined to demonstrate the importance of the ability to obligate funds and the ability to cost contracted funds. The NASA approach to consistent cost justification is set forth with a list of standard cost-element definitions. The cost elements reflect the three primary concerns of cost estimates: the identification of major assumptions, the specification of secondary analytic assumptions, and the status of program factors.

  12. Success in everyday physics: The role of personality and academic variables

    NASA Astrophysics Data System (ADS)

    Norvilitis, Jill M.; Reid, Howard M.; Norvilitis, Bret M.

    2002-05-01

    Two studies examined students' intuitive physics ability and characteristics associated with physics competence. In Study 1, although many students did well on a physics quiz, more than 25% of students performed below levels predicted by chance. Better performance on the physics quiz was related to physics grades, highest level of math taken, and students' perceived scholastic competence, but was not related to a number of other hypothesized personality variables. Study 2 further explored personality and academic variables and also examined students' awareness of their own physics ability. Results indicate that the personality variables were again unrelated to ability, but narcissism may be related to subjects' estimates of knowledge. Also, academic variables and how important students think it is to understand the physical world are related to both measured and estimated physics proficiency.

  13. Effects of a kindergarten-based, family-involved intervention on motor performance ability in 3- to 6-year-old children: the ToyBox-study.

    PubMed

    Birnbaum, Julia; Geyer, Christine; Kirchberg, Franca; Manios, Yannis; Koletzko, Berthold

    2017-02-01

    This study aimed to examine the effect of the ToyBox-intervention, a kindergarten-based, family-involved intervention aiming to improve preschoolers' energy-related behaviours (e.g., physical activity), on motor performance ability. Physical activity sessions, classroom activities, environmental changes and tools for parents were the components of the 1-year intervention. The intervention and control groups were cluster-randomised, and children's anthropometry and two motor test items (jumping from side to side, JSS, and standing long jump, SLJ) were assessed. A total of 1293 children (4.6 ± 0.69 years; 52% boys) from 45 kindergartens in Germany were included (intervention, n = 863; control, n = 430). The effect was assessed using generalised estimating equations. The intervention group showed a greater improvement in JSS (estimate 2.19 jumps, P = 0.01) and tended to improve more in SLJ (estimate 2.73 cm, P = 0.08). The intervention was more effective in boys with respect to SLJ (P of interaction effect = 0.01). Children aged <4.5 years did not show a significant benefit, while older children improved (JSS, estimate 3.38 jumps, P = 0.004; SLJ, estimate 4.18 cm, P = 0.04). Children with low socio-economic status improved in JSS (estimate 5.98 jumps, P = 0.0001). The ToyBox-intervention offers an effective strategy to improve specific components of motor performance ability in early childhood. Future programmes should consider additional strategies specifically targeting girls and younger children. BMI: body mass index; SES: socio-economic status; JSS: jumping from side to side; SLJ: standing long jump; SD: standard deviation; GEE: generalised estimating equation.

  14. Aqueous and Tissue Residue-Based Interspecies Correlation Estimation Models Provide Conservative Hazard Estimates for Aromatic Compounds

    EPA Science Inventory

    Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within s...

  15. Store turnover as a predictor of food and beverage provider turnover and associated dietary intake estimates in very remote Indigenous communities.

    PubMed

    Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie

    2016-12-01

    Determine how very remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores only compare with values derived from all community F&B providers. F&B turnover quantities and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data from all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from stores only (and from only the primary store in multiple-store communities) were expressed as a proportion of the complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined stores F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores and from only the primary store closely aligned with the complete provider estimates (≤0.9% absolute). Food types were similar using combined stores, primary store or complete provider turnover. Evaluating combined stores F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.

  16. An empirical comparative study on biological age estimation algorithms with an application of Work Ability Index (WAI).

    PubMed

    Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo

    2010-02-01

    In this study, we describe the characteristics of five different biological age (BA) estimation algorithms: (i) multiple linear regression, (ii) principal component analysis, and the somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the difference of each algorithm's estimate from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than deterioration strongly tied to age. Experiments were conducted on 200 Korean male participants using a BA estimation system designed to be non-invasive, simple to operate, and based on human function. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, was performed. The empirical data confirmed that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates. © 2009 Elsevier Ireland Ltd. All rights reserved.
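
    For the simplest algorithm in the comparison, multiple linear regression, a minimal sketch: regress chronological age on a set of biomarkers and treat the fitted value for an individual as a biological age estimate. The biomarkers and their relationships to age below are synthetic; the study's system uses function-based, non-invasive measurements.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    n = 200
    ca = rng.uniform(30, 70, size=n)                             # chronological age
    biomarkers = np.column_stack([
        ca * 0.8 + rng.normal(0, 5, n),                          # age-related marker 1 (synthetic)
        60 - ca * 0.5 + rng.normal(0, 4, n),                     # age-related marker 2 (synthetic)
        rng.normal(0, 1, n),                                     # uninformative marker
    ])

    model = LinearRegression().fit(biomarkers, ca)
    ba = model.predict(biomarkers)                               # BA estimates = fitted ages
    print("mean |BA - CA|:", round(np.mean(np.abs(ba - ca)), 2), "years")
    ```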

  17. Estimation of dietary nutritional content using an online system with ability to assess the dieticians' accuracy.

    PubMed

    Aoki, Takeshi; Nakai, Shigeru; Yamauchi, Kazunobu

    2006-01-01

    We developed an online system for estimating dietary nutritional content. It also had the function of assessing the accuracy of the participating dieticians and ranking their performance. People who wished to have their meal estimated (i.e. clients) submitted images of their meal taken by digital camera to the server via the Internet, and dieticians estimated the nutritional content (i.e. calorie and protein content). The system assessed the accuracy of the dieticians and if it was satisfactory, the results were sent to the client. Clients received details of the calorie and protein content of their meals within 24 h by email. A total of 93 dieticians (71 students and 22 licensed practitioners) used the system. A two-way analysis of variance showed that there was a significant variation (P=0.004) among dieticians in their ability to estimate both calorie and protein content. There was a significant difference in values of both calorie (P=0.02) and protein (P<0.001) estimation accuracy between student dieticians and licensed dieticians. The estimation accuracy of the licensed nutritionists was 85% (SD 10) for calorie content and 78% (SD 17) for protein content.

  18. The relationship between physical activity and work ability - A cross-sectional study of teachers.

    PubMed

    Grabara, Małgorzata; Nawrocka, Agnieszka; Powerska-Didkowska, Aneta

    2018-01-01

    To assess the relationship between physical activity (PA) and perceived work ability among teachers from Upper Silesia, Poland. The study involved 171 teachers (129 women, 42 men) from primary and secondary schools in Upper Silesia, Poland; physical education teachers were excluded. The level of PA was estimated using the short version of the International Physical Activity Questionnaire, and perceived work ability was estimated using the Work Ability Index (WAI). Male teachers had significantly higher levels of vigorous-intensity PA, moderate-intensity PA, and total weekly PA than female teachers. The World Health Organization (WHO) recommendations were met by 46% of the studied women and 74% of the men. Work ability did not differ between male and female teachers. Work ability was related to age, body mass index (BMI), and PA (vigorous-intensity PA, moderate-intensity PA, total weekly PA). Female teachers with excellent or good WAI had significantly higher levels of vigorous-intensity PA, moderate-intensity PA and total weekly PA than female teachers with moderate or poor WAI. Teachers engaging in high- or moderate-intensity PA could improve their work ability. Further studies should focus on the relation between physical activity and work ability among teachers of various ages and seniority, from both urban and rural schools. Int J Occup Med Environ Health 2018;31(1):1-9. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  19. [Effects of Acupuncture Intervention Combined with Rehabilitation on Standing-balance-walking Ability in Stroke Patients].

    PubMed

    Chu, Jia-mei; Bao, Ye-hua; Zhu, Min

    2015-12-01

    To observe the influence of acupuncture stimulation at the lateral side of Tianzhu (para-BL 10), electroacupuncture (EA) stimulation of the scalp-points Balance Area (MS 14) and Motor Area (MS 6) and body acupoints, combined with rehabilitation training, on standing balance and walking ability in stroke patients. A total of 145 stroke inpatients were randomly assigned to a rehabilitation group (n = 48), a routine acupuncture group (n = 49) and a para-BL 10 group (n = 48). Patients in the rehabilitation group received balance training and routine rehabilitation training; those in the routine acupuncture group received acupuncture stimulation of the scalp-points (MS 14, MS 6) and body acupoints plus balance training and routine rehabilitation training; and those in the para-BL 10 group received acupuncture stimulation at the lateral side of BL 10 combined with the scalp-points MS 14 and MS 6 and body acupoints, plus balance training and routine rehabilitation training. The treatment was conducted once daily, 5 times per week, for 8 weeks. The patients' balance function, lower-limb motor function and walking ability were assessed using the Berg Balance Scale (BBS), the Sheikh Trunk Control Ability Scale (STCAS), the Fugl-Meyer Assessment Scale (FMAS) and the Holden Functional Ambulation Classification (FAC), respectively. After 4 and 8 weeks of treatment, the scores of BBS, STCAS, FMAS and FAC in the rehabilitation, routine acupuncture and para-BL 10 groups were significantly increased, and the 10-m walking time was markedly reduced, compared with pre-treatment values in the same group (P < 0.01). The effects of acupuncture stimulation of para-BL 10 were considerably better than those of both the rehabilitation and routine acupuncture treatments in raising BBS, STCAS, FMAS and FAC scores and in reducing the 10-m walking time (P < 0.05). Acupuncture stimulation at the lateral side of BL 10 combined with scalp-points provides a significant benefit for stroke patients in standing balance and walking ability.

  20. Predictive validity of the Work Ability Index and its individual items in the general population.

    PubMed

    Lundin, Andreas; Leijon, Ola; Vaez, Marjan; Hallgren, Mats; Torgén, Margareta

    2017-06-01

    This study assesses the predictive ability of the full Work Ability Index (WAI) as well as its individual items in the general population. The Work, Health and Retirement Study (WHRS) is a stratified random national sample of 25-75-year-olds living in Sweden in 2000 that received a postal questionnaire (n = 6637, response rate 53%). Current and subsequent sickness absence was obtained from registers. The ability of the WAI to predict long-term sickness absence (LTSA; ≥90 consecutive days) during a period of four years was analysed by logistic regression, from which the area under the receiver operating characteristic curve (AUC) was computed. There were 313 incident LTSA cases among 1786 employed individuals. The full WAI had acceptable ability to predict LTSA during the 4-year follow-up (AUC = 0.79; 95% CI 0.76 to 0.82). Individual items were less stable in their predictive ability; however, three of the individual items (current work ability compared with lifetime best, estimated work impairment due to diseases, and number of diagnosed current diseases) exceeded an AUC of 0.70. Excluding the WAI item on the number of days of sickness absence did not result in inferior predictive ability of the WAI. The full WAI has acceptable predictive validity and is superior to its individual items. For public health surveys, three items may be suitable proxies for the full WAI: current work ability compared with lifetime best, estimated work impairment due to diseases, and number of current diseases diagnosed by a physician.
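
    A minimal sketch of the validation approach: predict long-term sickness absence from a WAI-type score with logistic regression and summarise predictive ability with the AUC. Scores, outcomes, and the assumed risk relationship are simulated, not the WHRS data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(6)
    n = 1786
    wai = rng.uniform(7, 49, size=n)                     # WAI total score (possible range 7-49)
    p_ltsa = 1 / (1 + np.exp(0.15 * (wai - 30)))         # assumed: lower WAI -> higher LTSA risk
    ltsa = rng.binomial(1, p_ltsa)                       # simulated LTSA outcomes

    model = LogisticRegression().fit(wai.reshape(-1, 1), ltsa)
    auc = roc_auc_score(ltsa, model.predict_proba(wai.reshape(-1, 1))[:, 1])
    print(f"AUC for WAI predicting LTSA: {auc:.2f}")
    ```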
