Sample records for model analyses show

  1. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
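The value of full-distribution analyses over bulk moments can be sketched with the simplest constitutive (birth-death) gene model: with transcription rate k and degradation rate γ, the stationary mRNA copy-number distribution is Poisson with mean k/γ, so any (k, γ) pair with the same ratio matches the bulk mean, while transient means (and the full distribution over time) break that degeneracy. The rates below are hypothetical, and this is an illustrative sketch, not the authors' model of IL-1α induction.

```python
import math

def poisson_pmf(n, mean):
    """Stationary copy-number pmf of the birth-death (constitutive) gene model."""
    return math.exp(-mean) * mean ** n / math.factorial(n)

k, gamma = 20.0, 2.0          # hypothetical transcription / degradation rates
mean = k / gamma              # stationary mean: 10 mRNA copies per cell

dist = [poisson_pmf(n, mean) for n in range(60)]
print(sum(dist))                               # ~1.0: a proper distribution
print(sum(n * p for n, p in enumerate(dist)))  # ~10.0: matches the bulk mean

def mean_at(t, k, g):
    """Transient mean after induction; the decay rate γ breaks the k/γ degeneracy."""
    return (k / g) * (1.0 - math.exp(-g * t))

# (20, 2) and (40, 4) share the same stationary mean but differ transiently.
print(mean_at(0.25, 20, 2), mean_at(0.25, 40, 4))
```

Matching only the stationary mean cannot distinguish the two parameter sets; the transient (or full-distribution) data can.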

  2. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    NASA Astrophysics Data System (ADS)

    Hoľko, Michal; Stacho, Jakub

    2014-12-01

The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. The two programs differ in how they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both types of software permit the modelling of pile foundations. The Plaxis software offers advanced material models and can model the effects of groundwater and overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test to better than 95 % accuracy. In comparison, the load-settlement curve calculated using Ansys yields only an approximate estimate, but the software allows large structural systems to be modelled together with their foundation system.

  3. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    PubMed

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
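The principal-component step behind such source factors can be illustrated with a toy two-species example: when two traffic-related species co-vary strongly, the leading eigenvector of their covariance matrix captures almost all of the joint variance and acts as a "traffic" factor. The concentrations below are invented for illustration, not taken from the Philadelphia dataset.

```python
import math

# Hypothetical daily concentrations of two traffic-related species (arbitrary units).
ec  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
no2 = [1.1, 2.1, 2.9, 4.2, 5.1, 5.8]

n = len(ec)
mx, my = sum(ec) / n, sum(no2) / n
sxx = sum((x - mx) ** 2 for x in ec) / (n - 1)
syy = sum((y - my) ** 2 for y in no2) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in zip(ec, no2)) / (n - 1)

# Eigenvalues of the 2x2 covariance matrix in closed form.
half_tr = (sxx + syy) / 2
disc = math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
lam1, lam2 = half_tr + disc, half_tr - disc

explained = lam1 / (lam1 + lam2)
print(round(explained, 3))  # the leading "traffic" factor captures nearly all variance
```

Regressing mortality on factor scores rather than on each collinear species separately is what stabilizes the risk estimates.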

  4. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    PubMed

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement.
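The classical diffusion limit that such PDE methods build on can be checked numerically in the simplest case: iterating the master equation of an unbiased nearest-neighbour walk, the position variance grows linearly in the number of steps, exactly as the diffusion equation u_t = D u_xx predicts with D = δ²/(2τ). A minimal sketch of that baseline case, not of the paper's models:

```python
# Evolve the exact probability distribution of an unbiased lattice walk
# (the master equation behind the diffusion/PDE approximation).
steps = 50
p = {0: 1.0}                      # walker starts at the origin with certainty
for _ in range(steps):
    nxt = {}
    for x, prob in p.items():
        for dx in (-1, 1):        # jump left or right with probability 1/2
            nxt[x + dx] = nxt.get(x + dx, 0.0) + 0.5 * prob
    p = nxt

mean = sum(x * q for x, q in p.items())
var = sum((x - mean) ** 2 * q for x, q in p.items())
print(round(var, 6))              # variance = number of steps, as 2Dt predicts
```

For biased walks or non-smooth kernels the discrepancy between the walk and its PDE approximation is exactly what the paper quantifies.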

  5. The Neo Personality Inventory-Revised: Factor Structure and Gender Invariance from Exploratory Structural Equation Modeling Analyses in a High-Stakes Setting

    ERIC Educational Resources Information Center

    Furnham, Adrian; Guenole, Nigel; Levine, Stephen Z.; Chamorro-Premuzic, Tomas

    2013-01-01

    This study presents new analyses of NEO Personality Inventory-Revised (NEO-PI-R) responses collected from a large British sample in a high-stakes setting. The authors show the appropriateness of the five-factor model underpinning these responses in a variety of new ways. Using the recently developed exploratory structural equation modeling (ESEM)…

  6. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an apriori distribution with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that changing the true value of one parameter relative to another shifts the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods outperform those of maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum-likelihood estimate.
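For independent competing causes, the system survival function is simply the product of the cause-specific Weibull survival functions. A minimal sketch with hypothetical shape/scale parameters, not the paper's simulation settings:

```python
import math

def weibull_surv(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/scale)**shape)."""
    return math.exp(-((t / scale) ** shape))

# Two independent competing failure causes (hypothetical shape, scale pairs).
causes = [(1.5, 1000.0), (0.8, 5000.0)]

def system_surv(t):
    """Series-system survival: the product over independent causes."""
    s = 1.0
    for shape, scale in causes:
        s *= weibull_surv(t, shape, scale)
    return s

print(round(system_surv(500.0), 4))  # below either single-cause survival
```

Competing-risk estimation then asks which cause-specific parameters best explain the observed failure times and causes; the paper does this with Bayesian and maximum-likelihood fits.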

  7. Software Quality Evaluation Models Applicable in Health Information and Communications Technologies. A Review of the Literature.

    PubMed

    Villamor Ordozgoiti, Alberto; Delgado Hito, Pilar; Guix Comellas, Eva María; Fernandez Sanchez, Carlos Manuel; Garcia Hernandez, Milagros; Lluch Canut, Teresa

    2016-01-01

The use of Information and Communications Technologies in healthcare has increased the need to consider quality criteria through standardised processes. The aim of this study was to analyse the software quality evaluation models applicable to healthcare from the perspective of ICT purchasers. Through a systematic literature review with the keywords software, product, quality, evaluation and health, we selected and analysed 20 original research papers published from 2005 to 2016 in health science and technology databases. The results showed four main topics: non-ISO models, software quality evaluation models based on ISO/IEC standards, studies analysing software quality evaluation models, and studies analysing ISO standards for software quality evaluation. The models provide cost-efficiency criteria for specific software and improve use outcomes. The ISO/IEC 25000 standard is shown to be the most suitable for evaluating the quality of ICTs for healthcare use from the perspective of institutional acquisition.

  8. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
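The equal-variance Gaussian case makes the likelihood-ratio decision axis concrete: with new items ~ N(0, 1) and old items ~ N(d′, 1), the log likelihood ratio is d′x − d′²/2, the unbiased criterion (LR = 1) sits at x = d′/2, and the log-LR means of the two item classes are symmetric about zero, which is the mirror effect. A sketch of that textbook case, not the authors' code:

```python
import math

def lr(x, dprime):
    """Likelihood ratio of 'old' (N(d',1)) to 'new' (N(0,1)) at evidence x."""
    old = math.exp(-(x - dprime) ** 2 / 2)
    new = math.exp(-x ** 2 / 2)
    return old / new

d = 1.5
print(lr(d / 2, d))   # 1.0: the unbiased criterion lies midway between the means

# Mirror effect: strengthening items (larger d') pushes the old and new
# log-likelihood-ratio distributions symmetrically apart around zero.
for dp in (1.0, 2.0):
    mu_new = -dp ** 2 / 2   # mean log-LR for new items
    mu_old = +dp ** 2 / 2   # mean log-LR for old items
    print(dp, mu_new, mu_old)
```

The unequal-variance and other distributional models in the paper change the form of LR(x) but preserve these qualitative regularities.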

  9. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, by contrast shows convergence problems. The random-effects parameters are generally underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach.
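The logit scale on which both margins are modelled can be sketched as follows. This deliberately ignores the between-study correlation and random effects that the bivariate model exists to capture: it is a fixed-effect, univariate logit pooling with hypothetical study values, not any of the implementations compared in the paper.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical per-study sensitivity/specificity pairs.
sens = [0.97, 0.95, 0.92, 0.98]
spec = [0.87, 0.90, 0.84, 0.88]

# Pool each margin on the logit scale, then transform back.
pooled_sens = expit(sum(map(logit, sens)) / len(sens))
pooled_spec = expit(sum(map(logit, spec)) / len(spec))
print(round(pooled_sens, 3), round(pooled_spec, 3))
```

The bivariate mixed models evaluated in the paper replace these simple averages with jointly estimated random effects, which is where the software implementations start to differ.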

  10. Assessing potential effects of highway runoff on receiving-water quality at selected sites in Oregon with the Stochastic Empirical Loading and Dilution Model (SELDM)

    USGS Publications Warehouse

    Risley, John C.; Granato, Gregory E.

    2014-01-01

An analysis of the use of grab sampling and nonstochastic upstream modeling methods was done to evaluate the potential effects on modeling outcomes. Additional analyses using surrogate water-quality datasets for the upstream basin and highway catchment were provided for six Oregon study sites to illustrate the risk-based information that SELDM will produce. These analyses show that the potential effects of highway runoff on receiving-water quality downstream of the outfall depend on the ratio of drainage areas (dilution), the quality of the receiving water upstream of the highway, and the criterion concentration for the constituent of interest. These analyses also show that the probability of exceeding a water-quality criterion may depend on the input statistics used, so careful selection of representative values is important.
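The dilution step in such analyses is a flow-weighted mass balance at the outfall; SELDM evaluates it stochastically over many synthetic storms, but its deterministic core can be sketched as below. The flows and concentrations are hypothetical.

```python
def downstream_conc(q_up, c_up, q_hw, c_hw):
    """Flow-weighted mass balance of upstream flow and highway runoff at the outfall."""
    return (q_up * c_up + q_hw * c_hw) / (q_up + q_hw)

# Hypothetical flows (cfs) and constituent concentrations (mg/L).
print(round(downstream_conc(90.0, 0.02, 10.0, 0.50), 3))   # 0.068
print(round(downstream_conc(990.0, 0.02, 10.0, 0.50), 4))  # larger basin ratio dilutes more
```

Drawing q and c from storm-event distributions and counting how often the mixed concentration exceeds a criterion gives the risk-based output the abstract describes.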

  11. WEPP and ANN models for simulating soil loss and runoff in a semi-arid Mediterranean region.

    PubMed

    Albaradeyia, Issa; Hani, Azzedine; Shahrour, Isam

    2011-09-01

This paper presents the use of both the Water Erosion Prediction Project (WEPP) model and an artificial neural network (ANN) for the prediction of runoff and soil loss in the central mountainous highlands of the Palestinian territories. Analyses show that soil erosion is highly dependent on both the rainfall depth and the rainfall event duration, rather than on the rainfall intensity as mostly mentioned in the literature. The results obtained from the WEPP model for soil loss and runoff disagree with the field data: the WEPP model underestimates both the runoff and the soil loss. Analyses conducted with the ANN agree well with the observations. In addition, the global network models developed using data from all land-use types show a relatively unbiased estimation of both runoff and soil loss. The study showed that the ANN model could be used as a management tool for predicting runoff and soil loss.

  12. Using Toulmin analysis to analyse an instructor's proof presentation in abstract algebra

    NASA Astrophysics Data System (ADS)

    Fukawa-connelly, Timothy

    2014-01-01

    This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of written models for their work. Similarly, the analysis shows that the details the instructor says aloud differ from what she writes down. Although her verbal commentary provides additional detail and appears to have pedagogical value, for instance, by modelling thinking that supports proof writing, this value might be better realized if she were to change her teaching practices.

  13. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
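The Boolean core of a viewshed can be sketched in one dimension: a cell is visible when the sight-line slope from the observer meets or exceeds the maximum slope of all nearer terrain. The profile heights below are invented for illustration; real analyses run this test along 2-D rays over the surface model.

```python
def viewshed(profile, observer_height=1.7):
    """Boolean visibility along a 1-D terrain profile, observer at cell 0."""
    eye = profile[0] + observer_height
    visible = [True]                   # the observer's own cell
    max_slope = float("-inf")
    for d, z in enumerate(profile[1:], start=1):
        slope = (z - eye) / d          # sight-line slope to this cell
        visible.append(slope >= max_slope)
        max_slope = max(max_slope, slope)
    return visible

# Hypothetical surface-model heights (m); the 12 m ridge hides the cells behind it.
profile = [5, 6, 7, 12, 8, 8, 8]
print(viewshed(profile))
```

Extended viewsheds replace the Boolean test with quantities such as the angular difference between each cell and the local horizon, which the same slope bookkeeping already provides.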

  14. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

    Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
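The B-spline basis underlying such random regression models can be evaluated with the Cox-de Boor recursion. The sketch below builds the quadratic basis with knots at 0, 200, 400, 600 and 821 days (as in the model the abstract singles out) and checks the partition-of-unity property that makes the basis well behaved; it is an illustration, not the mixed-model fit itself.

```python
def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the i-th degree-k B-spline at x."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] != t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

# Quadratic basis on the paper's knots, with endpoint knots repeated (clamped).
knots = [0, 0, 0, 200, 400, 600, 821, 821, 821]
degree = 2
nbasis = len(knots) - degree - 1     # 6 basis functions

for age in (100, 300, 700):
    vals = [bspline_basis(i, degree, knots, age) for i in range(nbasis)]
    print(age, round(sum(vals), 6))  # partition of unity: each row sums to 1
```

Because each basis function is nonzero over only a few knot spans, sparse data at one age affects only the local coefficients, which is the "end-of-range" robustness the abstract notes.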

  15. Flutter suppression via piezoelectric actuation

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer

    1991-01-01

Experimental flutter results obtained from wind tunnel tests of a two degree of freedom wind tunnel model are presented for the open and closed loop systems. The wind tunnel model is a two degree of freedom system which is actuated by piezoelectric plates configured as bimorphs. The model design was based on finite element structural analyses and flutter analyses. A control law was designed based on a discrete system model; gain feedback of strain measurements was utilized in the control task. The results show a 21 percent increase in the flutter speed.

  16. Analyses for Debonding of Stitched Composite Sandwich Structures Using Improved Constitutive Models

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Sleight, D. W.; Krishnamurthy, T.; Raju, I. S.

    2001-01-01

    A fracture mechanics analysis based on strain energy release rates is used to study the effect of stitching in bonded sandwich beam configurations. Finite elements are used to model the configurations. The stitches were modeled as discrete nonlinear spring elements with a compliance determined by experiment. The constitutive models were developed using the results of flatwise tension tests from sandwich material rather than monolithic material. The analyses show that increasing stitch stiffness, stitch density and debond length decrease strain energy release rates for a fixed applied load.

  17. Observing the atmosphere in moisture space

    NASA Astrophysics Data System (ADS)

    Schulz, Hauke; Stevens, Bjorn

    2017-04-01

Processes behind convective aggregation have mostly been analysed and identified on the basis of relatively idealized cloud resolving model studies. Relatively little effort has been spent on using observations to test or quantify the findings coming from the models. In 2010 the Barbados Cloud Observatory (BCO) was established on Barbados, which is on the edge of the ITCZ, in part to test hypotheses such as those emerging from the analysis of cloud resolving models. To better test ideas related to the driving forces of convective aggregation, we analyse BCO measurements to identify the processes changing the moist static energy flux in moisture space, i.e., as a function of ranked column water vapour. Similar approaches are used to analyse cloud resolving models. We composite five years of cloud and water vapour profiles, from a cloud radar and a Raman water vapour lidar, to construct the structure of the observed atmosphere in moisture space. The data show both agreement and disagreement with the models: radiative transfer calculations of the cross-section reveal a strong anomalous radiative cooling in the boundary layer at the dry end of the moisture space. We show that this radiation, mainly in the long-wave, implies a shallow circulation. This circulation agrees generally with the supplementary reanalysis datasets used, but its strength and extent vary more markedly across the analyses. Consistent with the modelling, the implied radiatively driven circulation supports the aggregation process by importing net moist static energy into the moist regimes.

  18. The Living Cell as a Multi-agent Organisation: A Compositional Organisation Model of Intracellular Dynamics

    NASA Astrophysics Data System (ADS)

    Jonker, C. M.; Snoep, J. L.; Treur, J.; Westerhoff, H. V.; Wijngaards, W. C. A.

    Within the areas of Computational Organisation Theory and Artificial Intelligence, techniques have been developed to simulate and analyse dynamics within organisations in society. Usually these modelling techniques are applied to factories and to the internal organisation of their process flows, thus obtaining models of complex organisations at various levels of aggregation. The dynamics in living cells are often interpreted in terms of well-organised processes, a bacterium being considered a (micro)factory. This suggests that organisation modelling techniques may also benefit their analysis. Using the example of Escherichia coli it is shown how indeed agent-based organisational modelling techniques can be used to simulate and analyse E.coli's intracellular dynamics. Exploiting the abstraction levels entailed by this perspective, a concise model is obtained that is readily simulated and analysed at the various levels of aggregation, yet shows the cell's essential dynamic patterns.

  19. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  20. Zero adjusted models with applications to analysing helminths count data.

    PubMed

    Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N

    2014-11-27

It is common in public health and epidemiology that the outcome of interest is counts of event occurrence. Analysing these data using classical linear models is mostly inappropriate, even after transformation of outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when over-dispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero modified models (zero inflated Poisson and zero inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the others. This paper showed that the zero modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
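The model comparison the paper performs can be sketched for the simplest pair, Poisson versus zero-inflated Poisson (ZIP): with excess zeros, the ZIP likelihood gains far more than its one extra parameter costs in AIC. The counts below are invented, and the crude grid search stands in for a proper maximum-likelihood fitter.

```python
import math

# Hypothetical egg-count data with a large excess of zeros.
counts = [0] * 40 + [1] * 5 + [2] * 8 + [3] * 4 + [5] * 2 + [8] * 1

def pois_ll(lam, data):
    """Poisson log-likelihood."""
    return sum(-lam + k * math.log(lam) - math.lgamma(k + 1) for k in data)

def zip_ll(pi, lam, data):
    """Zero-inflated Poisson log-likelihood: P(0) = pi + (1-pi)e^-lam."""
    ll = 0.0
    for k in data:
        if k == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) - lam + k * math.log(lam) - math.lgamma(k + 1)
    return ll

lam_mle = sum(counts) / len(counts)          # Poisson MLE is the sample mean
aic_pois = 2 * 1 - 2 * pois_ll(lam_mle, counts)

# Crude grid search over (pi, lam) -- a sketch, not a real optimizer.
best_ll = max(zip_ll(p / 100, l / 10, counts)
              for p in range(1, 99) for l in range(1, 80))
aic_zip = 2 * 2 - 2 * best_ll
print(aic_pois > aic_zip)  # True: ZIP wins despite the extra parameter
```

The same AIC comparison, extended to negative binomial and hurdle variants, is how the paper selects NBLH and ZINB.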

  1. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue `Advances in risk assessment for climate change adaptation policy'.
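The cost-benefit core of such prioritization can be sketched as a net-present-value comparison: discounted avoided expected annual damage versus up-front protection cost. All figures below are hypothetical, not the study's Mexican estimates.

```python
def npv_of_protection(cost, ead_before, ead_after, rate=0.05, horizon=50):
    """Net present value of a flood-protection investment (cost-benefit sketch)."""
    annual_benefit = ead_before - ead_after   # avoided expected annual damage
    pv = sum(annual_benefit / (1 + rate) ** t for t in range(1, horizon + 1))
    return pv - cost

# Hypothetical: raising protection cuts expected annual damage from 40 to 5
# (million USD) for a 300 million USD investment over a 50-year horizon.
print(round(npv_of_protection(cost=300.0, ead_before=40.0, ead_after=5.0), 1))
```

Ranking states or protection standards by this NPV (with climate scenarios shifting the damage curves) is the prioritization logic the abstract describes.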

  2. Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF

    NASA Technical Reports Server (NTRS)

    Hou, Arthur; Zhang, Sara; Reale, Oreste

    2002-01-01

    Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses with satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.

  3. Toward a Model-Based Approach to Flight System Fault Protection

    NASA Technical Reports Server (NTRS)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships, which has great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  4. Causal Mediation Analysis of Survival Outcome with Multiple Mediators.

    PubMed

    Huang, Yen-Tsung; Yang, Hwai-I

    2017-05-01

    Mediation analyses have been a popular approach to investigate the effect of an exposure on an outcome through a mediator. Mediation models with multiple mediators have been proposed for continuous and dichotomous outcomes. However, development of multimediator models for survival outcomes is still limited. We present methods for multimediator analyses using three survival models: Aalen additive hazard models, Cox proportional hazard models, and semiparametric probit models. Effects through mediators can be characterized by path-specific effects, for which definitions and identifiability assumptions are provided. We derive closed-form expressions for path-specific effects for the three models, which are intuitively interpreted using a causal diagram. Mediation analyses using Cox models under the rare-outcome assumption and Aalen additive hazard models consider effects on the log hazard ratio and the hazard difference, respectively; analyses using semiparametric probit models consider effects on the difference in transformed survival time and on survival probability. The three models were applied to a hepatitis study in which we investigated effects of hepatitis C on liver cancer incidence mediated through baseline and/or follow-up hepatitis B viral load. The three methods show consistent results on their respective effect scales, which suggest an adverse estimated effect of hepatitis C on liver cancer not mediated through hepatitis B, and a protective estimated effect mediated through the baseline (and possibly follow-up) hepatitis B viral load. Causal mediation analyses of survival outcomes with multiple mediators are thus developed for additive hazard, proportional hazard, and probit models, with utility demonstrated in a hepatitis study.

  5. Shortwave forcing and feedbacks in Last Glacial Maximum and Mid-Holocene PMIP3 simulations.

    PubMed

    Braconnot, Pascale; Kageyama, Masa

    2015-11-13

    Simulations of the climates of the Last Glacial Maximum (LGM), 21 000 years ago, and of the Mid-Holocene (MH), 6000 years ago, allow an analysis of climate feedbacks in climate states that are radically different from today. The analyses of cloud and surface albedo feedbacks show that the shortwave cloud feedback is a major driver of differences between model results. Similar behaviours appear when comparing the LGM and MH simulated changes, highlighting the fingerprint of model physics. Even though the different feedbacks show similarities between the different climate periods, the fact that their relative strength differs from one climate to the other prevents a direct comparison of past and future climate sensitivity. The land-surface feedback also shows large disparities among models, even though they all produce positive sea-ice and snow feedbacks. Models have very different sensitivities when considering the vegetation feedback. This feedback has a regional pattern that differs significantly between models and depends on their level of complexity and model biases. Analyses of the MH climate in two versions of the IPSL model provide further indication of the possibility of assessing the role of model biases and model physics in simulated climate changes, using past climates for which observations can be used to assess the model results. © 2015 The Author(s).

  6. Adding thin-ideal internalization and impulsiveness to the cognitive-behavioral model of bulimic symptoms.

    PubMed

    Schnitzler, Caroline E; von Ranson, Kristin M; Wallace, Laurel M

    2012-08-01

    This study evaluated the cognitive-behavioral (CB) model of bulimia nervosa and an extension that included two additional maintaining factors - thin-ideal internalization and impulsiveness - in 327 undergraduate women. Participants completed measures of demographics, self-esteem, concern about shape and weight, dieting, bulimic symptoms, thin-ideal internalization, and impulsiveness. Both the original CB model and the extended model provided good fits to the data. Although structural equation modeling analyses suggested that the original CB model was most parsimonious, hierarchical regression analyses indicated that the additional variables accounted for significantly more variance. Additional analyses showed that the model fit could be improved by adding a path from concern about shape and weight, and deleting the path from dieting, to bulimic symptoms. Expanding upon the factors considered in the model may better capture the scope of variables maintaining bulimic symptoms in young women with a range of severity of bulimic symptoms. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs performing static analyses. Further, we report important lessons learned about the benefits and drawbacks of the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  8. Seal Analysis for the Ares-I Upper Stage Fuel Tank Manhole Cover

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Wingate, Robert J.

    2010-01-01

    Techniques for studying the performance of Naflex pressure-assisted seals in the Ares-I Upper Stage liquid hydrogen tank manhole cover seal joint are explored. To assess the feasibility of using the identical seal design for the Upper Stage as was used for the Space Shuttle External Tank manhole covers, a preliminary seal deflection analysis using the ABAQUS commercial finite element software is employed. The ABAQUS analyses are performed using three-dimensional symmetric wedge finite element models. This analysis technique is validated by first modeling a heritage External Tank liquid hydrogen tank manhole cover joint and correlating the results to heritage test data. Once the technique is validated, the Upper Stage configuration is modeled. The Upper Stage analyses are performed at 1.4 times the expected pressure to comply with the Constellation Program factor of safety requirement on joint separation. Results from the analyses performed with the External Tank and Upper Stage models demonstrate the effects of several modeling assumptions on the seal deflection. The analyses for the Upper Stage show that the integrity of the seal is successfully maintained.

  9. The Representation of Extra-tropical Cyclones in Recent Re-Analyses: ERA-Interim, NASA-MERRA, NCEP-CFS and JRA25

    NASA Astrophysics Data System (ADS)

    Hodges, K.

    2010-12-01

    Re-analyses are produced using a forecast model, a data assimilation system and historical observations. Whilst the observations are common between the re-analyses, the way they are assimilated and the forecast model used often differ, which can introduce uncertainty in the representation of particular phenomena, for example the distribution and properties of weather systems. It is important to inter-compare re-analyses to determine the uncertainty in their representation of the atmosphere, its circulation and weather systems, in order to have confidence in their use for studies of the atmosphere and for validating climate models. Four recent re-analyses, ERA-Interim, NASA-MERRA, NCEP-CFS and JRA25, are explored and compared for their representation of synoptic-scale extra-tropical cyclones. Previous studies of the older re-analyses, ERA40, NCEP-NCAR and DOE, have shown that whilst in the NH there was relatively good agreement between the re-analyses in terms of the distribution and properties of extra-tropical cyclones, in the SH there was much larger uncertainty. The newest re-analyses are produced at much higher resolutions than previous re-analyses, and more modern data assimilation systems and forecast models have been used; hence, it would be hoped that the representation of cyclones is improved to the same extent as that seen in modern NWP systems. This study contrasts extra-tropical cyclones, their distribution and properties, between these new re-analyses and compares them with cyclones in the slightly older, lower-resolution JRA25 re-analysis. Results show that, in general, more cyclones are identified in the higher-resolution re-analyses than in JRA25. In the NH the distribution of storms agrees as well as, if not better than, was the case for the older re-analyses. However, it is in the SH that the largest improvement in agreement is seen for the distribution of storms. For ERA-Interim, NASA-MERRA and NCEP-CFS the agreement in the SH is almost as good as in the NH, with the best agreement occurring between ERA-Interim and NCEP-CFS. However, the comparison with JRA25 shows the same level of uncertainty as seen with the older re-analyses. Determining the separation distances of storms using storm matching confirms these results. The biggest differences between the re-analyses occur for the intensity of storms, with NASA-MERRA consistently having the strongest extreme storms in terms of pressure and winds and JRA25 the weakest; ERA-Interim and NCEP-CFS are very similar in this respect. Using vorticity as an intensity measure shows the greatest sensitivity, which scales with resolution. If time permits, a comparison of the structure of the storms will also be presented. The approach used only highlights the uncertainty between the re-analyses; it does not say which one is right. To begin to address this, some early results of comparing the re-analyses directly with scatterometer observations of low-level winds in the vicinity of storms will also be presented if time permits.

  10. Investigation of the mechanical behaviour of the foot skin.

    PubMed

    Fontanella, C G; Carniel, E L; Forestiero, A; Natali, A N

    2014-11-01

    The aim of this work was to provide computational tools for characterizing the actual mechanical behaviour of foot skin, accounting for results from experimental testing and histological investigation. Such results show the typical features of skin mechanics, such as anisotropic configuration, almost incompressible behaviour, and material and geometrical non-linearity. The anisotropic behaviour is mainly determined by the distribution of collagen fibres along specific directions, usually identified as cleavage lines. To evaluate the biomechanical response of foot skin, a refined numerical model of the foot is developed. The overall mechanical behaviour of the skin is interpreted by a fibre-reinforced hyperelastic constitutive model, and the orientation of the cleavage lines is implemented by a specific procedure. Numerical analyses that represent typical loading conditions of the foot are performed. The influence of fibre orientation and distribution on skin mechanics is also outlined by comparison with results from an isotropic scheme. A specific constitutive formulation is provided to characterize the mechanical behaviour of foot skin. The formulation is applied within a numerical model of the foot to investigate skin functionality during typical foot movements. Numerical analyses accounting for the actual anisotropic configuration of the skin show lower maximum principal stress fields than results from isotropic analyses. The developed computational models provide reliable tools for investigating the functionality of foot tissues. Furthermore, the comparison between numerical results from anisotropic and isotropic models shows the optimal configuration of foot skin. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Regression and multivariate models for predicting particulate matter concentration level.

    PubMed

    Nazif, Amina; Mohammed, Nurul Izma; Malakahmad, Amirhossein; Abualqumboz, Motasem S

    2018-01-01

    The devastating health effects of particulate matter (PM10) exposure in susceptible populations have made it necessary to evaluate PM10 pollution. Meteorological parameters and seasonal variation increase PM10 concentration levels, especially in areas with multiple anthropogenic activities. Hence, stepwise regression (SR), multiple linear regression (MLR) and principal component regression (PCR) analyses were used to analyse daily average PM10 concentration levels. The analyses were carried out using daily average PM10 concentration, temperature, humidity, wind speed and wind direction data from 2006 to 2010. The data were from an industrial air quality monitoring station in Malaysia. The SR analysis established that meteorological parameters had less influence on PM10 concentration levels, with coefficient of determination (R²) values from 23 to 29% for the seasoned and unseasoned analyses. The prediction analysis showed that PCR models had better R² values than the MLR methods: for both seasoned and unseasoned data, MLR models had R² values from 0.50 to 0.60, while PCR models had R² values from 0.66 to 0.89. In addition, a validation analysis using 2016 data also showed that the PCR model outperformed the MLR model, with the PCR model for the seasoned analysis giving the best result. These analyses will aid in achieving sustainable air quality management strategies.
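
    The MLR-versus-PCR comparison described above can be illustrated with a minimal sketch: both model types are fit to synthetic data with correlated meteorological predictors. The data, coefficients, and scikit-learn usage below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: temperature, humidity, wind speed, wind direction.
X = rng.normal(size=(n, 4))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n)  # deliberately collinear pair
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)  # PM10 proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# MLR: ordinary least squares on the raw predictors.
mlr = LinearRegression().fit(X_tr, y_tr)
# PCR: standardize, project onto leading principal components, then regress.
pcr = make_pipeline(StandardScaler(), PCA(n_components=3),
                    LinearRegression()).fit(X_tr, y_tr)

mlr_r2 = r2_score(y_te, mlr.predict(X_te))
pcr_r2 = r2_score(y_te, pcr.predict(X_te))
print("MLR R2:", round(mlr_r2, 3), "PCR R2:", round(pcr_r2, 3))
```

    In this toy setup the two models perform similarly; PCR's advantage in the study plausibly reflects stronger collinearity and noise in the real monitoring data, which the component truncation filters out.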

  12. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico.

    PubMed

    Haer, Toon; Botzen, W J Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M; Ward, Philip J

    2018-06-13

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funding is required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that states along the Gulf of Mexico in particular derive considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'. © 2018 The Author(s).
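
    The cost-benefit logic used to prioritize protection investments can be sketched as a toy net-present-value calculation: discounted avoided expected annual damages compared against the up-front protection cost. The discount rate, horizon, and figures below are hypothetical, not taken from the study.

```python
def npv(avoided_damage: float, cost: float,
        rate: float = 0.04, horizon: int = 50) -> float:
    """Net present value of a protection investment: the discounted stream
    of avoided expected annual damages minus the up-front cost."""
    benefits = sum(avoided_damage / (1 + rate) ** t
                   for t in range(1, horizon + 1))
    return benefits - cost

# An investment is economically beneficial when NPV > 0.
print(npv(avoided_damage=10.0, cost=100.0) > 0)   # large avoided damages
print(npv(avoided_damage=1.0, cost=100.0) > 0)    # small avoided damages
```

    Ranking candidate protection standards by NPV of this kind is the basic mechanism behind "economically optimal" adaptation in such studies, with the avoided-damage term supplied by the flood risk model.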

  13. Ambiguities in model-independent partial-wave analysis

    NASA Astrophysics Data System (ADS)

    Krinner, F.; Greenwald, D.; Ryabchikov, D.; Grube, B.; Paul, S.

    2018-06-01

    Partial-wave analysis is an important tool for analyzing large data sets in hadronic decays of light and heavy mesons. It commonly relies on the isobar model, which assumes multihadron final states originate from successive two-body decays of well-known undisturbed intermediate states. Recently, analyses of heavy-meson decays and diffractively produced states have attempted to overcome the strong model dependences of the isobar model. These analyses have overlooked that model-independent, or freed-isobar, partial-wave analysis can introduce mathematical ambiguities in results. We show how these ambiguities arise and present general techniques for identifying their presence and for correcting for them. We demonstrate these techniques with specific examples in both heavy-meson decay and pion-proton scattering.

  14. Modeling fish community dynamics in Florida Everglades: Role of temperature variation

    USGS Publications Warehouse

    Al-Rabai'ah, H. A.; Koh, H. L.; DeAngelis, Donald L.; Lee, Hooi-Ling

    2002-01-01

    The model shows that temperature-dependent starvation mortality is an important factor influencing fish population densities. It also shows high fish population densities in temperature ranges where the temperature-dependent consumption requirement is at a minimum. Several sensitivity analyses involving variations in temperature terms, food resources and water levels were conducted to ascertain the relative importance of the temperature-dependence terms.

  15. The geomagnetic jerk of 1969 and the DGRFs

    USGS Publications Warehouse

    Thompson, D.; Cain, J.C.

    1987-01-01

    Cubic spline fits to the DGRF/IGRF series indicate agreement with other analyses showing the 1969-1970 magnetic jerk in the ḣ¹₂ and ġ⁰₂ secular-change coefficients, and agreement that the ḣ¹₁ term showed no sharp change. The variation of the ġ⁰₁ term is out of phase with other analyses, indicating a likely error in its representation in the 1965-1975 interval. We recommend that future derivations of the 'definitive' geomagnetic reference models take into consideration the times of impulses or jerks, so as not to be bound to a standard 5-year interval, and otherwise make more considered analyses before adopting sets of coefficients. © 1987.
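
    A full cubic-spline analysis of the DGRF coefficients is beyond a short sketch, but the underlying idea, detecting a jerk as a break in the slope of a secular-variation series around 1969, can be illustrated by comparing a single-line fit with a broken-slope fit. All values below are synthetic, not DGRF data.

```python
import numpy as np

# Hypothetical annual values of a secular-variation coefficient (nT/yr)
# with a slope change ("jerk") at 1969, plus observational noise.
years = np.arange(1960.0, 1981.0)
rng = np.random.default_rng(1)
sv = np.where(years < 1969, 0.5 * (years - 1969), -0.3 * (years - 1969))
sv += rng.normal(scale=0.05, size=years.size)

# Model A: one straight line through the whole interval.
one = np.polyfit(years, sv, 1)
res_one = np.sum((sv - np.polyval(one, years)) ** 2)

# Model B: two lines broken at 1969.
pre, post = years < 1969, years >= 1969
res_two = sum(
    np.sum((sv[m] - np.polyval(np.polyfit(years[m], sv[m], 1), years[m])) ** 2)
    for m in (pre, post)
)
print(res_two < res_one)  # the broken-slope model fits markedly better
```

    A spline constrained to standard 5-year knots cannot place a break at 1969, which is essentially the record's argument for choosing knot times with jerks in mind.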

  16. Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Elrod, David Alan

    1988-01-01

    The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.
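
    The flat-plate friction model family referenced above is commonly written in a Blasius-type power-law form, f = n·Reᵐ. The sketch below uses the classic Blasius coefficients as stand-ins, since the thesis's fitted entrance/exit values are not given here.

```python
def friction_factor(re: float, n: float = 0.079, m: float = -0.25) -> float:
    """Blasius-type power-law friction factor f = n * Re**m, the usual
    empirical form in bulk-flow annular-seal analyses (coefficients assumed)."""
    return n * re ** m

print(round(friction_factor(1e4), 5))  # -> 0.0079
```

    Empirical entrance- and exit-region models of the kind developed in the thesis would, in effect, substitute locally fitted (n, m) pairs for the single flat-plate pair used over the whole seal length.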

  17. Genetic signatures of natural selection in a model invasive ascidian

    NASA Astrophysics Data System (ADS)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-03-01

    Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data with six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.

  18. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  19. Molecular classification of benign prostatic hyperplasia: A gene expression profiling study in a rat model.

    PubMed

    Hata, Junya; Satoh, Yuichi; Akaihata, Hidenori; Hiraki, Hiroyuki; Ogawa, Soichiro; Haga, Nobuhiro; Ishibashi, Kei; Aikawa, Ken; Kojima, Yoshiyuki

    2016-07-01

    To characterize the molecular features of benign prostatic hyperplasia by carrying out a gene expression profiling analysis in a rat model. Fetal urogenital sinus isolated from 20-day-old male rat embryo was implanted into a pubertal male rat ventral prostate. The implanted urogenital sinus grew time-dependently, and the pathological findings at 3 weeks after implantation showed epithelial hyperplasia as well as stromal hyperplasia. Whole-genome oligonucleotide microarray analysis utilizing approximately 30 000 oligonucleotide probes was carried out using prostate specimens during the prostate growth process (3 weeks after implantation). Microarray analyses showed 926 upregulated (>2-fold change, P < 0.01) and 3217 downregulated genes (<0.5-fold change, P < 0.01) in benign prostatic hyperplasia specimens compared with normal prostate. Gene ontology analyses of upregulated genes showed predominant genetic themes of involvement in development (162 genes, P = 2.01 × 10⁻⁴), response to stimulus (163 genes, P = 7.37 × 10⁻¹³) and growth (32 genes, P = 1.93 × 10⁻⁵). When we used both normal prostate and non-transplanted urogenital sinuses as controls to identify benign prostatic hyperplasia-specific genes, 507 and 406 genes were upregulated and downregulated, respectively. Functional network and pathway analyses showed that genes associated with apoptosis modulation by heat shock protein 70, interleukin-1, interleukin-2 and interleukin-5 signaling pathways, KIT signaling pathway, and secretin-like G-protein-coupled receptors, class B, were relatively activated during the growth process in the benign prostatic hyperplasia specimens. In contrast, genes associated with cholesterol biosynthesis were relatively inactivated. Our microarray analyses of the benign prostatic hyperplasia model rat might aid in clarifying the molecular mechanism of benign prostatic hyperplasia progression, and identifying molecular targets for benign prostatic hyperplasia treatment. © 2016 The Japanese Urological Association.
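
    The fold-change and significance thresholds quoted above (>2-fold or <0.5-fold with P < 0.01) amount to a simple vectorized filter over per-gene statistics. A minimal sketch, with hypothetical expression values:

```python
import numpy as np

# Hypothetical per-gene fold changes (BPH vs. normal) and P-values.
fold = np.array([2.5, 0.4, 1.1, 3.0])
pval = np.array([0.001, 0.005, 0.2, 0.0001])

# The abstract's selection criteria as boolean masks.
up = (fold > 2.0) & (pval < 0.01)     # upregulated genes
down = (fold < 0.5) & (pval < 0.01)   # downregulated genes

print(int(up.sum()), int(down.sum()))  # -> 2 1
```

    Applied across all ~30 000 probes, masks of this kind yield the reported 926 upregulated and 3217 downregulated gene sets, which then feed the gene ontology and pathway analyses.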

  20. Calculations vs. measurements of remnant dose rates for SNS spent structures

    NASA Astrophysics Data System (ADS)

    Popova, I. I.; Gallmeier, F. X.; Trotter, S.; Dayton, M.

    2018-06-01

    Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify the calculation methods for radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, a complex and rigorous geometry model of the structures and their surroundings is applied. The neutronics analyses were carried out using the Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculational results were analysed and compared to the measured dose rates, and overall they show good agreement, within 40% on average.

  1. Calculations vs. measurements of remnant dose rates for SNS spent structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Irina I.; Gallmeier, Franz X.; Trotter, Steven M.

    Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify the calculation methods for radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, a complex and rigorous geometry model of the structures and their surroundings is applied. The neutronics analyses were carried out using the Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculational results were analysed and compared to the measured dose rates, and overall they show good agreement, within 40% on average.

  2. Dynamical Analysis in the Mathematical Modelling of Human Blood Glucose

    ERIC Educational Resources Information Center

    Bae, Saebyok; Kang, Byungmin

    2012-01-01

    We want to apply the geometrical method to a dynamical system of human blood glucose. Due to the educational importance of model building, we show a relatively general modelling process using observational facts. Next, two models of some concrete forms are analysed in the phase plane by means of linear stability, phase portrait and vector…

  3. RY-Coding and Non-Homogeneous Models Can Ameliorate the Maximum-Likelihood Inferences From Nucleotide Sequence Data with Parallel Compositional Heterogeneity.

    PubMed

    Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo

    2012-01-01

    In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. This potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former converts the four bases into purine and pyrimidine to normalize base frequencies across a tree, while the latter explicitly incorporates the heterogeneity in base frequency. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined in simulation studies. Here, we assessed the performance of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on data simulated with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be significantly affected by the setting of the substitution process used for sequence simulation, relative to that of the non-homogeneous analysis. The performance of the non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
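
    RY-coding itself is a simple recoding of the four bases into purines (R: A, G) and pyrimidines (Y: C, T), which by construction equalizes "base" frequencies across taxa. A minimal sketch:

```python
# Purine/pyrimidine recoding table; U could be added for RNA if needed.
RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code(seq: str) -> str:
    """Recode a nucleotide sequence to RY symbols.
    Ambiguity codes and gaps pass through unchanged."""
    return "".join(RY.get(base, base) for base in seq.upper())

print(ry_code("ACGTacgtN"))  # -> RYRYRYRYN
```

    After recoding, a two-state substitution model is used in the likelihood analysis, trading away transition information for robustness to compositional heterogeneity.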

  4. The net benefits of human-ignited wildfire forecasting: the case of Tribal land units in the United States

    PubMed Central

    Prestemon, Jeffrey P.; Butry, David T.; Thomas, Douglas S.

    2017-01-01

    Research shows that some categories of human-ignited wildfires might be forecastable, due to their temporal clustering, with the possibility that resources could be pre-deployed to help reduce the incidence of such wildfires. We estimated several kinds of incendiary and other human-ignited wildfire forecast models at the weekly time step for tribal land units in the United States, evaluating their forecast skill out of sample. Analyses show that an Autoregressive Conditional Poisson (ACP) model of both incendiary and non-incendiary human-ignited wildfires is more accurate out of sample compared to alternatives, and the simplest of the ACP models performed the best. Additionally, an ensemble of these and simpler, less analytically intensive approaches performed even better. Wildfire hotspot forecast models using all model types were evaluated in a simulation mode to assess the net benefits of forecasts in the context of law enforcement resource reallocations. Our analyses show that such hotspot tools could yield large positive net benefits for the tribes in terms of suppression expenditures averted for incendiary wildfires but that the hotspot tools were less likely to be beneficial for addressing outbreaks of non-incendiary human-ignited wildfires. PMID:28769549

  5. The net benefits of human-ignited wildfire forecasting: the case of Tribal land units in the United States.

    PubMed

    Prestemon, Jeffrey P; Butry, David T; Thomas, Douglas S

    2016-01-01

    Research shows that some categories of human-ignited wildfires might be forecastable, due to their temporal clustering, with the possibility that resources could be pre-deployed to help reduce the incidence of such wildfires. We estimated several kinds of incendiary and other human-ignited wildfire forecast models at the weekly time step for tribal land units in the United States, evaluating their forecast skill out of sample. Analyses show that an Autoregressive Conditional Poisson (ACP) model of both incendiary and non-incendiary human-ignited wildfires is more accurate out of sample compared to alternatives, and the simplest of the ACP models performed the best. Additionally, an ensemble of these and simpler, less analytically intensive approaches performed even better. Wildfire hotspot forecast models using all model types were evaluated in a simulation mode to assess the net benefits of forecasts in the context of law enforcement resource reallocations. Our analyses show that such hotspot tools could yield large positive net benefits for the tribes in terms of suppression expenditures averted for incendiary wildfires but that the hotspot tools were less likely to be beneficial for addressing outbreaks of non-incendiary human-ignited wildfires.
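
    The Autoregressive Conditional Poisson model used in both versions of this study makes weekly counts depend on past counts through a conditional-mean recursion, which is what captures the temporal clustering of human-ignited wildfires. A minimal ACP(1,1) simulator, with hypothetical parameters rather than the study's fitted values:

```python
import numpy as np

def simulate_acp(omega: float, alpha: float, beta: float,
                 n: int, seed: int = 0):
    """Simulate ACP(1,1) counts: y_t ~ Poisson(lam_t) with
    lam_t = omega + alpha * y_{t-1} + beta * lam_{t-1}."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n, dtype=int)
    lam = np.empty(n)
    lam[0] = omega / (1.0 - alpha - beta)  # unconditional mean (alpha+beta<1)
    y[0] = rng.poisson(lam[0])
    for t in range(1, n):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
        y[t] = rng.poisson(lam[t])
    return y, lam

y, lam = simulate_acp(omega=0.5, alpha=0.3, beta=0.4, n=5000)
print(round(y.mean(), 2))  # sample mean near omega / (1 - alpha - beta)
```

    In a forecasting application, weeks with elevated conditional mean lam_t would be flagged as hotspots for pre-deploying law enforcement resources.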

  6. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  7. Examining Moderation Analyses in Propensity Score Methods: Application to Depression and Substance Use

    PubMed Central

    Green, Kerry M.; Stuart, Elizabeth A.

    2014-01-01

Objective This study provides guidance on how propensity score methods can be combined with moderation analyses (i.e., effect modification) to examine subgroup differences in potential causal effects in non-experimental studies. As a motivating example, we focus on how depression may affect subsequent substance use differently for men and women. Method Using data from a longitudinal community cohort study (N=952) of urban African Americans with assessments in childhood, adolescence, young adulthood and midlife, we estimate the influence of depression by young adulthood on substance use outcomes in midlife, and whether that influence varies by gender. We illustrate and compare five different techniques for estimating subgroup effects using propensity score methods, including separate propensity score models and matching for men and women, a joint propensity score model for men and women with matching separately and together by gender, and a joint male/female propensity score model that includes theoretically important gender interactions with matching separately and together by gender. Results Analyses showed that estimating separate models for men and women yielded the best balance and is therefore the preferred technique when subgroup analyses are of interest, at least in these data. Results also showed substance use consequences of depression but no significant gender differences. Conclusions It is critical to prespecify subgroup effects before the estimation of propensity scores and to check balance within subgroups regardless of the type of propensity score model used. Results also suggest that depression may affect multiple substance use outcomes in midlife for both men and women relatively equally. PMID:24731233
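The "separate models per subgroup" strategy can be illustrated with a toy numpy sketch: fit one propensity model per gender, match treated to control units on the score, and check covariate balance via the standardized mean difference. Everything here (the single confounder `x`, the data-generating coefficients, the plain gradient-ascent logistic fit) is invented for illustration and is not the study's actual model.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=3000):
    """Plain gradient-ascent logistic regression (intercept added); illustrative only."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return 1.0 / (1.0 + np.exp(-Xb @ w)), w

def smd(x, treat):
    """Standardized mean difference of covariate x between treated and controls."""
    a, b = x[treat == 1], x[treat == 0]
    return (a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                     # subgroup indicator
x = rng.normal(size=n) + 0.5 * gender              # confounder
treat = (rng.random(n) < 1 / (1 + np.exp(-(x - 0.5)))).astype(int)

balance = {}                                       # subgroup -> (pre-, post-match |SMD|)
for g in (0, 1):                                   # separate propensity model per subgroup
    m = gender == g
    ps, _ = fit_logistic(x[m, None], treat[m])
    xt, xc = x[m][treat[m] == 1], x[m][treat[m] == 0]
    pst, psc = ps[treat[m] == 1], ps[treat[m] == 0]
    # 1-nearest-neighbour matching (with replacement) on the propensity score
    idx = np.abs(pst[:, None] - psc[None, :]).argmin(axis=1)
    matched_x = np.concatenate([xt, xc[idx]])
    matched_t = np.concatenate([np.ones(len(xt)), np.zeros(len(idx))])
    balance[g] = (abs(smd(x[m], treat[m])), abs(smd(matched_x, matched_t)))

print(balance)
```

Checking balance within each subgroup after matching, as the conclusions recommend, corresponds here to inspecting the post-match SMD entry for each gender.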

  8. Challenges and opportunities in analysing students modelling

    NASA Astrophysics Data System (ADS)

    Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín

    2017-02-01

Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them - the model of modelling diagram (MMD) - as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used to characterise the modelling process experienced by students working in small groups, aiming to create and test a model of a sedimentary basin from the information provided. The study was conducted in a regular Biology and Geology classroom (16-17-year-old students). Data were collected through video recording of the classes, along with written reports and the material models made by each group. The results show the complexity of adapting the MMD at two levels: the group modelling and the actual requirements of the activity. Our main challenges were to capture the modelling process of each individual as well as of the group, and to identify, from students' speech, which stage of modelling they were performing at a given time. In facing these challenges, we propose some changes to the MMD so that it can be properly used to analyse students performing modelling activities in groups.

  9. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets

    PubMed Central

    Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.

    2017-01-01

High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for the high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data are the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell counts or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787
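Why allow overdispersion in cell counts? A tiny simulation makes the point: a sample-level random effect makes counts vary far more than a plain Poisson model expects, which is exactly the extra variation the mixed-model random-effect terms absorb. The sample size, cluster mean and random-effect spread below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Counts for one cell cluster across 12 samples: each sample has its own
# expected count (a multiplicative random effect), with Poisson noise on top.
mu = 100 * rng.lognormal(mean=0.0, sigma=0.5, size=12)
counts = rng.poisson(mu)

# Pearson dispersion relative to a plain Poisson with a common mean;
# values far above 1 indicate extra-Poisson (between-sample) variation.
mean = counts.mean()
dispersion = np.sum((counts - mean) ** 2 / mean) / (len(counts) - 1)
print(dispersion)
```

A dispersion statistic well above 1 is the signal that a fixed-effects Poisson model would understate uncertainty, motivating the GLMM approach described above.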

  10. Statistical Selection of Biological Models for Genome-Wide Association Analyses.

    PubMed

    Bi, Wenjian; Kang, Guolian; Pounds, Stanley B

    2018-05-24

Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc.) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a GWAS. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
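The idea of scoring the five biological models can be sketched with ordinary least squares and BIC. The paper's actual method and its false-discovery-rate control are more involved; the genotype codings, simulated effect size and BIC criterion here are illustrative assumptions.

```python
import numpy as np

def bic_ols(X, y):
    """BIC of a Gaussian linear model fit by least squares (intercept included)."""
    Xb = np.ones((len(y), 1)) if X is None else np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    rss = float(np.sum((y - Xb @ beta) ** 2))
    n, k = len(y), Xb.shape[1] + 1          # coefficients plus error variance
    return n * np.log(rss / n) + k * np.log(n)

def genotype_codings(g):
    """Design columns implied by each candidate biological model for a 0/1/2 genotype."""
    return {
        "null":       None,
        "additive":   g[:, None].astype(float),
        "dominant":   (g > 0)[:, None].astype(float),
        "recessive":  (g == 2)[:, None].astype(float),
        "codominant": np.column_stack([g == 1, g == 2]).astype(float),
    }

# Simulate one variant whose true effect is dominant, then score all five models.
rng = np.random.default_rng(3)
g = rng.integers(0, 3, size=2000)
y = 1.0 * (g > 0) + rng.normal(size=g.size)

bics = {name: bic_ols(X, y) for name, X in genotype_codings(g).items()}
best = min(bics, key=bics.get)
print(best, bics)
```

Because the codominant model nests the dominant one, the BIC penalty is what lets the simpler, correct model win when the extra parameter buys little fit.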

  11. Accounting for genotype uncertainty in the estimation of allele frequencies in autopolyploids.

    PubMed

    Blischak, Paul D; Kubatko, Laura S; Wolfe, Andrea D

    2016-05-01

    Despite the increasing opportunity to collect large-scale data sets for population genomic analyses, the use of high-throughput sequencing to study populations of polyploids has seen little application. This is due in large part to problems associated with determining allele copy number in the genotypes of polyploid individuals (allelic dosage uncertainty-ADU), which complicates the calculation of important quantities such as allele frequencies. Here, we describe a statistical model to estimate biallelic SNP frequencies in a population of autopolyploids using high-throughput sequencing data in the form of read counts. We bridge the gap from data collection (using restriction enzyme based techniques [e.g. GBS, RADseq]) to allele frequency estimation in a unified inferential framework using a hierarchical Bayesian model to sum over genotype uncertainty. Simulated data sets were generated under various conditions for tetraploid, hexaploid and octoploid populations to evaluate the model's performance and to help guide the collection of empirical data. We also provide an implementation of our model in the R package polyfreqs and demonstrate its use with two example analyses that investigate (i) levels of expected and observed heterozygosity and (ii) model adequacy. Our simulations show that the number of individuals sampled from a population has a greater impact on estimation error than sequencing coverage. The example analyses also show that our model and software can be used to make inferences beyond the estimation of allele frequencies for autopolyploids by providing assessments of model adequacy and estimates of heterozygosity. © 2015 John Wiley & Sons Ltd.
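A minimal version of the dosage-marginalizing idea is a grid approximation rather than the hierarchical MCMC implemented in polyfreqs: for each individual, the likelihood sums over the unknown allele dosage, weighting each dosage by its HWE prior. The coverage, sample size and true frequency below are invented for the example.

```python
import numpy as np
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def allele_freq_posterior(ref_reads, tot_reads, ploidy=4):
    """Grid posterior for allele frequency p, summing over each individual's
    unknown allele dosage (HWE prior on dosage, flat prior on p)."""
    grid = np.linspace(0.001, 0.999, 999)
    logpost = np.zeros_like(grid)
    for r, n in zip(ref_reads, tot_reads):
        like = np.zeros_like(grid)
        for a in range(ploidy + 1):
            prior_a = comb(ploidy, a) * grid**a * (1 - grid)**(ploidy - a)
            like += prior_a * binom_pmf(r, n, a / ploidy)
        logpost += np.log(like)
    post = np.exp(logpost - logpost.max())
    return grid, post / post.sum()

# Simulate a tetraploid population: true allele frequency 0.3, 20 individuals,
# 10 reads each; reads are binomial in the hidden dosage.
rng = np.random.default_rng(6)
true_p, n_ind, cov = 0.3, 20, 10
dosage = rng.binomial(4, true_p, size=n_ind)
reads = rng.binomial(cov, dosage / 4)
grid, post = allele_freq_posterior(reads, np.full(n_ind, cov))
print((grid * post).sum())   # posterior mean
```

Marginalizing over dosage rather than calling a single "best" genotype is what lets low-coverage data contribute honestly, echoing the finding that the number of individuals matters more than coverage.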

  12. Are Different Students Expected to Learn Norms Differently in the Mathematics Classroom?

    ERIC Educational Resources Information Center

    Planas, Nuria; Gorgorio, Nuria

    2004-01-01

    We analyse social interactions during the first days of class in a secondary mathematics classroom (15 and 16-year-olds) with a high percentage of immigrant students. Our analyses show the co-existence of different models for both the interpretation and the use of classroom social norms and socio-mathematical norms. Valorising some behaviours over…

  13. Using Toulmin Analysis to Analyse an Instructor's Proof Presentation in Abstract Algebra

    ERIC Educational Resources Information Center

    Fukawa-Connelly, Timothy

    2014-01-01

    This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of…

  14. Multifractal analysis of a GCM climate

    NASA Astrophysics Data System (ADS)

    Carl, P.

    2003-04-01

Multifractal analysis using the Wavelet Transform Modulus Maxima (WTMM) approach is being applied to the climate of a Mintz-Arakawa type, coarse-resolution, two-layer AGCM. The model shows a backwards-running period-multiplication scenario throughout the northern summer, subsequent to a 'hard', subcritical Hopf bifurcation late in spring. This 'route out of chaos' (seen in cross sections of a toroidal phase-space structure) is born in the planetary monsoon system, which inflates the seasonal 'cycle' into these higher-order structures and is blamed for the pronounced intraseasonal-to-centennial model climate variability. Previous analyses of the latter using advanced modal decompositions showed regularity-based patterns in the time-frequency plane which are qualitatively similar to those obtained from the real world. The closer look here at the singularity structures, as a fundamental diagnostic supplement, aims both at a more complete understanding (and quantification) of the model's qualitative dynamics and at further tools for model intercomparison and verification in this respect. The analysing wavelet is the 10th derivative of the Gaussian, which might suffice to suppress regular patterns in the data. Intraseasonal attractors, studied in time series of model precipitation over Central India, show singularity spectra that shift and broaden towards both more violent extreme events (premonsoon-monsoon transition) and weaker events (late summer to postmonsoon transition). Hints of a fractal basin boundary are found close to the transition from period-2 to period-1 in the monsoon activity cycle. Interannual analyses are provided for runs with varied solar constants. To address the (in)stationarity issue, first results are presented from a windowed multifractal analysis of longer-term runs ("singularity spectrogram").

  15. Retention of community college students in online courses

    NASA Astrophysics Data System (ADS)

    Krajewski, Sarah

The issue of attrition in online courses at higher learning institutions remains a high priority in the United States. The recent rapid growth of online courses at community colleges has been driven by student demand, as such courses fit the time constraints of the many nontraditional community college students who must work and care for dependents. Failure in an online course can cause students to become frustrated with the college experience or financially burdened, or even to give up and leave college. Attrition could be reduced with proper guidance about which students are best suited to online courses. This study examined factors related to retention (i.e., course completion) and success (i.e., receiving a C or better) in an online biology course at a community college in the Midwest by operationalizing student characteristics (age, race, gender), student skills (whether or not the student met the criteria to be placed in an AFP course), and external factors (Pell recipient, full/part-time status, first term) from the persistence model developed by Rovai. Internal factors from this model were not included in this study. Both univariate analyses and multivariate logistic regression were used to analyze the variables. Results suggest that race and Pell recipient were both predictive of course completion in univariate analyses. However, multivariate analyses showed that age, race, academic load and first term were predictive of completion and Pell recipient was no longer predictive. The univariate results for receiving a C or better showed that age, race, Pell recipient, academic load, and meeting AFP criteria were predictive of success. Multivariate analyses showed that only age, race, and Pell recipient were significant predictors of success. Both regression models explained very little (<15%) of the variability within the outcome variables of retention and success.
Therefore, although significant predictors were identified for course completion and retention, there are still many factors that remain unaccounted for in both regression models. Further research into the operationalization of Rovai's model, including internal factors, to predict completion and success is necessary.

  16. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
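The cost of ignoring within-host correlation can be demonstrated with a short null simulation: repeated measures on each host share a host effect, and a naive test that treats every measurement as independent rejects far too often, while a test on host means keeps roughly its nominal error rate. (Analysing host means is a simple fix for balanced designs; the mixed models discussed above generalize it. All simulation settings below are invented.)

```python
import numpy as np
from math import erf, sqrt

def welch_p(a, b):
    """Two-sample Welch test p-value using a normal approximation (illustrative)."""
    t = (a.mean() - b.mean()) / sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))

rng = np.random.default_rng(11)
n_sims, hosts_per_group, reps = 500, 10, 5
naive_fp = means_fp = 0
for _ in range(n_sims):
    # Null model: no group effect; each host has its own random intercept (sd 1)
    # plus measurement noise (sd 0.5) on each of its repeated measures.
    host_fx = rng.normal(0, 1, size=(2, hosts_per_group))
    data = host_fx[..., None] + rng.normal(0, 0.5, size=(2, hosts_per_group, reps))
    naive_fp += welch_p(data[0].ravel(), data[1].ravel()) < 0.05   # pseudo-replication
    means_fp += welch_p(data[0].mean(axis=1), data[1].mean(axis=1)) < 0.05
print(naive_fp / n_sims, means_fp / n_sims)
```

The naive false-positive rate is several times the nominal 5%, which is exactly the "incorrect biological inference" the authors warn about.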

  17. Analysing and controlling the tax evasion dynamics via majority-vote model

    NASA Astrophysics Data System (ADS)

    Lima, F. W. S.

    2010-09-01

Within the context of agent-based Monte Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabási-Albert networks, and Erdős-Rényi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighbourhood of the critical noise qc to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust: it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on all of the topologies cited above, giving the same behaviour regardless of the dynamics or topology used.
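A minimal majority-vote dynamics on a square lattice shows the role of the noise parameter q: well below the critical noise the lattice stays ordered (consensus, i.e. widespread compliance in the Zaklan interpretation), while at high noise the order parameter collapses. The lattice size, sweep count and the two q values are illustrative choices, and this sketch omits the Zaklan model's enforcement rules.

```python
import numpy as np

def mvm_magnetization(q, L=20, sweeps=300, seed=7):
    """Majority-vote model with noise q on an L x L periodic lattice; returns |m|."""
    rng = np.random.default_rng(seed)
    spins = np.ones((L, L), dtype=int)             # start from full consensus
    for _ in range(sweeps * L * L):                # random single-site updates
        i, j = rng.integers(0, L, size=2)
        s = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
             + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        maj = int(np.sign(s)) if s != 0 else int(rng.choice((-1, 1)))
        # adopt the local majority with probability 1-q, oppose it with probability q
        spins[i, j] = maj if rng.random() > q else -maj
    return abs(spins.mean())

m_low, m_high = mvm_magnetization(0.02), mvm_magnetization(0.40)
print(m_low, m_high)
```

On the square lattice the MVM's critical noise is around q ≈ 0.075, so the two values above sit firmly in the ordered and disordered phases respectively.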

  18. Evidence of a major gene from Bayesian segregation analyses of liability to osteochondral diseases in pigs.

    PubMed

    Kadarmideen, Haja N; Janss, Luc L G

    2005-11-01

Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC records, and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of the MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81) compared to the polygenic variance (σu²). Consequently, heritabilities under mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes alone. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The MITM variants with an informative prior on σu² showed significant improvement in the marginal distributions and in the accuracy of parameters. An MITM with a "reduced polygenic model" for the parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability of these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for understanding the genetic inheritance of this disease in other animals as well as in humans.

  19. The UK Earth System Models Marine Biogeochemical Evaluation Toolkit, BGC-val

    NASA Astrophysics Data System (ADS)

    de Mora, Lee

    2017-04-01

The Biogeochemical Validation toolkit, BGC-val, is a model- and grid-independent Python-based marine model evaluation framework that automates much of the validation of the marine component of an Earth System Model. BGC-val was initially developed as a flexible and extensible system to evaluate the spin-up of the marine UK Earth System Model (UKESM). However, its grid independence and flexibility make it straightforward to adapt the BGC-val framework to evaluate other marine models. In addition to the marine component of the UKESM, the toolkit has been adapted to compare multiple models, including models from the CMIP5 and iMarNet inter-comparison projects. The BGC-val toolkit produces multiple levels of analysis, presented in a simple-to-use interactive HTML5 document. Level 1 contains time-series analyses showing the development over time of many important biogeochemical and physical ocean metrics, such as global primary production or the Drake Passage current. The second level of BGC-val is an in-depth spatial analysis at a single point in time: a series of point-to-point comparisons of model and data in various regions, such as a comparison of surface nitrate in the model against data from the World Ocean Atlas. The third-level analyses are specialised ad hoc packages that go in depth on a specific question, such as the development of oxygen minimum zones in the equatorial Pacific. In addition to the three levels, the HTML5 document opens with a Level 0 table summarising the status of the model run. The beta version of this toolkit is available via the Plymouth Marine Laboratory GitLab server and uses the BSD 3-clause license.

  20. Garrison's model of self-directed learning: preliminary validation and relationship to academic achievement.

    PubMed

    Abd-El-Fattah, Sabry M

    2010-11-01

    In this project, 119 undergraduates responded to a questionnaire tapping three psychological constructs implicated in Garrison's model of self-directed learning: self-management, self-monitoring, and motivation. Mediation analyses showed that these psychological constructs are interrelated and that motivation mediates the relationship between self-management and self-monitoring. Path modeling analyses revealed that self-management and self-monitoring significantly predicted academic achievement over two semesters with self-management being the strongest predictor. Motivation significantly predicted academic achievement over the second semester only. Implications of these findings for self-directed learning and academic achievement in a traditional classroom setting are discussed.
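The mediation claim (motivation carrying the effect of self-management on self-monitoring) corresponds to the classic product-of-paths estimate: regress the mediator on the predictor (path a), then the outcome on both (path b). The simulated coefficients and sample size below are invented; this is a sketch of the estimator, not a reanalysis of the study's data.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    Xb = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

rng = np.random.default_rng(4)
n = 500
self_mgmt = rng.normal(size=n)                                # predictor (standardized)
motivation = 0.5 * self_mgmt + rng.normal(size=n)             # mediator; true path a = 0.5
self_mon = 0.6 * motivation + 0.1 * self_mgmt + rng.normal(size=n)  # true path b = 0.6

a = ols(self_mgmt[:, None], motivation)[1]                    # estimated path a
b = ols(np.column_stack([motivation, self_mgmt]), self_mon)[1]  # path b, adjusting for the predictor
indirect = a * b                                              # mediated effect; truth here is 0.30
print(indirect)
```

A nonzero product a·b alongside a shrunken direct path is the pattern the reported mediation analysis describes.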

  1. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach

    PubMed Central

    Senior, Alistair M.; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J.

    2016-01-01

Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments. PMID:26858671

  2. Failure analysis of broken pedicle screws on spinal instrumentation.

    PubMed

    Chen, Chen-Sheng; Chen, Wen-Jer; Cheng, Cheng-Kung; Jao, Shyh-Hua Eric; Chueh, Shan-Chang; Wang, Chang-Chih

    2005-07-01

Revision spinal surgery is needed when a pedicle screw breaks in a patient. This study investigated pedicle screw breakage by conducting retrieval analyses of broken pedicle screws from 16 patients clinically and by performing stress analyses of the posterolateral fusion computationally using finite element (FE) models. The fracture surfaces of the screws were studied by scanning electron microscopy (SEM). The FE model of the posterolateral fusion with the screw showed that screws on the caudal side had larger axial stress than those on the cephalic side, supporting the clinical finding that 75% of the patients had screw breakage on the caudal side. SEM fractography showed that all broken screws exhibited beach marks or striations on the fractured surface, indicating fatigue failure. Screws from patients with spinal fracture showed fatigue striations and final ductile fracture around the edge. Among the 16 patients with broken pedicle screws, 69% achieved bone union in the bone graft, showing that bone union in the graft did not guarantee the prevention of screw breakage.

  3. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. 
We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.
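The effect of integrating over a posterior tree set, rather than conditioning on one consensus tree, can be mimicked with a pooling calculation in the style of multiple imputation (Rubin's rules). The per-tree slope estimates and standard errors below are fabricated stand-ins for what a phylogenetic regression would return on each tree of the posterior sample.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trees = 200

# Hypothetical results from re-running one regression on each tree of a
# Bayesian posterior tree set: a slope estimate plus its within-tree SE.
estimates = rng.normal(0.8, 0.05, size=n_trees)    # spread across trees
ses = np.full(n_trees, 0.10)

pooled_mean = estimates.mean()
within = np.mean(ses ** 2)                         # average within-tree variance
between = estimates.var(ddof=1)                    # variance across trees
pooled_se = np.sqrt(within + (1 + 1 / n_trees) * between)   # Rubin's rules
print(pooled_mean, pooled_se)
```

The pooled standard error is strictly larger than the within-tree component alone, which is the abstract's point: conditioning on a single consensus tree yields confidence intervals that are too narrow.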

  4. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. 
We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  5. Analysis hierarchical model for discrete event systems

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2015-11-01

This paper presents a hierarchical model based on discrete event networks for robotic systems. Under the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, with extended Petri nets used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialised programs. Implementing the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is feasible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are straightforward to implement on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets makes the system's timing visible: from transmission times measured on the spot, graphs are obtained showing the average time of each transport activity for individual sets of finished products.

  6. Gender Difference Added? Institutional Variations in the Gender Gap in First Class Degree Awards in Mathematical Sciences

    ERIC Educational Resources Information Center

    Simonite, Vanessa

    2005-01-01

    This article shows how multilevel modelling can be used to study institutional variations in the gender differences in achievement. The results presented are from analyses of the degree classifications of 22,433 individuals who graduated in mathematical sciences, from universities in the UK, between 1994/95 and 1999/2000. The analyses were…

  7. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  8. Genetic signatures of natural selection in a model invasive ascidian

    PubMed Central

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-01-01

    Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlations with these environmental variables. This suggests that local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising candidates for investigating evolutionary responses to rapid environmental change in C. robusta. PMID:28266616

  9. Computational study of Drucker-Prager plasticity of rock using microtomography

    NASA Astrophysics Data System (ADS)

    Liu, J.; Sarout, J.; Zhang, M.; Dautriat, J.; Veveakis, M.; Regenauer-Lieb, K.

    2016-12-01

    Understanding the physics of rocks is essential for the mining and petroleum industries. Microtomography provides a new way to quantify the relationship between microstructure and mechanical and transport properties. Transport and elastic properties have been studied widely, while plastic properties remain poorly understood. In this study, we analyse a synthetic sandstone sample for its upscaled plastic properties from the micro-scale. The computations are based on the representative volume element (RVE). The mechanical RVE was determined by upper- and lower-bound finite element computations of elasticity. By comparison with experimental curves, the parameters of the matrix (the solid part), which consists of calcite-cemented quartz grains, were investigated and quite accurate values were obtained. The analyses yielded the bulk yield stress, cohesion and angle of friction of the porous rock. Computations on a series of models with volume sizes from 240-cube to 400-cube showed almost overlapping stress-strain curves, suggesting that the mechanical RVE determined by elastic computations is also valid for plastic yielding. Furthermore, a series of derivative models was created with similar structure but different porosity values. The analyses of these models showed that yield stress, cohesion and the angle of friction decrease linearly as porosity increases in the range from 8% to 28%. The angle of friction decreases the fastest, while cohesion is the most stable with respect to porosity.
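The reported linear decrease of plastic parameters with porosity can be quantified with an ordinary least-squares fit. The porosity and yield-stress values below are invented for illustration (deliberately exactly linear), not the paper's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope a and intercept b for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical upscaled yield stresses (MPa) at porosities from 8% to 28%
porosity = [0.08, 0.13, 0.18, 0.23, 0.28]
yield_stress = [42.0, 36.5, 31.0, 25.5, 20.0]

a, b = linear_fit(porosity, yield_stress)
print(a, b)   # slope ≈ -110 MPa per unit porosity, intercept ≈ 50.8 MPa
```

Repeating the fit for cohesion and friction angle would give the per-parameter sensitivities the abstract compares.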

  10. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  11. Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield

    NASA Technical Reports Server (NTRS)

    Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)

    2001-01-01

    New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses was performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparisons were made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequencies and mode shapes of the significant structural modes.

  12. Seal Joint Analysis and Design for the Ares-I Upper Stage LOX Tank

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Wingate, Robert J.

    2011-01-01

    The sealing capability of the Ares-I Upper Stage liquid oxygen tank-to-sump joint is assessed by analyzing the deflections of the joint components. Analyses are performed using three-dimensional symmetric wedge finite element models and the ABAQUS commercial finite element software. For the pressure loads and feedline interface loads, the analyses employ a mixed factor of safety approach to comply with the Constellation Program factor of safety requirements. Naflex pressure-assisted seals are considered first because they have been used successfully in similar seal joints in the Space Shuttle External Tank. For the baseline sump seal joint configuration with a Naflex seal, the predicted joint opening greatly exceeds the seal design specification. Three redesign options of the joint that maintain the use of a Naflex seal are studied. The joint openings for the redesigned seal joints show improvement over the baseline configuration; however, these joint openings still exceed the seal design specification. RACO pressure-assisted seals are considered next because they are known to also be used on the Space Shuttle External Tank, and the joint opening allowable is much larger than the specification for the Naflex seals. The finite element models for the RACO seal analyses are created by modifying the models that were used for the Naflex seal analyses. The analyses show that the RACO seal may provide sufficient sealing capability for the sump seal joint. The results provide reasonable data to recommend the design change and plan a testing program to determine the capability of RACO seals in the Ares-I Upper Stage liquid oxygen tank sump seal joint.

  13. Supersonic unstalled flutter. [aerodynamic loading of thin airfoils induced by cascade motion

    NASA Technical Reports Server (NTRS)

    Adamczyk, J. J.; Goldstein, M. E.; Hartmann, M. J.

    1978-01-01

    Flutter analyses were developed to predict the onset of supersonic unstalled flutter in a cascade of two-dimensional airfoils. The first of these analyzes the onset of supersonic flutter at low levels of aerodynamic loading (i.e., backpressure), while the second examines the occurrence of supersonic flutter at moderate levels of aerodynamic loading. Both analyses are based on the linearized unsteady inviscid equations of gas dynamics to model the flow field surrounding the cascade. These analyses are used in a parametric study to show the effects of cascade geometry, inlet Mach number, and backpressure on the onset of single- and multi-degree-of-freedom unstalled supersonic flutter. Several of the results are correlated against qualitative experimental observations to validate the models.

  14. Posttest analysis of a 1:6-scale reinforced concrete reactor containment building

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weatherby, J.R.

    In an experiment conducted at Sandia National Laboratories, a 1:6-scale model of a reinforced concrete light water reactor containment building was pressurized with nitrogen gas to more than three times its design pressure. The pressurization produced one large tear and several smaller tears in the steel liner plate that functioned as the primary pneumatic seal for the structure. The data collected from the overpressurization test have been used to evaluate and further refine methods of structural analysis that can be used to predict the performance of containment buildings under conditions produced by a severe accident. This report describes posttest finite element analyses of the 1:6-scale model tests and compares pretest predictions of the structural response to the experimental results. Strains and displacements calculated in axisymmetric finite element analyses of the 1:6-scale model are compared to strains and displacements measured in the experiment. Detailed analyses of the liner plate are also described in the report. The region of the liner surrounding the large tear was analyzed using two different two-dimensional finite element models. The results from these analyses indicate that the primary mechanisms that initiated the tear can be captured in a two-dimensional finite element model. Furthermore, the analyses show that the studs used to anchor the liner to the concrete wall played an important role in initiating the liner tear. Three-dimensional finite element analyses of liner plates loaded by studs are also presented. Results from the three-dimensional analyses are compared to results from two-dimensional analyses of the same problems. 12 refs., 56 figs., 1 tab.

  15. Super-volcanic investigations

    NASA Astrophysics Data System (ADS)

    Till, Christy B.; Pritchard, Matthew; Miller, Craig A.; Brugman, Karalee K.; Ryan-Davis, Juliet

    2018-04-01

    Multi-disciplinary analyses of Earth's most destructive volcanic systems show that continuous monitoring and an understanding of each volcano's quirks, rather than a single unified model, are key to generating accurate hazard assessments.

  16. Influence of l-pyroglutamic acid on the color formation process of non-enzymatic browning reactions.

    PubMed

    Wegener, Steffen; Kaufmann, Martin; Kroh, Lothar W

    2017-10-01

    Heating aqueous d-glucose model reactions with l-glutamine and l-alanine yielded similar colored solutions. However, size-exclusion chromatography (SEC) revealed that both non-enzymatic browning reactions proceeded differently. Due to a fast occurring cyclization of l-glutamine to pyroglutamic acid, the typical amino-carbonyl reaction was slowed down. However, l-glutamine and l-alanine model reactions showed the same browning index. Closer investigations could prove that l-pyroglutamic acid was able to influence non-enzymatic browning reactions. SEC analyses of d-glucose model reactions with and without l-pyroglutamic acid revealed an increase of low molecular colored compounds in the presence of l-pyroglutamic acid. Polarimetric measurements showed a doubling of d-glucose mutarotation velocity and HPLC analyses of d-fructose formation during thermal treatment indicated a tripling of aldose-ketose transformation in the presence of l-pyroglutamic acid, which are signs of a faster proceeding non-enzymatic browning process. 2-Pyrrolidone showed no such behavior, thus the additional carboxylic group should be responsible for the observed effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Discussion of Source Reconstruction Models Using 3D MCG Data

    NASA Astrophysics Data System (ADS)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed the source reconstruction of magnetocardiographic signals generated by the human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model has the better accuracy in performing the source reconstructions, and that 3D MCG data allow finding smaller differences between the different source models.

  18. Drinking, driving, and crashing: a traffic-flow model of alcohol-related motor vehicle accidents.

    PubMed

    Gruenewald, Paul J; Johnson, Fred W

    2010-03-01

    This study examined the influence of on-premise alcohol-outlet densities and of drinking-driver densities on rates of alcohol-related motor vehicle crashes. A traffic-flow model is developed to represent geographic relationships between residential locations of drinking drivers, alcohol outlets, and alcohol-related motor vehicle crashes. Cross-sectional and time-series cross-sectional spatial analyses were performed using data collected from 144 geographic units over 4 years. Data were obtained from archival and survey sources in six communities. Archival data were obtained within community areas and measured activities of either the resident population or persons visiting these communities. These data included local and highway traffic flow, locations of alcohol outlets, population density, network density of the local roadway system, and single-vehicle nighttime (SVN) crashes. Telephone-survey data obtained from residents of the communities were used to estimate the size of the resident drinking-and-driving population. Cross-sectional analyses showed that effects relating on-premise densities to alcohol-related crashes were moderated by highway traffic flow. Depending on levels of highway traffic flow, 10% greater densities were related to 0% to 150% greater rates of SVN crashes. Time-series cross-sectional analyses showed that changes in the population pool of drinking drivers and on-premise densities interacted to increase SVN crash rates. A simple traffic-flow model can assess the effects of on-premise alcohol-outlet densities and of drinking-driver densities as they vary across communities to produce alcohol-related crashes. Analyses based on these models can usefully guide policy decisions on the siting of on-premise alcohol outlets.
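The moderation effect described above can be illustrated with a hypothetical log-linear rate model containing a density-by-flow interaction term. All coefficients below are invented for illustration; they are not estimates from the study.

```python
import math

def svn_rate(outlet_density, traffic_flow,
             base=1.0, b_d=0.5, b_f=0.3, b_int=4.0):
    """Hypothetical log-linear SVN crash rate with a density x flow
    interaction; a positive b_int makes the density effect grow with flow."""
    return base * math.exp(b_d * outlet_density
                           + b_f * traffic_flow
                           + b_int * outlet_density * traffic_flow)

# Relative effect of an outlet-density increase (+0.1) at low vs high flow:
ratio_low = svn_rate(0.2, 0.0) / svn_rate(0.1, 0.0)
ratio_high = svn_rate(0.2, 1.0) / svn_rate(0.1, 1.0)
print(ratio_low, ratio_high)   # ~1.05 at low flow vs ~1.57 at high flow
```

The same density increase raises the rate by about 5% when flow is low but about 57% when flow is high, mirroring the 0% to 150% moderation range reported.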

  19. Evaluation of risk factors for perforated peptic ulcer.

    PubMed

    Yamamoto, Kazuki; Takahashi, Osamu; Arioka, Hiroko; Kobayashi, Daiki

    2018-02-15

    The aim of this study was to evaluate the prediction factors for perforated peptic ulcer (PPU). At St. Luke's International Hospital in Tokyo, Japan, a case control study was performed between August 2004 and March 2016. All patients diagnosed with PPU were included. As control subjects, patients with age, sex and date of CT scan corresponding to those of the PPU subjects were included in the study at a proportion of 2 controls for every PPU subject. All data such as past medical histories, physical findings, and laboratory data were collected through chart reviews. Univariate analyses and multivariate analyses with logistic regression were conducted, and receiver operating characteristic curves (ROCs) were calculated to show validity. Sensitivity analyses were performed to confirm results using a stepwise method and conditional logistic regression. A total of 408 patients were included in this study; 136 were a group of patients with PPU, and 272 were a control group. Univariate analysis showed statistical significance in many categories. Four different models of multivariate analyses were conducted, and significant differences were found for muscular defense and a history of peptic ulcer disease (PUD) in all models. The conditional forced-entry analysis of muscular defense showed an odds ratio (OR) of 23.8 (95% confidence interval [CI]: 5.70-100.0), and the analysis of PUD history showed an OR of 6.40 (95% CI: 1.13-36.2). The sensitivity analysis showed consistent results, with an OR of 23.8-366.2 for muscular defense and an OR of 3.67-7.81 for PUD history. The area under the curve (AUC) of all models was high enough to confirm the results. However, anticoagulants, known risk factors for PUD, did not increase the risk for PPU in our study. The conditional forced-entry analysis of anticoagulant use showed an OR of 0.85 (95% CI: 0.03-22.3). 
The evaluation of prediction factors and development of a prediction rule for PPU may help our decision making in performing a CT scan for patients with acute abdominal pain.
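Odds ratios with Wald confidence intervals of the kind reported above can be computed directly from a 2x2 table. The counts below are hypothetical, chosen only to show the mechanics; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       a = exposed cases,    b = unexposed cases,
       c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: muscular defense present in 40/136 cases, 6/272 controls
or_, lo, hi = odds_ratio_ci(40, 96, 6, 266)
print(round(or_, 1), round(lo, 1), round(hi, 1))
```

Conditional logistic regression, as used in the study for matched controls, adjusts these estimates for the matching; the simple table OR is only the unadjusted starting point.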

  20. Sampling strategies for improving tree accuracy and phylogenetic analyses: a case study in ciliate protists, with notes on the genus Paramecium.

    PubMed

    Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo

    2014-02-01

    In order to assess how dataset-selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) Using multiple genes improves phylogenetic accuracy, even when their one-gene topologies are in conflict with each other. (2) The impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not number of represented taxa, are the most important criteria of a dataset for inclusion in concatenated analyses. (3) As an example, we tested the three classification models of the genus Paramecium with a multi-gene based approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Background radiation in inelastic X-ray scattering and X-ray emission spectroscopy. A study for Johann-type spectrometers

    NASA Astrophysics Data System (ADS)

    Paredes Mellone, O. A.; Bianco, L. M.; Ceppi, S. A.; Goncalves Honnicke, M.; Stutz, G. E.

    2018-06-01

    A study of the background radiation in inelastic X-ray scattering (IXS) and X-ray emission spectroscopy (XES) based on an analytical model is presented. The calculation model considers spurious radiation originated from elastic and inelastic scattering processes along the beam paths of a Johann-type spectrometer. The dependence of the background radiation intensity on the medium of the beam paths (air and helium), analysed energy and radius of the Rowland circle was studied. The present study shows that both for IXS and XES experiments the background radiation is dominated by spurious radiation owing to scattering processes along the sample-analyser beam path. For IXS experiments the spectral distribution of the main component of the background radiation shows a weak linear dependence on the energy in most cases. In the case of XES, a strong non-linear behaviour of the background radiation intensity was predicted for energy analysis very close to the backdiffraction condition, with a rapid increase in intensity as the analyser Bragg angle approaches π / 2. The contribution of the analyser-detector beam path is significantly weaker and resembles the spectral distribution of the measured spectra. Present results show that for usual experimental conditions no appreciable structures are introduced by the background radiation into the measured spectra, both in IXS and XES experiments. The usefulness of properly calculating the background profile is demonstrated in a background subtraction procedure for a real experimental situation. The calculation model was able to simulate with high accuracy the energy dependence of the background radiation intensity measured in a particular XES experiment with air beam paths.

  2. Capturing strain localization behind a geosynthetic-reinforced soil wall

    NASA Astrophysics Data System (ADS)

    Lai, Timothy Y.; Borja, Ronaldo I.; Duvernay, Blaise G.; Meehan, Richard L.

    2003-04-01

    This paper presents the results of finite element (FE) analyses of shear strain localization that occurred in cohesionless soils supported by a geosynthetic-reinforced retaining wall. The innovative aspects of the analyses include capturing of the localized deformation and the accompanying collapse mechanism using a recently developed embedded strong discontinuity model. The case study analysed, reported in previous publications, consists of a 3.5-m tall, full-scale reinforced wall model deforming in plane strain and loaded by surcharge at the surface to failure. Results of the analysis suggest strain localization developing from the toe of the wall and propagating upward to the ground surface, forming a curved failure surface. This is in agreement with a well-documented failure mechanism experienced by the physical wall model showing internal failure surfaces developing behind the wall as a result of the surface loading. Important features of the analyses include mesh sensitivity studies and a comparison of the localization properties predicted by different pre-localization constitutive models, including a family of three-invariant elastoplastic constitutive models appropriate for frictional/dilatant materials. Results of the analysis demonstrate the potential of the enhanced FE method for capturing a collapse mechanism characterized by the presence of a failure, or slip, surface through earthen materials.

  3. Temporal Relations in Daily-Reported Maternal Mood and Disruptive Child Behavior

    ERIC Educational Resources Information Center

    Elgar, Frank J.; Waschbusch, Daniel A.; McGrath, Patrick J.; Stewart, Sherry H.; Curtis, Lori J.

    2004-01-01

    Examined temporal relations between maternal mood and disruptive child behaviour using daily assessments of 30 mother-child dyads carried out over 8 consecutive weeks (623 pooled observations). Pooled time-series analyses showed synchronous fluctuation in child behaviour and maternal distress. Time-lagged models showed temporal relations between…

  4. Modelling by Differential Equations

    ERIC Educational Resources Information Center

    Chaachoua, Hamid; Saglam, Ayse

    2006-01-01

    This paper aims to show the close relation between physics and mathematics taking into account especially the theory of differential equations. By analysing the problems posed by scientists in the seventeenth century, we note that physics is very important for the emergence of this theory. Taking into account this analysis, we show the…

  5. Probabilistic dietary exposure assessment taking into account variability in both amount and frequency of consumption.

    PubMed

    Slob, Wout

    2006-07-01

    Probabilistic dietary exposure assessments that are fully based on Monte Carlo sampling from the raw intake data may not be appropriate. This paper shows that the data should first be analysed by using a statistical model that is able to take the various dimensions of food consumption patterns into account. A (parametric) model is discussed that takes into account the interindividual variation in (daily) consumption frequencies, as well as in amounts consumed. Further, the model can be used to include covariates, such as age, sex, or other individual attributes. Some illustrative examples show how this model may be used to estimate the probability of exceeding an (acute or chronic) exposure limit. These results are compared with the results based on directly counting the fraction of observed intakes exceeding the limit value. This comparison shows that the latter method is not adequate, in particular for the acute exposure situation. A two-step approach for probabilistic (acute) exposure assessment is proposed: first analyse the consumption data by a (parametric) statistical model as discussed in this paper, and then use Monte Carlo techniques for combining the variation in concentrations with the variation in consumption (by sampling from the statistical model). This approach results in an estimate of the fraction of the population as a function of the fraction of days at which the exposure limit is exceeded by the individual.
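The proposed two-step approach can be sketched as follows, assuming an invented parametric model (Beta-distributed per-person consumption frequencies, lognormal amounts, a fixed concentration) in place of a model actually fitted to survey data: Monte Carlo samples are drawn from the parametric model rather than from raw intake days.

```python
import random

random.seed(1)

def simulate_person(n_days, limit):
    """One individual's long-run fraction of days exceeding an acute limit.

    Hypothetical parametric model: a per-person consumption frequency p,
    a per-person amount level mu, lognormal daily amounts, and a fixed
    concentration of 1.0 (all parameters invented for illustration).
    """
    p = random.betavariate(2, 5)      # this person's daily consumption frequency
    mu = random.gauss(0.0, 0.4)       # person-level amount effect
    days_over = 0
    for _ in range(n_days):
        if random.random() < p:                       # consumes today?
            amount = random.lognormvariate(mu, 0.5)   # amount, arbitrary units
            exposure = amount * 1.0                   # times fixed concentration
            if exposure > limit:
                days_over += 1
    return days_over / n_days

population = [simulate_person(365, limit=2.0) for _ in range(500)]
# Fraction of the population exceeding the limit on more than 1% of days:
frac = sum(f > 0.01 for f in population) / len(population)
print(frac)
```

This yields exactly the quantity the abstract proposes: the fraction of the population as a function of the fraction of days on which an individual exceeds the limit. Sampling concentrations per day as well would combine both sources of variation.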

  6. In-depth study of 16CygB using inversion techniques

    NASA Astrophysics Data System (ADS)

    Buldgen, G.; Salmon, S. J. A. J.; Reese, D. R.; Dupret, M. A.

    2016-12-01

    Context. The 16Cyg binary system hosts the solar-like Kepler targets with the most stringent observational constraints. Indeed, we benefit from very high quality oscillation spectra, as well as spectroscopic and interferometric observations. Moreover, this system is particularly interesting since both stars are very similar in mass but the A component is orbited by a red dwarf, whereas the B component is orbited by a Jovian planet and thus could have formed a more complex planetary system. In our previous study, we showed that seismic inversions of integrated quantities could be used to constrain microscopic diffusion in the A component. In this study, we analyse the B component in the light of a more regularised inversion. Aims: We wish to analyse independently the B component of the 16Cyg binary system using the inversion of an indicator dedicated to analyse core conditions, denoted tu. Using this independent determination, we wish to analyse any differences between both stars due to the potential influence of planetary formation on stellar structure and/or their respective evolution. Methods: First, we recall the observational constraints for 16CygB and the method we used to generate reference stellar models of this star. We then describe how we improved the inversion and how this approach could be used for future targets with a sufficient number of observed frequencies. The inversion results were then used to analyse the differences between the A and B components. Results: The inversion of the tu indicator for 16CygB shows a disagreement with models including microscopic diffusion and sharing the chemical composition previously derived for 16CygA. We show that small changes in chemical composition are insufficient to solve the problem but that extra mixing can account for the differences seen between both stars. We use a parametric approach to analyse the impact of extra mixing in the form of turbulent diffusion on the behaviour of the tu values. 
We conclude that further investigations are needed, using models with a physically motivated implementation of extra mixing processes and including additional constraints, to further improve the accuracy with which the fundamental parameters of this system are determined.

  7. Social comparison and perceived breach of psychological contract: their effects on burnout in a multigroup analysis.

    PubMed

    Cantisano, Gabriela Topa; Domínguez, J Francisco Morales; García, J Luis Caeiro

    2007-05-01

    This study focuses on the mediator role of social comparison in the relationship between perceived breach of psychological contract and burnout. A previous model showing the hypothesized effects of perceived breach on burnout, both direct and mediated, is proposed. The final model reached an optimal fit to the data and was confirmed through multigroup analysis using a sample of Spanish teachers (N = 401) belonging to preprimary, primary, and secondary schools. Multigroup analyses showed that the model fit all groups adequately.

  8. A Multidimensional Model of School Dropout from an 8-Year Longitudinal Study in a General High School Population

    ERIC Educational Resources Information Center

    Fortin, Laurier; Marcotte, Diane; Diallo, Thierno; Potvin, Pierre; Royer, Egide

    2013-01-01

    This study tests an empirical multidimensional model of school dropout, using data collected in the first year of an 8-year longitudinal study, with first year high school students aged 12-13 years. Structural equation modeling analyses show that five personal, family, and school latent factors together contribute to school dropout identified at…

  9. Application of logistic regression to case-control association studies involving two causative loci.

    PubMed

    North, Bernard V; Curtis, David; Sham, Pak C

    2005-01-01

    Models in which two susceptibility loci jointly influence the risk of developing disease can be explored using logistic regression analysis. Comparison of likelihoods of models incorporating different sets of disease model parameters allows inferences to be drawn regarding the nature of the joint effect of the loci. We have simulated case-control samples generated assuming different two-locus models and then analysed them using logistic regression. We show that this method is practicable and that, for the models we have used, it can be expected to allow useful inferences to be drawn from sample sizes consisting of hundreds of subjects. Interactions between loci can be explored, but interactive effects do not exactly correspond with classical definitions of epistasis. We have particularly examined the issue of the extent to which it is helpful to utilise information from a previously identified locus when investigating a second, unknown locus. We show that for some models conditional analysis can have substantially greater power while for others unconditional analysis can be more powerful. Hence we conclude that in general both conditional and unconditional analyses should be performed when searching for additional loci.

  10. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    PubMed

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses by Logan and Gjedde-Patlak. Both axes of the vi-plot have direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
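The vi-plot algorithm itself is specific to the cited paper. As a rough sketch of the two quantities it describes, the code below approximates an instantaneous distribution volume Ct/Cp and an instantaneous influx rate dCt/d(∫Cp dt) from sampled time-activity curves, using toy irreversible-uptake data (all numbers invented).

```python
def vi_quantities(t, cp, ct):
    """For each frame after the first: instantaneous distribution volume
    v = Ct/Cp and instantaneous influx dCt / d(integral of Cp). A sketch of
    the idea behind a volume-influx plot, not the published algorithm."""
    int_cp = [0.0]
    for i in range(1, len(t)):                    # trapezoidal cumulative input
        int_cp.append(int_cp[-1] + 0.5 * (cp[i] + cp[i-1]) * (t[i] - t[i-1]))
    v = [ct[i] / cp[i] for i in range(1, len(t))]
    influx = [(ct[i] - ct[i-1]) / (int_cp[i] - int_cp[i-1])
              for i in range(1, len(t))]
    return v, influx

# Irreversible-uptake toy data: Ct = K1 * cumulative integral of Cp, K1 = 0.1
t = [0.0, 1.0, 2.0, 4.0, 8.0, 16.0]
cp = [0.0, 10.0, 8.0, 5.0, 2.0, 0.5]
cum = [0.0]
for i in range(1, len(t)):
    cum.append(cum[-1] + 0.5 * (cp[i] + cp[i-1]) * (t[i] - t[i-1]))
K1 = 0.1
ct = [K1 * s for s in cum]
v, influx = vi_quantities(t, cp, ct)
print(influx)   # every entry recovers K1 = 0.1 for irreversible uptake
```

For a reversible tracer the influx curve would fall over time, and the time at which it stabilises indicates the quasi-steady state required by the Logan and Gjedde-Patlak analyses.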

  11. Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures

    NASA Astrophysics Data System (ADS)

    Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en

    2015-08-01

    Horizontal electrical heterogeneity of the subsurface is mostly originated from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity will severely distort regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures is of local and regional patterns represented by 3D/2D models. Due to the widespread anisotropy of earth media, the confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. The method is one of the mathematical decomposition methods based on eigenstate analyses, as distinguished from distortion analyses, and it can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method with numerical simulation experiments on several 1D synthetic models, and the results showed that canonical decomposition is quite effective at revealing geological anisotropic information. Finally, given the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses, and the result shows that, with effective modeling and cautious interpretation, canonical decomposition could be another good method to detect anisotropy of geological media.
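Recovering a strike direction from a 2x2 impedance tensor can be illustrated with a classical Swift-style rotation scan, which is a simpler substitute for the paper's canonical (eigenstate) decomposition. The synthetic tensor and the brute-force angle search below are invented for illustration.

```python
import math

def matmul(A, B):
    """2x2 (complex) matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def rotate(Z, deg):
    """Rotate the impedance tensor into a frame turned by deg: Z' = R Z R^T."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    R, Rt = [[c, s], [-s, c]], [[c, -s], [s, c]]
    return matmul(matmul(R, Z), Rt)

def strike_angle(Z, step=0.5):
    """Angle (mod 90 deg) maximizing off-diagonal power |Zxy'|^2 + |Zyx'|^2,
    a Swift-style estimate of the regional strike direction."""
    def power(th):
        Zr = rotate(Z, th)
        return abs(Zr[0][1])**2 + abs(Zr[1][0])**2
    return max((i * step for i in range(int(90 / step))), key=power)

# Synthetic 2D (anti-diagonal) tensor whose strike frame is rotated by -30 deg
Z0 = [[0.0, 2.0 + 1.0j], [-1.0 - 0.5j, 0.0]]
measured = rotate(Z0, -30.0)
print(strike_angle(measured))   # 30.0
```

Like the decomposition discussed in the abstract, this estimate carries a 90-degree ambiguity that must be resolved with external geological information.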

  12. Data Assimilation with the Extended Cmam: Nudging to Re-Analyses of the Lower Atmosphere

    NASA Astrophysics Data System (ADS)

    Fomichev, V. I.; Beagley, S. R.; Shepherd, M. G.; Semeniuk, K.; Mclandress, C. W.; Scinocca, J.; McConnell, J. C.

    2012-12-01

    The extended CMAM is currently being run in forecast mode, allowing the model to simulate specific events. The current analysis period covers 1990-2010. The model is forced with ERA-Interim re-analyses via a nudging technique applied to the troposphere/stratosphere, in combination with the GCM's own evolution, creating a transient forced model state in the lower atmosphere. The upper atmosphere is allowed to evolve in response to the observed conditions occurring in the lower atmosphere and to other transient forcings such as SSTs, solar flux, and CO2 and CFC boundary changes. This methodology allows specific events and observations to be compared with the model more successfully. Model results show good agreement with TOMS and ACE observations.
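    Nudging (Newtonian relaxation) adds a term that pulls the model state toward the re-analysis with a chosen relaxation time. A minimal one-variable sketch, with a hypothetical tendency function and relaxation constant (the real CMAM implementation nudges full 3D fields with height-dependent coefficients):

    ```python
    def nudge(x0, x_ref, tau, dt, steps, tendency=lambda x: 0.0):
        """Forward-Euler integration of dx/dt = f(x) + (x_ref - x)/tau."""
        x = x0
        for _ in range(steps):
            x += dt * (tendency(x) + (x_ref - x) / tau)
        return x

    # With no internal dynamics, the state relaxes exponentially toward
    # the re-analysis value on the timescale tau
    final = nudge(x0=0.0, x_ref=1.0, tau=1.0, dt=0.01, steps=2000)
    ```

    After 20 relaxation times, the state is indistinguishable from the reference; choosing a longer tau lets the model's own dynamics dominate, which is how a nudged run can still evolve freely in the upper atmosphere.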

  13. Analytical and numerical analyses of an unconfined aquifer test considering unsaturated zone characteristics

    USGS Publications Warehouse

    Moench, A.F.

    2008-01-01

    A 7-d, constant rate aquifer test conducted by University of Waterloo researchers at Canadian Forces Base Borden in Ontario, Canada, is useful for advancing understanding of fluid flow processes in response to pumping from an unconfined aquifer. Measured data include not only drawdown in the saturated zone but also volumetric soil moisture measured at various times and distances from the pumped well. Analytical analyses were conducted with the model published in 2001 by Moench and colleagues, which allows for gradual drainage but does not include unsaturated zone characteristics, and the model published in 2006 by Mathias and Butler, which assumes that moisture retention and relative hydraulic conductivity (RHC) in the unsaturated zone are exponential functions of pressure head. Parameters estimated with either model yield good matches between measured and simulated drawdowns in piezometers. Numerical analyses were conducted with two versions of VS2DT: one that uses traditional Brooks and Corey functional relations and one that uses a RHC function introduced in 2001 by Assouline that includes an additional parameter that accounts for soil structure and texture. The analytical model of Mathias and Butler and numerical model of VS2DT with the Assouline model both show that the RHC function must contain a fitting parameter that is different from that used in the moisture retention function. Results show the influence of field-scale heterogeneity and suggest that the RHC at the Borden site declines more rapidly with elevation above the top of the capillary fringe than would be expected if the parameters were to reflect local- or core-scale soil structure and texture.
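    For reference, the traditional Brooks and Corey relations mentioned above tie effective saturation and relative hydraulic conductivity (RHC) to suction head through a single pore-size distribution index λ; it is precisely this coupling that the Mathias-Butler and Assouline results relax by allowing a separate fitting parameter. A sketch of the classic Burdine form (parameter values are illustrative, not the Borden values):

    ```python
    def brooks_corey(psi, psi_b, lam):
        """Effective saturation Se and relative hydraulic conductivity kr
        for suction head psi, air-entry head psi_b, pore-size index lam."""
        if psi <= psi_b:
            return 1.0, 1.0  # saturated: full saturation, full conductivity
        se = (psi_b / psi) ** lam
        kr = se ** ((2.0 + 3.0 * lam) / lam)  # equivalently (psi_b/psi)**(2 + 3*lam)
        return se, kr

    se, kr = brooks_corey(psi=2.0, psi_b=0.5, lam=0.3)
    ```

    Note that kr falls off much faster than Se as suction grows, which is why a mis-specified RHC exponent dominates the simulated drainage response.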

  14. Panning for the gold in health research: incorporating studies' methodological quality in meta-analysis.

    PubMed

    Johnson, Blair T; Low, Robert E; MacDonald, Hayley V

    2015-01-01

    Systematic reviews now routinely assess methodological quality to gauge the validity of the included studies and of the synthesis as a whole. Although trends from higher-quality studies should be clearer, it is uncertain how often meta-analyses incorporate methodological quality in models of study results, either as predictors or, more interestingly, in interactions with theoretical moderators. We surveyed 200 meta-analyses in three health promotion domains to examine when and how meta-analyses incorporate methodological quality. Although methodological quality assessments commonly appear in contemporary meta-analyses (usually as scales), they are rarely incorporated in analyses, and still more rarely analysed in interaction with theoretical determinants of the success of health promotion interventions. The few meta-analyses (2.5%) that did include such an interaction analysis showed that moderator results remained significant in higher-quality studies or were present only among higher-quality studies. We describe how to model quality interactively with theoretically derived moderators and discuss the strengths and weaknesses of this approach in relation to current meta-analytic practice. In large literatures exhibiting heterogeneous effects, meta-analyses can incorporate methodological quality and generate conclusions that enable greater confidence not only about the substantive phenomenon but also about the role that methodological quality itself plays.
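    The basic machinery underlying such models is inverse-variance weighting; a minimal fixed-effect pooled estimate is sketched below (a quality-by-moderator interaction would enter as a product term in a weighted meta-regression built on the same weights; the effect sizes and variances here are made up):

    ```python
    def fixed_effect_pool(effects, variances):
        """Inverse-variance weighted pooled effect and its standard error."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = (1.0 / sum(weights)) ** 0.5
        return pooled, se

    # Three hypothetical studies: more precise studies get more weight
    pooled, se = fixed_effect_pool([0.30, 0.50, 0.10], [0.01, 0.04, 0.02])
    ```

    With equal variances the pooled estimate reduces to the plain mean; unequal variances shift it toward the most precise studies, which is why down-weighting (rather than excluding) low-quality studies is a common design choice.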

  15. Failure rate analysis of Goddard Space Flight Center spacecraft performance during orbital life

    NASA Technical Reports Server (NTRS)

    Norris, H. P.; Timmins, A. R.

    1976-01-01

    Space-life performance data on 57 Goddard Space Flight Center spacecraft are analyzed from the standpoint of determining an appropriate reliability model and the associated reliability parameters. Data from published NASA reports, which cover the space performance of GSFC spacecraft launched in the 1960-1970 decade, form the basis of the analyses. The results show that the time distribution of 449 malfunctions, of which 248 were classified as failures (not necessarily catastrophic), follows a reliability growth pattern that can be described with either the Duane model or a Weibull distribution. The advantages of both mathematical models are used to identify space failure rates, observe chronological trends, and compare failure rates with those experienced during the prelaunch environmental tests of the flight-model spacecraft.
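    The Duane model posits cumulative failures N(t) = a·t^b, which is linear in log-log space, so it can be fitted by ordinary least squares on the logarithms. A sketch with synthetic data (the values a = 2, b = 0.5 are illustrative, not the GSFC estimates):

    ```python
    import math

    def fit_duane(times, cum_failures):
        """Fit N(t) = a * t**b by least squares on log N versus log t."""
        xs = [math.log(t) for t in times]
        ys = [math.log(n) for n in cum_failures]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = math.exp(my - b * mx)
        return a, b

    # Synthetic reliability-growth data: b < 1 means the instantaneous
    # failure rate dN/dt = a*b*t**(b-1) decays over time
    times = [1.0, 2.0, 4.0, 8.0, 16.0]
    a, b = fit_duane(times, [2.0 * t ** 0.5 for t in times])
    ```

    The fitted exponent b is the quantity of interest: b < 1 indicates reliability growth (failures arriving ever more slowly), consistent with the early-life failure concentration the study reports.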

  16. A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.

    2012-01-01

    A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.

  17. Evaluation properties of the French version of the OUT-PATSAT35 satisfaction with care questionnaire according to classical and item response theory analyses.

    PubMed

    Panouillères, M; Anota, A; Nguyen, T V; Brédart, A; Bosset, J F; Monnier, A; Mercier, M; Hardouin, J B

    2014-09-01

    The present study investigates the properties of the French version of the OUT-PATSAT35 questionnaire, which evaluates outpatients' satisfaction with care in oncology, using classical test theory (CTT) and item response theory (IRT). This cross-sectional multicenter study includes 692 patients who completed the questionnaire at the end of their ambulatory treatment. CTT analyses tested the main psychometric properties (convergent and divergent validity, and internal consistency). IRT analyses were conducted separately for each OUT-PATSAT35 domain (the doctors, the nurses or the radiation therapists, and the services/organization) using models from the Rasch family. We examined the fit of the data to the model expectations and tested whether the model assumptions of unidimensionality, monotonicity and local independence were respected. A total of 605 (87.4%) respondents were analyzed, with a mean age of 64 years (range 29-88). Internal consistency for all scales separately and for the three main domains was good (Cronbach's α 0.74-0.98). IRT analyses were performed with the partial credit model. No disordered thresholds of polytomous items were found. Each domain showed high reliability but fitted poorly to the Rasch models. Three items in particular, the item about "promptness" in the doctors' domain and the items about "accessibility" and "environment" in the services/organization domain, showed the greatest misfit. A correct fit to the Rasch model can be obtained by dropping these items. Most of the local dependence concerned items about "information provided" in each domain. A major deviation from unidimensionality was found in the nurses' domain. CTT showed good psychometric properties of the OUT-PATSAT35. However, the Rasch analysis revealed some misfitting and redundant items. Taking the above problems into consideration, it could be worthwhile to refine the questionnaire in a future study.
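    Cronbach's α, reported above, compares the sum of the item variances with the variance of the total score: α = k/(k-1) · (1 − Σ var(item_i) / var(total)). A self-contained sketch (the toy scores are hypothetical):

    ```python
    def cronbach_alpha(items):
        """items: one list of scores per item, all over the same respondents."""
        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        k = len(items)
        n = len(items[0])
        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1.0 - sum(var(item) for item in items) / var(totals))

    # Three perfectly consistent items: alpha reaches its maximum of 1
    alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
    ```

    High α only reflects internal consistency, not unidimensionality, which is exactly why the study supplements CTT with Rasch-family analyses.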

  18. Canonical Correlational Models of Students' Perceptions of Assessment Tasks, Motivational Orientations, and Learning Strategies

    ERIC Educational Resources Information Center

    Alkharusi, Hussain

    2013-01-01

    The present study aims at deriving correlational models of students' perceptions of assessment tasks, motivational orientations, and learning strategies using canonical analyses. Data were collected from 198 Omani tenth grade students. Results showed that high degrees of authenticity and transparency in assessment were associated with positive…

  19. Long-distance travel behaviours accelerate and aggravate the large-scale spatial spreading of infectious diseases.

    PubMed

    Xu, Zhijing; Zu, Zhenghu; Zheng, Tao; Zhang, Wendou; Xu, Qing; Liu, Jinjie

    2014-01-01

    The study analyses the role of long-distance travel behaviours in the large-scale spatial spreading of directly transmitted infectious diseases, focusing on two travel types distinguished by whether or not the travellers travel to a specific group. For this purpose, we formulated and analysed a metapopulation model in which the individuals in each subpopulation are organised into a scale-free contact network. The long-distance travellers between the subpopulations temporarily change the network structure of the destination subpopulation through "merging effects (MEs)," meaning that the travellers are regarded as either connected components or isolated nodes in the contact network. The results show that the presence of the MEs consistently accelerated the transmission of the diseases and aggravated the outbreaks compared with the scenario in which the diversity of long-distance travel types is arbitrarily discarded. Sensitivity analyses show that these results are robust across a wide range of variation in several model parameters. Our study highlights several important factors, neglected by existing studies, that could significantly affect spatiotemporal disease dynamics.
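    The qualitative mechanism, travel coupling accelerating spread into a distant subpopulation, can be illustrated with a deterministic two-patch SIR metapopulation; this is a deliberately stripped-down sketch (the study itself uses scale-free contact networks within patches and distinguishes traveller types), and every parameter value below is hypothetical:

    ```python
    def peak_time_patch2(beta=0.3, gamma=0.1, travel=0.01, dt=0.1, steps=2000):
        """Forward-Euler two-patch SIR; returns the time of the infection
        peak in patch 2, which starts with no infected individuals."""
        S, I, R = [999.0, 1000.0], [1.0, 0.0], [0.0, 0.0]
        peak_t, peak_i = 0.0, -1.0
        for step in range(steps):
            N = [S[k] + I[k] + R[k] for k in range(2)]
            dS, dI, dR = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
            for k in range(2):
                j = 1 - k
                inf, rec = beta * S[k] * I[k] / N[k], gamma * I[k]
                dS[k] = -inf + travel * (S[j] - S[k])
                dI[k] = inf - rec + travel * (I[j] - I[k])
                dR[k] = rec + travel * (R[j] - R[k])
            for k in range(2):
                S[k] += dt * dS[k]; I[k] += dt * dI[k]; R[k] += dt * dR[k]
            if I[1] > peak_i:
                peak_i, peak_t = I[1], (step + 1) * dt
        return peak_t
    ```

    Increasing the travel coupling seeds patch 2 earlier and brings its epidemic peak forward, the deterministic analogue of the acceleration effect the abstract describes.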

  20. Hierarchical Linear Modeling Analyses of NEO-PI-R Scales In the Baltimore Longitudinal Study of Aging

    PubMed Central

    Terracciano, Antonio; McCrae, Robert R.; Brant, Larry J.; Costa, Paul T.

    2009-01-01

    We examined age trends in the five factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N = 1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, Hierarchical Linear Modeling analyses showed gradual personality changes in adulthood: a decline up to age 80 in Neuroticism, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase up to age 70 in Conscientiousness. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender × Age interactions. Significant non-normative changes were found for all five factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. PMID:16248708

  1. Applying Rasch analysis to evaluate measurement equivalence of different administration formats of the Activity Limitation scale of the Cambridge Pulmonary Hypertension Outcome Review (CAMPHOR).

    PubMed

    Twiss, J; McKenna, S P; Graham, J; Swetz, K; Sloan, J; Gomberg-Maitland, M

    2016-04-09

    Electronic formats of patient-reported outcome (PRO) measures are now routinely used in clinical research studies. When changing from a validated paper-and-pen format to electronic administration, it is necessary to establish their equivalence. This study reports on the value of Rasch analysis in this process. Three groups of US pulmonary hypertension (PH) patients participated. The first completed an electronic version of the CAMPHOR Activity Limitation scale (e-sample), and this was compared with two pen-and-paper administered samples (pp1 and pp2). The three databases were combined and analysed for fit to the Rasch model. Equivalence was evaluated by differential item functioning (DIF) analyses. The three datasets were matched randomly in terms of sample size (n = 147). Mean age (years) and percentage of male respondents were as follows: e-sample (51.7, 16.0 %); pp1 (50.0, 14.0 %); pp2 (55.5, 40.4 %). The combined dataset achieved fit to the Rasch model. Two items showed evidence of borderline DIF. Further analyses showed that the inclusion of these items had little impact on Rasch estimates, indicating that the DIF identified was unimportant. Differences between the performance of the electronic and pen-and-paper administrations of the CAMPHOR Activity Limitation scale were minor. The results were successful in showing how the Rasch model can be used to determine the equivalence of alternative formats of PRO measures.
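    Under the dichotomous Rasch model, the probability of endorsing an item depends only on the difference between person ability θ and item difficulty b; DIF asks whether the effective difficulty shifts between groups (here, administration formats). A minimal sketch of the item characteristic curve (all values illustrative):

    ```python
    import math

    def rasch_p(theta, b):
        """Probability of a positive response under the dichotomous Rasch model."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # No DIF: the same item difficulty in both formats gives identical curves
    p_paper = rasch_p(theta=0.5, b=-0.2)
    p_electronic = rasch_p(theta=0.5, b=-0.2)
    ```

    In a DIF analysis, the item's difficulty is estimated separately per format; a significant gap between the two estimates flags non-equivalence of the formats for that item.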

  2. Does climate directly influence NPP globally?

    PubMed

    Chu, Chengjin; Bartlett, Megan; Wang, Youshi; He, Fangliang; Weiner, Jacob; Chave, Jérôme; Sack, Lawren

    2016-01-01

    The need for rigorous analyses of climate impacts has never been more crucial. Current textbooks state that climate directly influences ecosystem annual net primary productivity (NPP), emphasizing the urgent need to monitor the impacts of climate change. A recent paper challenged this consensus, arguing, based on an analysis of NPP for 1247 woody plant communities across global climate gradients, that temperature and precipitation have negligible direct effects on NPP and only perhaps have indirect effects by constraining total stand biomass (Mtot) and stand age (a). The authors of that study concluded that the length of the growing season (lgs) might have a minor influence on NPP, an effect they considered not to be directly related to climate. In this article, we describe flaws that affected that study's conclusions and present novel analyses to disentangle the effects of stand variables and climate in determining NPP. We re-analyzed the same database to partition the direct and indirect effects of climate on NPP, using three approaches: maximum-likelihood model selection, independent-effects analysis, and structural equation modeling. These new analyses showed that about half of the global variation in NPP could be explained by Mtot combined with climate variables and supported strong and direct influences of climate independently of Mtot, both for NPP and for net biomass change averaged across the known lifetime of the stands (ABC = average biomass change). We show that lgs is an important climate variable, intrinsically correlated with, and contributing to, mean annual temperature and precipitation (Tann and Pann), all important climatic drivers of NPP. Our analyses provide guidance for statistical and mechanistic analyses of climate drivers of ecosystem processes for predictive modeling and provide novel evidence supporting the strong, direct role of climate in determining vegetation productivity at the global scale. © 2015 John Wiley & Sons Ltd.

  3. Modeling evapotranspiration over China's landmass from 1979-2012 using three surface models

    NASA Astrophysics Data System (ADS)

    Sun, Shaobo; Chen, Baozhang; Zhang, Huifang; Lin, Xiaofeng

    2017-04-01

    Land surface models (LSMs) are useful tools for estimating land evapotranspiration at grid scale and for long-term applications. Here, the Community Land Model 4.0 (CLM4.0), Dynamic Land Model (DLM) and Variable Infiltration Capacity (VIC) model were driven with observation-based forcing data sets, and a multiple-LSM ensemble-averaged evapotranspiration (ET) product (LSMs-ET) was developed; its spatial-temporal variations were analyzed for the China landmass over the period 1979-2012. Evaluations against measurements from nine flux towers at the site scale and against surface-water-budget-based ET at the regional scale showed that the LSMs-ET performed well over most of China's landmass. Inter-comparisons between the ET estimates and independent ET products from remote sensing and upscaling methods showed fairly consistent patterns across data sets. The LSMs-ET produced a mean annual ET of 351.24±10.7 mm yr-1 over 1979-2012, and its spatial-temporal variation analyses showed that (i) there was an overall significant increasing ET trend of 0.72 mm yr-1 (p < 0.01); (ii) 36.01% of Chinese land had significant increasing trends, ranging from 1 to 9 mm yr-1, while only 6.41% of the area showed significant decreasing trends, ranging from -6.28 to -0.08 mm yr-1. Analyses of ET variations in each climate region clearly showed that the Tibetan Plateau areas were the main contributors to the overall increasing ET trends of China.

  4. Distinguishing State Variability From Trait Change in Longitudinal Data: The Role of Measurement (Non)Invariance in Latent State-Trait Analyses

    PubMed Central

    Geiser, Christian; Keller, Brian T.; Lockhart, Ginger; Eid, Michael; Cole, David A.; Koch, Tobias

    2014-01-01

    Researchers analyzing longitudinal data often want to find out whether the process they study is characterized by (1) short-term state variability, (2) long-term trait change, or (3) a combination of state variability and trait change. Classical latent state-trait (LST) models are designed to measure reversible state variability around a fixed set-point or trait, whereas latent growth curve (LGC) models focus on long-lasting and often irreversible trait changes. In the present paper, we contrast LST and LGC models from the perspective of measurement invariance (MI) testing. We show that establishing a pure state-variability process requires (a) the inclusion of a mean structure and (b) establishing strong factorial invariance in LST analyses. Analytical derivations and simulations demonstrate that LST models with non-invariant parameters can mask the fact that a trait-change or hybrid process has generated the data. Furthermore, the inappropriate application of LST models to trait change or hybrid data can lead to bias in the estimates of consistency and occasion-specificity, which are typically of key interest in LST analyses. Four tips for the proper application of LST models are provided. PMID:24652650

  5. Numerical Modelling of Connections Between Stones in Foundations of Historical Buildings

    NASA Astrophysics Data System (ADS)

    Przewlocki, Jaroslaw; Zielinska, Monika; Grebowski, Karol

    2017-12-01

    The aim of this paper is to analyse the behaviour of old building foundations composed of stones (the main load-bearing elements) and mortar, based on numerical analysis. Some basic aspects of historical foundations are briefly discussed, with an emphasis on their development, techniques, and material. The behaviour of a foundation subjected to the loads transmitted from the upper parts of the structure is described using the finite element method (FEM). The main problems in analysing the foundations of historical buildings are determining the characteristics of the materials and the degree of degradation of the mortar, which is the weakest part of the foundation. Mortar is modelled using the damaged-plastic model, in which exceeding the bearing capacity occurs through the degradation of the material. The damaged-plastic model is the most accurate model for describing the behaviour and properties of mortar because it captures what happens to this material throughout its entire load history. For a uniformly loaded fragment of the foundation, both stresses and strains were analysed. The results of the analysis presented in this paper contribute to further research on the behaviour and modelling of historical building foundations.

  6. The influence of track modelling options on the simulation of rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Di Gialleonardo, Egidio; Braghin, Francesco; Bruni, Stefano

    2012-09-01

    This paper investigates the effect of different models for track flexibility on the simulation of railway vehicle running dynamics on tangent and curved track. To this end, a multi-body model of the rail vehicle is defined including track flexibility effects on three levels of detail: a perfectly rigid pair of rails, a sectional track model and a three-dimensional finite element track model. The influence of the track model on the calculation of the nonlinear critical speed is pointed out and it is shown that neglecting the effect of track flexibility results in an overestimation of the critical speed by more than 10%. Vehicle response to stochastic excitation from track irregularity is also investigated, analysing the effect of track flexibility models on the vertical and lateral wheel-rail contact forces. Finally, the effect of the track model on the calculation of dynamic forces produced by wheel out-of-roundness is analysed, showing that peak dynamic loads are very sensitive to the track model used in the simulation.

  7. Students' Perceptions and Teachers' Self-Ratings of Modelling Civic Virtues: An Exploratory Empirical Study in Dutch Primary Schools

    ERIC Educational Resources Information Center

    Willems, Frank; Denessen, Eddie; Hermans, Chris; Vermeer, Paul

    2012-01-01

    This is a study of teachers' modelling of civic virtues in the classroom. It focusses on three virtues of good citizenship: justice, tolerance and solidarity. The aim is to explore the extent to which teachers can be regarded as models of these virtues. Questionnaires were developed for both students and teachers. Factor analyses showed that the…

  8. Stability and Optimal Harvesting of Modified Leslie-Gower Predator-Prey Model

    NASA Astrophysics Data System (ADS)

    Toaha, S.; Azis, M. I.

    2018-03-01

    This paper studies a modified Leslie-Gower predator-prey population model. The model is stated as a system of first-order differential equations and consists of one predator and one prey. A Holling type II predation function is considered. The predator and prey populations are assumed to be beneficial, and both populations are harvested with constant effort. The existence and stability of the interior equilibrium point are analysed. A linearization method is used to obtain the linearized model, and the eigenvalues are used to justify the stability of the interior equilibrium point. From the analyses, we show that under a certain condition the interior equilibrium point exists and is locally asymptotically stable. For the model with constant harvesting effort, a cost function, a revenue function, and a profit function are considered. The stable interior equilibrium point is then related to the maximum-profit problem as well as the net-present-value-of-revenues problem. We show that there exists a value of the effort that maximizes the profit function and the net present value of revenues while the interior equilibrium point remains stable. This means that the populations can coexist for a long time while the benefit is maximized, even though they are harvested with constant effort.
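    A commonly studied form of the harvested modified Leslie-Gower system with Holling type II predation is dx/dt = x(r1 − b1·x − a1·y/(x+k1)) − E1·x, dy/dt = y(r2 − a2·y/(x+k2)) − E2·y; this sketch is an assumption about the model family, not the paper's exact equations, and every parameter value is hypothetical. A forward-Euler simulation showing convergence to a stable interior equilibrium:

    ```python
    def simulate(x0, y0, r1=1.0, b1=0.1, a1=0.6, k1=1.0,
                 r2=0.5, a2=0.4, k2=1.0, e1=0.1, e2=0.05,
                 dt=0.01, steps=200_000):
        """Forward-Euler integration of a harvested modified Leslie-Gower model."""
        x, y = x0, y0
        for _ in range(steps):
            dx = x * (r1 - b1 * x - a1 * y / (x + k1)) - e1 * x
            dy = y * (r2 - a2 * y / (x + k2)) - e2 * y
            x, y = x + dt * dx, y + dt * dy
        return x, y

    # For these parameters the interior equilibrium solves
    # y* = (r2 - e2)(x* + k2)/a2 and r1 - e1 - b1*x* = a1*y*/(x* + k1),
    # giving x* = 2.25, y* = 1.125 * (x* + 1) = 3.65625
    x_eq, y_eq = simulate(1.0, 1.0)
    ```

    Raising the harvesting efforts e1, e2 shifts the equilibrium, which is the mechanism the paper exploits when relating the stable equilibrium to the maximum-profit problem.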

  9. An assessment on the use of bivariate, multivariate and soft computing techniques for collapse susceptibility in GIS environ

    NASA Astrophysics Data System (ADS)

    Yilmaz, Işik; Marschalko, Marian; Bednarik, Martin

    2013-04-01

    The paper presented herein compares and discusses the use of bivariate, multivariate and soft computing techniques for collapse susceptibility modelling. Conditional probability (CP), logistic regression (LR) and artificial neural network (ANN) models, representing the bivariate, multivariate and soft computing techniques respectively, were used in GIS-based collapse susceptibility mapping in an area of the Sivas basin (Turkey). Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the models and compared by means of their validations. Although Area Under Curve (AUC) values showed that the map obtained from the soft computing (ANN) model was somewhat more accurate than the others, the accuracies of all three models can be considered relatively similar. The results also showed that conditional probability is an effective method for preparing collapse susceptibility maps and is highly compatible with GIS operations.
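    The AUC used for the comparison is equivalent to the Mann-Whitney probability that a randomly chosen collapse location receives a higher susceptibility score than a randomly chosen non-collapse location. A minimal implementation of that rank formulation (the toy scores and labels are made up):

    ```python
    def auc(scores, labels):
        """Area under the ROC curve via the rank (Mann-Whitney) formulation.
        labels: 1 for collapse locations, 0 for non-collapse locations."""
        pos = [s for s, l in zip(scores, labels) if l == 1]
        neg = [s for s, l in zip(scores, labels) if l == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    An AUC of 1.0 means perfect separation and 0.5 means no discrimination, which is why two models with AUCs a few hundredths apart, as here, are reasonably described as "relatively similar".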

  10. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  11. A multi-state model for sick-leave data applied to a randomized control trial study of low back pain.

    PubMed

    Lie, Stein Atle; Eriksen, Hege R; Ursin, Holger; Hagen, Eli Molde

    2008-05-01

    Analysing and presenting data on different outcomes after sick-leave is challenging. The use of extended statistical methods supplies additional information and allows further exploitation of the data. Four hundred and fifty-seven patients, sick-listed for 8-12 weeks for low back pain, were randomized to intervention (n=237) or control (n=220). Outcome was measured as "sick-listed", "returned to work", or "disability pension". The individuals shifted between the three states between one and 22 times (mean 6.4 times). In a multi-state model, shifting between the states was set up in a transition intensity matrix. The probability of being in any of the states was calculated as a transition probability matrix. The effects of the intervention were modelled using a non-parametric model. There was an effect of the intervention for leaving the state sick-listed and shifting to returned to work (relative risk (RR)=1.27, 95% confidence interval (CI) 1.09-1.47). The non-parametric estimates showed an effect of the intervention for leaving sick-listed and shifting to returned to work in the first 6 months. We found a protective effect of the intervention against shifting back to sick-listed between 6 and 18 months. The analyses showed that the probability of staying in the state returned to work did not differ between the intervention and control groups at the end of follow-up (3 years). We demonstrate that these alternative analyses give additional results and increase the strength of the analyses. The simple intervention did not decrease the probability of being on sick-leave in the long term; however, it decreased the time that individuals were on sick-leave.
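    In such a model, the transition intensity matrix Q (off-diagonal entries are instantaneous transition rates; each row sums to zero) yields state-occupancy probabilities through P(t) = exp(Qt). A sketch for the three states, with purely illustrative intensities and disability pension treated as absorbing:

    ```python
    def mat_mul(A, B):
        n = len(A)
        return [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
                for i in range(n)]

    def expm(Q, t, terms=60):
        """exp(Q*t) by truncated Taylor series; adequate when ||Q*t|| is modest."""
        n = len(Q)
        A = [[Q[i][j] * t for j in range(n)] for i in range(n)]
        P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
        term = [row[:] for row in P]
        for k in range(1, terms):
            term = [[v / k for v in row] for row in mat_mul(term, A)]
            P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
        return P

    # States: 0 = sick-listed, 1 = returned to work, 2 = disability pension
    Q = [[-0.30, 0.28, 0.02],
         [0.10, -0.10, 0.00],
         [0.00, 0.00, 0.00]]   # hypothetical monthly intensities
    P = expm(Q, t=12.0)        # 12-month transition probability matrix
    ```

    Each row of P is a probability distribution over the states one year later given the starting state; production analyses would use a robust matrix exponential (e.g. scaling-and-squaring) and estimate Q from the observed transitions.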

  12. Simplified models for dark matter face their consistent completions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent SU(2)_L × U(1)_Y gauge-invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at the 13 TeV LHC.

  13. An ocular biomechanic model for dynamic simulation of different eye movements.

    PubMed

    Iskander, J; Hossny, M; Nahavandi, S; Del Porto, L

    2018-04-11

    Simulating and analysing eye movement is useful for assessing the visual system's contribution to discomfort with respect to body movements, especially in virtual environments where simulation sickness might occur. It can also be used in the design of an eye prosthesis or a humanoid robot eye. In this paper, we present two biomechanic ocular models that are easily integrated into available musculoskeletal models and were previously used to simulate eye-head coordination. The models are used to simulate and analyse eye movements. The proposed models are based on physiological and kinematic properties of the human eye. They incorporate an eye-globe, orbital suspension tissues and six muscles with their connective tissues (pulleys). Pulleys were incorporated in the rectus and inferior oblique muscles. The two proposed models are the passive-pulley and the active-pulley models. Dynamic simulations of different eye movements, including fixation, saccade and smooth pursuit, were performed to validate both models. The resultant force-length curves of the models were similar to the experimental data. The simulation results show that the proposed models are suitable for generating eye-movement simulations with results comparable to other musculoskeletal models. The maximum kinematic root mean square error (RMSE) is 5.68° and 4.35° for the passive- and active-pulley models, respectively. The analysis of the muscle forces showed realistic muscle activation, with increased muscle synergy in the active-pulley model. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. [Cross-cultural adaptation and validation of the PROMIS Global Health scale in the Portuguese language].

    PubMed

    Zumpano, Camila Eugênia; Mendonça, Tânia Maria da Silva; Silva, Carlos Henrique Martins da; Correia, Helena; Arnold, Benjamin; Pinto, Rogério de Melo Costa

    2017-01-23

    This study aimed to perform the cross-cultural adaptation and validation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health scale in the Portuguese language. The ten Global Health items were cross-culturally adapted by the method proposed in the Functional Assessment of Chronic Illness Therapy (FACIT). The instrument's final version in Portuguese was self-administered by 1,010 participants in Brazil. The scale's precision was verified by floor and ceiling effects analysis, internal consistency reliability, and test-retest reliability. Exploratory and confirmatory factor analyses were used to assess the construct's validity and the instrument's dimensionality. The items were calibrated using the Graded Response Model proposed by Samejima. Four global items required adjustments after the pretest. Analysis of the psychometric properties showed that the Global Health scale has good reliability, with a Cronbach's alpha of 0.83 and an intra-class correlation of 0.89. Exploratory and confirmatory factor analyses showed good fit to the previously established two-dimensional model. The Global Physical Health and Global Mental Health scales showed good latent trait coverage according to the Graded Response Model. The PROMIS Global Health items showed equivalence in Portuguese compared to the original version and satisfactory psychometric properties for application in clinical practice and research in the Brazilian population.
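    The internal consistency figure quoted above (Cronbach's alpha = 0.83) follows the standard formula alpha = k/(k−1) · (1 − Σ item variances / variance of the total score). A self-contained sketch on synthetic item data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent items (identical response patterns) give alpha = 1
x = np.array([1.0, 2.0, 3.0, 4.0])
print(cronbach_alpha(np.column_stack([x, x, x])))  # 1.0
```

    With real item matrices the value falls below 1 as inter-item correlations weaken; 0.83 as reported indicates good reliability by conventional thresholds.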

  15. Collapse susceptibility mapping in karstified gypsum terrain (Sivas basin - Turkey) by conditional probability, logistic regression, artificial neural network models

    NASA Astrophysics Data System (ADS)

    Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin

    2010-05-01

    This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models, and they were then compared by means of their validation results. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears more accurate than the other models, and the results also showed that artificial neural networks are a useful tool in the preparation of collapse susceptibility maps and are highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.
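    The AUC validation used to compare the three maps is the area under the ROC curve, which equals the probability that a randomly chosen collapse cell is ranked above a randomly chosen non-collapse cell (the Mann-Whitney statistic). A sketch on toy susceptibility scores (not the study's maps):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    scores: predicted susceptibility; labels: 1 = collapse, 0 = no collapse."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairs where a collapse cell outranks a non-collapse cell; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0 (perfect ranking)
```

    A model ranking collapse cells no better than chance gives AUC = 0.5, so comparing AUC across the CP, LR and ANN maps directly compares their ranking quality.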

  16. Modelling of capital asset pricing by considering the lagged effects

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Bon, A. Talib bin; Supian, S.

    2017-01-01

    In this paper, the problem of modelling the Capital Asset Pricing Model (CAPM) with lagged effects is discussed. It is assumed that asset returns are influenced by the market return and the return on risk-free assets. The relationship between asset returns, the market return, and the return on risk-free assets is analysed using a CAPM regression equation and a distributed-lag CAPM regression equation. Building on the distributed-lag CAPM regression equation, this paper also develops a Koyck-transformation CAPM regression equation. The results show that the Koyck-transformation CAPM regression equation has the advantage of simplicity, as it requires only three parameters, compared with the distributed-lag CAPM regression equation.
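    The three-parameter advantage can be made concrete: the Koyck transformation collapses an infinite geometric distributed lag, r_t = α + β Σ_i λ^i x_{t−i} + u_t, into r_t = α(1−λ) + β x_t + λ r_{t−1} + v_t. A synthetic-data sketch (all parameter values are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, lam = 0.01, 1.2, 0.5     # illustrative "true" parameters
T = 2000
x = rng.normal(0.0, 0.02, T)          # market excess returns
r = np.zeros(T)
for t in range(1, T):
    # Koyck form: r_t = alpha*(1-lam) + beta*x_t + lam*r_{t-1} + noise
    r[t] = alpha * (1 - lam) + beta * x[t] + lam * r[t - 1] + rng.normal(0, 1e-4)

# OLS on [1, x_t, r_{t-1}] recovers the three Koyck parameters
X = np.column_stack([np.ones(T - 1), x[1:], r[:-1]])
coef, *_ = np.linalg.lstsq(X, r[1:], rcond=None)
intercept, beta_hat, lam_hat = coef
print(beta_hat, lam_hat)  # close to 1.2 and 0.5
```

    Estimating the un-transformed distributed lag would require one coefficient per lag; the transformed equation needs only the intercept, β, and λ.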

  17. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples.

    PubMed

    Abma, Femke I; Bültmann, Ute; Amick Iii, Benjamin C; Arends, Iris; Dorland, Heleen F; Flach, Peter A; van der Klink, Jac J L; van de Ven, Hardy A; Bjørner, Jakob Bue

    2017-09-09

    Objective The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with mixed clinical conditions and job types to evaluate the comparability of the scale structure. Methods Confirmatory factor and multi-group analyses were conducted in six cross-sectional working samples (total N = 2433) to evaluate and compare a five-factor model structure of the WRFQ (work scheduling demands, output demands, physical demands, mental and social demands, and flexibility demands). Model fit was assessed against RMSEA ≤ 0.08 and CFI ≥ 0.95. After fitting the five-factor model, the multidimensional structure of the instrument was evaluated across samples using a second-order factor model. Results The factor structure was robust across samples and a multi-group model had adequate fit (RMSEA = 0.063, CFI = 0.972). In sample-specific analyses, minor modifications were necessary in three samples (final RMSEA between 0.055 and 0.080, final CFI between 0.955 and 0.989). Applying the previous first-order specifications, a second-order factor model had adequate fit in all samples. Conclusion A five-factor model of the WRFQ showed consistent structural validity across samples. A second-order factor model showed adequate fit, but the second-order factor loadings varied across samples. Therefore, subscale scores are recommended for comparisons across different clinical and working samples.
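    The fit indices used above are standard chi-square-based quantities: RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))), and CFI compares the fitted model's non-centrality to that of the independence baseline. A sketch of the usual formulas (the χ² values below are hypothetical, not the study's):

```python
import numpy as np

def rmsea(chi2, df, n):
    """Root mean square error of approximation from model chi-square."""
    return float(np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1))))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: 1 - model non-centrality / baseline non-centrality."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

# Hypothetical values for a sample of N = 2433
print(rmsea(300.0, 200, 2433))       # ~0.014
print(cfi(210.0, 200, 2000.0, 190))  # ~0.994
```

    With these formulas the cut-offs quoted in the abstract (RMSEA ≤ 0.08, CFI ≥ 0.95) translate directly into bounds on the model's excess chi-square per degree of freedom.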

  18. Urinary symptoms following external beam radiotherapy of the prostate: Dose-symptom correlates with multiple-event and event-count models.

    PubMed

    Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; House, Michael J; Kennedy, Angel; Joseph, David J; Denham, James W

    2015-11-01

    This study aimed to compare urinary dose-symptom correlates after external beam radiotherapy of the prostate using commonly utilised peak-symptom models against multiple-event and event-count models, which account for repeated events. Urinary symptoms (dysuria, haematuria, incontinence and frequency) from 754 participants in the TROG 03.04-RADAR trial were analysed. Relative (R1-R75 Gy) and absolute (A60-A75 Gy) bladder dose-surface areas receiving more than a threshold dose, and equivalent uniform doses using exponent a (range: a ∈ [1 … 100]), were derived. The dose-symptom correlates were analysed using peak-symptom (logistic), multiple-event (generalised estimating equation) and event-count (negative binomial regression) models. Stronger dose-symptom correlates were found for incontinence and frequency using multiple-event and/or event-count models. For dysuria and haematuria, similar or better relationships were found using peak-symptom models. Dysuria, haematuria and high-grade (⩾ 2) incontinence were associated with high dose (R61-R71 Gy). Frequency and low-grade (⩾ 1) incontinence were associated with low and intermediate dose-surface parameters (R13-R41 Gy). Frequency showed a parallel behaviour (a = 1) while dysuria, haematuria and incontinence showed a more serial behaviour (a = 4 to a ⩾ 100). Relative dose-surface parameters showed stronger dose-symptom associations. For certain endpoints, the multiple-event and event-count models provide stronger correlates than peak-symptom models. Accounting for multiple events may be advantageous for a more complete understanding of urinary dose-symptom relationships. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
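    The seriality exponent a enters through the generalised equivalent uniform dose, EUD = (Σ v_i d_i^a)^(1/a), where v_i are fractional areas: a = 1 gives the mean dose (parallel behaviour) and large a approaches the maximum dose (serial behaviour). A sketch on a toy dose-surface histogram (values illustrative):

```python
import numpy as np

def gEUD(dose, frac_area, a):
    """Generalised equivalent uniform dose over a dose-surface histogram.
    dose: bin doses (Gy); frac_area: fractional areas summing to 1;
    a: seriality exponent (a = 1 -> mean dose, a >> 1 -> approaches max dose)."""
    dose = np.asarray(dose, dtype=float)
    frac_area = np.asarray(frac_area, dtype=float)
    return float((frac_area * dose ** a).sum() ** (1.0 / a))

dose = np.array([10.0, 40.0, 70.0])
area = np.array([0.5, 0.3, 0.2])
print(gEUD(dose, area, 1))    # 31.0 (mean dose; parallel organ response)
print(gEUD(dose, area, 100))  # ~68.9, dominated by the hottest bin (serial)
```

    This illustrates why frequency correlating best at a = 1 suggests a whole-surface (parallel) effect, while dysuria and haematuria correlating at a ⩾ 100 point to hot-spot-driven (serial) effects.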

  19. Unification of gauge and Yukawa couplings

    NASA Astrophysics Data System (ADS)

    Abdalgabar, Ammar; Khojali, Mohammed Omer; Cornell, Alan S.; Cacciapaglia, Giacomo; Deandrea, Aldo

    2018-01-01

    The unification of gauge and top Yukawa couplings is an attractive feature of gauge-Higgs unification models in extra dimensions. This feature is usually considered difficult to obtain on the basis of simple group theory analyses. We reconsider a minimal toy model including the renormalisation group running at one loop. Our results show that the gauge couplings unify asymptotically at high energies, and that this may result from the presence of a UV fixed point. The Yukawa coupling in our toy model is enhanced at low energies, showing that a genuine unification of gauge and Yukawa couplings may be achieved.

  20. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used.
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
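    The scaling standardisations discussed above (equal volume, surface area, or length) reduce to a single uniform geometric scale factor applied to all nodal coordinates. A minimal sketch (the mesh sizes are hypothetical, not the crocodile models'):

```python
import numpy as np

def scale_factor(value, reference, mode="volume"):
    """Uniform geometric scale factor bringing a model's size measure to a
    reference: volume scales with s**3, surface area with s**2, length with s."""
    exponent = {"volume": 1.0 / 3.0, "area": 1.0 / 2.0, "length": 1.0}[mode]
    return (reference / value) ** exponent

# Hypothetical example: scale a mandible mesh of 8 cm^3 to a 1 cm^3 reference
s = scale_factor(8.0, 1.0, "volume")
print(s)  # 0.5 -> multiply all nodal coordinates by 0.5
```

    Because stresses under a fixed load scale differently with volume- versus area-standardisation, the choice of mode is itself one of the modelling assumptions the sensitivity analysis interrogates.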

  1. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817

  2. Modeling and Analysis of Process Parameters for Evaluating Shrinkage Problems During Plastic Injection Molding of a DVD-ROM Cover

    NASA Astrophysics Data System (ADS)

    Öktem, H.

    2012-01-01

    Plastic injection molding plays a key role in the production of high-quality plastic parts. Shrinkage is one of the most significant quality problems of a plastic part in plastic injection molding. This article focuses on modeling and analyzing the effects of process parameters on shrinkage by evaluating the quality of the plastic part of a DVD-ROM cover made of Acrylonitrile Butadiene Styrene (ABS) polymer. An effective regression model was developed to determine the mathematical relationship between the process parameters (mold temperature, melt temperature, injection pressure, injection time, and cooling time) and the volumetric shrinkage by utilizing the analysis data. Finite element (FE) analyses designed by a Taguchi (L27) orthogonal array were run in the Moldflow simulation program. Analysis of variance (ANOVA) was then performed to check the adequacy of the regression model and to determine the effect of the process parameters on the shrinkage. Experiments were conducted to check the accuracy of the regression model against the FE analyses obtained from Moldflow. The results show that the regression model agrees very well with the FE analyses and the experiments. From this, it can be concluded that this study succeeded in modeling the shrinkage problem in our application.
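    The regression step above fits shrinkage as a function of the five process parameters and then checks adequacy. A least-squares sketch on synthetic data (the 27 runs echo an L27 design's run count, but the coefficients and responses are invented, not the study's Moldflow results):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 27  # an L27 orthogonal array yields 27 runs
# Hypothetical normalised process parameters: mold temp, melt temp,
# injection pressure, injection time, cooling time
X = rng.uniform(0.0, 1.0, size=(n, 5))
# Synthetic "shrinkage" response with known coefficients plus small noise
true_coef = np.array([0.4, 1.1, -0.8, 0.2, 0.5])
y = 2.0 + X @ true_coef + rng.normal(0.0, 0.01, n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(r2)  # close to 1 for this low-noise synthetic data
```

    In the study itself the adequacy check is done with ANOVA on the fitted model; R² shown here is the simplest summary of the same idea, the share of response variance the regression explains.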

  3. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified in two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
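    Sobol's Variance Decomposition estimates first-order indices S_i = Var(E[Y|X_i]) / Var(Y). A Monte-Carlo pick-freeze sketch on a simple additive test function (not the rainfall-runoff fuzzy models of the study), using the standard Saltelli-type estimator:

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """Monte-Carlo first-order Sobol' indices via a pick-freeze scheme.
    model: vectorised function of an (n, d) input sample on [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, (n, d))
    B = rng.uniform(0.0, 1.0, (n, d))
    yA, yB = model(A), model(B)
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]  # freeze column i from the second sample
        # Estimator of Var(E[Y|X_i]) / Var(Y)
        S[i] = np.mean(yB * (model(AB) - yA)) / yA.var()
    return S

# Additive test function: exact first-order indices are 0.2 and 0.8
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
print(S)  # approximately [0.2, 0.8]
```

    Unlike RSA, indices of this kind also generalise to interaction (higher-order) terms, which is why SVD suits response surfaces with parameter interactions.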

  4. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Treesearch

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
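    Statistical power for shock detection can be simulated by repeatedly generating series with a known intervention and counting detections. A deliberately simplified sketch (an AR(1) series with a mid-sample level shift tested by a naive t-test, not the article's intervention models; as the comment notes, the naive test is oversized under autocorrelation):

```python
import numpy as np

def detection_power(shift, n_obs=100, n_sims=300, phi=0.5, seed=0):
    """Share of simulated AR(1) series in which a mid-series level shift
    is flagged by a two-sided t-test (|t| > 1.96) on a step dummy."""
    rng = np.random.default_rng(seed)
    step = (np.arange(n_obs) >= n_obs // 2).astype(float)  # intervention dummy
    hits = 0
    for _ in range(n_sims):
        e = rng.normal(0.0, 1.0, n_obs)
        y = np.zeros(n_obs)
        for t in range(1, n_obs):
            y[t] = phi * y[t - 1] + e[t]
        y += shift * step
        X = np.column_stack([np.ones(n_obs), step])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        s2 = resid @ resid / (n_obs - 2)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        hits += abs(coef[1] / se) > 1.96
    return hits / n_sims

print(detection_power(0.0))  # false-positive rate, inflated by autocorrelation
print(detection_power(2.0))  # power rises with the size of the shock
```

    Proper intervention models (as in the article) account for the autoregressive error structure, which is exactly what keeps the false-positive rate near its nominal level.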

  5. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    PubMed Central

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
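    The intervals' method's generated variables are straightforward to compute: each one is the percentage of total area whose stress falls in a given interval. A sketch, assuming per-element von Mises stresses and areas extracted from an FE mesh (toy values):

```python
import numpy as np

def interval_variables(stress, area, edges):
    """Per-interval percentage of total area occupied by stress values.
    stress: per-element stress; area: per-element area;
    edges: interval boundaries (len k+1 for k interval variables)."""
    stress = np.asarray(stress, dtype=float)
    area = np.asarray(area, dtype=float)
    idx = np.digitize(stress, edges) - 1  # interval index of each element
    k = len(edges) - 1
    out = np.zeros(k)
    for i in range(k):
        out[i] = area[idx == i].sum()
    return 100.0 * out / area.sum()

stress = np.array([0.5, 1.5, 2.5, 2.6])
area = np.array([1.0, 1.0, 1.0, 1.0])
print(interval_variables(stress, area, edges=[0, 1, 2, 3]))  # [25. 25. 50.]
```

    Because the variables are area-weighted percentages rather than raw element values, the result is insensitive to mesh density, which is the property the authors highlight.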

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hektor, Andi; Marzola, Luca; Institute of Physics, University of Tartu, Ravila 14c, 50411 Tartu

    Motivated by the recent indications for a 750 GeV resonance in the di-photon final state at the LHC, in this work we analyse the compatibility of the excess with the broad photon excess detected at the Galactic Centre. Intriguingly, by analysing the parameter space of an effective model in which a 750 GeV pseudoscalar particle mediates the interaction between the Standard Model and a scalar dark sector, we prove the compatibility of the two signals. We show, however, that the LHC mono-jet searches and the Fermi LAT measurements strongly limit the viable parameter space. We comment on the possible impact of cosmic antiproton flux measurements by the AMS-02 experiment.

  7. Asteroid Bennu Temperature Maps for OSIRIS-REx Spacecraft and Instrument Thermal Analyses

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.; Emery, Josh; Delbo, Marco

    2014-01-01

    A thermophysical model has been developed to generate asteroid Bennu surface temperature maps for OSIRIS-REx spacecraft and instrument thermal design and analyses at the Critical Design Review (CDR). Two-dimensional temperature maps for worst hot and worst cold cases are used in Thermal Desktop to assure adequate thermal design margins. To minimize the complexity of the Bennu geometry in Thermal Desktop, it is modeled as a sphere instead of the radar shape. The post-CDR updated thermal inertia and a modified approach show that the new surface temperature predictions are more benign. Therefore the CDR Bennu surface temperature predictions are conservative.
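    A thermophysical model of this kind starts from instantaneous radiative equilibrium on each surface element: absorbed sunlight balances thermal emission. A minimal sketch with thermal inertia neglected (the Bennu-like numbers, albedo ≈ 0.045 and ~1.13 AU, are approximate illustrations):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar_flux, albedo, emissivity, incidence_deg):
    """Instantaneous radiative-equilibrium surface temperature:
    (1 - A) * S * cos(i) = eps * sigma * T**4, solved for T."""
    cos_i = max(np.cos(np.radians(incidence_deg)), 0.0)
    absorbed = (1.0 - albedo) * solar_flux * cos_i
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# Approximate subsolar case near Bennu's ~1.13 AU semi-major axis
flux = 1361.0 / 1.13**2  # W/m^2
print(equilibrium_temperature(flux, albedo=0.045, emissivity=0.9,
                              incidence_deg=0.0))  # a few hundred kelvin
```

    A full thermophysical model adds thermal inertia and rotation, which lower dayside peaks and raise nightside temperatures relative to this equilibrium bound; mapping such temperatures onto a sphere is what feeds the Thermal Desktop analysis described above.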

  8. Viscous and thermal modelling of thermoplastic composites forming process

    NASA Astrophysics Data System (ADS)

    Guzman, Eduardo; Liang, Biao; Hamila, Nahiene; Boisse, Philippe

    2016-10-01

    Thermoforming thermoplastic prepregs is a fast manufacturing process. It is suitable for automotive composite parts manufacturing. The simulation of thermoplastic prepreg forming is achieved by alternate thermal and mechanical analyses. The thermal properties are obtained from a mesoscopic analysis and a homogenization procedure. The forming simulation is based on a viscous-hyperelastic approach. The thermal simulations define the coefficients of the mechanical model that depend on the temperature. The forming simulations modify the boundary conditions and the internal geometry of the thermal analyses. The comparison of the simulation with an experimental thermoforming of a part representative of automotive applications shows the efficiency of the approach.

  9. Relativistic corrections to fractal analyses of the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Célérier, M.-N.; Thieberger, R.

    2001-02-01

    The effect of curvature on the results of fractal analyses of the galaxy distribution is investigated. We show that, if the universe satisfies the criteria of a wide class of parabolic homogeneous models, the observers measuring the fractal index with the integrated conditional density procedure may use the Hubble formula, without having to allow for curvature, out to distances of 600 Mpc, and possibly far beyond. This contradicts a previous claim by Ribeiro that, in the Einstein-de Sitter case, relativistic corrections should be taken into account at much smaller scales. We state for the class of cosmological models under study, and give grounds for conjecture for others, that the averaging procedure has a smoothing effect and that, therefore, the redshift-distance relation provides an upper limit to the relativistic corrections involved in such analyses.
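    The fractal index measured by counts-in-spheres procedures is the log-log slope of the average count N(<r) ∝ r^D. A minimal sketch on synthetic counts (a homogeneous distribution, for which D = 3):

```python
import numpy as np

def fractal_index(radii, counts):
    """Fractal index D from average counts-in-spheres N(<r) ~ r^D,
    estimated as the slope of a log-log least-squares fit."""
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

# Synthetic homogeneous case: counts grow as r^3, so D should be 3
r = np.array([10.0, 20.0, 40.0, 80.0])
print(fractal_index(r, 0.01 * r**3))  # 3.0
```

    The relativistic question in the record above is whether the distances entering `radii` may be taken from the simple Hubble formula; the authors' answer is yes, out to ~600 Mpc for the model class considered.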

  10. Local influence for generalized linear models with missing covariates.

    PubMed

    Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G

    2009-12-01

    In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.

  11. Modeling Nonstationarity in Space and Time

    PubMed Central

    2017-01-01

    Summary: We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. PMID:28134977

  12. Modeling nonstationarity in space and time.

    PubMed

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  13. High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models

    NASA Astrophysics Data System (ADS)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David

    2014-12-01

    High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference-gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with the near-surface meteorological fields derived from short-range forecasts, operational analyses, and observed temperatures and humidity. Results show that the outputs of the surface model obtained in the present study have the potential to improve the near-surface atmospheric fields in extended-range LAM integrations.
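    Nudging a prognostic field toward a reference data set is Newtonian relaxation: dX/dt = (X_ref − X)/τ. A minimal sketch with an explicit time step (the field values and relaxation timescale are hypothetical):

```python
import numpy as np

def nudge(field, reference, dt, tau):
    """One explicit step of Newtonian relaxation of a prognostic surface
    field toward a reference analysis: dX/dt = (X_ref - X) / tau."""
    return field + (reference - field) * (dt / tau)

# A field offset from its reference decays toward it on the timescale tau
x, x_ref = 290.0, 280.0       # e.g. surface temperature, K
for _ in range(240):          # 10 days of 1-hour steps, tau = 1 day
    x = nudge(x, x_ref, dt=3600.0, tau=86400.0)
print(x)  # very close to 280
```

    Choosing τ trades off fidelity to the reference analysis against letting the LAM's own surface physics act, which is why the quality of the reference fields generated by the offline surface model matters.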

  14. Bayesian hierarchical models for cost-effectiveness analyses that use data from cluster randomized trials.

    PubMed

    Grieve, Richard; Nixon, Richard; Thompson, Simon G

    2010-01-01

    Cost-effectiveness analyses (CEA) may be undertaken alongside cluster randomized trials (CRTs) where randomization is at the level of the cluster (for example, the hospital or primary care provider) rather than the individual. Costs (and outcomes) within clusters may be correlated so that the assumption made by standard bivariate regression models, that observations are independent, is incorrect. This study develops a flexible modeling framework to acknowledge the clustering in CEA that use CRTs. The authors extend previous Bayesian bivariate models for CEA of multicenter trials to recognize the specific form of clustering in CRTs. They develop new Bayesian hierarchical models (BHMs) that allow mean costs and outcomes, and also variances, to differ across clusters. They illustrate how each model can be applied using data from a large (1732 cases, 70 primary care providers) CRT evaluating alternative interventions for reducing postnatal depression. The analyses compare cost-effectiveness estimates from BHMs with standard bivariate regression models that ignore the data hierarchy. The BHMs show high levels of cost heterogeneity across clusters (intracluster correlation coefficient, 0.17). Compared with standard regression models, the BHMs yield substantially increased uncertainty surrounding the cost-effectiveness estimates, and altered point estimates. The authors conclude that ignoring clustering can lead to incorrect inferences. The BHMs that they present offer a flexible modeling framework that can be applied more generally to CEA that use CRTs.
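    The intracluster correlation coefficient quoted above (0.17) is the between-cluster share of total variance. A one-way ANOVA estimator sketch on synthetic clustered costs (70 clusters of 25 echo the trial's scale, but the data are simulated, not the trial's):

```python
import numpy as np

def intracluster_correlation(values, clusters):
    """One-way ANOVA estimate of the intracluster correlation coefficient:
    between-cluster variance / (between + within variance)."""
    values = np.asarray(values, dtype=float)
    clusters = np.asarray(clusters)
    groups = [values[clusters == c] for c in np.unique(clusters)]
    k, n = len(groups), len(values)
    grand = values.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    msb, msw = ssb / (k - 1), ssw / (n - k)
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)  # avg cluster size
    s2_between = max((msb - msw) / n0, 0.0)
    return s2_between / (s2_between + msw)

rng = np.random.default_rng(2)
k, m = 70, 25
cluster_effects = rng.normal(0.0, np.sqrt(0.17 / 0.83), k)  # target ICC = 0.17
clusters = np.repeat(np.arange(k), m)
costs = cluster_effects[clusters] + rng.normal(0.0, 1.0, k * m)
print(intracluster_correlation(costs, clusters))  # roughly 0.17
```

    An ICC of this size is exactly the situation where ignoring the hierarchy understates uncertainty, which is the failure mode the Bayesian hierarchical models above are designed to avoid.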

  15. Psychometric properties of the Liebowitz Social Anxiety Scale (LSAS) in a longitudinal study of African Americans with anxiety disorders.

    PubMed

    Beard, Courtney; Rodriguez, Benjamin F; Moitra, Ethan; Sibrava, Nicholas J; Bjornsson, Andri; Weisberg, Risa B; Keller, Martin B

    2011-06-01

    The Liebowitz Social Anxiety Scale (LSAS) is a widely used measure of social anxiety. However, no study has examined the psychometric properties of the LSAS in an African American sample. The current study examined the LSAS characteristics in 97 African Americans diagnosed with an anxiety disorder. Overall, the original LSAS subscales showed excellent internal consistency and temporal stability. Similar to previous reports, fear and avoidance subscales were so highly correlated that they yielded redundant information. Confirmatory factor analyses for three previously proposed models failed to demonstrate an excellent fit to our data. However, a four-factor model showed minimally acceptable fit. Overall, the LSAS performed similarly in our African American sample as in previous European American samples. Exploratory factor analyses are warranted to determine whether a better factor structure exists for African Americans. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Bias in Cross-Sectional Analyses of Longitudinal Mediation: Partial and Complete Mediation under an Autoregressive Model

    ERIC Educational Resources Information Center

    Maxwell, Scott E.; Cole, David A.; Mitchell, Melissa A.

    2011-01-01

    Maxwell and Cole (2007) showed that cross-sectional approaches to mediation typically generate substantially biased estimates of longitudinal parameters in the special case of complete mediation. However, their results did not apply to the more typical case of partial mediation. We extend their previous work by showing that substantial bias can…

  17. Active lifestyles in older adults: an integrated predictive model of physical activity and exercise

    PubMed Central

    Galli, Federica; Chirico, Andrea; Mallia, Luca; Girelli, Laura; De Laurentiis, Michelino; Lucidi, Fabio; Giordano, Antonio; Botti, Gerardo

    2018-01-01

Physical activity and exercise have been identified as behaviors that preserve physical and mental health in older adults. The aim of the present study was to test the Integrated Behavior Change model for exercise and physical activity behaviors. The study evaluated two different samples of older adults: the first engaged in exercise classes, the second engaged in spontaneous physical activity. The key analyses relied on Variance-Based Structural Modeling, performed with the WARP PLS 6.0 statistical software. The analyses estimated the Integrated Behavior Change model in predicting exercise and physical activity, in a longitudinal design across two months of assessment. The tested models exhibited a good fit with the observed data derived from the model focusing on exercise, as well as with those derived from the model focusing on physical activity. Results also showed some effects and relations specific to each behavioral context. Results may form a starting point for future experimental and intervention research. PMID:29875997

  18. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    PubMed

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

To re-evaluate a recently suggested approach to quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a diagnostic performance similar to LLC in receiver-operating-characteristic analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in one model resulted in a diagnostic performance superior to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to the traditional Lake Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to the Lake Louise criteria. • Adding maxT2 and madSD to LGE further increases diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.

  19. The morphological state space revisited: what do phylogenetic patterns in homoplasy tell us about the number of possible character states?

    PubMed Central

    Hoyal Cuthill, Jennifer F.

    2015-01-01

Biological variety and major evolutionary transitions suggest that the space of possible morphologies may have varied among lineages and through time. However, most models of phylogenetic character evolution assume that the potential state space is finite. Here, I explore what the morphological state space might be like, by analysing trends in homoplasy (repeated derivation of the same character state). Analyses of ten published character matrices are compared against computer simulations with different state space models: infinite states, finite states, ordered states and an ‘inertial’ model, simulating phylogenetic constraints. Of these, only the infinite states model results in evolution without homoplasy, a prediction which is not generally met by real phylogenies. Many authors have interpreted the ubiquity of homoplasy as evidence that the number of evolutionary alternatives is finite. However, homoplasy is also predicted by phylogenetic constraints on the morphological distance that can be traversed between ancestor and descendant. Phylogenetic rarefaction (sub-sampling) shows that finite and inertial state spaces do produce contrasting trends in the distribution of homoplasy. Two clades show trends characteristic of phylogenetic inertia, with decreasing homoplasy (increasing consistency index) as more distantly related taxa are sub-sampled. One clade shows increasing homoplasy, suggesting exhaustion of finite states. Different clades may, therefore, show different patterns of character evolution. However, when parsimony-uninformative characters are excluded (which may occur without documentation in cladistic studies), it may no longer be possible to distinguish inertial and finite state spaces. Interestingly, inertial models predict that homoplasy should be clustered among comparatively close relatives (parallel evolution), whereas finite state models do not. If morphological evolution is often inertial in nature, then homoplasy (false homology) may primarily occur between close relatives, perhaps being replaced by functional analogy at higher taxonomic scales. PMID:26640650
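The consistency index invoked above has a simple definition: the minimum conceivable number of character-state changes divided by the number of steps observed on the tree. A stdlib sketch under that standard definition (the matrix and tree length in the test are hypothetical, not from the ten published matrices):

```python
def min_steps(matrix):
    """Minimum conceivable steps per character: one fewer than the number of
    distinct states observed (rows = taxa, columns = characters)."""
    return sum(len(set(column)) - 1 for column in zip(*matrix))

def consistency_index(matrix, tree_length):
    """CI = m / s: 1.0 means no homoplasy; lower values mean repeated derivations."""
    return min_steps(matrix) / tree_length
```

Decreasing homoplasy with more distant sub-sampled taxa then shows up directly as an increasing CI.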

  20. Reliability of four models for clinical gait analysis.

    PubMed

    Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P

    2017-05-01

Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses that might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA in eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing than in CP participants. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates and allows additional musculoskeletal analysis of surgically adjustable parameters, e.g. muscle-tendon lengths, and is therefore a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.

  1. Sex-specific effect of CPB2 Ala147Thr but not Thr325Ile variants on the risk of venous thrombosis: A comprehensive meta-analysis

    PubMed Central

    Zwingerman, Nora; Medina-Rivera, Alejandra; Kassam, Irfahan; Wilson, Michael D.; Morange, Pierre-Emmanuel; Trégouët, David-Alexandre; Gagnon, France

    2017-01-01

Background Thrombin activatable fibrinolysis inhibitor (TAFI), encoded by the Carboxypeptidase B2 gene (CPB2), is an inhibitor of fibrinolysis and plays a role in the pathogenesis of venous thrombosis. Experimental findings support a functional role of genetic variants in CPB2, while epidemiological studies have been unable to confirm associations with risk of venous thrombosis. Sex-specific effects could underlie the observed inconsistent associations between CPB2 genetic variants and venous thrombosis. Methods A comprehensive literature search was conducted for associations of the Ala147Thr and Thr325Ile variants with venous thrombosis. Authors were contacted to provide sex-specific genotype counts from their studies. Combined and sex-specific random-effects meta-analyses were used to estimate pooled effect estimates for primary and secondary genetic models. Results A total of 17 studies met the inclusion criteria. A sex-specific meta-analysis applying a dominant model supported a protective effect of Ala147Thr on venous thrombosis in females (OR = 0.81, 95% CI: 0.68–0.97; p = 0.018), but not in males (OR = 1.06, 95% CI: 0.96–1.16; p = 0.263). Thr325Ile did not show a sex-specific effect but showed variation in allele frequencies by geographic region. A subgroup analysis of studies in European countries showed a decreased risk under a recessive model (OR = 0.83, 95% CI: 0.71–0.97, p = 0.021) for venous thrombosis. Conclusions A comprehensive literature review, including unpublished data, provided greater statistical power for the analyses and decreased the likelihood of publication bias influencing the results. Sex-specific analyses explained apparent discrepancies across genetic studies of Ala147Thr and venous thrombosis. Meanwhile, careful selection of genetic models based on population genetics and evolutionary and biological knowledge can increase power by reducing the need to adjust for testing multiple models. PMID:28552956

  2. Sex-specific effect of CPB2 Ala147Thr but not Thr325Ile variants on the risk of venous thrombosis: A comprehensive meta-analysis.

    PubMed

    Zwingerman, Nora; Medina-Rivera, Alejandra; Kassam, Irfahan; Wilson, Michael D; Morange, Pierre-Emmanuel; Trégouët, David-Alexandre; Gagnon, France

    2017-01-01

Thrombin activatable fibrinolysis inhibitor (TAFI), encoded by the Carboxypeptidase B2 gene (CPB2), is an inhibitor of fibrinolysis and plays a role in the pathogenesis of venous thrombosis. Experimental findings support a functional role of genetic variants in CPB2, while epidemiological studies have been unable to confirm associations with risk of venous thrombosis. Sex-specific effects could underlie the observed inconsistent associations between CPB2 genetic variants and venous thrombosis. A comprehensive literature search was conducted for associations of the Ala147Thr and Thr325Ile variants with venous thrombosis. Authors were contacted to provide sex-specific genotype counts from their studies. Combined and sex-specific random-effects meta-analyses were used to estimate pooled effect estimates for primary and secondary genetic models. A total of 17 studies met the inclusion criteria. A sex-specific meta-analysis applying a dominant model supported a protective effect of Ala147Thr on venous thrombosis in females (OR = 0.81, 95% CI: 0.68–0.97; p = 0.018), but not in males (OR = 1.06, 95% CI: 0.96–1.16; p = 0.263). Thr325Ile did not show a sex-specific effect but showed variation in allele frequencies by geographic region. A subgroup analysis of studies in European countries showed a decreased risk under a recessive model (OR = 0.83, 95% CI: 0.71–0.97, p = 0.021) for venous thrombosis. A comprehensive literature review, including unpublished data, provided greater statistical power for the analyses and decreased the likelihood of publication bias influencing the results. Sex-specific analyses explained apparent discrepancies across genetic studies of Ala147Thr and venous thrombosis. Meanwhile, careful selection of genetic models based on population genetics and evolutionary and biological knowledge can increase power by reducing the need to adjust for testing multiple models.
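The pooled odds ratios above come from random-effects meta-analysis; the classic procedure is DerSimonian-Laird pooling of per-study log odds ratios. A stdlib sketch of that procedure (function name and the inputs in the test are illustrative, not the 17 studies' estimates):

```python
import math

def random_effects_pool(log_ors, ses):
    """DerSimonian-Laird random-effects pooling of per-study log odds ratios.
    Returns the pooled OR and its 95% confidence interval."""
    k = len(log_ors)
    w = [1.0 / se ** 2 for se in ses]                      # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, log_ors)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_ors))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_ors)) / sum(w_star)
    se_p = math.sqrt(1.0 / sum(w_star))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se_p),
                              math.exp(pooled + 1.96 * se_p))
```

A sex-specific analysis like the one above simply runs this pooling separately on the female and male genotype-derived estimates.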

  3. Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.

    PubMed

    Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette

    2011-10-01

To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture in a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital in each of seven Scottish Health Boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. Confirmatory factor analyses were then performed to compare the fit of two competing models (the 10-factor alternative model vs the 12-factor original model). A Satorra-Bentler scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in the Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.

  4. Regional frequency analysis of extreme rainfall for the Baltimore Metropolitan region based on stochastic storm transposition

    NASA Astrophysics Data System (ADS)

    Zhou, Z.; Smith, J. A.; Yang, L.; Baeck, M. L.; Wright, D.; Liu, S.

    2017-12-01

Regional frequency analyses of extreme rainfall are critical for the development of engineering hydrometeorology procedures. In conventional approaches, the assumptions that 'design storms' have specified time profiles and are spatially uniform are commonly applied but often inappropriate, especially over regions with heterogeneous environments (due to topography, water-land boundaries and land surface properties). In this study, we present regional frequency analyses of extreme rainfall for the Baltimore study region, combining storm catalogs of rainfall fields derived from weather radar with stochastic storm transposition (SST, developed by Wright et al., 2013). The study region is Dead Run, a small (14.3 km2) urban watershed in the Baltimore Metropolitan region. Our analyses build on previous empirical and modeling studies showing pronounced spatial heterogeneities in rainfall due to the complex environment, including the Chesapeake Bay to the east, mountainous terrain to the west and urbanization within the region. We expand the original SST approach by applying a multiplier field that accounts for spatial heterogeneities in extreme rainfall. We also characterize the spatial heterogeneity of the extreme rainfall distribution through analyses of rainfall fields in the storm catalogs. We examine the characteristics of regional extreme rainfall and derive intensity-duration-frequency (IDF) curves using the SST approach for heterogeneous regions. Our results highlight the significant heterogeneity of extreme rainfall in this region. IDF estimates show the advantages of SST in capturing the space-time structure of extreme rainfall. We also illustrate the application of SST analyses to flood frequency analyses using a distributed hydrological model. Reference: Wright, D. B., J. A. Smith, G. Villarini, and M. L. Baeck (2013), Estimating the frequency of extreme rainfall using weather radar and stochastic storm transposition, J. Hydrol., 488, 150-165.

  5. Personality and the latent structure of PTSD comorbidity

    PubMed Central

    Miller, Mark W.; Wolf, Erika J.

    2012-01-01

    This study examined the structure of PTSD comorbidity and its relationship to personality in a sample of 214 veterans using data from diagnostic interviews and the Multidimensional Personality Questionnaire-Brief Form (MPQ-BF; Patrick, Curtin, & Tellegen, 2002). Confirmatory factor analyses supported a three factor model composed of Externalizing, Fear and Distress factors. Analyses that examined the location of borderline personality disorder revealed significant cross-loadings for this disorder on both Externalizing and Distress. Structural equation models showed trait negative emotionality to be significantly related to all three comorbidity factors whereas positive emotionality and constraint evidenced specific associations with Distress and Externalizing, respectively. These results shed new light on the location of borderline personality disorder within the internalizing/externalizing model and clarify the relative influence of broad dimensions of personality on patterns of comorbidity. PMID:22480716

  6. Stationarity is undead: Uncertainty dominates the distribution of extremes

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Kilsby, Chris G.

    2015-03-01

    The increasing effort to develop and apply nonstationary models in hydrologic frequency analyses under changing environmental conditions can be frustrated when the additional uncertainty related to the model complexity is accounted for along with the sampling uncertainty. In order to show the practical implications and possible problems of using nonstationary models and provide critical guidelines, in this study we review the main tools developed in this field (such as nonstationary distribution functions, return periods, and risk of failure) highlighting advantages and disadvantages. The discussion is supported by three case studies that revise three illustrative examples reported in the scientific and technical literature referring to the Little Sugar Creek (at Charlotte, North Carolina), Red River of the North (North Dakota/Minnesota), and the Assunpink Creek (at Trenton, New Jersey). The uncertainty of the results is assessed by complementing point estimates with confidence intervals (CIs) and emphasizing critical aspects such as the subjectivity affecting the choice of the models' structure. 
Our results show that (1) nonstationary frequency analyses should not only be based on at-site time series but require additional information and detailed exploratory data analyses (EDA); (2) as nonstationary models imply that the time-varying model structure holds true for the entire future design life period, an appropriate modeling strategy requires that EDA identifies a well-defined deterministic mechanism leading the examined process; (3) when the model structure cannot be inferred in a deductive manner and nonstationary models are fitted by inductive inference, model structure introduces an additional source of uncertainty so that the resulting nonstationary models can provide no practical enhancement of the credibility and accuracy of the predicted extreme quantiles, whereas possible model misspecification can easily lead to physically inconsistent results; (4) when the model structure is uncertain, stationary models and a suitable assessment of the uncertainty accounting for possible temporal persistence should be retained as more theoretically coherent and reliable options for practical applications in real-world design and management problems; (5) a clear understanding of the actual probabilistic meaning of stationary and nonstationary return periods and risk of failure is required for a correct risk assessment and communication.
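The "risk of failure" discussed above is the probability of at least one exceedance over the design life; under nonstationarity the annual exceedance probability varies with time, and the risk is the complement of the product of the annual survival probabilities. A minimal sketch of that calculation (the probabilities in the usage note are illustrative):

```python
def risk_of_failure(exceedance_probs):
    """Probability of at least one exceedance over a design life,
    given one exceedance probability per year (they may vary in time)."""
    survival = 1.0
    for p in exceedance_probs:
        survival *= (1.0 - p)
    return 1.0 - survival
```

Under stationarity with a constant annual probability of 1/100 over a 50-year design life, the risk is about 39.5%; a nonstationary model simply feeds in a different probability for each year, which is where the model-structure uncertainty criticized above enters.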

  7. How to Determine the Centre of Mass of Bodies from Image Modelling

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Rodrigues, Marcelo

    2016-01-01

    Image modelling is a recent technique in physics education that includes digital tools for image treatment and analysis, such as digital stroboscopic photography (DSP) and video analysis software. It is commonly used to analyse the motion of objects. In this work we show how to determine the position of the centre of mass (CM) of objects with…

  8. Modeling Analyses of the Effects of Changes in Nitrogen Oxides Emission from the Electric Power Sector on Ozone Levels in the Eastern United States

    EPA Science Inventory

    This modeling study tests a hypothetical scenario to see what air quality might have looked like if no emission controls had been placed on electric generating units, as required by the NOx State Implementation Plan (SIP) Call required in 2004. Results showed that ozone levels w...

  9. Dynamic Relationship between Gross Domestic Product and Domestic Investment in Rwanda

    ERIC Educational Resources Information Center

    Ocaya, Bruno; Ruranga, Charles; Kaberuka, William

    2012-01-01

This study uses a VAR model to analyse the dynamic relationship between gross domestic product (GDP) and domestic investment (DI) in Rwanda for the period 1970 to 2011. Several lag selection criteria chose a maximum lag of one, and a bivariate VAR(1) model specification in levels was adopted. Unit root tests show that both GDP and DI series are…
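A bivariate VAR(1) in levels, as adopted above, regresses each series on a constant and the first lag of both series, equation by equation, via ordinary least squares. A self-contained stdlib sketch (the data in the test are synthetic, not the Rwandan GDP/DI series; `fit_var1` and `solve` are hypothetical helper names):

```python
def solve(a, b):
    """Solve a small linear system by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_var1(y):
    """y: list of (series1, series2) observations. Returns, per equation,
    coefficients [const, coef_on_lag1, coef_on_lag2] from OLS normal equations."""
    X = [[1.0, g, d] for g, d in y[:-1]]          # constant plus both lags
    coefs = []
    for j in range(2):
        t = [row[j] for row in y[1:]]
        xtx = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(3)]
               for p in range(3)]
        xty = [sum(X[i][p] * t[i] for i in range(len(X))) for p in range(3)]
        coefs.append(solve(xtx, xty))
    return coefs
```

Granger-causality and impulse-response analyses, the usual next steps in such a study, are then read off the fitted coefficient matrices.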

  10. God image and happiness in chronic pain patients: the mediating role of disease interpretation.

    PubMed

    Dezutter, Jessie; Luyckx, Koen; Schaap-Jonker, Hanneke; Büssing, Arndt; Corveleyn, Jozef; Hutsebaut, Dirk

    2010-05-01

    The present study explored the role of the emotional experience of God (i.e., positive and negative God images) in the happiness of chronic pain (CP) patients. Framed in the transactional model of stress, we tested a model in which God images would influence happiness partially through its influence on disease interpretation as a mediating mechanism. We expected God images to have both a direct and an indirect (through the interpretation of disease) effect on happiness. A cross-sectional questionnaire design was adopted in order to measure demographics, pain condition, God images, disease interpretation, and happiness. One hundred thirty-six CP patients, all members of a national patients' association, completed the questionnaires. Correlational analyses showed meaningful associations among God images, disease interpretation, and happiness. Path analyses from a structural equation modeling approach indicated that positive God images seemed to influence happiness, both directly and indirectly through the pathway of positive interpretation of the disease. Ancillary analyses showed that the negative influence of angry God images on happiness disappeared after controlling for pain severity. The results indicated that one's emotional experience of God has an influence on happiness in CP patients, both directly and indirectly through the pathway of positive disease interpretation. These findings can be framed within the transactional theory of stress and can stimulate further pain research investigating the possible effects of religion in the adaptation to CP.
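The indirect pathway reported above is conventionally quantified as a product of path coefficients: the slope from predictor to mediator times the slope from mediator to outcome controlling for the predictor. A stdlib sketch of that standard product-of-coefficients approach (toy numbers in the test, not the patient sample):

```python
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Sample covariance; cov(a, a) is the sample variance."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def indirect_effect(x, m, y):
    """a*b mediation estimate: a is the X->M slope, b is the M->Y slope
    controlling for X (closed-form two-predictor regression)."""
    a = cov(x, m) / cov(x, x)
    det = cov(x, x) * cov(m, m) - cov(x, m) ** 2
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / det
    return a * b
```

In the study's terms, x would be a God image score, m the disease-interpretation score and y happiness; significance would additionally require bootstrapped confidence intervals, which this sketch omits.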

  11. Development of soy lecithin based novel self-assembled emulsion hydrogels.

    PubMed

    Singh, Vinay K; Pandey, Preeti M; Agarwal, Tarun; Kumar, Dilip; Banerjee, Indranil; Anis, Arfat; Pal, Kunal

    2015-03-01

The current study reports the development and characterization of soy lecithin based novel self-assembled emulsion hydrogels. Sesame oil was used as the representative oil phase. Emulsion gels formed when the concentration of soy lecithin was >40% w/w. Metronidazole was used as the model drug for the drug release and antimicrobial tests. Microscopic study showed an apolar dispersed phase in an aqueous continuum phase, suggesting the formation of emulsion hydrogels. The FTIR study indicated the formation of intermolecular hydrogen bonding, whereas the XRD study indicated a predominantly amorphous nature of the emulsion gels. Composition-dependent mechanical and drug release properties of the emulsion gels were observed. In-depth analyses of the mechanical studies were done using the Ostwald-de Waele power law and the Kohlrausch and Weichert models, whereas the drug release profiles were modeled using the Korsmeyer-Peppas and Peppas-Sahlin models. The mechanical analyses indicated the viscoelastic nature of the emulsion gels. The release of the drug from the emulsion gels was diffusion mediated. The drug-loaded emulsion gels showed good antimicrobial activity. The biocompatibility test using HaCaT cells (human keratinocytes) suggested biocompatibility of the emulsion gels. Copyright © 2015 Elsevier Ltd. All rights reserved.
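The Korsmeyer-Peppas model named above, Mt/M∞ = k·t^n, is linear in log-log space, so both parameters can be fitted by ordinary least squares on the logged release fractions. A minimal stdlib sketch (the release data in the test are synthetic, not the metronidazole profiles):

```python
import math

def fit_korsmeyer_peppas(t, frac_released):
    """Fit Mt/Minf = k * t^n by least squares in log-log space.
    t: times > 0; frac_released: cumulative fractions in (0, 1)."""
    xs = [math.log(ti) for ti in t]
    ys = [math.log(f) for f in frac_released]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - n * mx)
    return k, n
```

The fitted exponent n is then read diagnostically: values near 0.5 (for a cylinder, 0.45) indicate Fickian diffusion-mediated release, consistent with the abstract's conclusion.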

  12. [Approximation to the dynamics of meningococcal meningitis through dynamic systems and time series].

    PubMed

    Canals, M

    1996-02-01

Meningococcal meningitis is subject to epidemiological surveillance due to its severity and the occasional presentation of epidemic outbreaks. This work analyses previous disease models, generates new ones and analyses monthly case counts using ARIMA time series models. The results show that the disease dynamics for closed populations is epidemic, and that epidemic size is related to the proportion of carriers and the transmissibility of the agent. In open populations, disease dynamics depend on the admission rate of susceptible individuals and the relative admission of infected individuals. Our model considers logistic population growth and carrier admission proportional to population size, generating endemic dynamics. Allowing for a non-instantaneous system response adds realism, establishing that the endemic situation may exhibit dynamics highly sensitive to initial conditions, depending on the transmissibility and the proportion of susceptible individuals in the population. The time series model showed adequate predictive capacity over horizons no longer than 10 months. The lack of long-term predictability was attributed to local changes in the proportion of carriers or in transmissibility that lead to chaotic dynamics over a seasonal pattern. Predictions for 1995 and 1996 were obtained.
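The endemic mechanism described above (logistic host growth plus a susceptible-to-carrier flow) can be illustrated with a toy two-compartment model integrated by Euler steps. Every parameter value below is invented for illustration and is not an estimate from the paper:

```python
def simulate_carriers(steps=1200, dt=0.1, r=0.02, K=1e6,
                      beta=3e-7, gamma=0.05):
    """Toy endemic-carrier model: population n grows logistically toward K,
    carriers c gain via mass-action transmission and lose at clearance rate
    gamma. Returns the carrier-proportion trajectory. Illustrative only."""
    n, c = 5e5, 1e3
    traj = []
    for _ in range(steps):
        s = n - c                          # non-carriers
        dn = r * n * (1 - n / K)           # logistic host growth
        dc = beta * s * c - gamma * c      # transmission minus clearance
        n += dn * dt
        c = max(c + dc * dt, 0.0)
        traj.append(c / n)
    return traj
```

With these toy values the carrier proportion rises from a low seed toward an endemic level, matching the qualitative picture in the abstract; sensitivity to initial conditions and seasonality would need the paper's fuller formulation.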

  13. Infusing considerations of trophic dependencies into species distribution modelling.

    PubMed

    Trainor, Anne M; Schmitz, Oswald J

    2014-12-01

    Community ecology involves studying the interdependence of species with each other and their environment to predict their geographical distribution and abundance. Modern species distribution analyses characterise species-environment dependency well, but offer only crude approximations of species interdependency. Typically, the dependency between focal species and other species is characterised using other species' point occurrences as spatial covariates to constrain the focal species' predicted range. This implicitly assumes that the strength of interdependency is homogeneous across space, which is not generally supported by analyses of species interactions. This discrepancy has an important bearing on the accuracy of inferences about habitat suitability for species. We introduce a framework that integrates principles from consumer-resource analyses, resource selection theory and species distribution modelling to enhance quantitative prediction of species geographical distributions. We show how to apply the framework using a case study of lynx and snowshoe hare interactions with each other and their environment. The analysis shows how the framework offers a spatially refined understanding of species distribution that is sensitive to nuances in biophysical attributes of the environment that determine the location and strength of species interactions. © 2014 John Wiley & Sons Ltd/CNRS.

  14. Predicting 3D structure and stability of RNA pseudoknots in monovalent and divalent ion solutions.

    PubMed

    Shi, Ya-Zhou; Jin, Lei; Feng, Chen-Jie; Tan, Ya-Lan; Tan, Zhi-Jie

    2018-06-01

RNA pseudoknots are a minimal kind of RNA tertiary structural motif, and their three-dimensional (3D) structures and stability play essential roles in a variety of biological functions. Therefore, predicting the 3D structures and stability of RNA pseudoknots is essential for understanding their functions. In this work, we employed our previously developed coarse-grained model with implicit salt to make extensive predictions and comprehensive analyses of the 3D structures and stability of RNA pseudoknots in monovalent/divalent ion solutions. Comparisons with available experimental data show that our model can successfully predict the 3D structures of RNA pseudoknots from their sequences, and can also make reliable predictions for the stability of RNA pseudoknots with different lengths and sequences over a wide range of monovalent/divalent ion concentrations. Furthermore, we made comprehensive analyses of the unfolding pathway for various RNA pseudoknots in ion solutions. Our analyses for extensive pseudoknots over the wide range of monovalent/divalent ion concentrations verify that the unfolding pathway of RNA pseudoknots is mainly dependent on the relative stability of unfolded intermediate states, and show that the unfolding pathway of RNA pseudoknots can be significantly modulated by their sequences and solution ion conditions.

  15. RAS testing and cetuximab treatment for metastatic colorectal cancer: a cost-effectiveness analysis in a setting with limited health resources.

    PubMed

    Wu, Bin; Yao, Yuan; Zhang, Ke; Ma, Xuezhen

    2017-09-19

To test the cost-effectiveness of cetuximab plus irinotecan, fluorouracil, and leucovorin (FOLFIRI) as first-line treatment in patients with metastatic colorectal cancer (mCRC) from a Chinese medical insurance perspective. A Markov model incorporating clinical, utility and cost data was developed to evaluate the economic outcome of cetuximab in mCRC. A lifetime horizon was used, and sensitivity analyses were carried out to test the robustness of the model results. The impact of a patient assistance program (PAP) was also evaluated in scenario analyses. Baseline analysis showed that the addition of cetuximab increased quality-adjusted life-years (QALYs) by 0.63 at an additional cost of $17,086 relative to FOLFIRI chemotherapy, resulting in an incremental cost-effectiveness ratio (ICER) of $27,145/QALY. When PAP was available, the ICER decreased to $14,049/QALY, indicating that cetuximab is cost-effective at China's willingness-to-pay threshold of $22,200/QALY. One-way sensitivity analyses showed that the median overall survival time for cetuximab was the most influential parameter. RAS testing with cetuximab treatment is likely to be cost-effective for patients with mCRC when PAP is available in China.
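The ICER above is simply the incremental cost divided by the incremental QALYs; plugging in the rounded figures reported in the abstract ($17,086 and 0.63 QALYs) reproduces the result up to rounding of the published inputs:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio, $ per QALY gained."""
    return delta_cost / delta_qaly

# Rounded abstract inputs give ~$27,121/QALY; the abstract's $27,145/QALY
# comes from the unrounded model outputs.
base_case = icer(17086, 0.63)
```

Comparing the base case against the $22,200/QALY willingness-to-pay threshold shows it is not cost-effective without assistance, consistent with the abstract's conclusion that PAP is what brings the ICER below the threshold.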

  16. Patterns and Variability in Global Ocean Chlorophyll: Satellite Observations and Modeling

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2004-01-01

    Recent analyses of SeaWiFS data have shown that global ocean chlorophyll has increased more than 4% since 1998. The North Pacific ocean basin has increased nearly 19%. These trend analyses follow earlier results showing decadal declines in global ocean chlorophyll and primary production. To understand the causes of these changes and trends we have applied the newly developed NASA Ocean Biogeochemical Assimilation Model (OBAM), which is driven in mechanistic fashion by surface winds, sea surface temperature, atmospheric iron deposition, sea ice, and surface irradiance. The model utilizes chlorophyll from SeaWiFS in a daily assimilation. The model has in place many of the climatic variables that can be expected to produce the changes observed in SeaWiFS data. This enables us to diagnose the model performance, the assimilation performance, and possible causes for the increase in chlorophyll. A full discussion of the changes and trends, possible causes, modeling approaches, and data assimilation will be the focus of the seminar.

  17. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models

    PubMed Central

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-01-01

    Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that, surprisingly, the biomass reaction is blocked (unable to sustain a non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352
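    The key move, replacing floating-point with exact rational arithmetic so that structural questions have a single reproducible answer, can be illustrated in a few lines. The toy rank computation below (a sketch, not MONGOOSE's actual code) uses Python's `fractions.Fraction`, so linear dependence in a small stoichiometric matrix is detected exactly, with no tolerance parameter:

```python
from fractions import Fraction

def exact_rank(rows):
    """Rank of a matrix via Gaussian elimination in exact rational arithmetic:
    no floating-point tolerance is needed to decide whether a pivot is zero."""
    m = [[Fraction(x) for x in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if m[r][col] != 0), None)
        if pivot is None:
            continue  # column is linearly dependent on earlier ones
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(n_rows):
            if r != rank and m[r][col] != 0:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# toy 3-reaction stoichiometric matrix: row 3 = row 1 + row 2, so rank is 2
S = [[1, -1, 0],
     [0, 1, -1],
     [1, 0, -1]]
print(exact_rank(S))  # 2
```

    With floats, the same decision ("is this pivot zero?") depends on a tolerance, which is how different solvers can disagree about which reactions are blocked.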

  18. Crosswind stability of FSAE race car considering the location of the pressure center

    NASA Astrophysics Data System (ADS)

    Zhao, Lijun; He, Huimin; Wang, Jianfeng; Li, Yaou; Yang, Na; Liu, Yiqun

    2017-09-01

    An 8-DOF vehicle dynamic model of an FSAE race car was established, including lateral motion, pitch, roll, yaw, and the rotation of the four tires. Models of the aerodynamic lateral force and of the pressure center location were set up based on vehicle speed and crosswind parameters. A simulation model was built in Simulink to analyse crosswind stability under straight-line driving. Results showed that crosswind significantly affects the yaw velocity and sideslip angle.

  19. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model, with only the mild restriction that there is no hierarchical model at the item side. This result is valuable because it makes all the well-developed modelling tools and extensions of generalized linear factor analysis available for the hierarchical framework. We show that the restriction we impose on the hierarchical model does not affect parameter recovery under realistic circumstances. In addition, we present two illustrative real-data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  20. Remote sensing requirements as suggested by watershed model sensitivity analyses

    NASA Technical Reports Server (NTRS)

    Salomonson, V. V.; Rango, A.; Ormsby, J. P.; Ambaruch, R.

    1975-01-01

    A continuous-simulation watershed model has been used to perform sensitivity analyses that provide guidance in defining remote sensing requirements for monitoring watershed features and processes. The results show that, of 26 input parameters having meaningful effects on simulated runoff, six appear to be obtainable with existing remote sensing techniques. Of these six parameters, three require measuring the areal extent of surface features (impervious areas, water bodies, and forested area); two require the discrimination of land use, which can be related to the overland flow roughness coefficient or to vegetation density for estimating the magnitude of precipitation interception; and one requires a distance measurement to obtain the typical length of overland flow. Observational goals are also suggested for monitoring such fundamental watershed processes as precipitation, soil moisture, and evapotranspiration. A case study on the Patuxent River in Maryland shows that runoff simulation improves when recent satellite land use observations are used as model inputs instead of less timely topographic map information.

  1. Psychometric Properties of the Bermond-Vorst Alexithymia Questionnaire (BVAQ) in the General Population and a Clinical Population.

    PubMed

    de Vroege, Lars; Emons, Wilco H M; Sijtsma, Klaas; van der Feltz-Cornelis, Christina M

    2018-01-01

    The Bermond-Vorst Alexithymia Questionnaire (BVAQ) has been validated in student samples and small clinical samples, but not in the general population; thus, representative general-population norms are lacking. We examined the factor structure of the BVAQ in Longitudinal Internet Studies for the Social Sciences panel data from the Dutch general population (N = 974). Factor analyses revealed a first-order five-factor model and a second-order two-factor model. However, in the second-order model, the factor interpreted as analyzing ability loaded on both the affective factor and the cognitive factor. Further analyses showed that the first-order test scores are more reliable than the second-order test scores. External and construct validity were addressed by comparing BVAQ scores with those of a clinical sample of patients suffering from somatic symptom and related disorder (SSRD) (N = 235). BVAQ scores differed significantly between the general population and patients suffering from SSRD, suggesting acceptable construct validity. Age was positively associated with alexithymia, and males showed higher levels of alexithymia. The BVAQ is a reliable alternative instrument for measuring alexithymia.

  2. Research Workforce Diversity: The Case of Balancing National versus International Postdocs in US Biomedical Research.

    PubMed

    Ghaffarzadegan, Navid; Hawley, Joshua; Desai, Anand

    2014-03-01

    The US government has been increasingly supporting postdoctoral training in the biomedical sciences to develop the domestic research workforce. However, current trends suggest that mostly international researchers benefit from the funding, many of whom might leave the USA after training. In this paper, we describe a model used to analyse the flow of national versus international researchers into and out of postdoctoral training. We calibrate our model for the case of the USA and successfully replicate the data. We use the model to conduct simulation-based analyses of the effects of different policies on the diversity of postdoctoral researchers. Our model shows that capping the duration of postdoctoral careers, a policy proposed previously, favours international postdoctoral researchers. The analysis suggests that the leverage point for growing the domestic research workforce lies in pre-graduate education, and that many policies implemented at the postgraduate level have minimal or unintended effects on diversity.

  3. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    PubMed Central

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

    Background and Objective: To clarify mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet researchers' expectations. Most detected genetic associations have not reached genome-wide statistical significance and have suffered from a lack of replication in studies of independent populations. The reasons for slow progress in this research area include the low efficiency of the statistical methods used in data analyses, the genetic heterogeneity of aging- and longevity-related traits, the possibility of pleiotropic (e.g., age-dependent) effects of genetic variants on such traits, underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study, and the weakness of a conceptual biological framework that does not fully account for the factors mentioned above. A further limitation of the studies conducted so far is that they did not fully realize the potential of longitudinal data, which allow one to evaluate how genetic influences on life span are mediated by physiological variables and other biomarkers over the life course. The objective of this paper is to address these issues. Data and Methods: We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort, corresponding to different quality control (QC) procedures, and used one subset of selected genetic variants for further analyses. We used a simulation study to show that this approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants. 
We used a stochastic process model of human mortality and aging to investigate genetic influences on hidden biomarkers of aging and on the dynamic interaction between aging and longevity. We investigated the properties of the genes related to the selected variants and their roles in signaling and metabolic pathways. Results: We showed that the use of different QC procedures results in different sets of genetic variants associated with life span. We selected 24 genetic variants negatively associated with life span. We showed that joint analyses of genetic data at the time of bio-specimen collection together with follow-up data substantially improved the significance of the associations of the selected 24 SNPs with life span. We also showed that aging-related changes in physiological variables and in hidden biomarkers of aging differ between carriers and non-carriers of the selected variants. Conclusions: The results of these analyses demonstrate the benefits of using biodemographic models and methods in genetic association studies of these traits. Our findings show that the absence of a large number of genetic variants with deleterious effects may make a substantial contribution to exceptional longevity. These effects are dynamically mediated by a number of physiological variables and hidden biomarkers of aging. The results of this research demonstrate the benefits of using integrative statistical models of mortality risks in genetic studies of human aging and longevity. PMID:27773987

  4. Copernicus atmospheric service for stratospheric ozone: validation and intercomparison of four near real-time analyses, 2009-2012

    NASA Astrophysics Data System (ADS)

    Lefever, K.; van der A, R.; Baier, F.; Christophe, Y.; Errera, Q.; Eskes, H.; Flemming, J.; Inness, A.; Jones, L.; Lambert, J.-C.; Langerock, B.; Schultz, M. G.; Stein, O.; Wagner, A.; Chabrillat, S.

    2014-05-01

    This paper evaluates the performance of the stratospheric ozone analyses delivered in near real time by the MACC (Monitoring Atmospheric Composition and Climate) project during the 3-year period between September 2009 and September 2012. Ozone analyses produced by four different chemistry transport models and data assimilation techniques are examined: the ECMWF Integrated Forecast System (IFS) coupled to MOZART-3 (IFS-MOZART), the BIRA-IASB Belgian Assimilation System for Chemical ObsErvations (BASCOE), the DLR/RIU Synoptic Analysis of Chemical Constituents by Advanced Data Assimilation (SACADA), and the KNMI Data Assimilation Model based on Transport Model version 3 (TM3DAM). The assimilated satellite ozone retrievals differed for each system: SACADA and TM3DAM assimilated only total ozone observations, BASCOE assimilated profiles for ozone and some related species, while IFS-MOZART assimilated both types of ozone observations. The stratospheric ozone analyses are compared to independent ozone observations from ground-based instruments, ozone sondes and the ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) satellite instrument. All analyses show total column values that are generally in good agreement with ground-based observations (biases <5%) and a realistic seasonal cycle. The only exceptions are BASCOE, which systematically underestimates total ozone in the Tropics by about 7-10% at Chengkung (Taiwan, 23.1° N/121.365° E) because it does not include any tropospheric processes, and SACADA, which overestimates total ozone in the absence of UV observations for the assimilation. 
Due to the large weight given to column observations in the assimilation procedure, IFS-MOZART is able to reproduce total column observations very well, but alternating positive and negative biases compared to ozonesonde and ACE-FTS satellite data are found in the vertical as well as an overestimation of 30 to 60% in the polar lower stratosphere during ozone depletion events. The assimilation of near real-time (NRT) Microwave Limb Sounder (MLS) profiles which only go down to 68 hPa is not able to correct for the deficiency of the underlying MOZART model, which may be related to the applied meteorological fields. Biases of BASCOE compared to ozonesonde or ACE-FTS ozone profiles do not exceed 10% over the entire vertical stratospheric range, thanks to the good performance of the model in ozone hole conditions and the assimilation of offline MLS profiles going down to 215 hPa. TM3DAM provides very realistic total ozone columns, but is not designed to provide information on the vertical distribution of ozone. Compared to ozonesondes and ACE-FTS satellite data, SACADA performs best in the Arctic, but shows large biases (>50%) for ozone in the lower stratosphere in the Tropics and in the Antarctic, especially during ozone hole conditions. This study shows that ozone analyses with realistic total ozone column densities do not necessarily yield good agreement with the observed ozone profiles. It also shows the large benefit obtained from the assimilation of a single limb-scanning instrument (Aura MLS) with a high density of observations. Hence even state-of-the-art models of stratospheric chemistry still require the assimilation of limb observations for a correct representation of the vertical distribution of ozone in the stratosphere.

  5. Population dynamics modeling of introduced smallmouth bass in the upper Colorado River basin

    USGS Publications Warehouse

    Breton, André R.; Winkelman, Dana L.; Bestgen, Kevin R.; Hawkins, John A.

    2014-01-01

    The purpose of these analyses was to identify an effective control strategy to further reduce smallmouth bass in the upper Colorado River basin from the current level. Our simulation results showed that "the surge", an early- to mid-summer increase in electrofishing effort targeting nest-guarding male smallmouth bass, should be made a core component of any future smallmouth bass management strategy in the upper basin. Immigration from off-channel reservoirs is supporting smallmouth bass populations in the Yampa River, and our modeling analyses suggest that smallmouth bass in Little Yampa Canyon might be eliminated within a few years under the present level of exploitation.

  6. Seveso 1976, Chernobyl 1986: a physicist's look at two ecological disasters

    NASA Astrophysics Data System (ADS)

    Ratti, S.

    2004-05-01

    Seveso suffered a chemical accident with a severe release of supertoxic material (TCDD) into the atmosphere; Chernobyl was a world-known nuclear accident. The pollution induced by the two accidents is analysed in terms of fractal models. The first case involved a limited micro-ecological system; the second spread over a macro-ecological system. The pollution is reproduced by means of simple Fractal Sum of Pulses models for the Seveso region and, for the Chernobyl accident, for northern Italy and several European countries. The two accidents are also analysed in terms of Universal Multifractals, showing that the parameters α and C1 are those describing, respectively, rainfall (Seveso) and cloud formation (Chernobyl).

  7. Percolation analyses of observed and simulated galaxy clustering

    NASA Astrophysics Data System (ADS)

    Bhavsar, S. P.; Barrow, J. D.

    1983-11-01

    A percolation cluster analysis is performed on equivalent regions of the CfA redshift survey of galaxies and on the 4000-body simulations of gravitational clustering made by Aarseth, Gott and Turner (1979). The observed and simulated percolation properties are compared and, unlike correlation and multiplicity function analyses, favour high-density (Omega = 1) models with n = -1 initial data. The present results show that the three-dimensional data are consistent, at the level of percolation analysis, with the degree of filamentary structure present in isothermal models of galaxy formation. It is also found that the percolation structure of the CfA data is a function of depth. Percolation structure does not appear to be a sensitive probe of intrinsic filamentary structure.
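    Percolation (friends-of-friends) clustering of this kind links any two galaxies closer than a chosen linking length and takes the transitive closure; as the linking length grows, clusters merge until one spans ("percolates") the sample. A minimal sketch with a hypothetical 2-D point set:

```python
import math

def fof_clusters(points, linking_length):
    """Friends-of-friends percolation: any two points closer than the linking
    length are joined, and clusters are the transitive closure (union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= linking_length:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# hypothetical positions: three points chained within the linking length, one isolated
pts = [(0.0, 0.0), (0.4, 0.0), (0.8, 0.0), (5.0, 5.0)]
print(sorted(len(c) for c in fof_clusters(pts, 0.5)))  # [1, 3]
```

    Note the first and third points join the same cluster even though they are 0.8 apart: membership is transitive through the middle point, which is what makes percolation sensitive to chain-like (filamentary) configurations.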

  8. Decadal variability of the Tropical Atlantic Ocean Surface Temperature in shipboard measurements and in a Global Ocean-Atmosphere model

    NASA Technical Reports Server (NTRS)

    Mehta, Vikram M.; Delworth, Thomas

    1995-01-01

    Sea surface temperature (SST) variability was investigated in a 200-yr integration of a global model of the coupled oceanic and atmospheric general circulations developed at the Geophysical Fluid Dynamics Laboratory (GFDL). The second 100 yr of SST in the coupled model's tropical Atlantic region were analyzed with a variety of techniques. Analyses of SST time series, averaged over approximately the same subregions as the Global Ocean Surface Temperature Atlas (GOSTA) time series, showed that the GFDL SST anomalies also undergo pronounced quasi-oscillatory decadal and multidecadal variability but at somewhat shorter timescales than the GOSTA SST anomalies. Further analyses of the horizontal structures of the decadal timescale variability in the GFDL coupled model showed the existence of two types of variability in general agreement with results of the GOSTA SST time series analyses. One type, characterized by timescales between 8 and 11 yr, has high spatial coherence within each hemisphere but not between the two hemispheres of the tropical Atlantic. A second type, characterized by timescales between 12 and 20 yr, has high spatial coherence between the two hemispheres. The second type of variability is considerably weaker than the first. As in the GOSTA time series, the multidecadal variability in the GFDL SST time series has approximately opposite phases between the tropical North and South Atlantic Oceans. Empirical orthogonal function analyses of the tropical Atlantic SST anomalies revealed a north-south bipolar pattern as the dominant pattern of decadal variability. It is suggested that the bipolar pattern can be interpreted as decadal variability of the interhemispheric gradient of SST anomalies. 
The decadal and multidecadal variability of the tropical Atlantic SST, both in the observed (GOSTA) and in the GFDL model time series, stands out significantly above the background 'red noise' and is coherent within each of the time series, suggesting that specific sets of processes may be responsible for the selection of the decadal and multidecadal timescales. Finally, it must be emphasized that the GFDL coupled ocean-atmosphere model generates the decadal and multidecadal variability without any externally applied forcing, solar or lunar, at those timescales.

  9. Validating the cross-cultural factor structure and invariance property of the Insomnia Severity Index: evidence based on ordinal EFA and CFA.

    PubMed

    Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M

    2015-05-01

    The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries. We sought to identify the most appropriate factor model for the ISI and to examine its measurement invariance across samples from different countries. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to obtain the most appropriate factor model for the ISI. We then conducted a series of confirmatory factor analyses (CFAs), a special case of the structural equation model (SEM) concerning the parameters of the measurement model, on the data collected in Canada and Hong Kong. The purposes of these CFAs were to cross-validate the result obtained from the EFA and to examine the cross-cultural measurement invariance of the ISI. The three-factor model outperforms the other models in terms of global fit indices in Taiwan's population. Its external validity is also supported by the confirmatory factor analyses. Furthermore, the measurement invariance analyses show that strong invariance holds between the samples from different cultures, providing evidence that ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable across populations. More importantly, its invariance across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. The evolution of misoscale circulations in a downburst-producing storm and comparison to numerical results

    NASA Technical Reports Server (NTRS)

    Kessinger, C. J.; Wilson, J. W.; Weisman, M.; Klemp, J.

    1984-01-01

    Data from three NCAR radars are used in both single- and dual-Doppler analyses to trace the evolution of a June 30, 1982 Colorado convective storm containing downburst-type winds and strong vortices 1-2 km in diameter. The analyses show that a series of small circulations formed along a persistent cyclonic shear boundary; at times as many as three misocyclones were present, with vertical vorticity values as large as 0.1/s on a 0.25-km grid. The strength of the circulations suggests the possibility of accompanying tornadoes or funnels, although none were observed. Dual-Doppler analyses show that strong, small-scale downdrafts developed in close proximity to the misocyclones. A midlevel mesocyclone formed in the same general region of the storm where the misocyclones later developed. The observations are compared with numerical simulations from a three-dimensional cloud model initialized with sounding data from the same day.
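    Vertical vorticity on a dual-Doppler analysis grid is the centered-difference curl of the horizontal wind. A sketch (with illustrative grid values, not the study's data) using the 0.25-km grid interval mentioned above; for solid-body rotation at angular rate Ω the vorticity is 2Ω, so Ω = 0.05/s reproduces the 0.1/s magnitude reported:

```python
def vertical_vorticity(u, v, dx, dy):
    """zeta = dv/dx - du/dy by centered differences at interior grid points;
    u[j][i], v[j][i] are horizontal wind components (j: y index, i: x index)."""
    ny, nx = len(u), len(u[0])
    zeta = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dvdx = (v[j][i + 1] - v[j][i - 1]) / (2 * dx)
            dudy = (u[j + 1][i] - u[j - 1][i]) / (2 * dy)
            zeta[j][i] = dvdx - dudy
    return zeta

# solid-body rotation u = -omega*y, v = omega*x has zeta = 2*omega everywhere
dx = dy = 250.0   # 0.25-km grid interval, in metres
omega = 0.05      # 1/s
u = [[-omega * j * dy for i in range(3)] for j in range(3)]
v = [[omega * i * dx for i in range(3)] for j in range(3)]
print(vertical_vorticity(u, v, dx, dy)[1][1])  # 0.1 (1/s)
```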

  11. Gene set analysis using variance component tests.

    PubMed

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often act together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most existing gene set analysis methods do not fully account for the correlation among genes. Here we exploit this important feature of a gene set to improve statistical power in gene set analyses. We model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error rate is protected under different choices of working covariance matrix, and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which correlation among the genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms two commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, TEGS is a gene set analysis method built on the multivariate regression framework that directly models the interdependence of expression values in a gene set through a working covariance, and it outperforms both GSEA and the global test in simulations and in the diabetes microarray data.
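    The permutation step used to calibrate such tests is easy to sketch. The statistic below (sum of squared between-group mean differences across genes) is a simple stand-in, not the TEGS variance component statistic, and all data are made up:

```python
import random

def shift_stat(X, labels):
    """Toy statistic: sum over genes of squared between-group mean differences.
    (Illustrative stand-in; TEGS itself is a variance component score test.)"""
    g1 = [row for row, lab in zip(X, labels) if lab == 1]
    g0 = [row for row, lab in zip(X, labels) if lab == 0]
    stat = 0.0
    for j in range(len(X[0])):
        m1 = sum(r[j] for r in g1) / len(g1)
        m0 = sum(r[j] for r in g0) / len(g0)
        stat += (m1 - m0) ** 2
    return stat

def permutation_pvalue(X, labels, n_perm=999, seed=7):
    """Shuffle the exposure labels; the p-value is the fraction of shuffles
    whose statistic is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = shift_stat(X, labels)
    hits = sum(
        shift_stat(X, rng.sample(labels, len(labels))) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

# made-up data: 2 genes, 10 samples, expression shifted in the exposed group
X = [[0.0, 0.0]] * 5 + [[3.0, 3.0]] * 5
labels = [0] * 5 + [1] * 5
print(permutation_pvalue(X, labels) < 0.05)  # True
```

    The permutation null preserves the correlation structure among genes within each sample, which is exactly the feature the abstract argues a gene set test should respect.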

  12. Improved Fuzzy Logic System to Evaluate Milk Electrical Conductivity Signals from On-Line Sensors to Monitor Dairy Goat Mastitis.

    PubMed

    Zaninelli, Mauro; Tangorra, Francesco Maria; Costa, Annamaria; Rossi, Luciana; Dell'Orto, Vittorio; Savoini, Giovanni

    2016-07-13

    The aim of this study was to develop and test a new fuzzy logic model for monitoring the udder health status (HS) of goats. The model evaluated, as input variables, the milk electrical conductivity (EC) signal acquired on-line for each gland by a dedicated sensor, together with the bandwidth length and the frequency and amplitude of the first main peak of the Fourier frequency spectrum of the recorded EC signal. Two foremilk gland samples were collected from eight Saanen goats for six months at the morning milking (lactation stages (LS): 0-60 Days In Milk (DIM); 61-120 DIM; 121-180 DIM), for a total of 5592 samples. Bacteriological analyses and somatic cell counts (SCC) were used to define the HS of the glands. With negative bacteriological analyses and SCC < 1,000,000 cells/mL, glands were classified as healthy; when bacteriological analyses were positive or SCC > 1,000,000 cells/mL, glands were classified as not healthy (NH). For each EC signal, an estimated EC value was calculated and a relative deviation was obtained. Furthermore, the Fourier frequency spectrum was evaluated, and the bandwidth length and the frequency and amplitude of the first main peak were identified. Before these indexes were used as input variables of the fuzzy logic model, a linear mixed-effects model was developed to evaluate the acquired data, considering HS, LS and LS × HS as explanatory variables. Results showed that the performance of a fuzzy logic model in monitoring mammary gland HS can be improved by using EC indexes derived from the Fourier frequency spectra of gland milk EC signals recorded by on-line EC sensors.
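    Locating the first main spectral peak of an EC signal is standard Fourier analysis; a self-contained sketch on a synthetic signal (a naive DFT for illustration, not the study's processing pipeline):

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive discrete Fourier transform magnitudes (O(N^2), fine for a sketch)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N)]

# synthetic "EC signal": a pure 5 Hz oscillation sampled at 64 Hz for 1 s
fs, N = 64, 64
signal = [math.sin(2 * math.pi * 5 * n / fs) for n in range(N)]
mag = dft_magnitudes(signal)
peak_bin = max(range(1, N // 2), key=mag.__getitem__)  # skip the DC bin
print(peak_bin * fs / N)  # 5.0 -- frequency (Hz) of the first main peak
```

    The peak's bin index and magnitude correspond to the frequency and amplitude features fed to the fuzzy model; the span of bins with non-negligible magnitude plays the role of the bandwidth-length index.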

  13. Predicting microbiologically defined infection in febrile neutropenic episodes in children: global individual participant data multivariable meta-analysis

    PubMed Central

    Phillips, Robert S; Sung, Lillian; Amman, Roland A; Riley, Richard D; Castagnola, Elio; Haeusler, Gabrielle M; Klaassen, Robert; Tissing, Wim J E; Lehrnbecher, Thomas; Chisholm, Julia; Hakim, Hana; Ranasinghe, Neil; Paesmans, Marianne; Hann, Ian M; Stewart, Lesley A

    2016-01-01

    Background: Risk-stratified management of fever with neutropenia (FN) allows intensive management of high-risk cases and early discharge of low-risk cases. No single, internationally validated prediction model of the risk of adverse outcomes exists for children and young people. An individual patient data (IPD) meta-analysis was undertaken to devise one. Methods: The 'Predicting Infectious Complications in Children with Cancer' (PICNICC) collaboration was formed by parent representatives and international clinical and methodological experts. Univariable and multivariable analyses, using random-effects logistic regression, were undertaken to derive and internally validate a risk-prediction model for outcomes of episodes of FN based on clinical and laboratory data at presentation. Results: Data came from 22 different study groups from 15 countries, covering 5127 episodes of FN in 3504 patients; 1070 episodes in 616 patients from seven studies were available for multivariable analysis. Univariable analyses showed associations with microbiologically defined infection (MDI) for many items, including higher temperature, lower white cell counts and acute myeloid leukaemia, but not age. Osteosarcoma/Ewing's sarcoma and more severe mucositis were associated with a decreased risk of MDI. The predictive model included: malignancy type, temperature, being clinically 'severely unwell', haemoglobin, white cell count and absolute monocyte count. It showed moderate discrimination (AUROC 0.723, 95% confidence interval 0.711-0.759) and good calibration (calibration slope 0.95). The model was robust to bootstrap and cross-validation sensitivity analyses. Conclusions: This new prediction model for the risk of MDI appears accurate. Prospective studies assessing its implementation are required to assist clinicians and parents/patients in individualised decision making. PMID:26954719

  14. Does the growth response of woody plants to elevated CO2 increase with temperature? A model-oriented meta-analysis.

    PubMed

    Baig, Sofia; Medlyn, Belinda E; Mercado, Lina M; Zaehle, Sönke

    2015-12-01

    The temperature dependence of the reaction kinetics of the Rubisco enzyme implies that, at the level of a chloroplast, the response of photosynthesis to rising atmospheric CO2 concentration (Ca) will increase with increasing air temperature. Vegetation models incorporating this interaction predict that the response of net primary productivity (NPP) to elevated CO2 (eCa) will increase with rising temperature and will be substantially larger in warm tropical forests than in cold boreal forests. We tested these model predictions against evidence from eCa experiments by carrying out two meta-analyses. First, we tested for an interaction effect on growth responses in factorial eCa × temperature experiments. This analysis showed a positive but nonsignificant interaction effect (95% CI for the above-ground biomass response: -0.8 to 18.0%) between eCa and temperature. Second, we tested field-based eCa experiments on woody plants across the globe for a relationship between the eCa effect on plant biomass and mean annual temperature (MAT). This second analysis also showed a positive but nonsignificant correlation between the eCa response and MAT. The magnitude of the CO2 × temperature interaction found in both meta-analyses was consistent with model predictions, even though both analyses gave nonsignificant results. Thus, we conclude that the available experimental database cannot distinguish between the competing hypotheses of no interaction and an interaction based on Rubisco kinetics. Experiments in a wider range of temperature zones are required; until such data are available, model predictions should incorporate uncertainty about this interaction. © 2015 John Wiley & Sons Ltd.

  15. [Study on the ingredients of reserpine by TLC-FT-SERS].

    PubMed

    Wang, Y; Zi, F; Wang, Y; Zhao, Y; Zhang, X; Weng, S

    1999-12-01

    A new method for analysing the ingredients of reserpine by thin-layer chromatography (TLC) and surface-enhanced Raman spectroscopy (SERS) is reported in this paper. The results show that the characteristic spectral bands of reserpine situated on the thin layer were obtained with a sample amount of about 2 μg. A difference between the SERS and solid-state spectra was found, and an adsorption model of reserpine on the silver sol was proposed. This method can be used to analyse chemical ingredients with high sensitivity.

  16. Sediment delivery modeling in practice: Comparing the effects of watershed characteristics and data resolution across hydroclimatic regions.

    PubMed

    Hamel, Perrine; Falinski, Kim; Sharp, Richard; Auerbach, Daniel A; Sánchez-Canales, María; Dennedy-Frank, P James

    2017-02-15

    Geospatial models are commonly used to quantify sediment contributions at the watershed scale. However, the sensitivity of these models to variation in hydrological and geomorphological features, in particular to land use and topography data, remains uncertain. Here, we assessed the performance of one such model, the InVEST sediment delivery model, for six sites comprising a total of 28 watersheds varying in area (6-13,500 km²), climate (tropical, subtropical, mediterranean), topography, and land use/land cover. For each site, we compared uncalibrated and calibrated model predictions with observations and alternative models. We then performed correlation analyses between model outputs and watershed characteristics, followed by sensitivity analyses on the digital elevation model (DEM) resolution. Model performance varied across sites (overall r² = 0.47), but estimates of the magnitude of specific sediment export were as or more accurate than global models. We found significant correlations between metrics of sediment delivery and watershed characteristics, including erosivity, suggesting that empirical relationships may ultimately be developed for ungauged watersheds. Model sensitivity to DEM resolution varied across and within sites, but did not correlate with other observed watershed variables. These results were corroborated by sensitivity analyses performed on synthetic watersheds ranging in mean slope and DEM resolution. Our study provides modelers using InVEST or similar geospatial sediment models with practical insights into model behavior and structural uncertainty: first, comparison of model predictions across regions is possible when environmental conditions differ significantly; second, local knowledge on the sediment budget is needed for calibration; and third, model outputs often show significant sensitivity to DEM resolution. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. A fractal growth model: Exploring the connection pattern of hubs in complex networks

    NASA Astrophysics Data System (ADS)

    Li, Dongyan; Wang, Xingyuan; Huang, Penghe

    2017-04-01

    Fractal structure is ubiquitous in many real-world networks. Previous research showed that strong disassortativity between hub-nodes on all length scales is the key principle that gives rise to the fractal architecture of networks. Although fractal properties emerge in some models, there has been little research on fractal growth models or quantitative analysis of the strength of this disassortativity in fractal models. In this paper, we propose a novel inverse renormalization method, named Box-based Preferential Attachment (BPA), to build fractal growth models in which preferential attachment is performed at the box level. The proposed models provide a new framework that demonstrates a small-world-fractal transition. We also demonstrate, for the first time, the statistical characteristics of the connection patterns of hubs in fractal networks. The experimental results show that, given a proper growing scale and number of added edges, the proposed models can clearly exhibit pure small-world behaviour, pure fractal behaviour, or both. They also show that the hub connection ratio follows a normal distribution in many real-world networks. Finally, comparisons of connection patterns between the proposed models and biological and technical networks were performed. The results give a useful reference for exploring growth principles and for modelling the connection patterns of real-world networks.

  18. Model tests and numerical analyses on horizontal impedance functions of inclined single piles embedded in cohesionless soil

    NASA Astrophysics Data System (ADS)

    Goit, Chandra Shekhar; Saitoh, Masato

    2013-03-01

    Horizontal impedance functions of inclined single piles are measured experimentally for model soil-pile systems, including both the effects of local soil nonlinearity and resonant characteristics. Two practical pile inclinations of 5° and 10°, in addition to a vertical pile, embedded in cohesionless soil and subjected to lateral harmonic pile head loadings over a wide range of frequencies are considered. Results obtained with low-to-high amplitudes of lateral loading on model soil-pile systems encased in a laminar shear box show that the local nonlinearities have a profound impact on the horizontal impedance functions of piles. Horizontal impedance functions of inclined piles are found to be smaller than those of the vertical pile, and the values decrease as the angle of pile inclination increases. Distinct values of horizontal impedance functions are obtained for the `positive' and `negative' cycles of harmonic loading, leading to asymmetric force-displacement relationships for the inclined piles. Validation of these experimental results is carried out through three-dimensional nonlinear finite element analyses, and the results from the numerical models are in good agreement with the experimental data. Sensitivity analyses conducted on the numerical models suggest that consideration of local nonlinearity in the vicinity of the soil-pile interface influences the response of the soil-pile systems.

  19. The Divergent Meanings of Life Satisfaction: Item Response Modeling of the Satisfaction with Life Scale in Greenland and Norway

    ERIC Educational Resources Information Center

    Vitterso, Joar; Biswas-Diener, Robert; Diener, Ed

    2005-01-01

    Cultural differences in response to the Satisfaction With Life Scale (SWLS) items are investigated. Data were fit to a mixed Rasch model in order to identify latent classes of participants in a combined sample of Norwegians (N = 461) and Greenlanders (N = 180). Initial analyses showed no mean difference in life satisfaction between the two…

  20. Characterization of Aral Sea Particulate Matter in Kyrgyzstan

    EPA Science Inventory

    1. Elemental analyses of resuspendable soils from the Aral Sea region and Kyrgyz soils show that the composition of the soils is remarkably uniform, thereby supporting chemical source apportionment models that treat this region as a homogeneous source with respect to elemental co...

  1. Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.

    PubMed

    Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V

    2017-06-01

    Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.

  2. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
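The "usual between-groups d" that the single-case statistic is designed to match is the pooled-SD standardized mean difference. The following is a minimal sketch of that between-groups computation only (not the authors' single-case estimator), with made-up data:

```python
import math

def between_groups_d(treat, control):
    """Usual between-groups standardized mean difference:
    d = (mean_T - mean_C) / pooled SD."""
    n1, n2 = len(treat), len(control)
    m1 = sum(treat) / n1
    m2 = sum(control) / n2
    # unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in treat) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    # pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

# hypothetical outcome scores for a treated and a control group
d = between_groups_d([5, 6, 7, 8], [3, 4, 5, 4])
```

Because this d is expressed in pooled-SD units, effects computed this way can be integrated across studies that use different outcome measures, which is the property the single-case d above is built to preserve.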

  3. A study of a steering system algorithm for pleasure boats based on stability analysis of a human-machine system model

    NASA Astrophysics Data System (ADS)

    Ikeda, Fujio; Toyama, Shigehiro; Ishiduki, Souta; Seta, Hiroaki

    2016-09-01

    Maritime accidents involving small ships continue to increase in number. One major factor is the poor manoeuvrability of the Manual Hydraulic Steering Mechanism (MHSM) in common use. Manoeuvrability can be improved by using the Electronic Control Steering Mechanism (ECSM). This paper conducts stability analyses of a pleasure boat controlled by human models, from the viewpoint of path following along a target course, in order to establish design guidelines for the ECSM. First, to analyse the stability region, we derive a linear approximated model in a planar global coordinate system. Then, several human models are assumed in order to develop closed-loop human-machine controlled systems. These human models include basic proportional, derivative, integral and time-delay actions. Stability analysis simulations for these human-machine systems are carried out. The results show that the stability region tends to spread as the ship's velocity increases in the case of the basic proportional human model. The derivative and time-delay actions of the human models are effective in spreading the stability region in their respective ranges of frontal gazing points.

  4. Neutronics Conversion Analyses of the Laue-Langevin Institute (ILL) High Flux Reactor (RHF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergeron, A.; Dionne, B.; Calzavara, Y.

    2014-09-30

    The following report describes the neutronics results obtained with the MCNP model of the RHF U7Mo LEU reference design that was established in 2010 during the feasibility analysis. This work constitutes a complete and detailed neutronics analysis of that LEU design using models that have been significantly improved since 2010 and the release of the feasibility report. When possible, the credibility of the neutronics model is tested by comparing the HEU model results with experimental data or with the results of other codes. The results obtained with the LEU model are systematically compared to the HEU model. The changes applied to the neutronics model lead to better comparisons with experimental data or improved calculation efficiency, but do not challenge the conclusion of the feasibility analysis. If the U7Mo fuel is commercially available and not cost-prohibitive, if a back-end solution is established, and if it is possible to manufacture the proposed element, then neutronics analyses show that the performance of the reactor would not be challenged by the conversion to LEU fuel.

  5. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. 
The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
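The transformation at the heart of the reformulated model, from separate per-channel log-intensities to M-values and A-values, can be sketched as follows; the `ma_transform` helper and the intensity values are illustrative, not from the article:

```python
import numpy as np

def ma_transform(red, green):
    """Transform two-channel spot intensities to M-values (log-ratios)
    and A-values (average log-expression), the quantities used in
    two-colour microarray analysis."""
    log_r = np.log2(red)
    log_g = np.log2(green)
    m = log_r - log_g           # M: within-spot contrast (log-ratio)
    a = 0.5 * (log_r + log_g)   # A: within-spot average log-expression
    return m, a

# hypothetical raw intensities for three spots
red = np.array([1024.0, 512.0, 2048.0])
green = np.array([512.0, 512.0, 1024.0])
m, a = ma_transform(red, green)
# m -> [1., 0., 1.];  a -> [9.5, 9., 10.5]
```

A pure log-ratio analysis keeps only `m` and discards `a`; the point of the separate-channel approach above is that `a` carries additional usable information when the intra-spot correlation is less than one.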

  6. Spatial analysis of toxic emissions in LCA: a sub-continental nested USEtox model with freshwater archetypes.

    PubMed

    Kounina, Anna; Margni, Manuele; Shaked, Shanna; Bulle, Cécile; Jolliet, Olivier

    2014-08-01

    This paper develops continent-specific factors for the USEtox model and analyses the accuracy of different model architectures, spatial scales and archetypes in evaluating toxic impacts, with a focus on freshwater pathways. Inter-continental variation is analysed by comparing chemical fate and intake fractions between sub-continental zones of two life cycle impact assessment models: (1) the nested USEtox model parameterized with sub-continental zones and (2) the spatially differentiated IMPACTWorld model with 17 interconnected sub-continental regions. Substance residence time in water varies by up to two orders of magnitude among the 17 zones assessed with IMPACTWorld and USEtox, and intake fraction varies by up to three orders of magnitude. Despite this variation, the nested USEtox model succeeds in mimicking the results of the spatially differentiated model, with the exception of very persistent volatile pollutants that can be transported to polar regions. Intra-continental variation is analysed by comparing fate and intake fractions modelled with the a-spatial (one box) IMPACT Europe continental model vs. the spatially differentiated version of the same model. Results show that the one box model might overestimate chemical fate and characterisation factors for freshwater eco-toxicity of persistent pollutants by up to three orders of magnitude for point source emissions. Subdividing Europe into three archetypes, based on freshwater residence time (how long it takes water to reach the sea), improves the prediction of fate and intake fractions for point source emissions, bringing them within a factor five compared to the spatial model. We demonstrated that a sub-continental nested model such as USEtox, with continent-specific parameterization complemented with freshwater archetypes, can thus represent inter- and intra-continental spatial variations, whilst minimizing model complexity. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Inter-model variability in hydrological extremes projections for Amazonian sub-basins

    NASA Astrophysics Data System (ADS)

    Andres Rodriguez, Daniel; Garofolo, Lucas; Lázaro de Siqueira Júnior, José; Samprogna Mohor, Guilherme; Tomasella, Javier

    2014-05-01

    Irreducible uncertainties due to the limits of knowledge, the chaotic nature of the climate system and the human decision-making process drive uncertainties in climate change projections. Such uncertainties affect impact studies, especially those concerned with extreme events, and hinder the decision-making processes aimed at mitigation and adaptation. At the same time, these uncertainties open the possibility of developing exploratory analyses of a system's vulnerability to different scenarios. Using projections from several climate models makes it possible to address these uncertainty issues, since multiple runs can be used to explore a wide range of potential impacts and their implications for potential vulnerabilities. Statistical approaches to the analysis of extreme values are usually based on stationarity assumptions. However, nonstationarity is relevant at the time scales considered in extreme value analyses and could have great implications in dynamic complex systems, particularly under climate change. It is therefore necessary to allow for nonstationarity in the statistical distribution parameters. We carried out a study of the dispersion in hydrological extremes projections, using climate change projections from several climate models to feed the Distributed Hydrological Model of the National Institute for Space Research, MHD-INPE, applied to Amazonian sub-basins. This large-scale hydrological model uses a TopModel approach to solve runoff generation processes at the grid-cell scale. The MHD-INPE model was calibrated for 1970-1990 using observed meteorological data, comparing observed and simulated discharges with several performance coefficients. Hydrological model integrations were performed for the historical period (1970-1990) and for the future period (2010-2100). 
Because climate models simulate the variability of the climate system in statistical terms rather than reproducing the historical behaviour of climate variables, the performance of the model runs over the historical period, when fed with climate model data, was tested using descriptors of the flow duration curves. The analyses of projected extreme values were carried out considering nonstationarity of the GEV distribution parameters and were compared with extreme events in the present climate. Results show inter-model variability as a broad dispersion of projected extreme values. Such dispersion implies different degrees of socio-economic impact associated with extreme hydrological events. Although no single optimum result exists, this variability allows the analysis of adaptation strategies and their potential vulnerabilities.
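Allowing for nonstationarity in the GEV parameters typically means letting a parameter such as the location vary with time and fitting by maximum likelihood. Below is a minimal sketch on synthetic data, assuming a linear trend in the location parameter only (the actual MHD-INPE analysis may parameterize the distribution differently); note that SciPy's `genextreme` shape `c` is the negative of the usual GEV shape ξ:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Synthetic "annual maximum" series whose GEV location parameter
# drifts upward over time (a nonstationary extreme-value process)
years = np.arange(50)
x = genextreme.rvs(c=-0.1, loc=100.0 + 0.5 * years, scale=10.0,
                   size=years.size, random_state=rng)

def neg_log_lik(theta):
    """Negative log-likelihood of a GEV whose location varies
    linearly with time: mu(t) = mu0 + mu1 * t."""
    mu0, mu1, log_scale, c = theta
    mu = mu0 + mu1 * years
    return -np.sum(genextreme.logpdf(x, c, loc=mu,
                                     scale=np.exp(log_scale)))

# start from a stationary (zero-trend) Gumbel-like guess
start = [np.mean(x), 0.0, np.log(np.std(x)), 0.0]
res = minimize(neg_log_lik, x0=start, method="Nelder-Mead")
mu0_hat, mu1_hat, log_scale_hat, c_hat = res.x
```

Comparing the maximized likelihood of this model against a stationary fit (via a likelihood-ratio test) is one standard way to decide whether the nonstationary parameters are warranted.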

  8. Do choices of sport fisheries reflect angler preferences for site attributes?

    Treesearch

    Harris, Charles C.; Driver, B. L.; Bergersen, E. P.

    1985-01-01

    A revised recreation choice model is proposed and partially tested using results of a 1980 survey of Colorado anglers. Results of discriminant analyses show modest but useful prediction from preferences for trout fishery site attributes to the choice of type of fishery used.

  9. Prominent contribution of portal mesenchymal cells to liver fibrosis in ischemic and obstructive cholestatic injuries.

    PubMed

    Beaussier, Marc; Wendum, Dominique; Schiffer, Eduardo; Dumont, Sylvie; Rey, Colette; Lienhart, André; Housset, Chantal

    2007-03-01

    Liver fibrosis is produced by myofibroblasts of different origins. In culture models, rat myofibroblasts derived from hepatic stellate cells (HSCs) and from periductal portal mesenchymal cells, show distinct proliferative and immunophenotypic evolutive profiles, in particular regarding desmin microfilament (overexpressed vs shut-down, respectively). Here, we examined the contributions of both cell types, in two rat models of cholestatic injury, arterial liver ischemia and bile duct ligation (BDL). Serum and (immuno)histochemical hepatic analyses were performed at different time points (2 days, 1, 2 and 6 weeks) after injury induction. Cholestatic liver injury, as attested by serum biochemical tests, was moderate/resolutive in ischemia vs severe and sustained in BDL. Spatio-temporal and morphometric analyses of cytokeratin-19 and Sirius red stainings showed that in both models, fibrosis accumulated around reactive bile ductules, with a significant correlation between the progression rates of fibrosis and of the ductular reaction (both higher in BDL). After 6 weeks, fibrosis was stabilized and did not exceed F2 (METAVIR) in arterial ischemia, whereas micronodular cirrhosis (F4) was established in BDL. Immuno-analyses of alpha-smooth muscle actin and desmin expression profiles showed that intralobular HSCs underwent early phenotypic changes marked by desmin overexpression in both models and that the accumulation of fibrosis coincided with that of alpha-SMA-labeled myofibroblasts around portal/septal ductular structures. With the exception of desmin-positive myofibroblasts located at the portal/septal-lobular interface at early stages, and of myofibroblastic HSCs detected together with fine lobular septa in BDL cirrhotic liver, the vast majority of myofibroblasts were desmin-negative. 
These findings suggest that in both resolutive and sustained cholestatic injury, fibrosis is produced by myofibroblasts that derive predominantly from portal/periportal mesenchymal cells. While HSCs massively undergo phenotypic changes marked by desmin overexpression, only a minority fully convert into matrix-producing myofibroblasts, at sites that may nevertheless be important in the healing process that circumscribes wounded hepatocytes.

  10. Behavior of auxetic structures under compression and impact forces

    NASA Astrophysics Data System (ADS)

    Yang, Chulho; Vora, Hitesh D.; Chang, Young

    2018-02-01

    In recent years, various auxetic material structures have been designed and fabricated for diverse applications that utilize normal materials that follow Hooke’s law but still show the properties of negative Poisson’s ratios (NPR). One potential application is body protection pads that are comfortable to wear and effective in protecting body parts by reducing impact force and preventing injuries in high-risk individuals such as elderly people, industrial workers, law enforcement and military personnel, and athletes. This paper reports an integrated theoretical, computational, and experimental investigation conducted for typical auxetic materials that exhibit NPR properties. Parametric 3D CAD models of auxetic structures such as re-entrant hexagonal cells and arrowheads were developed. Then, key structural characteristics of protection pads were evaluated through static analyses of FEA models. Finally, impact analyses were conducted through dynamic simulations of FEA models to validate the results obtained from the static analyses. Efforts were also made to relate the individual and/or combined effect of auxetic structures and materials to the overall stiffness and shock-absorption performance of the protection pads. An advanced additive manufacturing (3D printing) technique was used to build prototypes of the auxetic structures. Three different materials typically used for fused deposition modeling technology, namely polylactic acid (PLA) and thermoplastic polyurethane material (NinjaFlex® and SemiFlex®), were used for different stiffness and shock-absorption properties. The 3D printed prototypes were then tested and the results were compared to the computational predictions. The results showed that the auxetic material could be effective in reducing the shock forces. Each structure and material combination demonstrated unique structural properties such as stiffness, Poisson’s ratio, and efficiency in shock absorption. 
Auxetic structures showed better shock absorption performance than non-auxetic ones. Each structure and material combination demonstrated unique structural properties, such as stiffness, Poisson's ratio, and efficiency of shock absorption. A mechanism for ideal input-force distribution or shunting can thus be suggested for designing protectors, using various shapes, thicknesses, and materials of auxetic structures to reduce the risk of injury.

  11. Analysing spectroscopically the propagation of a CME from its source on the disk to its impact as it propagates outwards

    NASA Astrophysics Data System (ADS)

    Harra, Louise K.; Doschek, G. A.; Matthews, Sarah A.; De Pontieu, Bart; Long, David

    We analyse a complex coronal mass ejection observed by Hinode, SDO and IRIS. SDO/AIA shows that the eruption occurs between several active regions, with flaring occurring in all of them. Hinode/EIS observed one of the flaring active regions, which shows a fast outward propagation related to the CME lift-off. The eruption is then observed as it propagates away from the Sun, pushing the existing post-flare loops downwards as it goes. Spectroscopic observations are made during this time with IRIS, measuring the impact of the CME front as it pushes the loops downwards. Strong enhancements are seen in the cool Mg II emission at these locations, which show complex dynamics. We discuss these new observations in the context of CME models.

  12. A modeling study on the influence of blood flow regulation on skin temperature pulsations

    NASA Astrophysics Data System (ADS)

    Tang, Yanliang; Mizeva, Irina; He, Ying

    2017-04-01

    Nowadays, together with established optical techniques for monitoring microcirculatory blood flow, skin temperature measurements are being developed as well. In this paper, a simple one-dimensional bioheat transfer model was developed to analyse heat wave transport in biological tissue in which an arteriole vessel with pulsatile blood flow is located. The simulated results show that the skin temperature oscillation amplitude attenuates as the blood flow oscillation frequency increases, the same tendency as observed in experiments. The parameter analyses further show that the oscillation amplitude is also influenced by the oscillation amplitude of the blood flow and by the effective thermal conductivity: when these increase, the amplitude of the skin temperature oscillation increases nonlinearly. Variation of the effective thermal conductivity also influences the time delay of the thermal wave at the skin surface and distorts it. Combining two measurement techniques, one estimating blood flow oscillations in the microvessels and the other measuring skin temperature, can yield additional information about skin properties.
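The reported attenuation of skin temperature oscillations with increasing oscillation frequency is what diffusive thermal-wave theory predicts: amplitude decays with depth over a penetration depth δ = sqrt(2α/ω). A minimal sketch of that frequency dependence (the diffusivity value and depth are illustrative, not taken from the paper):

```python
import math

def thermal_wave_amplitude(depth_m, freq_hz, alpha=1.4e-7, a0=1.0):
    """Amplitude of a diffusive thermal wave at a given depth below
    a periodic heat source.  Amplitude decays as exp(-depth / delta)
    with penetration depth delta = sqrt(2 * alpha / omega), where
    alpha is thermal diffusivity (a value around 1.4e-7 m^2/s is
    typical of soft tissue)."""
    omega = 2.0 * math.pi * freq_hz
    delta = math.sqrt(2.0 * alpha / omega)
    return a0 * math.exp(-depth_m / delta)

# Amplitude at 1 mm depth falls as the oscillation frequency rises,
# the same tendency as in the simulated and experimental results
amps = [thermal_wave_amplitude(1e-3, f) for f in (0.01, 0.1, 1.0)]
```

This simple diffusion picture omits perfusion and the pulsatile arteriole of the full bioheat model, but it captures why slow (low-frequency) blood flow oscillations reach the skin surface far more strongly than fast ones.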

  13. Spreading of a ferrofluid core in three-stream micromixer channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhaomeng; Varma, V. B.; Ramanujan, R. V., E-mail: ramanujan@ntu.edu.sg

    2015-05-15

    Spreading of a water based ferrofluid core, cladded by a diamagnetic fluid, in three-stream micromixer channels was studied. This spreading, induced by an external magnetic field, is known as magnetofluidic spreading (MFS). MFS is useful for various novel applications where control of the fluid-fluid interface is desired, such as micromixers or micro-chemical reactors. However, fundamental aspects of MFS are still unclear, and a model without correction factors is lacking. Hence, in this work, both experimental and numerical analyses were undertaken to study MFS. We show that MFS increased for higher applied magnetic fields, slower flow speed of both fluids, smaller flow rate of ferrofluid relative to cladding, and higher initial magnetic particle concentration. Spreading, mainly due to convective diffusion, was observed mostly near the channel walls. Our multi-physics model, which combines magnetic and fluidic analyses, showed, for the first time, excellent agreement between theory and experiment. These results can be useful for lab-on-a-chip devices.

  14. How are the Concepts and Theories of Acid Base Reactions Presented? Chemistry in Textbooks and as Presented by Teachers

    NASA Astrophysics Data System (ADS)

    Furió-Más, Carlos; Calatayud, María Luisa; Guisasola, Jenaro; Furió-Gómez, Cristina

    2005-09-01

    This paper investigates the views of science and scientific activity that can be found in chemistry textbooks and heard from teachers when acid base reactions are introduced to grade 12 and university chemistry students. First, the main macroscopic and microscopic conceptual models are developed. Second, we attempt to show how the views of science found in textbooks and held by chemistry teachers contribute to an impoverished image of chemistry. A varied design was developed to analyse some epistemological deficiencies in the teaching of acid base reactions. Textbooks were analysed and teachers were interviewed. The results obtained show that the teaching process does not emphasize the macroscopic presentation of acids and bases. Macroscopic and microscopic conceptual models involved in the explanation of acid base processes are mixed in textbooks and by teachers. Furthermore, we detected the non-problematic introduction of concepts, such as hydrolysis, and a linear, cumulative view of acid base theories (Arrhenius and Brønsted).

  15. Reliability, Factor Structure, and Associations With Measures of Problem Relationship and Behavior of the Personality Inventory for DSM-5 in a Sample of Italian Community-Dwelling Adolescents.

    PubMed

    Somma, Antonella; Borroni, Serena; Maffei, Cesare; Giarolli, Laura E; Markon, Kristian E; Krueger, Robert F; Fossati, Andrea

    2017-10-01

    In order to assess the reliability, factorial validity, and criterion validity of the Personality Inventory for DSM-5 (PID-5) among adolescents, 1,264 Italian high school students were administered the PID-5. Participants were also administered the Questionnaire on Relationships and Substance Use as a criterion measure. In the full sample, McDonald's ω values were adequate for the PID-5 scales (median ω = .85, SD = .06), except for Suspiciousness. However, all PID-5 scales showed average inter-item correlation values in the .20-.55 range. Exploratory structural equation modeling analyses provided moderate support for the a priori model of PID-5 trait scales. Ordinal logistic regression analyses showed that selected PID-5 trait scales predicted a significant, albeit moderate (Cox & Snell R² values ranged from .08 to .15, all ps < .001) amount of variance in Questionnaire on Relationships and Substance Use variables.
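The Cox & Snell pseudo-R² reported for the ordinal regressions above is computed from the log-likelihoods of a null (intercept-only) model and the fitted model. A minimal sketch with hypothetical log-likelihood values (not taken from the study):

```python
import math

def cox_snell_r2(ll_null, ll_model, n):
    """Cox & Snell pseudo-R^2 for a likelihood-based model:
    R^2 = 1 - exp(2 * (ll_null - ll_model) / n),
    where ll_null and ll_model are the maximized log-likelihoods of
    the intercept-only and fitted models and n is the sample size."""
    return 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)

# hypothetical log-likelihoods for a sample of n = 1264 adolescents
r2 = cox_snell_r2(ll_null=-800.0, ll_model=-730.0, n=1264)
```

Unlike OLS R², this statistic cannot reach 1 for discrete outcomes, which is one reason values in the .08 to .15 range are conventionally read as moderate rather than small.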

  16. Structural dynamics of shroudless, hollow fan blades with composite in-lays

    NASA Technical Reports Server (NTRS)

    Aiello, R. A.; Hirschbein, M. S.; Chamis, C. C.

    1982-01-01

    Structural and dynamic analyses are presented for a shroudless, hollow titanium fan blade proposed for future use in aircraft turbine engines. The blade was modeled and analyzed using the composite blade structural analysis computer program (COBSTRAN); an integrated program consisting of mesh generators, composite mechanics codes, NASTRAN, and pre- and post-processors. Vibration and impact analyses are presented. The vibration analysis was conducted with COBSTRAN. Results show the effect of the centrifugal force field on frequencies, twist, and blade camber. Bird impact analysis was performed with the multi-mode blade impact computer program. This program uses the geometric model and modal analysis from the COBSTRAN vibration analysis to determine the gross impact response of the fan blades to bird strikes. The structural performance of this blade is also compared to a blade of similar design but with composite in-lays on the outer surface. Results show that the composite in-lays can be selected (designed) to substantially modify the mechanical performance of the shroudless, hollow fan blade.

  17. Spreading of a ferrofluid core in three-stream micromixer channels

    NASA Astrophysics Data System (ADS)

    Wang, Zhaomeng; Varma, V. B.; Xia, Huan Ming; Wang, Z. P.; Ramanujan, R. V.

    2015-05-01

    Spreading of a water-based ferrofluid core, cladded by a diamagnetic fluid, in three-stream micromixer channels was studied. This spreading, induced by an external magnetic field, is known as magnetofluidic spreading (MFS). MFS is useful for various novel applications where control of the fluid-fluid interface is desired, such as micromixers or micro-chemical reactors. However, fundamental aspects of MFS are still unclear, and a model without correction factors is lacking. Hence, in this work, both experimental and numerical analyses were undertaken to study MFS. We show that MFS increased for higher applied magnetic fields, slower flow speed of both fluids, smaller flow rate of ferrofluid relative to cladding, and higher initial magnetic particle concentration. Spreading, mainly due to convective diffusion, was observed mostly near the channel walls. Our multi-physics model, which combines magnetic and fluidic analyses, showed, for the first time, excellent agreement between theory and experiment. These results can be useful for lab-on-a-chip devices.

  18. The Relationships Between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    NASA Astrophysics Data System (ADS)

    Cardoso Mendonça, Paula Cristina; Justi, Rosária

    2013-09-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities were planned from the transposition of the main modelling stages that constitute the 'Model of Modelling Diagram' so that students could experience each of these stages. All the lessons were video recorded and their transcriptions supported the elaboration of case studies for each group of students. From the analysis of the case studies, we identified argumentative situations when students performed all of the modelling stages. Our data show that the argumentative situations were related to sense making, articulating and persuasion purposes, and were closely related to the generation of explanations in the modelling processes. They also show that representations are important resources for argumentation. Our results are consistent with some of those already reported in the literature regarding the relationship between modelling and argumentation, but are also divergent when they show that argumentation is not only related to the model evaluation phase.

  19. A health economic model to determine the long-term costs and clinical outcomes of raising low HDL-cholesterol in the prevention of coronary heart disease.

    PubMed

    Roze, S; Liens, D; Palmer, A; Berger, W; Tucker, D; Renaudin, C

    2006-12-01

    The aim of this study was to describe a health economic model developed to project lifetime clinical and cost outcomes of lipid-modifying interventions in patients not reaching target lipid levels and to assess the validity of the model. The internet-based, computer simulation model is made up of two decision analytic sub-models, the first utilizing Monte Carlo simulation, and the second applying Markov modeling techniques. Monte Carlo simulation generates a baseline cohort for long-term simulation by assigning an individual lipid profile to each patient, and applying the treatment effects of interventions under investigation. The Markov model then estimates the long-term clinical (coronary heart disease events, life expectancy, and quality-adjusted life expectancy) and cost outcomes up to a lifetime horizon, based on risk equations from the Framingham study. Internal and external validation analyses were performed. The results of the model validation analyses, plotted against corresponding real-life values from Framingham, 4S, AFCAPS/TexCAPS, and a meta-analysis by Gordon et al., showed that the majority of values were close to the y = x line, which indicates a perfect fit. The R2 value was 0.9575 and the gradient of the regression line was 0.9329, both very close to the perfect fit (= 1). Validation analyses of the computer simulation model suggest the model is able to recreate the outcomes from published clinical studies and would be a valuable tool for the evaluation of new and existing therapy options for patients with persistent dyslipidemia.
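    The two-stage structure described above (a Monte Carlo step that generates a baseline cohort, followed by Markov cycling to a lifetime horizon) can be sketched as follows. All probabilities and effect sizes below are hypothetical placeholders, not the Framingham risk equations the model actually uses:

```python
import random

def simulate_patient(raise_hdl, rng, cycles=40):
    """One simulated patient: Monte Carlo baseline, then yearly Markov cycles."""
    hdl = rng.gauss(38.0, 6.0)                 # mg/dL; hypothetical low-HDL cohort
    if raise_hdl:
        hdl += 5.0                             # hypothetical treatment effect
    p_chd = max(0.005, 0.04 - 0.0005 * hdl)    # toy annual CHD event risk
    state, life_years = "healthy", 0
    for _ in range(cycles):                    # one Markov cycle = one year
        if state == "dead":
            break
        life_years += 1
        if state == "healthy" and rng.random() < p_chd:
            state = "chd"
        elif state == "chd" and rng.random() < 0.06:
            state = "dead"                     # toy post-event mortality
        elif rng.random() < 0.01:
            state = "dead"                     # toy background mortality
    return life_years

def mean_life_years(raise_hdl, n=1000, seed=1):
    rng = random.Random(seed)
    return sum(simulate_patient(raise_hdl, rng) for _ in range(n)) / n
```

    Comparing `mean_life_years(True)` with `mean_life_years(False)` mirrors how the model contrasts lipid-modifying interventions against no intervention.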

  20. LOD score exclusion analyses for candidate genes using random population samples.

    PubMed

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes with random population samples. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed and if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of vitamin D receptor and estrogen receptor genes underlying the differential risk to osteoporotic fractures.
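    The exclusion logic can be sketched as a log10 likelihood ratio: a candidate effect is excluded when the LOD score falls to -2.0 or below. The binomial likelihood here is a toy stand-in for the authors' population-sample models:

```python
import math

def lod_score(loglik_effect, loglik_null):
    """LOD = log10 likelihood ratio of the specified effect vs. the null."""
    return (loglik_effect - loglik_null) / math.log(10)

def binom_loglik(k, n, p):
    """Binomial log-likelihood of k successes in n trials."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

# Hypothetical data: 52 of 400 sampled individuals carry the trait.
k, n = 52, 400
null_p = k / n                 # best-fitting frequency under the null
effect_p = 0.25                # frequency implied by the tested genetic effect
lod = lod_score(binom_loglik(k, n, effect_p), binom_loglik(k, n, null_p))
excluded = lod <= -2.0         # exclusion criterion from the abstract
```

    With these illustrative numbers the data are far likelier under the null than under the specified effect, so the effect size is excluded.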

  1. Testing a Coupled Global-limited-area Data Assimilation System using Observations from the 2004 Pacific Typhoon Season

    NASA Astrophysics Data System (ADS)

    Holt, C. R.; Szunyogh, I.; Gyarmati, G.; Hoffman, R. N.; Leidner, M.

    2011-12-01

    Tropical cyclone (TC) track and intensity forecasts have improved in recent years due to increased model resolution, improved data assimilation, and the rapid increase in the number of routinely assimilated observations over oceans. The data assimilation approach that has received the most attention in recent years is Ensemble Kalman Filtering (EnKF). The most attractive feature of the EnKF is that it uses a fully flow-dependent estimate of the error statistics, which can have important benefits for the analysis of rapidly developing TCs. We implement the Local Ensemble Transform Kalman Filter algorithm, a variation of the EnKF, on a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model and the NCEP Regional Spectral Model (RSM) to build a coupled global-limited area analysis/forecast system. This is the first time, to our knowledge, that such a system is used for the analysis and forecast of tropical cyclones. We use data from summer 2004 to study eight tropical cyclones in the Northwest Pacific. The benchmark data sets that we use to assess the performance of our system are the NCEP Reanalysis and the NCEP Operational GFS analyses from 2004. These benchmark analyses were both obtained with the Spectral Statistical Interpolation, which was the operational data assimilation system of NCEP in 2004. The GFS Operational analysis assimilated a large number of satellite radiance observations in addition to the observations assimilated in our system. All analyses are verified against the Joint Typhoon Warning Center Best Track data set. The errors are calculated for the position and intensity of the TCs. The global component of the ensemble-based system shows improvement in position analysis over the NCEP Reanalysis, but shows no significant difference from the NCEP operational analysis for most of the storm tracks. The regional component of our system improves position analysis over all the global analyses. The intensity analyses, measured by the minimum sea level pressure, are of similar quality in all of the analyses. Regional deterministic forecasts started from our analyses are generally not significantly different from those started from the GFS operational analysis. On average, the regional experiments performed better for sea level pressure forecasts beyond 48 h, while the global forecasts predicted position better beyond 48 h.
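    The position errors verified against Best Track above are great-circle distances between analysed and observed TC centers. A minimal haversine sketch (the coordinates are illustrative, not from the study):

```python
import math

def track_error_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle (haversine) distance between two lat/lon points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Hypothetical analysed TC center vs. Best Track center for one fix.
position_error = track_error_km(21.4, 130.2, 21.7, 130.6)
```

    Averaging such errors over all fixes of a storm gives the position statistics compared across the reanalysis, operational, and ensemble-based systems.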

  2. Seismic behavior of an Italian Renaissance Sanctuary: Damage assessment by numerical modelling

    NASA Astrophysics Data System (ADS)

    Clementi, Francesco; Nespeca, Andrea; Lenci, Stefano

    2016-12-01

    The paper deals with modelling and analysis of architectural heritage through the discussion of an illustrative case study: the Medieval Sanctuary of Sant'Agostino (Offida, Italy). Using the finite element technique, a 3D numerical model of the sanctuary is built, and then used to identify the main sources of the damage. The work shows that advanced numerical analyses could offer significant information for the understanding of the causes of existing damage and, more generally, on the seismic vulnerability.

  3. A dynamic model for tumour growth and metastasis formation.

    PubMed

    Haustein, Volker; Schumacher, Udo

    2012-07-05

    A simple and fast computational model to describe the dynamics of tumour growth and metastasis formation is presented. The model is based on the calculation of successive generations of tumour cells and enables one to describe biologically important entities like tumour volume, the time point of first metastatic growth, or the number of metastatic colonies at a given time. The model relies entirely on the chronology of these successive events of the metastatic cascade. The simulation calculations were performed for two embedded growth models to describe the Gompertzian-like growth behaviour of tumours. The initial training of the models was carried out using an analytical solution for the size distribution of metastases of a hepatocellular carcinoma. We then show the applicability of our models to clinical data from the Munich Cancer Registry. Growth and dissemination characteristics of metastatic cells originating from cells in the primary breast cancer can be modelled, thus showing the model's ability to perform systematic analyses relevant for clinical breast cancer research and treatment. In particular, our calculations show that metastasis formation has generally already been initiated before the primary tumour can be detected clinically.
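    The Gompertzian-like growth referred to above has a simple closed form, V(t) = V_max · exp(ln(V0/V_max) · e^(-a·t)). A sketch with illustrative parameter values (not the values fitted in the paper):

```python
import math

def gompertz_volume(t, v0=1e-6, v_max=1.0, a=0.05):
    """Tumour volume at time t: V(t) = v_max * exp(ln(v0/v_max) * exp(-a*t))."""
    return v_max * math.exp(math.log(v0 / v_max) * math.exp(-a * t))

# Growth is near-exponential early and saturates toward v_max later.
early, late = gompertz_volume(10.0), gompertz_volume(200.0)
```

    The decelerating approach to `v_max` is what distinguishes Gompertz growth from plain exponential growth and motivates its use for tumour volume curves.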

  4. A dynamic model for tumour growth and metastasis formation

    PubMed Central

    2012-01-01

    A simple and fast computational model to describe the dynamics of tumour growth and metastasis formation is presented. The model is based on the calculation of successive generations of tumour cells and enables one to describe biologically important entities like tumour volume, the time point of first metastatic growth, or the number of metastatic colonies at a given time. The model relies entirely on the chronology of these successive events of the metastatic cascade. The simulation calculations were performed for two embedded growth models to describe the Gompertzian-like growth behaviour of tumours. The initial training of the models was carried out using an analytical solution for the size distribution of metastases of a hepatocellular carcinoma. We then show the applicability of our models to clinical data from the Munich Cancer Registry. Growth and dissemination characteristics of metastatic cells originating from cells in the primary breast cancer can be modelled, thus showing the model's ability to perform systematic analyses relevant for clinical breast cancer research and treatment. In particular, our calculations show that metastasis formation has generally already been initiated before the primary tumour can be detected clinically. PMID:22548735

  5. Forecasting incidence of dengue in Rajasthan, using time series analyses.

    PubMed

    Bhatnagar, Sunil; Lal, Vivek; Gupta, Shiv D; Gupta, Om P

    2012-01-01

    To develop a prediction model for dengue fever/dengue haemorrhagic fever (DF/DHF) using time series data over the past decade in Rajasthan and to forecast monthly DF/DHF incidence for 2011. A seasonal autoregressive integrated moving average (SARIMA) model was used for statistical modeling. During January 2001 to December 2010, the reported DF/DHF cases showed a cyclical pattern with seasonal variation. The SARIMA (0,0,1)(0,1,1)₁₂ model had the lowest normalized Bayesian information criterion (BIC) of 9.426 and mean absolute percentage error (MAPE) of 263.361 and appeared to be the best model. The proportion of variance explained by the model was 54.3%. Adequacy of the model was established through the Ljung-Box test (Q statistic 4.910 and P-value 0.996), which showed no significant correlation between residuals at different lag times. The forecast for the year 2011 showed a seasonal peak in the month of October with an estimated 546 cases. Application of the SARIMA model may be useful for forecasting cases and impending outbreaks of DF/DHF and other infectious diseases that exhibit a seasonal pattern.
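    A SARIMA (0,0,1)(0,1,1)₁₂ model operates on the seasonally differenced series y_t - y_{t-12}. A sketch of that differencing step plus a seasonal-naive baseline forecast, using hypothetical monthly counts rather than the Rajasthan data:

```python
def seasonal_difference(series, period=12):
    """y_t - y_{t-period}: removes the repeating seasonal level."""
    return [series[i] - series[i - period] for i in range(period, len(series))]

def seasonal_naive_forecast(series, steps=12, period=12):
    """Forecast each future month with the value observed one season earlier."""
    return [series[-period + (i % period)] for i in range(steps)]

# Two years of hypothetical monthly case counts with an October peak.
cases = [5, 3, 2, 2, 4, 8, 15, 30, 80, 160, 90, 20,
         6, 4, 3, 2, 5, 9, 18, 35, 95, 180, 100, 25]
diffed = seasonal_difference(cases)     # the series the MA terms act on
forecast = seasonal_naive_forecast(cases)
```

    The fitted SARIMA model adds non-seasonal and seasonal MA(1) terms on top of this differencing; in practice one would estimate it with a package such as statsmodels' SARIMAX.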

  6. Quantifying, Analysing and Modeling Rockfall Activity in two Different Alpine Catchments using Terrestrial Laserscanning

    NASA Astrophysics Data System (ADS)

    Haas, F.; Heckmann, T.; Wichmann, V.; Becht, M.

    2011-12-01

    Rockfall processes play a major role as a natural hazard, especially where rock faces are located close to infrastructure. These processes also cause the retreat of steep rock faces by weathering and the growth of the corresponding talus cones by routing debris down them; the process therefore also plays an important role in the geomorphic system and the sediment budget of high mountain catchments. The presented investigation deals with the use of TLS for the quantification and analysis of rockfall activity in two study areas located in the Alps. The rock faces of both catchments and the corresponding talus cones were scanned twice a year from different distances. Figure 1 shows an example of the spatial distribution of surface changes at a rock face in the Northern Dolomites between 2008 and 2010. The measured surface changes at this location yield a mean rockwall retreat of 0.04 cm/a. High-resolution TLS data are not only applicable to quantifying rockfall activity; they can also be used to characterize the surface properties of the corresponding talus cones and the runout distances of larger boulders, which can lead to a better process understanding. Therefore, the surface roughness of talus cones in both catchments was characterized from the TLS point clouds by a GIS approach. The resulting detailed maps of the surface conditions on the talus cones were used to improve an existing process model which models runout distances on the talus cones using distributed friction parameters. Besides this, the investigations showed that the shape of the boulders also has an influence on the runout distance. For this reason, the interrelationships between rock fragment morphology and runout distance of over 600 single boulders were analysed at the site of a large rockfall event. The submitted poster will show the results of the quantification of the rockfall activity, together with the results of the analyses of the talus cones and of the large rockfall event, and of applying these results to an existing rockfall model.

  7. Virulence regulation in Staphylococcus aureus: the need for in vivo analysis of virulence factor regulation.

    PubMed

    Pragman, Alexa A; Schlievert, Patrick M

    2004-10-01

    Staphylococcus aureus is a pathogenic microorganism that is responsible for a wide variety of clinical infections. These infections can be relatively mild, but serious, life-threatening infections may result from the expression of staphylococcal virulence factors that are coordinated by virulence regulators. Much work has been done to characterize the actions of staphylococcal virulence regulators in broth culture. Recently, several laboratories showed that transcriptional analyses of virulence regulators in in vivo animal models or in human infection did not correlate with transcriptional analyses accomplished in vitro. In describing the differences between in vitro and in vivo transcription of staphylococcal virulence regulators, we hope to encourage investigators to study virulence regulators using infection models whenever possible.

  8. Coalescent Modelling Suggests Recent Secondary-Contact of Cryptic Penguin Species

    PubMed Central

    Grosser, Stefanie; Burridge, Christopher P.; Peucker, Amanda J.; Waters, Jonathan M.

    2015-01-01

    Molecular genetic analyses present powerful tools for elucidating demographic and biogeographic histories of taxa. Here we present genetic evidence showing a dynamic history for two cryptic lineages within Eudyptula, the world's smallest penguin. Specifically, we use a suite of genetic markers to reveal that two congeneric taxa ('Australia' and 'New Zealand') co-occur in southern New Zealand, with only low levels of hybridization. Coalescent modelling suggests that the Australian little penguin only recently expanded into southern New Zealand. Analyses conducted under time-dependent molecular evolutionary rates lend support to the hypothesis of recent anthropogenic turnover, consistent with shifts detected in several other New Zealand coastal vertebrate taxa. This apparent turnover event highlights the dynamic nature of the region’s coastal ecosystem. PMID:26675310

  9. Coalescent Modelling Suggests Recent Secondary-Contact of Cryptic Penguin Species.

    PubMed

    Grosser, Stefanie; Burridge, Christopher P; Peucker, Amanda J; Waters, Jonathan M

    2015-01-01

    Molecular genetic analyses present powerful tools for elucidating demographic and biogeographic histories of taxa. Here we present genetic evidence showing a dynamic history for two cryptic lineages within Eudyptula, the world's smallest penguin. Specifically, we use a suite of genetic markers to reveal that two congeneric taxa ('Australia' and 'New Zealand') co-occur in southern New Zealand, with only low levels of hybridization. Coalescent modelling suggests that the Australian little penguin only recently expanded into southern New Zealand. Analyses conducted under time-dependent molecular evolutionary rates lend support to the hypothesis of recent anthropogenic turnover, consistent with shifts detected in several other New Zealand coastal vertebrate taxa. This apparent turnover event highlights the dynamic nature of the region's coastal ecosystem.

  10. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

    This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to classify between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm makes it possible to extract knowledge about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
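    The wrapper feature selection mentioned above can be sketched as a greedy forward search that keeps the feature whose addition most improves held-out accuracy. The nearest-centroid classifier and the toy data below are illustrative stand-ins for the study's linear and neural models:

```python
def centroid_accuracy(X, y, feats, train, test):
    """Accuracy of a nearest-centroid rule restricted to the features in `feats`."""
    def proj(row):
        return [row[f] for f in feats]
    cents = {}
    for label in set(y):
        rows = [proj(X[i]) for i in train if y[i] == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    def predict(row):
        return min(cents, key=lambda l: sum((a - b) ** 2
                                            for a, b in zip(proj(row), cents[l])))
    return sum(predict(X[i]) == y[i] for i in test) / len(test)

def greedy_wrapper(X, y, train, test, n_feats):
    """Forward selection: repeatedly add the feature that maximises accuracy."""
    selected = []
    for _ in range(n_feats):
        remaining = [f for f in range(len(X[0])) if f not in selected]
        best = max(remaining, key=lambda f: centroid_accuracy(
            X, y, selected + [f], train, test))
        selected.append(best)
    return selected

# Toy data: feature 0 separates the classes; feature 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [0.2, 9.0], [1.0, 2.0], [1.1, 8.0], [0.9, 4.0]]
y = [0, 0, 0, 1, 1, 1]
train, test = [0, 1, 3, 4], [2, 5]
selected = greedy_wrapper(X, y, train, test, n_feats=1)
```

    Because the wrapper scores features by the classifier's own accuracy, the surviving feature subset doubles as the "knowledge extracted" about which physiological properties matter.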

  11. An assessment of the microgravity and acoustic environments in Space Station Freedom using VAPEPS

    NASA Technical Reports Server (NTRS)

    Bergen, Thomas F.; Scharton, Terry D.; Badilla, Gloria A.

    1992-01-01

    The Vibroacoustic Payload Environment Prediction System (VAPEPS) was used to predict the stationary on-orbit environments in one of the Space Station Freedom modules. The model of the module included the outer structure, equipment and payload racks, avionics, and cabin air and duct systems. Acoustic and vibratory outputs of various source classes were derived and input to the model. Initial results of analyses, performed in one-third octave frequency bands from 10 to 10,000 Hz, show that both the microgravity and acoustic environment requirements will be exceeded in some one-third octave bands with the current SSF design. Further analyses indicate that interior acoustic level requirements will be exceeded even if the microgravity requirements are met.
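    The one-third octave bands used in the analysis are defined by a constant ratio of 2^(1/3) between adjacent band centers. A sketch generating centers from 10 Hz to 10 kHz (base-2 band definitions; nominal IEC base-10 centers differ slightly):

```python
def third_octave_centers(f_start=10.0, f_stop=10000.0):
    """Band centers spaced by a factor of 2**(1/3), from f_start up to f_stop."""
    centers, f = [], f_start
    while f <= f_stop * 1.0001:          # small tolerance for float drift
        centers.append(round(f, 1))
        f *= 2 ** (1 / 3)
    return centers

bands = third_octave_centers()
```

    Predicted and allowable levels are then compared band by band, which is how "exceeded in some one-third octave bands" is established.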

  12. Wide-field Imaging System and Rapid Direction of Optical Zoom (WOZ)

    DTIC Science & Technology

    2010-12-24

    The modeling tools are based on interaction between three commercial software packages: SolidWorks, COMSOL Multiphysics, and ZEMAX optical design...deformation resulting from the applied voltages. Finally, the deformed surface can be exported to ZEMAX via MatLab. From ZEMAX, various analyses can...results to extract from ZEMAX to support the optimization remains to be determined. Figure 1 shows the deformation calculated using a model of an

  13. Neutrinos and flavor symmetries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanimoto, Morimitsu

    2015-07-15

    We discuss the recent progress of flavor models with non-Abelian discrete symmetry in the lepton sector, focusing on θ₁₃ and the CP violating phase. In both the direct and the indirect approach of the flavor symmetry, the non-vanishing θ₁₃ is predictable. The flavor symmetry combined with the generalised CP symmetry can also predict the CP violating phase. We show phenomenological analyses of neutrino mixing for the typical flavor models.

  14. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    PubMed Central

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
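    The paper's sensitivity finding can be illustrated with a one-at-a-time perturbation scheme: vary each muscle attribute and record the relative change in a bite force estimate. The multiplicative force model and the numbers below are hypothetical, not the study's multi-body model:

```python
def bite_force(intrinsic_strength, pcsa_cm2, moment_arm_ratio):
    """Toy estimate: muscle stress (N/cm^2) x cross-section (cm^2) x leverage."""
    return intrinsic_strength * pcsa_cm2 * moment_arm_ratio

def sensitivity(base_params, perturb=0.10):
    """Relative output change for a +10% one-at-a-time change in each input."""
    f0 = bite_force(**base_params)
    out = {}
    for name in base_params:
        p = dict(base_params)
        p[name] *= 1 + perturb
        out[name] = (bite_force(**p) - f0) / f0
    return out

base = {"intrinsic_strength": 30.0, "pcsa_cm2": 2.5, "moment_arm_ratio": 0.4}
sens = sensitivity(base)
```

    In this purely multiplicative toy every attribute has equal leverage; in a real multi-body model the same perturbations propagate nonlinearly, which is why the authors' measured attributes matter so much.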

  15. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological improvements …
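    The "local" parameter sensitivity analyses contrasted above with global ones are often computed as normalized sensitivity coefficients, (dC/dp)·(p/C), by finite differences. A sketch using a one-compartment kinetic model as a stand-in for a full PBPK model (parameter values illustrative):

```python
import math

def concentration(t, params):
    """One-compartment IV bolus model: C(t) = (D/V) * exp(-(CL/V) * t)."""
    return params["dose"] / params["volume"] * math.exp(
        -params["clearance"] / params["volume"] * t)

def normalized_sensitivity(t, params, name, rel_step=1e-4):
    """(dC/dp) * (p/C) by central finite differences on parameter `name`."""
    p_hi, p_lo = dict(params), dict(params)
    h = params[name] * rel_step
    p_hi[name] += h
    p_lo[name] -= h
    dcdp = (concentration(t, p_hi) - concentration(t, p_lo)) / (2 * h)
    return dcdp * params[name] / concentration(t, params)

params = {"dose": 100.0, "volume": 42.0, "clearance": 5.0}
s_clearance = normalized_sensitivity(24.0, params, "clearance")
s_dose = normalized_sensitivity(24.0, params, "dose")
```

    For this model the coefficients have analytic values (-CL·t/V for clearance, exactly 1 for dose), which makes the finite-difference sketch easy to check; a real PBPK model would repeat the same loop over dozens of physiological parameters.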

  16. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach has so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
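    The per-fragment decomposition step can be sketched as plain EM for a small 1D Gaussian mixture; the full algorithm would first partition the spectrum and then aggregate the per-fragment mixtures. The peak positions below are hypothetical:

```python
import math

def em_gmm_1d(x, k=2, iters=60):
    """Fit a k-component 1D Gaussian mixture by EM (k >= 2 assumed)."""
    xs = sorted(x)
    # Initialise means at evenly spaced order statistics, widths from the range.
    mu = [xs[round(i * (len(xs) - 1) / (k - 1))] for i in range(k)]
    spread = xs[-1] - xs[0]
    sig = [spread / (2 * k) if spread else 1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for xi in x:
            dens = [w[j] / (sig[j] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((xi - mu[j]) / sig[j]) ** 2)
                    for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and standard deviations.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(x)
            mu[j] = sum(r[j] * xi for r, xi in zip(resp, x)) / nj
            var = sum(r[j] * (xi - mu[j]) ** 2 for r, xi in zip(resp, x)) / nj
            sig[j] = max(math.sqrt(var), 1e-6)
    return w, mu, sig

# One spectral fragment with two hypothetical peaks near m/z 1000 and 1010.
fragment = [999.8, 1000.0, 1000.1, 1000.2, 1009.9, 1010.0, 1010.1, 1010.3]
weights, means, sigmas = em_gmm_1d(fragment)
```

    Fitting each fragment independently keeps the EM problems small, which is the efficiency argument behind the partitioning idea.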

  17. Authentication of Primordial Characteristics of the CLBL-1 Cell Line Prove the Integrity of a Canine B-Cell Lymphoma in a Murine In Vivo Model

    PubMed Central

    Reimann-Berg, Nicola; Walter, Ingrid; Fuchs-Baumgartinger, Andrea; Wagner, Siegfried; Kovacic, Boris; Essler, Sabine E.; Schwendenwein, Ilse; Nolte, Ingo; Saalmüller, Armin; Escobar, Hugo Murua

    2012-01-01

    Cell lines are key tools in cancer research, allowing the generation of neoplasias in animal models that closely mimic the original tumours in vivo. Canine lymphoma is the major hematopoietic malignancy in dogs and is considered a valuable spontaneous large animal model for human Non-Hodgkin's Lymphoma (NHL). Herein we describe the establishment and characterisation of an in vivo model using the canine B-cell lymphoma cell line CLBL-1, analysing the stability of the induced tumours and their ability to resemble the original material. CLBL-1 was injected into Rag2−/−γc −/− mice. The generated tumor material was analysed by immunophenotyping and histopathology and used to establish the cell line CLBL-1M. Both cell lines were karyotyped for detection of chromosomal aberrations. Additionally, CLBL-1 was stimulated with IL-2 and DSP30, as described for primary canine B-cell lymphomas and NHL, to examine the stimulatory effect on cell proliferation. CLBL-1 in vivo application resulted in lymphoma-like disease and tumor formation. Immunophenotypic analysis of tumorous material showed expression of CD45+, MHCII+, CD11a+ and CD79αcy+. PARR analysis showed positivity for IgH, indicating a monoclonal character. These cytogenetic, molecular, immunophenotypical and histological characterisations of the in vivo model reveal that the induced tumours and the cell line generated from them closely resemble the original material. After DSP30 and IL-2 stimulation, CLBL-1 was shown to respond in the same way as primary material. The CLBL-1 in vivo model described herein provides a highly stable tool for B-cell lymphoma research in veterinary and human medicine, allowing various further in vivo studies. PMID:22761949

  18. Authentication of primordial characteristics of the CLBL-1 cell line prove the integrity of a canine B-cell lymphoma in a murine in vivo model.

    PubMed

    Rütgen, Barbara C; Willenbrock, Saskia; Reimann-Berg, Nicola; Walter, Ingrid; Fuchs-Baumgartinger, Andrea; Wagner, Siegfried; Kovacic, Boris; Essler, Sabine E; Schwendenwein, Ilse; Nolte, Ingo; Saalmüller, Armin; Murua Escobar, Hugo

    2012-01-01

    Cell lines are key tools in cancer research, allowing the generation of neoplasias in animal models that closely mimic the original tumours in vivo. Canine lymphoma is the major hematopoietic malignancy in dogs and is considered a valuable spontaneous large animal model for human Non-Hodgkin's Lymphoma (NHL). Herein we describe the establishment and characterisation of an in vivo model using the canine B-cell lymphoma cell line CLBL-1, analysing the stability of the induced tumours and their ability to resemble the original material. CLBL-1 was injected into Rag2(-/-)γ(c) (-/-) mice. The generated tumor material was analysed by immunophenotyping and histopathology and used to establish the cell line CLBL-1M. Both cell lines were karyotyped for detection of chromosomal aberrations. Additionally, CLBL-1 was stimulated with IL-2 and DSP30, as described for primary canine B-cell lymphomas and NHL, to examine the stimulatory effect on cell proliferation. CLBL-1 in vivo application resulted in lymphoma-like disease and tumor formation. Immunophenotypic analysis of tumorous material showed expression of CD45(+), MHCII(+), CD11a(+) and CD79αcy(+). PARR analysis showed positivity for IgH, indicating a monoclonal character. These cytogenetic, molecular, immunophenotypical and histological characterisations of the in vivo model reveal that the induced tumours and the cell line generated from them closely resemble the original material. After DSP30 and IL-2 stimulation, CLBL-1 was shown to respond in the same way as primary material. The CLBL-1 in vivo model described herein provides a highly stable tool for B-cell lymphoma research in veterinary and human medicine, allowing various further in vivo studies.

  19. Developing the Communicative Participation Item Bank: Rasch Analysis Results From a Spasmodic Dysphonia Sample

    PubMed Central

    Baylor, Carolyn R.; Yorkston, Kathryn M.; Eadie, Tanya L.; Miller, Robert M.; Amtmann, Dagmar

    2011-01-01

    Purpose The purpose of this study was to conduct the initial psychometric analyses of the Communicative Participation Item Bank—a new self-report instrument designed to measure the extent to which communication disorders interfere with communicative participation. This item bank is intended for community-dwelling adults across a range of communication disorders. Method A set of 141 candidate items was administered to 208 adults with spasmodic dysphonia. Participants rated the extent to which their condition interfered with participation in various speaking communication situations. Questionnaires were administered online or in a paper version per participant preference. Participants also completed the Voice Handicap Index (B. H. Jacobson et al., 1997) and a demographic questionnaire. Rasch analyses were conducted using Winsteps software (J. M. Linacre, 1991). Results The results show that items functioned better when the 5-category response format was recoded to a 4-category format. After removing 8 items that did not fit the Rasch model, the remaining 133 items demonstrated strong evidence of sufficient unidimensionality, with the model accounting for 89.3% of variance. Item location values ranged from −2.73 to 2.20 logits. Conclusions Preliminary Rasch analyses of the Communicative Participation Item Bank show strong psychometric properties. Further testing in populations with other communication disorders is needed. PMID:19717652
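    The Rasch model underlying these analyses expresses response probability as a logistic function of the gap between person ability and item difficulty. A dichotomous sketch (the item bank itself uses a polytomous rating scale); the item locations -2.73 and 2.20 logits are taken from the results above:

```python
import math

def rasch_probability(ability, difficulty):
    """P(endorse) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# The easiest (-2.73 logits) vs. hardest (2.20 logits) item location in the
# bank, for a person of average ability (theta = 0).
p_easy = rasch_probability(0.0, -2.73)
p_hard = rasch_probability(0.0, 2.20)
```

    The spread between these two probabilities illustrates why a wide range of item locations lets the bank measure participation across very different severity levels.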

  20. Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197: 2013 Accuracy Criteria, Consensus and Surveillance Error Grid Analyses, and Insulin Dosing Error Modeling in a Hospital Setting.

    PubMed

    Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten

    2015-10-07

    Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS fell within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to reference values than the other 2 BGMS. Insulin dosing errors were also lower for the Contour Next USB than for the other systems. All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses, as well as calculation of the hypothetical bolus insulin dosing error, suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
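    The headline accuracy metric above, MARD, is straightforward to compute; a minimal sketch (the glucose values are illustrative, not the study's data):

```python
import numpy as np

def mard(meter, reference):
    """Mean absolute relative difference (%) between meter readings
    and laboratory reference values."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(meter - reference) / reference) * 100)

# Illustrative example: meter readings vs. hexokinase reference (mg/dL)
ref = [100.0, 150.0, 200.0]
bgms = [95.0, 153.0, 210.0]
print(mard(bgms, ref))  # 4.0 (mean of 5%, 2%, 5%)
```

    Lower MARD means readings sit closer to the reference method on average, which is why it serves as a single-number summary alongside the error grid analyses.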

  1. Impact Analyses and Tests of Metal Cask Considering Aircraft Engine Crash - 12308

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sanghoon; Choi, Woo-Seok; Kim, Ki-Young

    2012-07-01

    The structural integrity of a dual purpose metal cask currently under development by the Korea Radioactive Waste Management Cooperation (KRMC) is evaluated through analyses and tests under a high-speed missile impact representing the targeted aircraft crash conditions. The impact conditions were carefully chosen through a survey of accident cases and recommendations from the literature. The missile impact velocity was set at 150 m/s, and two impact orientations were considered. A simplified missile simulating a commercial aircraft engine was designed from an impact load history curve provided in the literature. In the analyses, the focus is on the evaluation of the containment boundary integrity of the metal cask. The analysis results are compared with the results of tests using a 1/3 scale model. The results show very good agreement, and the procedure and methodology adopted in the structural analyses are validated. While the integrity of the cask is maintained in one evaluation where the missile impacts the top side of the free-standing cask, the containment boundary is breached in another case in which the missile impacts the center of the cask lid in a perpendicular orientation. A safety assessment using a numerical simulation of an aircraft engine crash into spent nuclear fuel storage systems is performed. A commercially available explicit finite element code is utilized for the dynamic simulation, and the strain rate effect is included in the modeling of the materials used in the target system and missile. The simulation results show very good agreement with the test results. It is noted that this is the first test considering an aircraft crash in Korea. (authors)

  2. Large deviation approach to the generalized random energy model

    NASA Astrophysics Data System (ADS)

    Dorlas, T. C.; Dukes, W. M. B.

    2002-05-01

    The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.
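    The large-deviation route can be sketched generically via the Laplace principle (a standard argument in this setting, not the paper's full GREM computation): if configurations carry energies E = Ne and the number of configurations with energy density near e grows as exp(N s(e)), where s is the entropy density, then

```latex
Z_N(\beta) \;=\; \sum_{\sigma} e^{-\beta E(\sigma)}
\;\asymp\; \int e^{\,N[\,s(e)-\beta e\,]}\,\mathrm{d}e,
\qquad
\lim_{N\to\infty}\frac{1}{N}\log Z_N(\beta)
\;=\; \sup_{e}\,\bigl[s(e)-\beta e\bigr],
```

    so the free energy per spin is f(β) = −(1/β) sup_e [s(e) − βe]. The hierarchical structure of the GREM enters through the form of s(e), which is where the Cauchy-Schwarz evaluation mentioned above comes in.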

  3. [Factor structure of the German version of the BIS/BAS Scales in a population-based sample].

    PubMed

    Müller, A; Smits, D; Claes, L; de Zwaan, M

    2013-02-01

    The Behavioural Inhibition System/Behavioural Activation System Scale (BIS/BAS-Scales) developed by Carver and White 1 is a self-rating instrument to assess the dispositional sensitivity to punishment and reward. The present work aims to examine the factor structure of the German version of the BIS/BAS-Scales. In a large German population-based sample (n = 1881) the model fit of several factor models was tested by using confirmatory factor analyses. The best model fit was found for the 5-factor model with two BIS (anxiety, fear) and three BAS (drive, reward responsiveness, fun seeking) scales, whereas the BIS-fear, the BAS-reward responsiveness, and the BAS-fun seeking subscales showed low internal consistency. The BIS/BAS scales were negatively correlated with age, and women reported higher BIS subscale scores than men. Confirmatory factor analyses suggest a 5-factor model. However, due to the low internal reliability of some of the subscales the use of this model is questionable. © Georg Thieme Verlag KG Stuttgart · New York.

  4. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    USGS Publications Warehouse

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
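    The conversion described above amounts to an ordinary least-squares regression between paired ΣPCB measurements from the two methods; a minimal sketch (the concentrations are synthetic, illustrative values, not the study's data):

```python
import numpy as np

# Hypothetical paired samples: Σ119PCB (low-resolution method) vs.
# Σ209PCB (high-resolution method), in ng/g. Values are illustrative.
sum119 = np.array([10.0, 25.0, 40.0, 80.0, 120.0])
sum209 = np.array([11.0, 27.0, 43.0, 86.0, 129.0])

# Fit the conversion model Σ209 ≈ a * Σ119 + b by least squares.
a, b = np.polyfit(sum119, sum209, deg=1)

# Convert a new low-resolution measurement to an estimated Σ209 value.
estimated_sum209 = a * 50.0 + b

# Proportion of total PCB captured by the reduced congener set.
capture = sum119.sum() / sum209.sum()
print(round(float(capture), 3))  # 0.929 for these illustrative numbers
```

    In the study itself the fitted model served both directions: harmonising ΣPCB across analytical methods and quantifying how much of Σ209PCB a smaller congener set captures (93% in their sediment and biological samples).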

  5. Semivolatile POA and parameterized total combustion SOA in CMAQv5.2: impacts on source strength and partitioning

    EPA Science Inventory

    Mounting evidence from field and laboratory observations coupled with atmospheric model analyses shows that primary combustion emissions of organic compounds dynamically partition between the vapor and particulate phases, especially as near-source emissions dilute and cool to amb...

  6. Combustion and Performance Analyses of Coaxial Element Injectors with Liquid Oxygen/Liquid Methane Propellants

    NASA Technical Reports Server (NTRS)

    Hulka, J. R.; Jones, G. W.

    2010-01-01

    Liquid rocket engines using oxygen and methane propellants are being considered by the National Aeronautics and Space Administration (NASA) for in-space vehicles. This propellant combination has not been previously used in a flight-qualified engine system, so limited test data and analysis results are available at this stage of early development. NASA has funded several hardware-oriented activities with oxygen and methane propellants over the past several years with the Propulsion and Cryogenic Advanced Development (PCAD) project, under the Exploration Technology Development Program. As part of this effort, the NASA Marshall Space Flight Center has conducted combustion, performance, and combustion stability analyses of several of the configurations. This paper summarizes the analyses of combustion and performance as a follow-up to a paper published in the 2008 JANNAF/LPS meeting. Combustion stability analyses are presented in a separate paper. The current paper includes test and analysis results of coaxial element injectors using liquid oxygen and liquid methane or gaseous methane propellants. Several thrust chamber configurations have been modeled, including thrust chambers with multi-element swirl coax element injectors tested at the NASA MSFC, and a uni-element chamber with shear and swirl coax injectors tested at The Pennsylvania State University. Configurations were modeled with two one-dimensional liquid rocket combustion analysis codes, the Rocket Combustor Interaction Design and Analysis (ROCCID), and the Coaxial Injector Combustion Model (CICM). Significant effort was applied to show how these codes can be used to model combustion and performance with oxygen/methane propellants a priori, and what anchoring or calibrating features need to be applied or developed in the future. This paper describes the test hardware configurations, presents the results of all the analyses, and compares the results from the two analytical methods.

  7. The Relationship between Intimacy Change and Passion: A Dyadic Diary Study.

    PubMed

    Aykutoğlu, Bülent; Uysal, Ahmet

    2017-01-01

    In the current study we investigated the association between intimacy and passion by testing whether increases in intimacy generate passion (Baumeister and Bratslavsky, 1999). Furthermore, we examined whether there are partner effects in the link between intimacy change and passion. Couples (N = 75) participated in a 14-day diary study. Dyadic multilevel analyses with residualized intimacy change scores showed that both actors' and partners' intimacy change positively predicted the actor's passion. However, analyses also showed that residualized passion change scores positively predicted intimacy. Although these findings provide some empirical evidence for the intimacy change model, in line with previous research (Rubin and Campbell, 2012), they also suggest that it is not possible to discern whether an increase in intimacy generates passion or an increase in passion generates intimacy.

  8. General form of a cooperative gradual maximal covering location problem

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh

    2018-07-01

    Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model in similar data sets shows that the proposed model can cover more demands and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and a tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.

  9. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses

    PubMed Central

    Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.

    2015-01-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447
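    The random-effects summaries listed above can be written compactly (standard meta-analysis notation, assumed here rather than taken from the paper): study-specific effects δ_i are drawn from a common normal distribution, and the decision model can use either the mean d or the predictive distribution for the effect in a new setting:

```latex
\delta_i \sim \mathcal{N}(d,\,\tau^2), \qquad i = 1,\dots,K,
\qquad
\delta_{\mathrm{new}} \sim
\mathcal{N}\!\bigl(\hat d,\;\hat\tau^2 + \widehat{\operatorname{Var}}(\hat d)\bigr).
```

    The choice between these summaries matters for CEA precisely because the predictive distribution carries the between-study heterogeneity τ² forward into the decision model, whereas the mean alone does not.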

  10. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    PubMed

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.

  11. A Bayesian analysis of inflationary primordial spectrum models using Planck data

    NASA Astrophysics Data System (ADS)

    Santos da Costa, Simony; Benetti, Micol; Alcaniz, Jailson

    2018-03-01

    The current available Cosmic Microwave Background (CMB) data show an anomalously low value of the CMB temperature fluctuations at large angular scales (l < 40). This lack of power is not explained by the minimal ΛCDM model, and one of the possible mechanisms explored in the literature to address this problem is the presence of features in the primordial power spectrum (PPS) motivated by the early universe physics. In this paper, we analyse a set of cutoff inflationary PPS models using a Bayesian model comparison approach in light of the latest CMB data from the Planck Collaboration. Our results show that the standard power-law parameterisation is preferred over all models considered in the analysis, which motivates the search for alternative explanations for the observed lack of power in the CMB anisotropy spectrum.

  12. Comparison of modelling accuracy with and without exploiting automated optical monitoring information in predicting the treated wastewater quality.

    PubMed

    Tomperi, Jani; Leiviskä, Kauko

    2018-06-01

    Traditionally, modelling of the activated sludge process has been based solely on process measurements, but as interest in optically monitoring wastewater samples to characterise floc morphology has grown, image-analysis results have in recent years been used more frequently to predict wastewater characteristics. This study shows that neither the traditional process measurements nor the automated optical monitoring variables by themselves yield the best predictive models for treated wastewater quality in a full-scale wastewater treatment plant; the optimal models, which capture the level of and changes in treated wastewater quality, are achieved by using these variables together. With this early warning, process operation can be optimized to avoid environmental damage and economic losses. The study also shows that specific optical monitoring variables are important for modelling a certain quality parameter, regardless of the other input variables available.

  13. An empirical analysis of Moscovitch's reconceptualised model of social anxiety: How is it different from fear of negative evaluation?

    PubMed

    Kizilcik, Isilay N; Gregory, Bree; Baillie, Andrew J; Crome, Erica

    2016-01-01

    Cognitive-behavioural models propose that excessive fear of negative evaluation is central to social anxiety. Moscovitch (2009) instead proposes that perceived deficiencies in three self-attributes (showing signs of anxiety, physical appearance, and social competence) are at the core of social anxiety. However, these attributes are likely to overlap with fear of negative evaluation. Responses to an online survey of 286 participants with a range of social anxiety severity were analysed using hierarchical multiple regression to identify the overall unique predictive value of Moscovitch's model. Altogether, Moscovitch's model improved the prediction of safety behaviours, types of fears and cognitions; however, only the fear-of-showing-anxiety subscale provided unique information. This research supports further investigation into the utility of this revised model, particularly the utility of explicitly assessing and addressing fears of showing anxiety. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Continuity Between DSM-5 Section II and III Personality Disorders in a Dutch Clinical Sample.

    PubMed

    Orbons, Irene M J; Rossi, Gina; Verheul, Roel; Schoutrop, Mirjam J A; Derksen, Jan L L; Segal, Daniel L; van Alphen, Sebastiaan P J

    2018-05-14

    The goal of this study was to evaluate the continuity across the Section II personality disorders (PDs) and the proposed Section III model of PDs in the Diagnostic and Statistical Manual of Mental Disorders (5th ed. [DSM-5]; American Psychiatric Association, 2013a). More specifically, we analyzed the association between the DSM-5 Section III pathological trait facets and Section II PDs among 110 Dutch adults (M age = 35.8 years, range = 19-60 years) receiving mental health care. We administered the Structured Clinical Interview for DSM-IV Axis II Disorders to all participants. Participants also completed the self-report Personality Inventory for DSM-5 (PID-5) as a measure of pathological trait facets. The distributions underlying the dependent variable were modeled as criterion counts, using negative binomial regression. The results provided some support for the validity of the PID-5 and the DSM-5 Section III Alternative Model, although analyses did not show a perfect match. Both at the trait level and the domain level, analyses showed mixed evidence of significant relationships between the PID-5 trait facets and domains and the traditional DSM-IV PDs.

  15. The role of photorespiration during the evolution of C4 photosynthesis in the genus Flaveria.

    PubMed

    Mallmann, Julia; Heckmann, David; Bräutigam, Andrea; Lercher, Martin J; Weber, Andreas P M; Westhoff, Peter; Gowik, Udo

    2014-06-16

    C4 photosynthesis represents a most remarkable case of convergent evolution of a complex trait, which includes the reprogramming of the expression patterns of thousands of genes. Anatomical, physiological, and phylogenetic analyses as well as computational modeling indicate that the establishment of a photorespiratory carbon pump (termed C2 photosynthesis) is a prerequisite for the evolution of C4. However, a mechanistic model explaining the tight connection between the evolution of C4 and C2 photosynthesis is currently lacking. Here we address this question through comparative transcriptomic and biochemical analyses of closely related C3, C3-C4, and C4 species, combined with Flux Balance Analysis constrained through a mechanistic model of carbon fixation. We show that C2 photosynthesis creates a misbalance in nitrogen metabolism between bundle sheath and mesophyll cells. Rebalancing nitrogen metabolism requires anaplerotic reactions that resemble at least parts of a basic C4 cycle. Our findings thus show how C2 photosynthesis represents a pre-adaptation for the C4 system, where the evolution of the C2 system establishes important C4 components as a side effect.

  16. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  17. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.

  18. Modelling cointegration and Granger causality network to detect long-term equilibrium and diffusion paths in the financial system.

    PubMed

    Gao, Xiangyun; Huang, Shupei; Sun, Xiaoqi; Hao, Xiaoqing; An, Feng

    2018-03-01

    Microscopic factors are the basis of macroscopic phenomena. We proposed a network analysis paradigm to study the macroscopic financial system from a microstructure perspective. We built the cointegration network model and the Granger causality network model based on econometrics and complex network theory and chose stock price time series of the real estate industry and its upstream and downstream industries as empirical sample data. Then, we analysed the cointegration network for understanding the steady long-term equilibrium relationships and analysed the Granger causality network for identifying the diffusion paths of the potential risks in the system. The results showed that the influence from a few key stocks can spread conveniently in the system. The cointegration network and Granger causality network are helpful to detect the diffusion path between the industries. We can also identify and intervene in the transmission medium to curb risk diffusion.
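    A Granger causality edge of the kind used above boils down to an F-test between two nested autoregressions; a minimal lag-1 sketch on synthetic data (this is a generic illustration, not the authors' econometric pipeline, and the series names are hypothetical):

```python
import numpy as np
from scipy import stats

def granger_p(x, y):
    """P-value for 'x Granger-causes y' at lag 1: F-test comparing the
    restricted model y_t ~ y_{t-1} against y_t ~ y_{t-1} + x_{t-1}."""
    y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]
    n = len(y_t)
    Xr = np.column_stack([np.ones(n), y_lag])          # restricted design
    Xu = np.column_stack([np.ones(n), y_lag, x_lag])   # unrestricted design
    rss = lambda X: np.sum((y_t - X @ np.linalg.lstsq(X, y_t, rcond=None)[0]) ** 2)
    f_stat = (rss(Xr) - rss(Xu)) / (rss(Xu) / (n - 3))  # 1 restriction
    return float(stats.f.sf(f_stat, 1, n - 3))

# Synthetic example: series x drives series y with a one-period delay.
rng = np.random.default_rng(42)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# Directed edges of the causality network at the 1% level.
edges = [(a, b) for a, b, p in [("x", "y", granger_p(x, y)),
                                ("y", "x", granger_p(y, x))] if p < 0.01]
print(edges)  # expect the x -> y edge to dominate
```

    Repeating the pairwise test over every ordered pair of stock-price series, with edges kept when the test rejects, yields the directed Granger causality network analysed in the paper.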

  19. Modelling cointegration and Granger causality network to detect long-term equilibrium and diffusion paths in the financial system

    PubMed Central

    Huang, Shupei; Sun, Xiaoqi; Hao, Xiaoqing; An, Feng

    2018-01-01

    Microscopic factors are the basis of macroscopic phenomena. We proposed a network analysis paradigm to study the macroscopic financial system from a microstructure perspective. We built the cointegration network model and the Granger causality network model based on econometrics and complex network theory and chose stock price time series of the real estate industry and its upstream and downstream industries as empirical sample data. Then, we analysed the cointegration network for understanding the steady long-term equilibrium relationships and analysed the Granger causality network for identifying the diffusion paths of the potential risks in the system. The results showed that the influence from a few key stocks can spread conveniently in the system. The cointegration network and Granger causality network are helpful to detect the diffusion path between the industries. We can also identify and intervene in the transmission medium to curb risk diffusion. PMID:29657804

  20. Exercise self-identity: interactions with social comparison and exercise behaviour.

    PubMed

    Verkooijen, Kirsten T; de Bruijn, Gert-Jan

    2013-01-01

    Possible interactions among exercise self-identity, social comparison and exercise behaviour were explored in a sample of 417 undergraduate students (mean age = 21.5, SD = 3.0; 73% female). Two models were examined using self-report data: (1) a mediation model proposing an association between social comparison and exercise behaviour mediated by exercise self-identity, and (2) a moderation model proposing an association between exercise behaviour and self-identity moderated by social comparison. The mediation analyses revealed partial mediation of the social comparison-exercise behaviour relationship by self-identity in females. The moderation analyses revealed, in males, a significant interaction of social comparison with exercise behaviour in predicting self-identity: the positive association between exercise behaviour and exercise self-identity was significant only among male students who believed they exercised as much as or less than their peers. Possible explanations and implications for exercise promotion are discussed.

  1. Thermal Analysis of Small Re-Entry Probe

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Prabhu, Dinesh K.; Chen, Y. K.

    2012-01-01

    The Small Probe Reentry Investigation for TPS Engineering (SPRITE) concept was developed at NASA Ames Research Center to facilitate arc-jet testing of a fully instrumented prototype probe at flight scale. Besides demonstrating the feasibility of testing a flight-scale model and the capability of an on-board data acquisition system, another objective for this project was to investigate the capability of simulation tools to predict thermal environments of the probe/test article and its interior. This paper focuses on finite-element thermal analyses of the SPRITE probe during the arcjet tests. Several iterations were performed during the early design phase to provide critical design parameters and guidelines for testing. The thermal effects of ablation and pyrolysis were incorporated into the final higher-fidelity modeling approach by coupling the finite-element analyses with a two-dimensional thermal protection materials response code. Model predictions show good agreement with thermocouple data obtained during the arcjet test.

  2. Evolution of bone microanatomy of the tetrapod tibia and its use in palaeobiological inference.

    PubMed

    Kriloff, A; Germain, D; Canoville, A; Vincent, P; Sache, M; Laurin, M

    2008-05-01

    Bone microanatomy appears to track changes in various physiological or ecological properties of the individual or the taxon. Analyses of sections of the tibia of 99 taxa show a highly significant (P

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, J.L.; Lime, J.F.; Elson, J.S.

    One-dimensional TRAC transient calculations of the process inherent ultimate safety (PIUS) advanced reactor design were performed for a pump-trip SCRAM. The TRAC calculations showed that the reactor power response and shutdown were in qualitative agreement with the one-dimensional analyses presented in the PIUS Preliminary Safety Information Document (PSID) submitted by Asea Brown Boveri (ABB) to the US Nuclear Regulatory Commission for preapplication safety review. The PSID analyses were performed with the ABB-developed RIGEL code. The TRAC-calculated phenomena and trends were also similar to those calculated with another one-dimensional PIUS model, the Brookhaven National Laboratory-developed PIPA code. A TRAC pump-trip SCRAM transient has also been calculated with a TRAC model containing a multi-dimensional representation of the PIUS internal flow structures and core region. The results obtained using the TRAC fully one-dimensional PIUS model are compared to the RIGEL, PIPA, and TRAC multi-dimensional results.

  4. A brain-region-based meta-analysis method utilizing the Apriori algorithm.

    PubMed

    Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao

    2016-05-18

    Brain network connectivity modeling is a crucial method for studying the brain's cognitive functions. Meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs) which showed that meta-analyses could be used to discover brain network connectivity. In this paper, we propose a new meta-analysis method that can be used to find network connectivity models based on the Apriori algorithm, which has the potential to derive brain network connectivity models from activation information in the literature, without requiring ROIs. This method first extracts activation information from experimental studies that use cognitive tasks of the same category, and then maps the activation information to corresponding brain areas by using the automated anatomical labeling (AAL) atlas, after which the activation rate of these brain areas is calculated. Finally, using these brain areas, a potential brain network connectivity model is calculated based on the Apriori algorithm. The present study used this method to conduct a mining analysis on the citations in a language review article by Price (Neuroimage 62(2):816-847, 2012). The results showed that the obtained network connectivity model was consistent with that reported by Price. The proposed method is helpful for finding brain network connectivity by mining the co-activation relationships among brain regions. Furthermore, results of the co-activation relationship analysis can be used as a priori knowledge for the corresponding dynamic causal modeling analysis, possibly achieving a significant dimension-reducing effect, thus increasing the efficiency of the dynamic causal modeling analysis.
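The Apriori step described above can be sketched on toy data. This is a minimal, self-contained illustration (hypothetical region labels and studies, not the paper's corpus): frequent single regions are found first, and only those seed the candidate pairs, which is the Apriori pruning idea.

```python
# Minimal Apriori sketch: mining frequently co-activated brain regions from
# per-study activation lists (toy data, hypothetical region labels).
from itertools import combinations

# Each "transaction" lists the regions reported active in one study.
studies = [
    {"IFG", "STG", "MTG"},
    {"IFG", "STG", "SMA"},
    {"IFG", "STG"},
    {"STG", "MTG"},
    {"IFG", "SMA"},
]
min_support = 0.6  # fraction of studies an itemset must appear in

def support(itemset):
    return sum(itemset <= s for s in studies) / len(studies)

# Level 1: frequent single regions.
items = sorted({r for s in studies for r in s})
frequent = [frozenset([r]) for r in items if support(frozenset([r])) >= min_support]

# Level 2: candidate pairs built only from frequent singles (Apriori pruning).
singles = sorted(r for f in frequent for r in f)
pairs = [frozenset(p) for p in combinations(singles, 2)
         if support(frozenset(p)) >= min_support]
print(sorted(tuple(sorted(p)) for p in pairs))  # → [('IFG', 'STG')]
```

In the paper's setting the frequent itemsets surviving this pruning would be read as candidate co-activation links in a brain network connectivity model.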

  5. High-resolution spectroscopy of the extended narrow-line region of IC 5063 and NGC 7212

    NASA Astrophysics Data System (ADS)

    Congiu, E.; Contini, M.; Ciroi, S.; Cracco, V.; Berton, M.; Di Mille, F.; Frezzato, M.; La Mura, G.; Rafanelli, P.

    2017-10-01

    We studied the properties of the gas of the extended narrow-line region (ENLR) of two Seyfert 2 galaxies: IC 5063 and NGC 7212. We analysed high-resolution spectra to investigate how the main properties of this region depend on the gas velocity. We divided the emission lines into velocity bins and we calculated several line ratios. Diagnostic diagrams and SUMA composite models (photoionization + shocks) show that in both galaxies there might be evidence of shocks contributing significantly to the gas ionization at high |V|, even though photoionization from the active nucleus remains the main ionization mechanism. In IC 5063, the ionization parameter depends on V and its trend might be explained by assuming a hollow bi-conical shape for the ENLR, with one of the edges aligned with the galaxy disc. On the other hand, NGC 7212 does not show any such dependence. The models show that solar O/H relative abundances reproduce the observed spectra in all the analysed regions. They also revealed a high degree of fragmentation of the gas clouds, suggesting that the complex kinematics observed in these two objects might be caused by interaction between the interstellar medium and high-velocity components, such as jets.

  6. Baicalin promotes the bacteriostatic activity of lysozyme on S. aureus in mammary glands and neutrophilic granulocytes in mice

    PubMed Central

    Zhang, Zecai; Shen, Peng; Yang, Zhengtao; Zhang, Naisheng

    2017-01-01

    Staphylococcus aureus causes mastitis as a result of community-acquired or nosocomial infections. Lysozyme (LYSO) is an enzyme that is upregulated in many organisms during the innate immune response against infection by bacterial pathogens. Baicalin is a bioactive flavonoid that can bind to enzymes, often to potentiate their effect. Here we tested the effects of baicalin on the activity of LYSO using the S. aureus mastitis mouse model and neutrophilic granulocyte model of S. aureus infection. In our experiments, S. aureus counts decreased with increasing baicalin concentration. Furthermore, qPCR and western blot analyses showed that LYSO expression was unaffected by baicalin, while fluorescence quenching and UV fluorescence spectral analyses showed that baicalin binds to LYSO. To test whether this binding increased LYSO activity, we assessed LYSO-induced bacteriostasis in the presence of baicalin. Our results showed that LYSO-induced S. aureus bacteriostasis increased with increasing concentrations of baicalin, and that baicalin binding to LYSO synergistically increased the antibacterial activity of LYSO. These results demonstrate that baicalin enhances LYSO-induced bacteriostasis during the innate immune response to S. aureus. They suggest baicalin is a potentially useful therapeutic agent for the treatment of bacterial infections. PMID:28184027

  7. Distributed consensus for discrete-time heterogeneous multi-agent systems

    NASA Astrophysics Data System (ADS)

    Zhao, Huanyu; Fei, Shumin

    2018-06-01

    This paper studies the consensus problem for a class of discrete-time heterogeneous multi-agent systems. Two kinds of consensus algorithms are considered. The heterogeneous multi-agent systems are converted into equivalent error systems by a model transformation. We then analyse the consensus problem of the original systems by analysing the stability of the error systems. Some sufficient conditions for consensus of heterogeneous multi-agent systems are obtained by applying algebraic graph theory and matrix theory. Simulation examples are presented to show the usefulness of the results.
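The kind of protocol analysed above can be sketched for the simplest homogeneous, first-order case. This is an illustrative simulation with assumed Metropolis weights on a four-agent path graph, not the paper's heterogeneous algorithms: with a doubly stochastic weight matrix, the iteration x[k+1] = W x[k] drives every agent's state to the average of the initial states.

```python
# First-order discrete-time consensus x[k+1] = W x[k] on the path graph 1-2-3-4
# (assumed Metropolis weights; symmetric and doubly stochastic, so the states
# converge to the initial average).
W = [
    [2/3, 1/3, 0.0, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [0.0, 0.0, 1/3, 2/3],
]
x = [1.0, 3.0, 5.0, 7.0]          # heterogeneous initial states
avg = sum(x) / len(x)             # consensus value = 4.0

for _ in range(200):              # iterate the protocol
    x = [sum(W[i][j] * x[j] for j in range(4)) for i in range(4)]

print([round(v, 3) for v in x])   # → [4.0, 4.0, 4.0, 4.0]
```

The convergence rate is governed by the second-largest eigenvalue magnitude of W (about 0.8 here), which is the kind of stability property the paper's error-system analysis establishes for the heterogeneous case.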

  8. An overview of the 2013 Las Vegas Ozone Study (LVOS): Impact of stratospheric intrusions and long-range transport on surface air quality

    NASA Astrophysics Data System (ADS)

    Langford, A. O.; Senff, C. J.; Alvarez, R. J.; Brioude, J.; Cooper, O. R.; Holloway, J. S.; Lin, M. Y.; Marchbanks, R. D.; Pierce, R. B.; Sandberg, S. P.; Weickmann, A. M.; Williams, E. J.

    2015-05-01

    The 2013 Las Vegas Ozone Study (LVOS) was conducted in the late spring and early summer of 2013 to assess the seasonal contribution of stratosphere-to-troposphere transport (STT) and long-range transport to surface ozone in Clark County, Nevada and to determine whether these processes directly contribute to exceedances of the National Ambient Air Quality Standard (NAAQS) in this area. Secondary goals included the characterization of local ozone production, regional transport from the Los Angeles Basin, and impacts from wildfires. The LVOS measurement campaign took place at a former U.S. Air Force radar station ∼45 km northwest of Las Vegas on Angel Peak (∼2.7 km above mean sea level, asl) in the Spring Mountains. The study consisted of two extended periods (May 19-June 4 and June 22-28, 2013) with near-daily 5-min averaged lidar measurements of ozone and backscatter profiles from the surface to ∼2.5 km above ground level (∼5.2 km asl), and continuous in situ measurements (May 20-June 28) of O3 and CO (1-min) and meteorological parameters (5-min) at the surface. These activities were guided by forecasts and analyses from the FLEXPART (FLEXible PARTicle) dispersion model, the Real Time Air Quality Modeling System (RAQMS), and the NOAA Geophysical Fluid Dynamics Laboratory (GFDL) AM3 chemistry-climate model. In this paper, we describe the LVOS measurements and present an overview of the results. The combined measurements and model analyses show that STT directly contributed to each of the three O3 exceedances that occurred in Clark County during LVOS, with contributions to 8-h surface concentrations in excess of 30 ppbv on each of these days. The analyses show that long-range transport from Asia made smaller contributions (<10 ppbv) to surface O3 during two of those exceedances. The contribution of regional wildfires to surface O3 during the three LVOS exceedance events was found to be negligible, but wildfires were found to be a major factor during exceedance events that occurred before and after the LVOS campaign. Our analyses also show that ozone exceedances would have occurred on more than 50% of the days during the six-week LVOS campaign if the 8-h ozone NAAQS had been 65 ppbv instead of 75 ppbv.

  9. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, the numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as the spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analysing the numerically modelled results was developed with the aim of presenting an analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement between the numerical and analytical results demonstrates the validity of the results from the explicit FE modelling of the bearing.

  10. Show me the data: advances in multi-model benchmarking, assimilation, and forecasting

    NASA Astrophysics Data System (ADS)

    Dietze, M.; Raiho, A.; Fer, I.; Cowdery, E.; Kooper, R.; Kelly, R.; Shiklomanov, A. N.; Desai, A. R.; Simkins, J.; Gardella, A.; Serbin, S.

    2016-12-01

    Researchers want their data to inform carbon cycle predictions, but there are considerable bottlenecks between data collection and the use of data to calibrate and validate earth system models and inform predictions. This talk highlights recent advancements in the PEcAn project aimed at making it easier for individual researchers to confront models with their own data: (1) the development of an easily extensible site-scale benchmarking system aimed at ensuring that models capture process rather than just reproducing pattern; (2) efficient emulator-based Bayesian parameter data assimilation to constrain model parameters; (3) a novel, generalized approach to ensemble data assimilation to estimate carbon pools and fluxes and quantify process error; (4) automated processing and downscaling of CMIP climate scenarios to support forecasts that include driver uncertainty; (5) a large expansion in the number of models supported, with new tools for conducting multi-model and multi-site analyses; and (6) a network-based architecture that allows analyses to be shared with model developers and other collaborators. Application of these methods is illustrated with data across a wide range of time scales, from eddy-covariance to forest inventories to tree rings to paleoecological pollen proxies.

  11. Cost-effectiveness model for a specific mixture of prebiotics in The Netherlands.

    PubMed

    Lenoir-Wijnkoop, I; van Aalderen, W M C; Boehm, G; Klaassen, D; Sprikkelman, A B; Nuijten, M J C

    2012-02-01

    The objective of this study was to assess the cost-effectiveness of the use of prebiotics for the primary prevention of atopic dermatitis in The Netherlands. A model was constructed using decision analytical techniques. The model was developed to estimate the health economic impact of prebiotic preventive disease management of atopic dermatitis. Data sources used include published literature, clinical trials, official price/tariff lists and national population statistics. The comparator was no supplementation with prebiotics. The primary perspective for conducting the economic evaluation was based on the situation in The Netherlands in 2009. The results show that the use of prebiotic infant formula (IMMUNOFORTIS(®)) leads to an additional cost of € 51 and an increase in Quality Adjusted Life Years (QALY) of 0.108, when compared with no prebiotics. Consequently, the use of infant formula with a specific mixture of prebiotics results in an incremental cost-effectiveness ratio (ICER) of € 472. The sensitivity analyses show that the ICER remains far below the threshold of € 20,000/QALY in all analyses. This study shows that the favourable health benefit of the use of a specific mixture of prebiotics results in positive short- and long-term health economic benefits. In addition, this study demonstrates that the use of infant formula with a specific mixture of prebiotics is a highly cost-effective way of preventing atopic dermatitis in The Netherlands.
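The reported ICER follows directly from the incremental cost and QALY gain quoted above, since an ICER is simply the incremental cost divided by the incremental effect:

```python
# The ICER arithmetic from the abstract, reproduced as a check:
# ICER = incremental cost / incremental QALYs.
incremental_cost = 51.0      # EUR, prebiotic formula vs. no prebiotics
incremental_qaly = 0.108     # QALYs gained

icer = incremental_cost / incremental_qaly
print(round(icer))           # → 472, far below the 20,000 EUR/QALY threshold
```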

  12. Dandelion root extract affects colorectal cancer proliferation and survival through the activation of multiple death signalling pathways

    PubMed Central

    Ovadje, Pamela; Ammar, Saleem; Guerrero, Jose-Antonio; Arnason, John Thor; Pandey, Siyaram

    2016-01-01

    Dandelion extracts have been studied extensively in recent years for their anti-depressant and anti-inflammatory activity. Recent work from our lab, with in-vitro systems, shows the anti-cancer potential of an aqueous dandelion root extract (DRE) in several cancer cell models, with no toxicity to non-cancer cells. In this study, we examined the cancer cell-killing effectiveness of an aqueous DRE in colon cancer cell models. Aqueous DRE induced programmed cell death (PCD) selectively in > 95% of colon cancer cells, irrespective of their p53 status, by 48 hours of treatment. The anti-cancer efficacy of this extract was confirmed in in-vivo studies, as the oral administration of DRE retarded the growth of human colon xenograft models by more than 90%. We found the activation of multiple death pathways in cancer cells by DRE treatment, as revealed by gene expression analyses showing the expression of genes implicated in programmed cell death. Phytochemical analyses of the extract showed the complex multi-component composition of the DRE, including some known bioactive phytochemicals such as α-amyrin, β-amyrin, lupeol and taraxasterol. This suggested that this natural extract could engage and effectively target multiple vulnerabilities of cancer cells. Therefore, DRE could be a non-toxic and effective anti-cancer alternative, instrumental in reducing the occurrence of drug resistance in cancer cells. PMID:27564258

  13. Enabling joined-up decision making with geotemporal information

    NASA Astrophysics Data System (ADS)

    Smith, M. J.; Ahmed, S. E.; Purves, D. W.; Emmott, S.; Joppa, L. N.; Caldararu, S.; Visconti, P.; Newbold, T.; Formica, A. F.

    2015-12-01

    While the use of geospatial data to assist in decision making is becoming increasingly common, the use of geotemporal information (information that can be indexed by geographical space AND time) is much rarer. I will describe our scientific research and software development efforts intended to advance the availability and use of geotemporal information in general. I will show two recent examples of "stacking" geotemporal information to support land use decision making in the Brazilian Amazon and Kenya, involving data-constrained predictive models and empirically derived datasets of road development, deforestation, carbon, agricultural yields, water purification and poverty alleviation services, and will show how we use trade-off analyses and constraint reasoning algorithms to explore the costs and benefits of different decisions. For the Brazilian Amazon we explore tradeoffs involved in different deforestation scenarios, while for Kenya we explore the impacts of conserving forest to support international carbon conservation initiatives (REDD+). I will also illustrate the cloud-based software tools we have developed to enable anyone to access geotemporal information, gridded (e.g. climate) or non-gridded (e.g. protected areas), for the past, present or future and incorporate such information into their analyses (e.g. www.fetchclimate.org), including how we train new predictive models on such data using Bayesian techniques: on this latter point I will show how we combine satellite and ground measured data with predictive models to forecast how crops might respond to climate change.

  14. The Adolescent's Competency for Interacting with Alcohol as a Determinant of Intake: The Role of Self-Regulation.

    PubMed

    de la Fuente, Jesús; Cubero, Inmaculada; Sánchez-Amate, Mari Carmen; Peralta, Francisco J; Garzón, Angélica; Fiz Pérez, Javier

    2017-01-01

    The competency for interacting with alcohol is a highly useful Educational Psychology model for preventing and for understanding the different behavioral levels of this interaction. Knowledge of facts, concepts and principles about alcohol use, self-regulated behavior, and attitudes toward alcohol are predictive of adequate interaction with alcohol. The objective of this study was to empirically evaluate this postulated relationship. A total of 328 Spanish adolescents participated, between the ages of 12 and 17. All were enrolled in 1st-4th year of compulsory secondary education, in the context of the ALADO Program for prevention of alcohol intake in adolescents. An ex post facto design was used, with inferential analyses and SEM analyses. Results show an interdependence relationship, with significant structural prediction between the behavioral levels defined and the level of alcohol intake, with principles, self-regulating control and attitudes carrying more weight. Analyses are presented, as are implications for psychoeducational intervention using preventive programs based on this competency model.

  16. Relationships of Upper Tropospheric Water Vapor, Clouds and SST: MLS Observations, ECMWF Analyses and GCM Simulations

    NASA Technical Reports Server (NTRS)

    Su, Hui; Waliser, Duane E.; Jiang, Jonathan H.; Li, Jui-lin; Read, William G.; Waters, Joe W.; Tompkins, Adrian M.

    2006-01-01

    The relationships of upper tropospheric water vapor (UTWV), cloud ice and sea surface temperature (SST) are examined in the annual cycles of ECMWF analyses and simulations from 15 coupled atmosphere-ocean models that were contributed to the IPCC AR4. The results are compared with the observed relationships based on UTWV and cloud ice measurements from MLS on Aura. It is shown that the ECMWF analyses produce positive correlations between UTWV, cloud ice and SST, similar to the MLS data. The rate of the increase of cloud ice and UTWV with SST is about 30% larger than that for MLS. For the IPCC simulations, the relationships between UTWV, cloud ice and SST are qualitatively captured. However, the magnitudes of the simulated cloud ice show considerable disagreement between models, by nearly a factor of 10. The amplitudes of the approximate linear relations between UTWV, cloud ice and SST vary by a factor of up to 4.

  17. Subsurface Structure Mapping Using Geophysical Data in Candi Umbul-Telomoyo, Magelang, Central Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Affanti, A. P.; Prastyani, E.; Maghfira, P. D.; Niasari, S. W.

    2018-04-01

    Candi Umbul warm spring is one of the manifestations in the Telomoyo geothermal prospect area. A geophysical survey was conducted using the VLF (Very Low Frequency) EM, VLF R and magnetic methods in the Candi Umbul-Telomoyo area. The VLF EM, VLF R and magnetic data were aimed at imaging the conductivity and magnetic anomaly distributions of the subsurface beneath Candi Umbul-Telomoyo. The VLF EM data were mapped with the Karous-Hjelt filter and analysed by tipper analysis, while the VLF R data were modelled using 2layinv and analysed using impedance analysis. Magnetic data processing, on the other hand, was done with upward continuation. The Karous-Hjelt filter and 2layinv models show that the zone of highest conductivity, located at 4800-5000 m, correlates with the tipper and impedance analyses. In addition, the high-low magnetic contrast from the quantitative magnetic data interpretation indicates a fault (which could be a fluid pathway) close to the Candi Umbul warm spring manifestation.

  18. Research Workforce Diversity: The Case of Balancing National versus International Postdocs in US Biomedical Research

    PubMed Central

    Ghaffarzadegan, Navid; Hawley, Joshua; Desai, Anand

    2013-01-01

    The US government has been increasingly supporting postdoctoral training in biomedical sciences to develop the domestic research workforce. However, current trends suggest that mostly international researchers benefit from the funding, many of whom might leave the USA after training. In this paper, we describe a model used to analyse the flow of national versus international researchers into and out of postdoctoral training. We calibrate our model for the case of the USA and successfully replicate the data. We use the model to conduct simulation-based analyses of the effects of different policies on the diversity of postdoctoral researchers. Our model shows that capping the duration of postdoctoral careers, a policy proposed previously, favours international postdoctoral researchers. The analysis suggests that the leverage point for growing the domestic research workforce is in the pregraduate education area, and that many policies implemented at the postgraduate level have minimal or unintended effects on diversity. PMID:25368504

  19. The care of Filipino juvenile offenders in residential facilities evaluated using the risk-need-responsivity model.

    PubMed

    Spruit, Anouk; Wissink, Inge B; Stams, Geert Jan J M

    2016-01-01

    According to the risk-need-responsivity model of offender assessment and rehabilitation, treatment should target specific factors that are related to re-offending. This study evaluates the residential care of Filipino juvenile offenders using the risk-need-responsivity model. Risk analyses and criminogenic needs assessments (parenting style, aggression, relationships with peers, empathy, and moral reasoning) were conducted using data on 55 juvenile offenders in four residential facilities. The psychological care was assessed using a checklist. Statistical analyses showed that the juvenile offenders had a high risk of re-offending, high aggression, difficulties in making pro-social friends, and a delayed socio-moral development. The psychological programs in the residential facilities were evaluated to be poor. The availability of psychological care in the facilities fitted poorly with the characteristics of the juvenile offenders and did not comply with the risk-need-responsivity model. Implications for research and practice are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Developing a Hierarchical Model for the Spatial Analysis of PM10 Pollution Extremes in the Mexico City Metropolitan Area.

    PubMed

    Aguirre-Salado, Alejandro Ivan; Vaquera-Huerta, Humberto; Aguirre-Salado, Carlos Arturo; Reyes-Mora, Silvia; Olvera-Cervantes, Ana Delia; Lancho-Romero, Guillermo Arturo; Soubervielle-Montalvo, Carlos

    2017-07-06

    We implemented a spatial model for analysing PM10 maxima across the Mexico City metropolitan area during the period 1995-2016. We assumed that these maxima follow a non-identical generalized extreme value (GEV) distribution and modeled the trend by introducing multivariate smoothing spline functions into the GEV probability distribution. A flexible, three-stage hierarchical Bayesian approach was developed to analyse the distribution of the PM10 maxima in space and time. We evaluated the statistical model's performance by using a simulation study. The results showed strong evidence of a positive correlation between the PM10 maxima and the longitude and latitude. The relationship between time and the PM10 maxima was negative, indicating a decreasing trend over time. Finally, a high risk of PM10 maxima presenting levels above 1000 μg/m³ (return period: 25 yr) was observed in the northwestern region of the study area.
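The "25-yr return period" figure rests on the standard GEV return-level calculation. The sketch below uses illustrative parameter values (not the paper's fitted, spatially varying ones) to show how a return level is obtained from GEV location, scale and shape parameters:

```python
# GEV return level z_T for return period T (illustrative parameters only):
#   z_T = mu + (sigma / xi) * ((-log(1 - 1/T)) ** (-xi) - 1)   for xi != 0,
# with the Gumbel limit z_T = mu - sigma * log(-log(1 - 1/T)) as xi -> 0.
import math

def gev_return_level(mu, sigma, xi, T):
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                  # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Hypothetical PM10-maxima parameters (ug/m3), chosen for illustration:
z25 = gev_return_level(mu=400.0, sigma=150.0, xi=0.2, T=25)
print(round(z25, 1))   # a 25-yr return level above 1000 ug/m3
```

In the paper's hierarchical model, mu, sigma and xi would themselves vary smoothly with longitude, latitude and time rather than being fixed constants as assumed here.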

  1. A necessarily complex model to explain the biogeography of the amphibians and reptiles of Madagascar.

    PubMed

    Brown, Jason L; Cameron, Alison; Yoder, Anne D; Vences, Miguel

    2014-10-09

    Pattern and process are inextricably linked in biogeographic analyses: though we can observe pattern, we must infer process. Inferences of process are often based on ad hoc comparisons using a single spatial predictor. Here, we present an alternative approach that uses mixed-spatial models to measure the predictive potential of combinations of hypotheses. Biodiversity patterns are estimated from 8,362 occurrence records from 745 species of Malagasy amphibians and reptiles. By incorporating 18 spatially explicit predictions of 12 major biogeographic hypotheses, we show that mixed models greatly improve our ability to explain the observed biodiversity patterns. We conclude that patterns are influenced by a combination of diversification processes rather than by a single predominant mechanism. A 'one-size-fits-all' model does not exist. By developing a novel method for examining and synthesizing spatial parameters such as species richness, endemism and community similarity, we demonstrate the potential of these analyses for understanding the diversification history of Madagascar's biota.

  2. On the equivalence of case-crossover and time series methods in environmental epidemiology.

    PubMed

    Lu, Yun; Zeger, Scott L

    2007-04-01

    The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.

  3. The importance of age-related differences in prospective memory: Evidence from diffusion model analyses.

    PubMed

    Ball, B Hunter; Aschenbrenner, Andrew J

    2017-06-09

    Event-based prospective memory (PM) refers to relying on environmental cues to trigger retrieval of a deferred action plan from long-term memory. Considerable research has demonstrated PM declines with increased age. Despite efforts to better characterize the attentional processes that underlie these decrements, the majority of research has relied on measures of central tendency to inform theoretical accounts of PM that may not entirely capture the underlying dynamics involved in allocating attention to intention-relevant information. The purpose of the current study was to examine the utility of the diffusion model to better understand the cognitive processes underlying age-related differences in PM. Results showed that emphasizing the importance of the PM intention increased cue detection selectively for older adults. Standard cost analyses revealed that PM importance increased mean response times and accuracy, but not differentially for young and older adults. Consistent with this finding, diffusion model analyses demonstrated that PM importance increased response caution as evidenced by increased boundary separation. However, the selective benefit in cue detection for older adults may reflect peripheral target-checking processes as indicated by changes in nondecision time. These findings highlight the use of modeling techniques to better characterize the processes underlying the relations among aging, attention, and PM.
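The diffusion-model interpretation above, that emphasizing importance increases response caution via wider boundary separation, can be illustrated with a toy simulation (assumed parameters, not the study's fitted values): widening the boundaries raises both accuracy and mean decision time.

```python
# Toy drift-diffusion simulation: evidence accumulates with drift plus Gaussian
# noise until it hits an upper (+a/2) or lower (-a/2) boundary. Larger boundary
# separation `a` models increased response caution.
import random

def simulate_ddm(drift, a, n_trials=2000, dt=0.01, sd=1.0, seed=1):
    rng = random.Random(seed)
    rts, correct = [], 0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < a / 2:
            x += drift * dt + rng.gauss(0.0, sd * dt ** 0.5)
            t += dt
        rts.append(t)
        correct += x > 0                # upper boundary = correct response
    return sum(rts) / n_trials, correct / n_trials

rt_low, acc_low = simulate_ddm(drift=1.0, a=1.0)    # low caution
rt_high, acc_high = simulate_ddm(drift=1.0, a=2.0)  # high caution
print(rt_high > rt_low, acc_high > acc_low)          # prints: True True
```

Fitting the model to data inverts this logic: from observed accuracy and response-time distributions one estimates drift rate, boundary separation and nondecision time, which is how the study separated caution from peripheral target-checking effects.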

  4. Whirl Flutter Stability and Its Influence on the Design of the Distributed Electric Propeller Aircraft X-57

    NASA Technical Reports Server (NTRS)

    Hoover, Christian B.; Shen, Jinwei; Kreshock, Andrew R.; Stanford, Bret K.; Piatak, David J.; Heeg, Jennifer

    2017-01-01

    This paper studies the whirl flutter stability of the NASA experimental electric propulsion aircraft designated the X-57 Maxwell. Whirl flutter stability is studied at two flight conditions: sea level at 2700 RPM, representing take-off and landing, and 8000 feet at 2250 RPM, representing cruise. Two multibody dynamics analyses are used: CAMRAD II and Dymore. The CAMRAD II model is a semi-span X-57 model with a modal representation of the wing/pylon system. The Dymore model is a semi-span wing with a propeller, composed of beam elements for the wing/pylon system to which airloads can be applied. The two multibody dynamics analyses were verified by comparing structural properties with each other and with a NASTRAN analysis. For whirl flutter, three design revisions of the wing and pylon mount system are studied. The predicted frequencies and damping ratios of the wing modes show good agreement between the two analyses. Dymore tended to predict a slightly lower damping ratio as velocity increased for all three dynamic modes presented. Whirl flutter for the semi-span model was not present up to 500 knots for the latest design, well above the operating range of the X-57.

  5. HIPPI: highly accurate protein family classification with ensembles of HMMs.

    PubMed

    Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy

    2016-11-11

    Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.
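    The ensemble idea behind HIPPI can be reduced to a toy: score a query against every profile in a family's ensemble and keep the best score, so a family is represented by many profiles rather than one. The positional-frequency scoring below is a crude stand-in for HMMER's actual profile HMM scoring, and the sequences are made up:

```python
# Toy ensemble-of-profiles classifier: a family's score for a query is
# the maximum score over all profiles in its ensemble.
import math

def profile_score(profile, seq):
    """Sum of log-probabilities of the query residues under a list of
    positional frequency dicts (a crude stand-in for an HMM score)."""
    return sum(math.log(pos.get(res, 1e-4))
               for pos, res in zip(profile, seq))

def classify(query, families):
    """families maps a name to an ensemble (list) of profiles; return
    the family whose best ensemble member scores the query highest."""
    return max(families,
               key=lambda f: max(profile_score(p, query)
                                 for p in families[f]))

families = {
    "A-like": [[{"A": 0.9, "G": 0.1}, {"C": 0.8, "T": 0.2}]],
    "B-like": [[{"G": 0.9, "A": 0.1}, {"T": 0.8, "C": 0.2}]],
}
print(classify("AC", families))
```

    With many profiles per family, divergent or fragmentary queries only need to match one member of the ensemble well, which is the intuition behind the reported robustness.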

  6. The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model

    NASA Astrophysics Data System (ADS)

    Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.

    2015-05-01

    A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.

  7. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    PubMed

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.

  8. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

    PubMed Central

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-01-01

    The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806

  9. Assessing the ability of potential evapotranspiration models in capturing dynamics of evaporative demand across various biomes and climatic regimes with ChinaFLUX measurements

    NASA Astrophysics Data System (ADS)

    Zheng, Han; Yu, Guirui; Wang, Qiufeng; Zhu, Xianjin; Yan, Junhua; Wang, Huimin; Shi, Peili; Zhao, Fenghua; Li, Yingnian; Zhao, Liang; Zhang, Junhui; Wang, Yanfen

    2017-08-01

    Estimates of atmospheric evaporative demand are widely required for a variety of hydrological analyses, with potential evapotranspiration (PET) being an important measure representing the evaporative demand of actual vegetated surfaces under given meteorological conditions. In this study, we assessed the ability of various PET models to capture long-term (typically 2003-2011) dynamics of evaporative demand at eight ecosystems across various biomes and climatic regimes in China. Prior to assessing PET dynamics, we first examined whether fourteen PET models reasonably represent the magnitudes of evaporative demand, using eddy-covariance actual evapotranspiration (AET) as an indicator. Results showed that the robustness of the fourteen PET models differed somewhat across the sites, and only three PET models could produce reasonable magnitudes of evaporative demand (i.e., PET ≥ AET on average) for all eight sites: the (i) Penman, (ii) Priestley-Taylor, and (iii) Linacre models. We then assessed the ability of these three PET models to capture the dynamics of evaporative demand by comparing the annual and seasonal trends in PET against the equivalent trends in AET and precipitation (P) for particular sites. Results indicated that all three PET models could faithfully reproduce the dynamics of evaporative demand under energy-limited conditions at both annual and seasonal scales, while only the Penman and Linacre models could represent the dynamics under water-limited conditions. However, the Linacre model was unable to reproduce the seasonal switches between water- and energy-limited states at some sites. Our findings demonstrate that the choice of PET model is essential for evaporative demand analyses and other related hydrological analyses at different temporal and spatial scales.
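    Of the three models the study retained, the Priestley-Taylor equation is the simplest to state: PET = α·(s/(s+γ))·Rn/λ, with s the slope of the saturation vapour pressure curve, γ the psychrometric constant, and λ the latent heat of vaporization. A minimal sketch using standard FAO-56 constants and an illustrative input, not ChinaFLUX data:

```python
# Priestley-Taylor potential evapotranspiration, daily time step.
import math

def slope_svp(t_c):
    """Slope of the saturation vapour pressure curve (kPa/degC) at air
    temperature t_c, using the FAO-56 formulation."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def priestley_taylor_pet(t_c, rn_mj, alpha=1.26, gamma=0.066, lam=2.45):
    """Daily PET (mm/day) from air temperature (degC) and net radiation
    (MJ/m^2/day); soil heat flux assumed negligible in this sketch."""
    s = slope_svp(t_c)
    return alpha * (s / (s + gamma)) * rn_mj / lam

# Illustrative warm, sunny day: ~25 degC, 15 MJ/m^2/day net radiation.
print(round(priestley_taylor_pet(t_c=25.0, rn_mj=15.0), 2))
```

    Because it depends only on temperature and radiation, the Priestley-Taylor form tracks energy-limited demand well, which is consistent with the study's finding that it fails mainly under water-limited conditions.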

  10. The word frequency effect during sentence reading: A linear or nonlinear effect of log frequency?

    PubMed

    White, Sarah J; Drieghe, Denis; Liversedge, Simon P; Staub, Adrian

    2016-10-20

    The effect of word frequency on eye movement behaviour during reading has been reported in many experimental studies. However, the vast majority of these studies compared only two levels of word frequency (high and low). Here we assess whether the effect of log word frequency on eye movement measures is linear, in an experiment in which a critical target word in each sentence was at one of three approximately equally spaced log frequency levels. Separate analyses treated log frequency as a categorical or a continuous predictor. Both analyses showed only a linear effect of log frequency on the likelihood of skipping a word, and on first fixation duration. Ex-Gaussian analyses of first fixation duration showed similar effects on distributional parameters in comparing high- and medium-frequency words, and medium- and low-frequency words. Analyses of gaze duration and the probability of a refixation suggested a nonlinear pattern, with a larger effect at the lower end of the log frequency scale. However, the nonlinear effects were small, and Bayes Factor analyses favoured the simpler linear models for all measures. The possible roles of lexical and post-lexical factors in producing nonlinear effects of log word frequency during sentence reading are discussed.
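    With three approximately equally spaced log-frequency levels, the linearity question reduces to whether the middle condition's mean lies on the line joining the outer two. A toy check, with hypothetical condition means rather than the study's data:

```python
# For three equally spaced predictor levels, the quadratic contrast is
# the middle mean minus the midpoint of the outer means; zero indicates
# a purely linear effect of log frequency.
def nonlinearity(low, medium, high):
    """Deviation (same units as the means) of the medium-frequency mean
    from the low/high midpoint."""
    return medium - (low + high) / 2.0

# Hypothetical first fixation duration means (ms) by frequency level:
print(nonlinearity(low=248.0, medium=232.0, high=218.0))
```

    A small deviation relative to its standard error is what the Bayes Factor comparisons in the abstract favour when they prefer the simpler linear models.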

  11. Text Mining of Journal Articles for Sleep Disorder Terminologies.

    PubMed

    Lam, Calvin; Lai, Fu-Chih; Wang, Chia-Hui; Lai, Mei-Hsin; Hsu, Nanly; Chung, Min-Huey

    2016-01-01

    Research on publication trends in journal articles on sleep disorders (SDs) and the associated methodologies by using text mining has been limited. The present study involved text mining for terms to determine the publication trends in sleep-related journal articles published during 2000-2013, to identify associations between SD and methodology terms, and to conduct statistical analyses of the text mining findings. SD and methodology terms were extracted from 3,720 sleep-related journal articles in the PubMed database by using MetaMap. The extracted data set was analyzed using hierarchical cluster analyses and adjusted logistic regression models to investigate publication trends and associations between SD and methodology terms. MetaMap had a text mining precision, recall, and false positive rate of 0.70, 0.77, and 11.51%, respectively. The most common SD term was breathing-related sleep disorder, whereas narcolepsy was the least common. Cluster analyses showed similar methodology clusters for each SD term, except narcolepsy. The logistic regression models showed an increasing prevalence of insomnia, parasomnia, and other sleep disorders but a decreasing prevalence of breathing-related sleep disorder during 2000-2013. Different SD terms were positively associated with different methodology terms regarding research design terms, measure terms, and analysis terms. Insomnia-, parasomnia-, and other sleep disorder-related articles showed an increasing publication trend, whereas those related to breathing-related sleep disorder showed a decreasing trend. Furthermore, experimental studies more commonly focused on hypersomnia and other SDs and less commonly on insomnia, breathing-related sleep disorder, narcolepsy, and parasomnia. Thus, text mining may facilitate the exploration of publication trends in SDs and the associated methodologies.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, Christopher J.; Freels, James D.; Hobbs, Randy W.

    There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 (238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 (237Np) dioxide (NpO2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules to generate an experimental knowledge base, incorporating initial non-linear contact heat transfer and fission gas equations; (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using the limited available knowledge of fabrication and irradiation characteristics; and (3) the most recent and comprehensive modeling effort, a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up-dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters, including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point.
The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs that were necessary due to the limited fabrication and irradiation data available. The calculated parameters in the final production target model are the most accurate and comprehensive, while still conservative. Over 210 permutations in irradiation time and position were evaluated, supported by the most recent inputs and highest-fidelity methodology. The results of these analyses show that the models presented in this report provide a robust and reliable basis for previous, current, and future experiment safety analyses. In addition, they reveal an evolving knowledge of the steady-state behavior of the NpO2/Al pellets under irradiation for a variety of target encapsulations and potential conditions.

  13. Women's work stress and cortisol levels: a longitudinal study of the association between the psychosocial work environment and serum cortisol.

    PubMed

    Evolahti, Annika; Hultcrantz, Malou; Collins, Aila

    2006-11-01

    The aim of the present study was to investigate whether there is an association between serum cortisol and work-related stress, as defined by the demand-control model in a longitudinal design. One hundred ten women aged 47-53 years completed a health questionnaire, including the Swedish version of the Job Content Scale, and participated in a psychological interview at baseline and in a follow-up session 2 years later. Morning blood samples were drawn for analyses of cortisol. Multiple stepwise regression analyses and logistic regression analyses showed that work demands and lack of social support were significantly associated with cortisol. The results of this study showed that negative work characteristics in terms of high demands and low social support contributed significantly to the biological stress levels in middle-aged women. Participation in the study may have served as an intervention, increasing the women's awareness and thus improving their health profiles on follow-up.

  14. Optimal planning and design of a renewable energy based supply system for microgrids

    DOE PAGES

    Hafez, Omar; Bhattacharya, Kankar

    2012-03-03

    This paper presents a technique for the optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, the electrical performance of the distribution circuit is evaluated when the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DERs within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.

  15. Unintended pregnancy and sex education in Chile: a behavioural model.

    PubMed

    Herold, J M; Thompson, N J; Valenzuela, M S; Morris, L

    1994-10-01

    This study analysed factors associated with unintended pregnancy among adolescent and young adult women in Santiago, Chile. Three variations of a behavioural model were developed. Logistic regression showed that the effect of sex education on unintended pregnancy works through the use of contraception. Other significant effects were found for variables reflecting socioeconomic status and a woman's acceptance of her sexuality. The results also suggested that labelling affects measurement of 'unintended' pregnancy.

  16. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    DOE PAGES

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.; ...

    2017-08-23

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  17. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    NASA Astrophysics Data System (ADS)

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.; White, Martin; Williams, Anthony G.

    2017-08-01

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. We discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  18. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  19. LIVVkit 2: An extensible land ice verification and validation toolkit for comparing observations and models

    NASA Astrophysics Data System (ADS)

    Kennedy, J. H.; Bennett, A. R.; Evans, K. J.; Fyke, J. G.; Vargo, L.; Price, S. F.; Hoffman, M. J.

    2016-12-01

    Accurate representation of ice sheets and glaciers is essential for robust predictions of Arctic climate within Earth system models. Verification and Validation (V&V) is a set of techniques used to quantify the correctness and accuracy of a model, which builds developer/modeler confidence and can be used to enhance the credibility of the model. Fundamentally, V&V is a continuous process, because each model change requires a new round of V&V testing. The Community Ice Sheet Model (CISM) development community is actively developing LIVVkit, the Land Ice Verification and Validation toolkit, which is designed to integrate easily into an ice-sheet model's development workflow (on both personal and high-performance computers) to provide continuous V&V testing. LIVVkit is a robust and extensible Python package for V&V, with components for both software V&V (construction and use) and model V&V (mathematics and physics). The model verification component is used, for example, to verify model results against community intercomparisons such as ISMIP-HOM. The model validation component is used, for example, to generate a series of diagnostic plots showing the differences between model results and observations for variables such as thickness, surface elevation, basal topography, surface velocity, and surface mass balance. Because many different ice-sheet models are under active development, new validation datasets are becoming available, and new methods of analysing these models are actively being researched, LIVVkit includes a framework that lets ice-sheet modelers easily extend the model V&V analyses. This allows modelers and developers to develop evaluations of parameters, implement changes, and quickly see how those changes affect the ice-sheet model and the Earth system model (when coupled). Furthermore, LIVVkit outputs a portable hierarchical website, allowing evaluations to be easily shared, published, and analysed throughout the Arctic and Earth system communities.

  20. A comparison of time dependent Cox regression, pooled logistic regression and cross sectional pooling with simulations and an application to the Framingham Heart Study.

    PubMed

    Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne

    2016-11-03

    Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow up. The Cox regression model has been used widely in the analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time adjusted CSP provided identical results to the TDCM while the PLR showed larger parameter estimates compared to the time adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time adjusted PLR had a positive bias in the time dependent Age effect with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. 
We present results from the Framingham Heart Study, in which lipid measurements and myocardial infarction events were collected over a period of 26 years.
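    The cross-sectional pooling (CSP) construction compared above turns each subject's follow-up into one pseudo-observation per examination interval, keeping the measurement time so the model can adjust for it. A sketch with hypothetical field names and one-unit exam intervals, not the Framingham variables:

```python
# Cross-sectional pooling: split follow-up into exam intervals and emit
# one record per interval with the covariate measured at its start.
def cross_sectional_pool(subjects):
    """subjects: list of dicts with 'id', 'exams' (list of
    (exam_time, covariate) pairs at one-unit spacing for this sketch),
    and 'event_time' (None if censored). Returns pooled records."""
    pooled = []
    for s in subjects:
        for start, covariate in s["exams"]:
            if s["event_time"] is not None and s["event_time"] <= start:
                break  # no person-time contributed after the event
            event = (s["event_time"] is not None
                     and start < s["event_time"] <= start + 1)
            # 'time' is retained so the pooled model can adjust for the
            # measurement time, as the abstract recommends.
            pooled.append({"id": s["id"], "time": start,
                           "covariate": covariate, "event": int(event)})
    return pooled

demo = [{"id": 1, "exams": [(0, 180.0), (1, 195.0), (2, 210.0)],
         "event_time": 1.5}]
print(cross_sectional_pool(demo))
```

    Fitting a time-adjusted regression to these pooled records is the CSP analysis that the simulations found to match the time dependent Cox model.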

  1. Psychosocial predictors of cannabis use in adolescents at risk.

    PubMed

    Hüsler, Gebhard; Plancherel, Bernard; Werlen, Egon

    2005-09-01

    This research has tested a social disintegration model in conjunction with risk and protection factors that have the power to differentiate relative, weighted interactions among variables in different socially disintegrated groups. The model was tested in a cross-sectional sample of 1082 at-risk youth in Switzerland. Structural equation analyses show significant differences between the social disintegration (low, moderate, high) groups and gender, indicating that the model works differently for groups and for gender. For the highly disintegrated adolescents results clearly show that the risk factors (negative mood, peer network, delinquency) are more important than the protective factors (family relations, secure sense of self). Family relations lose all protective value against negative peer influence, but personal variables, such as secure self, gain protective power.

  2. Prediction of moment-rotation characteristic of top- and seat-angle bolted connection incorporating prying action

    NASA Astrophysics Data System (ADS)

    Ahmed, Ali

    2017-03-01

    Finite element (FE) analyses were performed in the author's past research studies to explore the influence of prying action on moment-rotation behaviour and to locate yielding zones of top- and seat-angle connections. The results of those FE analyses, together with experimentally observed failure modes of the connections, were used in the present study to develop failure mechanisms of top- and seat-angle connections. A formulation was then developed, based on three simple failure mechanisms considering bending and shear deformations, the effects of prying action on the top angle, and the stiffness of the tension bolts, to rationally estimate the ultimate moment Mu of the connection, a vital parameter of the proposed four-parameter power model. The applicability of the proposed formulation is assessed by comparing moment-rotation (M-θr) curves and ultimate moment capacities with those measured by experiments and estimated by FE analyses and by the three-parameter power model. This study shows that the proposed formulation and Kishi-Chen's method both closely approximate the M-θr curves of all the given connections, except in a few cases for the Kishi-Chen model, and that Mu estimated by the proposed formulation is more rational than that predicted by Kishi-Chen's method.
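    The three-parameter power model referred to above has the closed form M = Rki·θr / (1 + (θr/θ0)^n)^(1/n), with initial stiffness Rki, reference rotation θ0 = Mu/Rki, and shape parameter n. A sketch of that form with illustrative parameter values, not the paper's connection data:

```python
# Three-parameter power model for a semi-rigid connection's
# moment-rotation curve: stiffness Rki at small rotation, asymptote Mu
# at large rotation, shape controlled by n.
def power_model_moment(theta, rki, mu, n):
    """Moment at relative rotation theta (rad); rki in moment/rad,
    mu in moment units (illustrative, consistent units assumed)."""
    theta0 = mu / rki
    return rki * theta / (1.0 + (theta / theta0) ** n) ** (1.0 / n)

# Illustrative connection: Rki = 1000 kN.m/rad, Mu = 50 kN.m, n = 2.
print(round(power_model_moment(0.02, rki=1000.0, mu=50.0, n=2.0), 2))
```

    The curve rises with slope Rki near the origin and saturates at Mu, which is why a rational estimate of Mu, the focus of the proposed formulation, controls the whole upper branch of the fit.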

  3. Variational Continuous Assimilation of TMI and SSM/I Rain Rates: Impact on GEOS-3 Hurricane Analyses and Forecasts

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; Reale, Oreste

    2003-01-01

    We describe a variational continuous assimilation (VCA) algorithm for assimilating tropical rainfall data using moisture and temperature tendency corrections as the control variable to offset model deficiencies. For rainfall assimilation, model errors are of special concern, since model-predicted precipitation is based on parameterized moist physics, which can have substantial systematic errors. This study examines whether a VCA scheme using the forecast model as a weak constraint offers an effective pathway to precipitation assimilation. The particular scheme we examine employs a '1+1'-dimension precipitation observation operator based on a 6-h integration of a column model of moist physics from the Goddard Earth Observing System (GEOS) global data assimilation system (DAS). In earlier studies, we tested a simplified version of this scheme and obtained improved monthly-mean analyses and better short-range forecast skill. This paper describes the full implementation of the 1+1D VCA scheme using background and observation error statistics, and examines how it may improve GEOS analyses and forecasts of prominent tropical weather systems such as hurricanes. Parallel assimilation experiments with and without rainfall data for Hurricanes Bonnie and Floyd show that assimilating 6-h TMI and SSM/I surface rain rates leads to more realistic storm features in the analysis, which, in turn, provide better initial conditions for 5-day storm track prediction and precipitation forecasts. These results provide evidence that addressing model deficiencies in moisture tendency may be crucial to making effective use of precipitation information in data assimilation.

  4. The Self-Description Inventory+, Part 1: Factor Structure and Convergent Validity Analyses

    DTIC Science & Technology

    2013-07-01

    measures 12 scales of personality. The current report examines the possibility of replacing the EQ with a Five Factor Model (FFM) measure of...Checklist. Our results show that the SDI+ has scales that are intercorrelated in a manner consistent with the FFM (Experiment 1), a factor structure...met the criteria showing it to be an FFM instrument, we will conduct concurrent validity research to determine if the SDI+ has greater predictive

  5. The Flow Dimension and Aquifer Heterogeneity: Field evidence and Numerical Analyses

    NASA Astrophysics Data System (ADS)

    Walker, D. D.; Cello, P. A.; Valocchi, A. J.; Roberts, R. M.; Loftis, B.

    2008-12-01

    The Generalized Radial Flow approach to hydraulic test interpretation infers the flow dimension to describe the geometry of the flow field during a hydraulic test. Noninteger values of the flow dimension are often inferred for tests in highly heterogeneous aquifers, yet subsequent modeling studies typically ignore the flow dimension. Monte Carlo analyses of detailed numerical models of aquifer tests examine the flow dimension for several stochastic models of heterogeneous transmissivity, T(x). These include multivariate lognormal, fractional Brownian motion, a site percolation network, and discrete linear features with lengths distributed as a power-law. The behavior of the simulated flow dimensions is compared to the flow dimensions observed for multiple aquifer tests in a fractured dolomite aquifer in the Great Lakes region of North America. The combination of multiple hydraulic tests, observed fracture patterns, and the Monte Carlo results is used to screen models of heterogeneity and their parameters for subsequent groundwater flow modeling. The comparison shows that discrete linear features with lengths distributed as a power-law appear to be the most consistent with observations of the flow dimension in fractured dolomite aquifers.

  6. An experimental and analytical investigation of the effect of spanwise curvature on wing flutter at Mach number of 0.7

    NASA Technical Reports Server (NTRS)

    Rivera, Jose A., Jr.

    1989-01-01

    An experimental and analytical study was conducted at Mach 0.7 to investigate the effects of spanwise curvature on flutter. Two series of rectangular-planform wings with an aspect ratio of 1.5 and curvatures ranging from zero (uncurved) to 1.04/ft were flutter tested in the NASA Langley Transonic Dynamics Tunnel (TDT). One series consisted of models with a NACA 65A010 airfoil section and the other of flat-plate cross-section models. Flutter analyses were conducted for correlation with the experimental results by using structural finite element methods to perform vibration analysis and two aerodynamic theories to obtain unsteady aerodynamic load calculations. The experimental results showed that for one series of models the flutter dynamic pressure increased significantly with curvature, while for the other series the flutter dynamic pressure decreased with curvature. The flutter analyses, which generally predicted the experimental results, indicated that the difference in behavior between the two series of models was primarily due to differences in their structural properties.

  7. A Basic Bivariate Structure of Personality Attributes Evident Across Nine Languages.

    PubMed

    Saucier, Gerard; Thalmayer, Amber Gayle; Payne, Doris L; Carlson, Robert; Sanogo, Lamine; Ole-Kotikash, Leonard; Church, A Timothy; Katigbak, Marcia S; Somer, Oya; Szarota, Piotr; Szirmák, Zsofia; Zhou, Xinyue

    2014-02-01

    Here, two studies seek to characterize a parsimonious common-denominator personality structure with optimal cross-cultural replicability. Personality differences are observed in all human populations and cultures, but lexicons for personality attributes contain so many distinctions that parsimony is lacking. Models stipulating the most important attributes have been formulated by experts or by empirical studies drawing on experience in a very limited range of cultures. Factor analyses of personality lexicons of nine languages of diverse provenance (Chinese, Korean, Filipino, Turkish, Greek, Polish, Hungarian, Maasai, and Senoufo) were examined, and their common structure was compared to that of several prominent models in psychology. A parsimonious bivariate model showed evidence of substantial convergence and ubiquity across cultures. Analyses involving key markers of these dimensions in English indicate that they are broad dimensions involving the overlapping content of the interpersonal circumplex, models of communion and agency, and morality/warmth and competence. These "Big Two" dimensions, Social Self-Regulation and Dynamism, provide a common-denominator model involving the two most crucial axes of personality variation, ubiquitous across cultures. The Big Two might serve as an umbrella model linking diverse theoretical models and associated research literatures. © 2013 Wiley Periodicals, Inc.

  8. Quantum behaviour of open pumped and damped Bose-Hubbard trimers

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2018-01-01

    We propose and analyse analogs of optical cavities for atoms using three-well inline Bose-Hubbard models with pumping and losses. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a qualitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, although most do enter a steady-state regime. We find quadrature squeezing, bipartite and tripartite inseparability and entanglement, and states exhibiting the EPR paradox, depending on the parameter regimes. We also discover situations where the mean-field solutions of our models are noticeably different from the quantum solutions for the mean fields. Due to recent experimental advances, it should be possible to demonstrate the effects we predict and investigate in this article.

  9. Ne matrix spectra of the sym-C6Br3F3+ radical cation

    USGS Publications Warehouse

    Bondybey, V.E.; Sears, T.J.; Miller, T.A.; Vaughn, C.; English, J.H.; Shiley, R.S.

    1981-01-01

    The electronic absorption and laser-excited, wavelength-resolved fluorescence spectra of the title cation have been observed in a solid Ne matrix and vibrationally analysed. The vibrational structure of the excited B2A2 state shows close similarity to the parent compound. The X2E ground state structure is strongly perturbed and irregular owing to a large Jahn-Teller distortion. The data are analysed in terms of a recently developed, sophisticated multimode Jahn-Teller theoretical model. We have generated the sym-C6Br3F3+ cations in a solid Ne matrix and obtained their wavelength-resolved emission and absorption spectra. The ground electronic X2E state exhibits an irregular and strongly perturbed vibrational structure, which can be successfully modeled using sophisticated multimode Jahn-Teller theory. © 1981.

  10. The Southern Hemisphere lower stratosphere during August and September 1987 - Analyses based on the United Kingdom Meteorological Office Global Model

    NASA Technical Reports Server (NTRS)

    Mckenna, D. S.; Jones, R. L.; Buckland, A. T.; Austin, J.; Tuck, A. F.; Winkler, R. H.; Chan, K. R.

    1989-01-01

    This paper presents a series of meteorological analyses used to aid the interpretation of the in situ Airborne Antarctic Ozone Experiment (AAOE) observations obtained aboard the ER-2 and DC-8 aircraft and examines the basis and accuracy of the analytical procedure. Maps and sections of meteorological variables derived from the UK Meteorological Office Global Model are presented for ER-2 and DC-8 flight days. It is found that analyzed temperatures and winds are generally in good agreement with AAOE observations at all levels; minor discrepancies were evident only at DC-8 altitudes. Maps of potential vorticity presented on the 428-K potential temperature surface show that the vortex is essentially circumpolar, although there are periods when major distortions are apparent.

  11. Data article on the effect of work engagement strategies on faculty staff behavioural outcomes in private universities.

    PubMed

    Falola, Hezekiah Olubusayo; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Oludayo, Olumuyiwa Akinrole; Ibidunni, Ayodotun Stephen

    2018-06-01

    The main objective of this study was to present a data article investigating the effect of work engagement strategies on faculty behavioural outcomes. Few studies analyse how work engagement strategies can help drive standard work behaviour, particularly in higher institutions. To bridge this gap, this study used a descriptive research method and structural equation modelling (AMOS 22) to analyse four hundred and forty-one (441) valid questionnaires completed by faculty members of six selected private universities in Nigeria, sampled using stratified and simple random sampling techniques. A factor model showing high reliability and good fit was generated, while construct validity was established through convergent and discriminant analyses.

  12. Syndromes of Self-Reported Psychopathology for Ages 18-59 in 29 Societies.

    PubMed

    Ivanova, Masha Y; Achenbach, Thomas M; Rescorla, Leslie A; Tumer, Lori V; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R J; Funabiki, Yasuko; Guðmundsson, Halldór S; Harder, Valerie S; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B; Vazquez, Natalia; Zasepa, Ewa

    2015-06-01

    This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½-18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies.

  13. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling numbers and price discounts when grain is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer-quality bread products. To effectively breed for low LMAA, it is necessary to understand which genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.

  14. Stochasticity in staged models of epidemics: quantifying the dynamics of whooping cough

    PubMed Central

    Black, Andrew J.; McKane, Alan J.

    2010-01-01

    Although many stochastic models can accurately capture the qualitative epidemic patterns of many childhood diseases, there is still considerable discussion concerning the basic mechanisms generating these patterns; much of this stems from the use of deterministic models to try to understand stochastic simulations. We argue that a systematic method of analysing models of the spread of childhood diseases is required in order to consistently separate out the effects of demographic stochasticity, external forcing and modelling choices. Such a technique is provided by formulating the models as master equations and using the van Kampen system-size expansion to provide analytical expressions for quantities of interest. We apply this method to the susceptible–exposed–infected–recovered (SEIR) model with distributed exposed and infectious periods and calculate the form that stochastic oscillations take on in terms of the model parameters. With the use of a suitable approximation, we apply the formalism to analyse a model of whooping cough which includes seasonal forcing. This allows us to more accurately interpret the results of simulations and to make a more quantitative assessment of the predictions of the model. We show that the observed dynamics are a result of a macroscopic limit cycle induced by the external forcing and resonant stochastic oscillations about this cycle. PMID:20164086
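    The master-equation formulation described above can be simulated exactly with Gillespie's algorithm. The following minimal sketch (illustrative rates and population sizes, not the paper's parameters, and omitting the seasonal forcing and the van Kampen expansion) shows demographic stochasticity in an SEIR model with births and deaths:

```python
import random

def gillespie_seir(beta, sigma, gamma, mu, N, state, t_max, seed=1):
    """Exact stochastic simulation of an SEIR model with demographic turnover.

    state = (S, E, I, R); event rates follow the standard SEIR master equation.
    """
    rng = random.Random(seed)
    S, E, I, R = state
    t, times, infected = 0.0, [0.0], [I]
    while t < t_max:
        rates = [
            mu * N,                          # birth -> S
            beta * S * I / N,                # infection S -> E
            sigma * E,                       # progression E -> I
            gamma * I,                       # recovery I -> R
            mu * S, mu * E, mu * I, mu * R,  # deaths
        ]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)          # waiting time to next event
        r = rng.random() * total             # pick an event proportional to rate
        acc, event = 0.0, 0
        for i, rate in enumerate(rates):
            acc += rate
            if r < acc:
                event = i
                break
        if event == 0:   S += 1
        elif event == 1: S -= 1; E += 1
        elif event == 2: E -= 1; I += 1
        elif event == 3: I -= 1; R += 1
        elif event == 4: S -= 1
        elif event == 5: E -= 1
        elif event == 6: I -= 1
        else:            R -= 1
        times.append(t)
        infected.append(I)
    return times, infected

times, infected = gillespie_seir(
    beta=1.5, sigma=1 / 8, gamma=1 / 14, mu=1 / (50 * 365),
    N=10000, state=(9000, 50, 50, 900), t_max=100.0)
```

    The stochastic oscillations the paper analyses appear in such trajectories as fluctuations about the deterministic mean-field solution.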

  15. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    PubMed

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways of incorporating essential human elements in decision making processes for the modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating the perceptions of stakeholders (qualitative) into formal simulation models (quantitative), with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent-based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) interviews to elicit mental models; (2) cognitive maps to represent and analyse individual and group mental models; (3) time-sequence diagrams to chronologically structure the decision making process; (4) an all-encompassing conceptual model of decision making; and (5) a computational (in this case agent-based) model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weaknesses-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an agent-based model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. [Dimensions of Empathy in Ex-Combatants of the Colombian Armed Conflict Using a Standardized Scale].

    PubMed

    Pineda, David A; Aguirre-Acevedo, Daniel Camilo; Trujillo, Natalia; Valencia, Ana María; Pareja, Ángela; Tobón, Carlos; Velilla, Lina; Ibáñez, Agustín

    2013-03-01

    Empathy is one of the main concepts in social neuroscience. It is defined as a trait with multiple dimensions that allow individuals to place themselves in the emotional states of others. Colombia has an irregular, internal and long-lasting armed conflict whose levels of cruelty have been increasing. Objective: to assess the empathy dimensions of 285 ex-combatants from the internal Colombian conflict, using the Interpersonal Reactivity Index (IRI) in Spanish. Methodology and subjects: a sample of 285 ex-combatants, 241 (84.6%) of them male: 85.3% paramilitaries and 14.7% guerrillas. The 28-item IRI questionnaire was administered. Three exploratory factor analyses (EFA) were performed, and confirmatory factor analyses (CFA) were developed using structural equation procedures. The first EFA yielded 9 factors (KMO=0.74, variance 54.7%, internal consistency (IC): 0.22-0.63). The second EFA retained 20 items with loadings above 0.4 and showed a 6-factor structure (KMO=0.70, variance 50.3%, IC: 0.37-0.63). The third EFA forced the 4 original IRI dimensions (KMO=0.74, variance 33.77%, IC: 0.44-0.77). The CFAs showed adequate goodness-of-fit indexes for all three models; the 4-factor model obtained the lowest value, the 6-factor model the highest, and the 4-factor model showed the best IC. The Spanish IRI administered to ex-combatants of the Colombian conflict thus has possible structures of 4, 6 and 9 factors; the best fit was for the 6-factor model, while the 4-factor model exhibited the best IC. Copyright © 2013 Asociación Colombiana de Psiquiatría. Published by Elsevier España. All rights reserved.

  17. Evaluating Intervention Programs with a Pretest-Posttest Design: A Structural Equation Modeling Approach

    PubMed Central

    Alessandri, Guido; Zuffianò, Antonio; Perinelli, Enrico

    2017-01-01

    A common situation in the evaluation of intervention programs is that the researcher can rely on only two waves of data (i.e., pretest and posttest), which profoundly affects the choice of possible statistical analyses. Indeed, the evaluation of intervention programs based on a pretest-posttest design has usually been carried out using classic statistical tests, such as family-wise ANOVA analyses, which are strongly limited by exclusively analyzing the intervention effects at the group level. In this article, we showed how second-order multiple group latent curve modeling (SO-MG-LCM) can represent a useful methodological tool for a more realistic and informative assessment of intervention programs with two waves of data. We offered a practical step-by-step guide to properly implement this methodology, and we outlined the advantages of the LCM approach over classic ANOVA analyses. Furthermore, we also provided a real-data example by re-analyzing the implementation of the Young Prosocial Animation, a universal intervention program aimed at promoting prosociality among youth. In conclusion, although previous studies pointed to the usefulness of MG-LCM for evaluating intervention programs (Muthén and Curran, 1997; Curran and Muthén, 1999), no previous study showed that it is possible to use this approach even in pretest-posttest (i.e., with only two time points) designs. Given the advantages of latent variable analyses in examining differences in interindividual and intraindividual changes (McArdle, 2009), the methodological and substantive implications of our proposed approach are discussed. PMID:28303110

  19. Effectiveness of a worksite mindfulness-based multi-component intervention on lifestyle behaviors

    PubMed Central

    2014-01-01

    Introduction Overweight and obesity are associated with an increased risk of morbidity. Mindfulness training could be an effective strategy to optimize lifestyle behaviors related to body weight gain. The aim of this study was to evaluate the effectiveness of a worksite mindfulness-based multi-component intervention on vigorous physical activity in leisure time, sedentary behavior at work, fruit intake and determinants of these behaviors. The control group received information on existing lifestyle behavior-related facilities that were already available at the worksite. Methods In a randomized controlled trial design (n = 257), 129 workers received a mindfulness training, followed by e-coaching, lunch walking routes and fruit. Outcome measures were assessed at baseline and after 6 and 12 months using questionnaires. Physical activity was also measured using accelerometers. Effects were analyzed using linear mixed effect models according to the intention-to-treat principle. Linear regression models (complete case analyses) were used as sensitivity analyses. Results There were no significant differences in lifestyle behaviors or determinants of these behaviors between the intervention and control groups after 6 or 12 months. The sensitivity analyses showed effect modification by gender for sedentary behavior at work at the 6-month follow-up, although the main analyses did not. Conclusions This study did not show an effect of a worksite mindfulness-based multi-component intervention on lifestyle behaviors and behavioral determinants after 6 and 12 months. The effectiveness of such an intervention as a health promotion measure for all workers could not be established. PMID:24467802

  20. Probabilistic flood damage modelling at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2014-05-01

    Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand by comparing the results with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. Thus, the significant advantage of the probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
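    Bagging (bootstrap aggregation) of regression trees, the technique underlying BT-FLEMO, can be sketched in a few lines. This toy example uses depth-1 trees and invented water-depth/loss data, not the ATKIS-based predictors or the actual BT-FLEMO implementation:

```python
import random
import statistics

def fit_stump(X, y):
    """Depth-1 regression tree: best single-feature threshold split by SSE."""
    best = None
    for j in range(len(X[0])):
        values = sorted(set(row[j] for row in X))
        for a, b in zip(values, values[1:]):
            thr = (a + b) / 2
            left = [yi for row, yi in zip(X, y) if row[j] <= thr]
            right = [yi for row, yi in zip(X, y) if row[j] > thr]
            ml, mr = statistics.fmean(left), statistics.fmean(right)
            sse = (sum((yi - ml) ** 2 for yi in left)
                   + sum((yi - mr) ** 2 for yi in right))
            if best is None or sse < best[0]:
                best = (sse, j, thr, ml, mr)
    if best is None:                      # degenerate bootstrap sample
        m = statistics.fmean(y)
        return lambda row: m
    _, j, thr, ml, mr = best
    return lambda row: ml if row[j] <= thr else mr

def bagged_predict(X, y, x_new, n_trees=50, seed=0):
    """Fit each tree on a bootstrap resample; the spread of the tree
    predictions is a crude stand-in for a damage-probability distribution."""
    rng = random.Random(seed)
    preds = []
    n = len(X)
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        tree = fit_stump([X[i] for i in idx], [y[i] for i in idx])
        preds.append(tree(x_new))
    return preds

# Toy data: (water depth [m], building area [100 m^2]) -> relative loss
X = [[0.2, 1.0], [0.5, 1.2], [1.0, 0.8], [1.5, 1.5], [2.0, 1.1], [2.5, 0.9]]
y = [0.05, 0.10, 0.25, 0.40, 0.55, 0.60]
preds = bagged_predict(X, y, [1.8, 1.0])
```

    The spread of the individual tree predictions is what yields a distribution of loss estimates rather than a single deterministic value.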

  1. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    DOE PAGES

    Mehran, Ali; AghaKouchak, Amir; Phillips, Thomas J.

    2014-02-25

    Numerous studies have emphasized that climate simulations are subject to various biases and uncertainties. The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire data distributions and their upper tails. The results of the Volumetric Hit Index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas, but that their replication of observed precipitation over arid regions and certain sub-continental regions (e.g., northern Eurasia, eastern Russia, central Australia) is problematic. Overall, the VHI of the multi-model ensemble mean and median is also superior to that of the individual CMIP5 models. However, at high quantiles of reference data (e.g., the 75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, inter-model variations in bias over Australia and Amazonia are considerable. The Quantile Bias (QB) analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. Lastly, we found that a simple mean-field bias removal improves the overall B and VHI values, but does not significantly improve these model performance metrics at high quantiles of precipitation.
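    The bias metrics can be illustrated as follows; the exact formulas (in particular the quantile convention) are our reading of common verification practice, not quoted from the study, and the data are invented:

```python
def total_bias(sim, obs):
    """B = sum(sim) / sum(obs); B > 1 indicates overall overestimation."""
    return sum(sim) / sum(obs)

def quantile_bias(sim, obs, q=0.75):
    """Same ratio, restricted to months whose observed value exceeds the
    q-th quantile of the observations (a simple nearest-rank estimator)."""
    thr = sorted(obs)[int(q * (len(obs) - 1))]
    pairs = [(s, o) for s, o in zip(sim, obs) if o > thr]
    return sum(s for s, _ in pairs) / sum(o for _, o in pairs)

obs = [10, 30, 55, 80, 120, 200]   # e.g. monthly precipitation, mm
sim = [14, 28, 60, 70, 100, 150]   # model values for the same months
b = total_bias(sim, obs)           # overall ratio
qb = quantile_bias(sim, obs)       # ratio over the wettest months only
```

    In this invented example the quantile bias is further from 1 than the total bias, mirroring the study's finding that simulations are more biased at high quantiles.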

  2. Commercial aspects of semi-reusable launch systems

    NASA Astrophysics Data System (ADS)

    Obersteiner, M. H.; Müller, H.; Spies, H.

    2003-07-01

    This paper presents a business planning model for a commercial space launch system. The financing model is based on market analyses and projections combined with market capture models. An operations model is used to derive the annual cash income. Parametric cost modeling and development and production schedules are used to quantify the annual expenditures, the internal rate of return, the break-even point of positive cash flow, and the respective prices per launch. Alternative consortia structures, cash flow methods, capture rates and launch prices are used to examine the sensitivity of the model. The model is then applied to a promising semi-reusable launcher concept, showing the general achievability of the commercial approach and the necessary pre-conditions.

  3. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    PubMed

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) who are in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equation (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models, and one in which they neatly solve many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
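    As a rough illustration of this model class (not the authors' ECG-coarseness model), the following sketch simulates a population of subjects whose trajectories follow an Ornstein-Uhlenbeck SDE with a subject-level random effect on the mean-reversion rate, integrated by Euler-Maruyama; all parameter values are invented:

```python
import math
import random

def simulate_subject(theta_pop, sigma_re, sigma_diff, x0, dt, n_steps, rng):
    """Euler-Maruyama path of dX = -theta_i * X dt + sigma dW, where the
    mean-reversion rate theta_i = theta_pop * exp(b_i) carries a lognormal
    subject-level random effect with b_i ~ N(0, sigma_re^2)."""
    theta_i = theta_pop * math.exp(rng.gauss(0.0, sigma_re))
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))        # Brownian increment
        x = x + (-theta_i * x) * dt + sigma_diff * dw
        path.append(x)
    return path

rng = random.Random(42)
# One simulated "patient" per path; population decay rate 0.8, 30%
# between-subject variability, diffusion 0.1 (all values illustrative).
paths = [simulate_subject(0.8, 0.3, 0.1, x0=1.0, dt=0.01,
                          n_steps=500, rng=rng) for _ in range(5)]
```

    Fitting such a model means estimating the population parameters and random-effect variance jointly from all subjects' noisy paths, which is the step the mixed effects SDE machinery handles.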

  4. Nonlinear Analysis of the Space Shuttle Superlightweight LO2 Tank. Part 2; Behavior Under 3g End-of-Flight Loads

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.; Young, Richard D.; Collins, Timothy J.; Starnes, James H., Jr.

    1998-01-01

    Results of linear bifurcation and nonlinear analyses of the Space Shuttle superlightweight (SLWT) external liquid-oxygen (LO2) tank are presented for an important end-of-flight loading condition. These results illustrate an important type of response mode for thin-walled shells subjected to combined mechanical and thermal loads that may be encountered in the design of other liquid-fuel launch vehicles. Linear bifurcation analyses are presented that predict several nearly equal eigenvalues corresponding to local buckling modes in the aft dome of the LO2 tank. In contrast, the nonlinear response phenomenon is shown to consist of a short-wavelength bending deformation in the aft elliptical dome of the LO2 tank that grows in amplitude in a stable manner with increasing load. Imperfection sensitivity analyses are presented that show that the presence of several nearly equal eigenvalues does not lead to a premature general instability mode for the aft dome. For both the linear bifurcation and the nonlinear analyses, the results show that accurate predictions of the response of the shell generally require a large-scale, high-fidelity finite-element model. Results are also presented that show that the SLWT LO2 tank can support loads in excess of approximately 1.9 times the values of the operational loads considered.

  5. Neptune Aerocapture Systems Analysis

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae

    2004-01-01

    A Neptune aerocapture systems analysis was completed to determine the feasibility, benefit and risk of an aeroshell aerocapture system for Neptune and to identify technology gaps and technology performance goals. The high-fidelity systems analysis was completed by a five-center NASA team and includes the following disciplines and analyses: science; mission design; aeroshell configuration screening and definition; interplanetary navigation analyses; atmosphere modeling; computational fluid dynamics for aerodynamic performance and database definition; initial stability analyses; guidance development; atmospheric flight simulation; computational fluid dynamics and radiation analyses for aeroheating environment definition; thermal protection system design, concepts and sizing; mass properties; structures; spacecraft design and packaging; and mass sensitivities. Results show that aerocapture can deliver 1.4 times more mass to Neptune orbit than an all-propulsive system for the same launch vehicle. In addition, aerocapture results in a 3-4 year reduction in trip time compared with all-propulsive systems. Aerocapture is feasible, and its performance is adequate for the Neptune aerocapture mission. Monte Carlo simulation results show 100% successful capture for all cases, including conservative assumptions on the atmosphere and navigation. Enabling technologies for this mission include TPS manufacturing, and aerothermodynamic methods and validation for determining coupled 3-D convection, radiation and ablation aeroheating rates and loads and their effects on surface recession.

  6. Modelling the transient behaviour of pulsed current tungsten-inert-gas weldpools

    NASA Astrophysics Data System (ADS)

    Wu, C. S.; Zheng, W.; Wu, L.

    1999-01-01

    A three-dimensional model is established to simulate the pulsed-current tungsten-inert-gas (TIG) welding process. The goal is to analyse the cyclic variation of fluid flow and heat transfer in weldpools under a periodic arc heat input. To this end, an algorithm is developed that can handle the transience, nonlinearity, multiphase flow and strong coupling encountered in this work. The numerical simulations demonstrate the transient behaviour of weldpools under pulsed current. Experimental data are compared with the numerical results to show the effectiveness of the developed model.

  7. Climate Shocks and Migration: An Agent-Based Modeling Approach.

    PubMed

    Entwisle, Barbara; Williams, Nathalie E; Verdery, Ashton M; Rindfuss, Ronald R; Walsh, Stephen J; Malanson, George P; Mucha, Peter J; Frizzelle, Brian G; McDaniel, Philip M; Yao, Xiaozheng; Heumann, Benjamin W; Prasartkul, Pramote; Sawangdee, Yothin; Jampaklay, Aree

    2016-09-01

    This is a study of migration responses to climate shocks. We construct an agent-based model that incorporates dynamic linkages between demographic behaviors, such as migration, marriage, and births, and agriculture and land use, which depend on rainfall patterns. The rules and parameterization of our model are empirically derived from qualitative and quantitative analyses of a well-studied demographic field site, Nang Rong district, Northeast Thailand. With this model, we simulate patterns of migration under four weather regimes in a rice economy: 1) a reference, 'normal' scenario; 2) seven years of unusually wet weather; 3) seven years of unusually dry weather; and 4) seven years of extremely variable weather. Results show relatively small impacts on migration. Experiments with the model show that existing high migration rates and strong selection factors, which are unaffected by climate change, are likely responsible for the weak migration response.

  8. The urgency-gating model can explain the effects of early evidence.

    PubMed

    Carland, Matthew A; Thura, David; Cisek, Paul

    2015-12-01

    In a recent report, Winkel, Keuken, van Maanen, Wagenmakers & Forstmann (Psychonomic Bulletin & Review 21(3): 777-784, 2014) show that during a random-dot motion discrimination task, early differences in motion evidence can influence reaction times (RTs) and error rates in human subjects. They use this as an argument in favor of the drift-diffusion model and against the urgency-gating model. However, their implementation of the urgency-gating model is incomplete, as it lacks the low-pass filter that is necessary for dealing with noisy input such as the motion signal used in their experimental task. Furthermore, by focusing their analyses solely on a comparison of mean RTs, they overestimate how long early information influences individual trials. Here, we show that if the urgency-gating model is correctly implemented, including a low-pass filter with a 250 ms time constant, it can successfully reproduce the results of the Winkel et al. experiment.
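    The mechanism described in the abstract can be sketched as follows: momentary evidence is low-pass filtered (with the 250 ms time constant mentioned above) and the filtered estimate is multiplied by a growing urgency signal; a decision is made when the gated signal crosses a threshold. This is a minimal illustrative simulation, not the authors' implementation; the urgency slope, threshold, drift and noise values are arbitrary choices for the example.

```python
import random

def urgency_gating_rt(evidence, dt=0.001, tau=0.25,
                      urgency_slope=1.0, threshold=1.0):
    """Simulate one trial of a simplified urgency-gating decision.

    evidence: sequence of momentary motion-evidence samples.
    tau: low-pass filter time constant in seconds (250 ms, as in the text).
    Returns (reaction_time_s, choice) or (None, None) if no crossing.
    """
    x = 0.0  # low-pass filtered evidence estimate
    for i, e in enumerate(evidence):
        x += (dt / tau) * (e - x)       # first-order low-pass filter
        t = (i + 1) * dt
        y = urgency_slope * t * x       # urgency signal gates the estimate
        if abs(y) >= threshold:
            return t, (1 if y > 0 else -1)
    return None, None

# Noisy rightward-motion evidence: weak mean drift buried in Gaussian noise.
random.seed(0)
samples = [0.5 + random.gauss(0.0, 2.0) for _ in range(5000)]
rt, choice = urgency_gating_rt(samples)
```

    Because the filter averages over the noise, the gated signal tracks the weak positive drift and crosses the upper threshold, which is the point the rebuttal makes: without the filter, raw noise samples drive spurious early crossings.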

  9. Inflated Uncertainty in Multimodel-Based Regional Climate Projections.

    PubMed

    Madsen, Marianne Sloth; Langen, Peter L; Boberg, Fredrik; Christensen, Jens Hesselbjerg

    2017-11-28

    Multimodel ensembles are widely analyzed to estimate the range of future regional climate change projections. For an ensemble of climate models, the result is often portrayed as maps of the geographical distribution of the multimodel mean and of the associated uncertainty, represented by model spread at the grid-point scale. Here we use a set of CMIP5 models to show that presenting statistics in this way overestimates the projected range, leading to physically implausible patterns of change on global as well as regional scales. We point out that similar inconsistencies occur in impact analyses relying on multimodel information extracted using statistics at the regional scale, for example when a subset of CMIP models is selected to represent regional model spread. Consequently, the risk of unwanted impacts may be overestimated at larger scales, as climate change impacts will never be realized as the worst (or best) case everywhere.

  10. Climate Shocks and Migration: An Agent-Based Modeling Approach

    PubMed Central

    Entwisle, Barbara; Williams, Nathalie E.; Verdery, Ashton M.; Rindfuss, Ronald R.; Walsh, Stephen J.; Malanson, George P.; Mucha, Peter J.; Frizzelle, Brian G.; McDaniel, Philip M.; Yao, Xiaozheng; Heumann, Benjamin W.; Prasartkul, Pramote; Sawangdee, Yothin; Jampaklay, Aree

    2016-01-01

    This is a study of migration responses to climate shocks. We construct an agent-based model that incorporates dynamic linkages between demographic behaviors, such as migration, marriage, and births, and agriculture and land use, which depend on rainfall patterns. The rules and parameterization of our model are empirically derived from qualitative and quantitative analyses of a well-studied demographic field site, Nang Rong district, Northeast Thailand. With this model, we simulate patterns of migration under four weather regimes in a rice economy: 1) a reference, ‘normal’ scenario; 2) seven years of unusually wet weather; 3) seven years of unusually dry weather; and 4) seven years of extremely variable weather. Results show relatively small impacts on migration. Experiments with the model show that existing high migration rates and strong selection factors, which are unaffected by climate change, are likely responsible for the weak migration response. PMID:27594725

  11. Spatiotemporal dynamics of random stimuli account for trial-to-trial variability in perceptual decision making

    PubMed Central

    Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.

    2016-01-01

    Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272

  12. The use of motion analysis to measure pain-related behaviour in a rat model of degenerative tendon injuries.

    PubMed

    Fu, Sai-Chuen; Chan, Kai-Ming; Chan, Lai-Shan; Fong, Daniel Tik-Pui; Lui, Po-Yee Pauline

    2009-05-15

    Chronic tendinopathy is characterized by longstanding, activity-related pain with degenerative tendon injuries. An objective tool to measure painful responses in animal models is essential for the development of effective treatments for tendinopathy. Gait analysis has been developed to monitor inflammatory pain in small animals. We report the use of motion analysis to monitor gait changes in a rat model of degenerative tendon injury. Intratendinous injection of collagenase into the left patellar tendon of Sprague Dawley rats was used to induce degenerative tendon injury, while an equal volume of saline was injected in the control groups. Motion analyses with a high-speed video camera were performed on all rats at pre-injury and at 2, 4, 8, 12 or 16 weeks post-injection. In the end-point study, the rats were sacrificed after motion analyses to obtain tendon samples for histological examination. In the follow-up study, repeated motion analyses were performed on another group of collagenase-treated and saline-treated rats. The results showed that rats with an injured patellar tendon exhibited an altered walking gait compared with the controls. The change in double stance duration in the collagenase-treated rats was reversible by administration of buprenorphine (p = 0.029), suggesting that the detected gait changes were associated with pain. Comparison of the end-point and follow-up studies revealed the confounding effects of training, which led to higher gait velocities and probably a different adaptive response to tendon pain in the trained rats. The results showed that motion analysis can be used to measure activity-related chronic tendon pain.

  13. Evaluating Measurement of Dynamic Constructs: Defining a Measurement Model of Derivatives

    PubMed Central

    Estabrook, Ryne

    2015-01-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This paper defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications and future directions are discussed. PMID:24364383

  14. Moisture Forecast Bias Correction in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D.

    1999-01-01

    Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated into an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
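    The general idea of a sequential bias estimator can be sketched generically: the running bias estimate is updated from each new observed-minus-forecast residual and subtracted from the forecast before use. This is a minimal sketch of that pattern, not the GEOS DAS scheme; the smoothing gain `gamma` is an assumed illustrative value.

```python
def sequential_bias_estimator(forecasts, observations, gamma=0.1):
    """Sequentially estimate forecast bias from observed-minus-forecast
    residuals and return bias-corrected forecasts.

    gamma: smoothing gain controlling how quickly the bias estimate
    adapts (hypothetical value for illustration).
    """
    bias = 0.0
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)          # remove current bias estimate
        bias += gamma * ((f - bias) - o)    # update from the new residual
    return corrected

# A forecast with a constant +2 bias relative to the observations:
obs = [1.0] * 200
fcs = [3.0] * 200
corr = sequential_bias_estimator(fcs, obs)
```

    With a persistent bias, the estimate converges to the true offset and the corrected forecasts approach the observations; random (zero-mean) error is left untouched on average, which is the sense in which only the systematic component is removed.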

  15. The Relationship between Intimacy Change and Passion: A Dyadic Diary Study

    PubMed Central

    Aykutoğlu, Bülent; Uysal, Ahmet

    2017-01-01

    In the current study we investigated the association between intimacy and passion by testing whether increases in intimacy generate passion (Baumeister and Bratslavsky, 1999). Furthermore, we examined whether there are partner effects in the link between intimacy change and passion. Couples (N = 75) participated in a 14-day diary study. Dyadic multilevel analyses with residualized intimacy change scores showed that both actors’ and partners’ intimacy change positively predicted the actor’s passion. However, the analyses also showed that residualized passion change scores positively predicted intimacy. Although these findings provide some empirical evidence for the intimacy change model, in line with previous research (Rubin and Campbell, 2012), they also suggest that it is not possible to discern whether increases in intimacy generate passion or increases in passion generate intimacy. PMID:29312093
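    The residualized change scores used in the analyses above have a simple definition: regress the later score on the earlier score and keep the residual, i.e. the part of change not predictable from the starting level. A minimal sketch with simple OLS (illustrative only; the study used dyadic multilevel models, not this two-variable regression):

```python
def residualized_change(t1, t2):
    """Residualized change scores: regress time-2 scores on time-1
    scores via simple OLS and return the residuals, i.e. the portion
    of each time-2 score not predictable from time 1."""
    n = len(t1)
    m1, m2 = sum(t1) / n, sum(t2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(t1, t2))
    var = sum((a - m1) ** 2 for a in t1)
    slope = cov / var
    intercept = m2 - slope * m1
    return [b - (intercept + slope * a) for a, b in zip(t1, t2)]
```

    If time-2 scores are perfectly predictable from time 1, every residualized change score is zero; by construction the residuals are also uncorrelated with the time-1 scores, which is what distinguishes them from raw difference scores.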

  16. Meta-analyses on intra-aortic balloon pump in cardiogenic shock complicating acute myocardial infarction may provide biased results.

    PubMed

    Acconcia, M C; Caretta, Q; Romeo, F; Borzi, M; Perrone, M A; Sergi, D; Chiarotti, F; Calabrese, C M; Sili Scavalli, A; Gaudio, C

    2018-04-01

    Intra-aortic balloon pump (IABP) is the device most commonly investigated in patients with cardiogenic shock (CS) complicating acute myocardial infarction (AMI). Recent meta-analyses on this topic have shown opposite results: some complied with the current guideline recommendations, while others did not, due to the presence of bias. We investigated the reasons for the discrepancy among meta-analyses and the strategies employed to avoid potential sources of bias. Scientific databases were searched for meta-analyses of IABP support in AMI complicated by CS. The presence of clinical diversity, methodological diversity and statistical heterogeneity was analyzed. Where we found clinical or methodological diversity, we reanalyzed the data by comparing patients selected into homogeneous groups. Where the fixed effect model had been employed despite the presence of statistical heterogeneity, the meta-analysis was repeated adopting the random effects model, with the same estimator used in the original meta-analysis. Twelve meta-analyses were selected. Six meta-analyses of randomized controlled trials (RCTs) were inconclusive because they were underpowered to detect the IABP effect. Five included RCTs and observational studies (Obs), and one included only Obs. Some meta-analyses of RCTs and Obs had biased results due to the presence of clinical and/or methodological diversity. The reanalysis of the data reallocated into homogeneous groups was no longer in conflict with the guideline recommendations. Meta-analyses performed without controlling for clinical and/or methodological diversity send a confounding message that works against good clinical practice. The reanalysis of the data demonstrates the validity of the current guideline recommendations in addressing clinical decision making on IABP support in AMI complicated by CS.
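    The fixed vs random effects distinction central to the abstract can be made concrete. Under the fixed effect model, studies are pooled by inverse-variance weighting; under the random effects model, a between-study variance tau² (here the standard DerSimonian-Laird estimate) is added to each study's variance before weighting, which down-weights precise outliers when heterogeneity is present. A minimal sketch, not the code used in any of the reviewed meta-analyses:

```python
def pooled_effect(effects, variances, model="fixed"):
    """Inverse-variance pooled effect estimate. With model='random',
    a DerSimonian-Laird between-study variance tau^2 is added to each
    study's variance, widening weights when heterogeneity is present."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu_fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    if model == "fixed":
        return mu_fixed
    # Cochran's Q statistic and the DL estimate of tau^2
    q = sum(wi * (e - mu_fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
```

    With two heterogeneous studies, say effects 0.0 (variance 0.01) and 2.0 (variance 0.04), the fixed effect model is dominated by the precise study (pooled estimate 0.4), while the random effects model weights the studies almost equally; using the fixed effect model despite such heterogeneity is exactly the bias the abstract warns about.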

  17. Spatial distribution of psychotic disorders in an urban area of France: an ecological study.

    PubMed

    Pignon, Baptiste; Schürhoff, Franck; Baudin, Grégoire; Ferchiou, Aziz; Richard, Jean-Romain; Saba, Ghassen; Leboyer, Marion; Kirkbride, James B; Szöke, Andrei

    2016-05-18

    Previous analyses of neighbourhood variation in non-affective psychotic disorders (NAPD) have focused mainly on incidence. However, prevalence studies provide important insights into factors associated with disease evolution as well as into healthcare resource allocation. This study aimed to investigate the distribution of prevalent NAPD cases in an urban area in France. The number of cases in each neighbourhood was modelled as a function of potential confounders and ecological variables, namely migrant density, economic deprivation and social fragmentation, using statistical models of increasing complexity: frequentist models (Poisson and negative binomial regressions) and several Bayesian models. For each model, the validity of its assumptions was checked, and the models were compared on how well they fitted the data, in order to test for possible spatial variation in prevalence. The data showed significant overdispersion (invalidating the Poisson regression model) and residual spatial autocorrelation (suggesting the need for Bayesian models). The best Bayesian model was Leroux's model (i.e., a model with both strong correlation between neighbouring areas and weaker correlation between areas further apart), with economic deprivation as an explanatory variable (OR = 1.13, 95% CI [1.02-1.25]). In comparison with the frequentist methods, the Bayesian model showed a better fit. The number of cases showed a non-random spatial distribution and was linked to economic deprivation.
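    The overdispersion that invalidated the Poisson model above has a simple diagnostic: a Poisson distribution has variance equal to its mean, so a variance-to-mean ratio of area-level case counts well above 1 signals overdispersion and motivates a negative binomial or Bayesian alternative. A minimal sketch of that check (illustrative; the study's formal assumption checks were model-based):

```python
def dispersion_ratio(counts):
    """Variance-to-mean ratio of area-level case counts. Under a plain
    Poisson model the ratio is close to 1; values well above 1 indicate
    overdispersion (sample variance uses the n-1 denominator)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean
```

    For instance, hypothetical neighbourhood counts of [0, 0, 0, 20, 30] have mean 10 but variance 200, a ratio of 20: far too clustered for a Poisson model, much as prevalent psychosis cases cluster in deprived areas.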

  18. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images.

    PubMed

    Dabbah, M A; Graham, J; Petropoulos, I; Tavakoli, M; Malik, R A

    2010-01-01

    Corneal confocal microscopy (CCM) imaging is a non-invasive surrogate for detecting, quantifying and monitoring diabetic peripheral neuropathy. This paper presents an automated method for detecting nerve fibres in CCM images using a dual-model detection algorithm and compares its performance with well-established texture and feature detection methods. The algorithm comprises two separate models, one for the background and another for the foreground (nerve fibres), which work interactively. Our evaluation shows a significant improvement (p ≈ 0) in both error rate and signal-to-noise ratio of this model over the competing methods. The automated method is also evaluated against manual ground-truth analysis in assessing diabetic neuropathy on the basis of nerve-fibre length, and shows a strong correlation (r = 0.92). Both analyses significantly separate diabetic patients from control subjects (p ≈ 0).

  19. Understanding the prevalence of lifetime abstinence from alcohol: An ecological study.

    PubMed

    Probst, Charlotte; Manthey, Jakob; Rehm, Jürgen

    2017-09-01

    The level of alcohol consumption and related burden in a country are strongly affected by the prevalence of abstinence from alcohol use. The objective of this study was to characterize the association of lifetime abstinence from alcohol use with economic wealth (measured as gross domestic product [GDP]) and Muslim religion at the country level. An ecological study was performed using aggregate data from 183 countries for the year 2010. Lifetime abstinence among men and women was predicted using fractional response regression models with the natural logarithm of GDP-PPP (purchasing power parity) and the proportion of Muslim population as predictors. The models were further adjusted for the country's median age and World Health Organization region. The precision of prediction was investigated. Descriptive analyses showed a strong negative association between GDP-PPP and lifetime abstinence in countries without a Muslim majority and with a GDP-PPP of up to 20,000 international dollars. Regression models confirmed the negative association with GDP-PPP and showed a strong positive association between lifetime abstinence and the proportion of Muslim population. Stratified sensitivity analyses showed that in countries without a Muslim majority only GDP-PPP showed a statistically significant association, whereas in Muslim-majority countries only the proportion of Muslims was associated with the prevalence of lifetime abstinence. Particularly in countries with a lower GDP and without a Muslim majority, the prevalence of lifetime abstinence from alcohol use is strongly negatively associated with GDP-PPP. Future research should analyze the concordance of trends in GDP and lifetime abstinence over time. Copyright © 2017 Elsevier B.V. All rights reserved.
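    The functional form of a fractional response regression with a logit link, as used above, keeps predicted prevalence inside (0, 1) while allowing log-GDP and Muslim population share as linear predictors. A sketch of that model form only; every coefficient value below is a hypothetical placeholder, not a fitted estimate from the study:

```python
import math

def predicted_abstinence(gdp_ppp, muslim_share,
                         b0=5.0, b_gdp=-0.7, b_muslim=3.0):
    """Fractional response model with a logit link: the linear predictor
    combines log GDP-PPP and the proportion of Muslim population, and
    the logistic function maps it into (0, 1). All coefficients here
    are hypothetical illustrations, not the study's estimates."""
    eta = b0 + b_gdp * math.log(gdp_ppp) + b_muslim * muslim_share
    return 1.0 / (1.0 + math.exp(-eta))
```

    With a negative GDP coefficient and a positive Muslim-share coefficient, the sketch reproduces the qualitative pattern reported: predicted abstinence falls as GDP-PPP rises and climbs with the Muslim population share.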

  20. The sensitivity of the Arctic sea ice to orbitally induced insolation changes: a study of the mid-Holocene Paleoclimate Modelling Intercomparison Project 2 and 3 simulations

    NASA Astrophysics Data System (ADS)

    Berger, M.; Brandefelt, J.; Nilsson, J.

    2013-04-01

    In the present work the Arctic sea ice in the mid-Holocene and pre-industrial climates is analysed and compared on the basis of climate-model results from the Paleoclimate Modelling Intercomparison Project phase 2 (PMIP2) and phase 3 (PMIP3). The PMIP3 models generally simulate a smaller and thinner sea-ice cover than the PMIP2 models for both the pre-industrial and the mid-Holocene climate. Further, the PMIP2 and PMIP3 models all simulate a smaller and thinner Arctic summer sea-ice cover in the mid-Holocene than in the pre-industrial control climate. The PMIP3 models also simulate thinner winter sea ice than the PMIP2 models. The winter sea-ice extent response, i.e. the difference between the mid-Holocene and the pre-industrial climate, varies among both PMIP2 and PMIP3 models: approximately half of the models simulate a decrease in winter sea-ice extent and half an increase. The model-mean summer sea-ice extent is 11% (21%) smaller in the mid-Holocene than in the pre-industrial climate simulations in PMIP2 (PMIP3). In accordance with the simple model of Thorndike (1992), the sea-ice thickness response to the insolation change from the pre-industrial to the mid-Holocene is stronger in models with thicker ice in the pre-industrial climate simulation. Further, the analyses show that climate models whose Arctic sea-ice responses to increasing atmospheric CO2 concentrations are similar may simulate rather different sea-ice responses to the change in solar forcing between the mid-Holocene and the pre-industrial climate. For two specific models, which are analysed in detail, this difference is found to be associated with differences in the simulated cloud fractions in the summer Arctic; in the model with a larger cloud fraction the effect of the insolation change is muted.
A subset of the mid-Holocene simulations in the PMIP ensemble exhibits open water off the north-eastern coast of Greenland in summer, which can provide a fetch for surface waves. This is in broad agreement with recent analyses of sea-ice proxies, indicating that beach ridges formed on the north-eastern coast of Greenland during the early to mid-Holocene.

  1. Ecological models supporting environmental decision making: a strategy for the future

    USGS Publications Warehouse

    Schmolke, Amelie; Thorbek, Pernille; DeAngelis, Donald L.; Grimm, Volker

    2010-01-01

    Ecological models are important for environmental decision support because they allow the consequences of alternative policies and management scenarios to be explored. However, current modeling practice is unsatisfactory. A literature review shows that the elements of good modeling practice have long been identified but are widely ignored. The reasons for this might include lack of involvement of decision makers, lack of incentives for modelers to follow good practice, and the use of inconsistent terminologies. As a strategy for the future, we propose a standard format for documenting models and their analyses: transparent and comprehensive ecological modeling (TRACE) documentation. This standard format will disclose all parts of the modeling process to scrutiny and make modeling itself more efficient and coherent.

  2. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas of Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and the vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors that influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and the normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification, the results of the analyses were compared with the field-verified landslide locations. Among the three cases applying logistic regression coefficients within the same study area, the case of Selangor based on the Selangor coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases of cross-application of logistic regression coefficients to the other two areas, the case of Selangor based on the Cameron coefficients showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor coefficients showed the lowest accuracy (79%).
Qualitatively, the cross-application model yields reasonable results, which can be used for preliminary landslide hazard mapping.
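    The cross-validation scheme above amounts to reusing coefficients fitted in one area on another area's factor values and scoring the thresholded predictions against field-verified landslide locations. A minimal sketch of the prediction and accuracy steps (illustrative; the factor values, coefficients and 0.5 threshold below are assumptions for the example, not values from the paper):

```python
import math

def hazard_probability(features, coefficients, intercept):
    """Landslide probability from a fitted logistic regression:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))). Cross-validation reuses
    one area's (b0, b_i) on another area's factor values x_i."""
    eta = intercept + sum(b * x for b, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-eta))

def accuracy(probs, labels, threshold=0.5):
    """Fraction of cells whose thresholded prediction matches the
    field-verified landslide label (1 = landslide, 0 = none)."""
    hits = sum((p >= threshold) == bool(y) for p, y in zip(probs, labels))
    return hits / len(labels)
```

    Computing such an accuracy over a grid of cells, once with the home area's coefficients and once with each other area's, yields the nine hazard-map comparisons reported above.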

  3. IMPACT: a generic tool for modelling and simulating public health policy.

    PubMed

    Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E

    2011-01-01

    Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models that would enable the appraisal of a wide range of potential options. Rising computing power, coupled with advances in machine learning and healthcare information, now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners, who often lack the requisite technical knowledge or skills. Our aim was to design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirements specification, the system architecture was designed, implemented and tested. A model for coronary heart disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model, and the initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.

  4. Research and application of surface heat treatment for multipulse laser ablation of materials

    NASA Astrophysics Data System (ADS)

    Cai, Song; Chen, Genyu; Zhou, Cong

    2015-11-01

    This study analysed a laser ablation platform and built heat transfer equations for the multipulse laser ablation of materials. The equations cover three stages: laser emission after the material has melted and gasified; the end of laser emission, when the material has melted and a superheated layer is present; and the solid-phase heat transfer changes during material ablation. For each of the three stages, the effects of evaporation, plasma shielding and energy accumulation during the pulse interval were considered. The equations are reasonable, and all the required parameters are related only to the laser parameters and material properties, giving the model a certain versatility and practicality. The model was applied to the numerical simulation of the heat transfer characteristics in the multipulse laser ablation of bronze and diamond. Next, experiments were conducted to analyse the topography of a bronze-bonded diamond grinding wheel after multipulse laser ablation. The theoretical analysis and experimental results showed that a multipulse laser can combine the truing and dressing of a bronze-bonded diamond grinding wheel. This study provides theoretical guidance for optimising the process parameters in the laser ablation of a bronze-bonded diamond grinding wheel. A comparative analysis showed that the numerical solution of the model is in good agreement with the experimental data, verifying the correctness and feasibility of the heat transfer model.

  5. Evaluating the Dimensionality of Self-Determination Theory's Relative Autonomy Continuum.

    PubMed

    Sheldon, Kennon M; Osin, Evgeny N; Gordeeva, Tamara O; Suchkov, Dmitry D; Sychev, Oleg A

    2017-09-01

    We conducted a theoretical and psychometric evaluation of self-determination theory's "relative autonomy continuum" (RAC), an important aspect of the theory whose validity has recently been questioned. We first derived a Comprehensive Relative Autonomy Index (C-RAI) containing six subscales and 24 items, by conducting a paired paraphrase content analysis of existing RAI measures. We administered the C-RAI to multiple U.S. and Russian samples, assessing motivation to attend class, study a major, and take responsibility. Item-level and scale-level multidimensional scaling analyses, confirmatory factor analyses, and simplex/circumplex modeling analyses reaffirmed the validity of the RAC, across multiple samples, stems, and studies. Validation analyses predicting subjective well-being and trait autonomy from the six separate subscales, in combination with various higher order composites (weighted and unweighted), showed that an aggregate unweighted RAI score provides the most unbiased and efficient indicator of the overall quality of motivation within the behavioral domain being assessed.

  6. Investigated serious occupational accidents in the Netherlands, 1998-2009.

    PubMed

    Bellamy, Linda J; Manuel, Henk Jan; Oh, Joy I H

    2014-01-01

    Since 2003, a project has been underway to analyse the most serious occupational accidents in The Netherlands. All the serious occupational accidents investigated by the Dutch Labour Inspectorate over the 12 years 1998-2009 inclusive have been entered into a database, a total of 20,030 investigations. This database uses a model of safety barriers supported by barrier tasks and management delivery systems, such that, when combined with sector and year information, trends in the data can be analysed for their underlying causes. The trend analyses show that while the number of victims of serious reportable accidents is decreasing significantly, this is due to specific sectors, hazards and underlying causes. The significant results could not easily be directly associated with any specific regulation or action undertaken in The Netherlands, although there have been many different approaches to reducing accidents during the period analysed, each of which could be contributing to the effect.

  7. Lower Stratospheric Temperature Differences Between Meteorological Analyses in two cold Arctic Winters and their Impact on Polar Processing Studies

    NASA Technical Reports Server (NTRS)

    Manney, Gloria L.; Sabutis, Joseph L.; Pawson, Steven; Santee, Michelle L.; Naujokat, Barbara; Swinbank, Richard; Gelman, Melvyn E.; Ebisuzaki, Wesley; Atlas, Robert (Technical Monitor)

    2001-01-01

A quantitative intercomparison of six meteorological analyses is presented for the cold 1999-2000 and 1995-1996 Arctic winters. The impacts of using different analyzed temperatures in calculations of polar stratospheric cloud (PSC) formation potential, and of different winds in idealized trajectory-based temperature histories, are substantial. The area with temperatures below a PSC formation threshold commonly varies by approximately 25% among the analyses, with differences of over 50% at some times/locations. Freie Universität Berlin analyses are often colder than others at T ≲ 205 K. Biases between analyses vary from year to year: in January 2000, U.K. Met Office analyses were coldest and National Centers for Environmental Prediction (NCEP) analyses warmest, while NCEP analyses were usually coldest in 1995-1996 and Met Office or NCEP/National Center for Atmospheric Research Reanalysis (REAN) warmest. European Centre for Medium-Range Weather Forecasts (ECMWF) temperatures agreed better with the other analyses in 1999-2000, after improvements in the assimilation model, than in 1995-1996. Case studies of temperature histories show substantial differences using Met Office, NCEP, REAN and NASA Data Assimilation Office (DAO) analyses. In January 2000 (when a large cold region was centered in the polar vortex), qualitatively similar results were obtained for all analyses. However, in February 2000 (a much warmer period) and in January and February 1996 (comparably cold to January 2000 but with large cold regions near the polar vortex edge), distributions of "potential PSC lifetimes" and total time spent below a PSC formation threshold varied significantly among the analyses. The largest peaks in "PSC lifetime" distributions in January 2000 were at 4-6 and 11-14 days, while in the 1996 periods they were at 1-3 days. Thus different meteorological conditions in comparably cold winters had a large impact on expectations for PSC formation and on the discrepancies between different meteorological analyses. Met Office, NCEP, REAN, ECMWF and DAO analyses are commonly used for trajectory calculations and in chemical transport models; the choice of analysis can strongly influence the results of such studies.

  8. Ataxia Telangiectasia–Mutated Gene Polymorphisms and Acute Normal Tissue Injuries in Cancer Patients After Radiation Therapy: A Systematic Review and Meta-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Lihua; Cui, Jingkun; Tang, Fengjiao

Purpose: Studies of the association between ataxia telangiectasia-mutated (ATM) gene polymorphisms and acute radiation injuries are often small in sample size, and the results are inconsistent. We conducted the first meta-analysis to provide a systematic review of published findings. Methods and Materials: Publications were identified by searching PubMed up to April 25, 2014. A primary meta-analysis was performed for all acute radiation injuries, and subgroup meta-analyses were based on clinical endpoint. The influence of sample size and radiation injury incidence on genetic effects was estimated in sensitivity analyses. Power calculations were also conducted. Results: The meta-analysis was conducted on the ATM polymorphism rs1801516, including 5 studies with 1588 participants. For all studies, the cut-off for differentiating cases from controls was grade 2 acute radiation injuries. The primary meta-analysis showed a significant association with overall acute radiation injuries (allelic model: odds ratio = 1.33, 95% confidence interval: 1.04-1.71). Subgroup analyses detected an association between the rs1801516 polymorphism and a significant increase in urinary and lower gastrointestinal injuries, and an increase in skin injury that was not statistically significant. There was no between-study heterogeneity in any of the meta-analyses. In the sensitivity analyses, small studies did not show larger effects than large studies, and studies with a high incidence of acute radiation injuries showed larger effects than studies with low incidence. Power calculations revealed that the statistical power of the primary meta-analysis was borderline, whereas there was adequate power for the subgroup analysis of studies with a high incidence of acute radiation injuries. Conclusions: Our meta-analysis showed consistency between the results of the overall and subgroup analyses. We also showed that the genetic effect of the rs1801516 polymorphism on acute radiation injuries depends on the incidence of the injury. These results support an association between the rs1801516 polymorphism and acute radiation injuries, encouraging further research on this topic.
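As a sketch of the pooling step behind an allelic-model odds ratio like the one reported above, the following computes a fixed-effect (inverse-variance) meta-analysis over five studies. The 2x2 counts are invented for illustration; only the method is real.

```python
import math

# Fixed-effect (inverse-variance) meta-analysis of odds ratios.
# Per-study counts (cases_allele, cases_ref, controls_allele,
# controls_ref) below are hypothetical, not the reviewed studies' data.
studies = [
    (30, 170, 40, 360),
    (25, 150, 35, 300),
    (40, 200, 55, 420),
    (20, 110, 28, 250),
    (35, 190, 45, 380),
]

log_ors, weights = [], []
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))   # study log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf variance estimate
    log_ors.append(log_or)
    weights.append(1 / var)                # inverse-variance weight

pooled_log_or = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (math.exp(pooled_log_or - 1.96 * se), math.exp(pooled_log_or + 1.96 * se))

print(f"pooled OR = {math.exp(pooled_log_or):.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

A random-effects model would additionally estimate between-study heterogeneity, which the abstract reports as absent.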

  9. A Comparison of Forest Survey Data with Forest Dynamics Simulators FORCLIM and ZELIG along Climatic Gradients in the Pacific Northwest

    USGS Publications Warehouse

    Busing, Richard T.; Solomon, Allen M.

    2004-01-01

Two forest dynamics simulators are compared along climatic gradients in the Pacific Northwest. The ZELIG and FORCLIM models are tested against forest survey data from western Oregon. Their ability to generate accurate patterns of forest basal area and species composition is evaluated for series of sites with contrasting climate. Projections from both models approximate the basal area and composition patterns for three sites along the elevation gradient at H.J. Andrews Experimental Forest in the western Cascade Range. The ZELIG model is somewhat more accurate than FORCLIM at the two low-elevation sites. Attempts to project forest composition along broader climatic gradients reveal limitations of ZELIG, however. For example, ZELIG is less accurate than FORCLIM at projecting the average composition of a west Cascades ecoregion selected for intensive analysis. Also, along a gradient consisting of several sites on an east-to-west transect at 44.1°N latitude, both the FORCLIM model and the actual data show strong changes in composition and total basal area, but the ZELIG model shows a limited response. ZELIG does not simulate the declines in forest basal area and the diminished dominance of mesic coniferous species east of the Cascade crest. We conclude that ZELIG is suitable for analyses of certain sites for which it has been calibrated. FORCLIM can be applied in analyses involving a range of climatic conditions without requiring calibration for specific sites.

  10. A test-retest assessment of the effects of mental load on ratings of affect, arousal and perceived exertion during submaximal cycling.

    PubMed

    Vera, Jesús; Perales, José C; Jiménez, Raimundo; Cárdenas, David

    2018-04-24

This study aimed to test the effects of mental (i.e. executive) load during a dual physical-mental task on ratings of perceived exertion (RPE), affective valence, and arousal. The protocol included two dual tasks with matched physical demands but different executive demands (2-back and oddball), carried out on different days. The procedure was run twice to assess the sensitivity and stability of RPE, valence and arousal across the two trials. Linear mixed-effects analyses showed less positive valence (-0.44 points on average on a 1-9 scale; Rβ² = 0.074 [90% CI: 0.052-0.098]) and heightened arousal (+0.13 points on average on a 1-9 scale; Rβ² = 0.006 [90% CI: 0.001-0.015]) for the high executive load condition, but showed no effect of mental load on RPE. Separate analyses for the two task trials yielded best-fitting models that were identical across trials for RPE and valence, but not for arousal. Model fitting was improved by assuming a 1-level autoregressive covariance structure in all analyses. In conclusion, executive load during a dual physical-mental task modulates the emotional response to effort, but not RPE. The autoregressive covariance suggests that people tend to anchor estimates on prior ones, which imposes certain limits on the scales' usability.

  11. Fractal and multifractal analyses of bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-03-01

Bipartite networks have attracted considerable interest in various fields. The fractality and multifractality of unipartite (classical) networks have been studied in recent years, but these properties have not previously been examined for bipartite networks. In this paper, we unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses on a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m) and Delicious data sets and the (u, v)-flower model, while we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attribute of bipartite networks, which have two different types of nodes, we assign different weights to the nodes of each class and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions.
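The fractal analysis the abstract describes rests on box covering: count how many "boxes" of graph diameter below l_B are needed to cover the network, and read the fractal dimension off the slope of log N_B versus log l_B. Below is a minimal pure-Python sketch of a greedy box-covering variant on a tiny hand-built bipartite toy graph; the graph and the covering heuristic are illustrative, not the paper's data sets or exact algorithm.

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def box_count(adj, l_b):
    """Greedily cover the graph with boxes of diameter < l_b."""
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        seed = next(iter(uncovered))
        dist = bfs_distances(adj, seed)
        # one box: every still-uncovered node within l_b - 1 hops of the seed
        box = {v for v in uncovered if dist.get(v, float("inf")) <= l_b - 1}
        uncovered -= box
        boxes += 1
    return boxes

# toy bipartite graph: users u0..u3, items i0..i4
edges = [("u0", "i0"), ("u0", "i1"), ("u1", "i1"), ("u1", "i2"),
         ("u2", "i2"), ("u2", "i3"), ("u3", "i3"), ("u3", "i4")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

for l_b in (1, 2, 3, 4):
    print(l_b, box_count(adj, l_b))
```

For a fractal network, N_B(l_B) ~ l_B^(-d_B); fitting that power law to the counts gives the box dimension d_B. Multifractal analysis generalizes this by weighting boxes and scanning a range of moment orders.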

  12. Fractal and multifractal analyses of bipartite networks.

    PubMed

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-03-31

Bipartite networks have attracted considerable interest in various fields. The fractality and multifractality of unipartite (classical) networks have been studied in recent years, but these properties have not previously been examined for bipartite networks. In this paper, we unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses on a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m) and Delicious data sets and the (u, v)-flower model, while we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attribute of bipartite networks, which have two different types of nodes, we assign different weights to the nodes of each class and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions.

  13. Fractal and multifractal analyses of bipartite networks

    PubMed Central

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-01-01

Bipartite networks have attracted considerable interest in various fields. The fractality and multifractality of unipartite (classical) networks have been studied in recent years, but these properties have not previously been examined for bipartite networks. In this paper, we unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses on a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m) and Delicious data sets and the (u, v)-flower model, while we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attribute of bipartite networks, which have two different types of nodes, we assign different weights to the nodes of each class and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions. PMID:28361962

  14. South African Teachers' Ability to Argue: The Emergence of Inclusive Argumentation

    ERIC Educational Resources Information Center

    Scholtz, Zena; Braund, Martin; Hodges, Merle; Koopman, Robert; Lubben, Fred

    2008-01-01

    This paper explores the argumentation ability of ten science teachers in two South African schools on opposite ends of the resource spectrum. Toulmin's model is used to analyse individual contributions in six group discussions. The findings show that levels of argumentation improve with teachers' involvement in the development of teaching…

  15. Teacher Education in Portugal: Analysing Changes using the ATEE-RDC19 Scenario Methodology.

    ERIC Educational Resources Information Center

    Sousa, Jesus Maria

    2003-01-01

    Reviews the development of teacher education in Portugal since the 1974 revolution, which brought the country to democracy. Using the Association for Teacher Education in Europe's scenario model, the paper describes the hidden philosophies underlying changes that are occurring and shows how teacher education has evolved from a romantic, idealistic…

  16. Do Nondomestic Undergraduates Choose a Major Field in Order to Maximize Grade Point Averages?

    ERIC Educational Resources Information Center

    Bergman, Matthew E.; Fass-Holmes, Barry

    2016-01-01

    The authors investigated whether undergraduates attending an American West Coast public university who were not U.S. citizens (nondomestic) maximized their grade point averages (GPA) through their choice of major field. Multiple regression hierarchical linear modeling analyses showed that major field's effect size was small for these…

  17. Determinants of Student Attitudes toward Team Exams

    ERIC Educational Resources Information Center

    Reinig, Bruce A.; Horowitz, Ira; Whittenburg, Gene

    2014-01-01

    We examine how student attitudes toward their group, learning method, and perceived development of professional skills are initially shaped and subsequently evolve through multiple uses of team exams. Using a Tobit regression model to analyse a sequence of 10 team quizzes given in a graduate-level tax accounting course, we show that there is an…

  18. Early Experience and the Development of Cognitive Competence: Some Theoretical and Methodological Issues.

    ERIC Educational Resources Information Center

    Ulvund, Stein Erik

    1982-01-01

Argues that in analyzing effects of early experience on the development of cognitive competence, theoretical analyses as well as empirical investigations should be based on a transactional model of development. Shows that the optimal stimulation hypothesis, particularly the enhancement prediction, seems to represent a transactional approach to the study of…

  19. The Integration of Genetic Propensities into Social-Control Models of Delinquency and Violence among Male Youths

    ERIC Educational Resources Information Center

    Guo, Guang; Roettger, Michael E.; Cai, Tianji

    2008-01-01

    This study, drawing on approximately 1,100 males from the National Longitudinal Study of Adolescent Health, demonstrates the importance of genetics, and genetic-environmental interactions, for understanding adolescent delinquency and violence. Our analyses show that three genetic polymorphisms--specifically, the 30-bp promoter-region variable…

  20. Age, Period and Cohort Effects on Social Capital

    ERIC Educational Resources Information Center

    Schwadel, Philip; Stout, Michael

    2012-01-01

    Researchers hypothesize that social capital in the United States is not just declining, but that it is declining across "generations" or birth cohorts. Testing this proposition, we examine changes in social capital using age-period-cohort intrinsic estimator models. Results from analyses of 1972-2010 General Social Survey data show (1)…

  1. Analysis of Radiation Exposure for Troop Observers, Exercise Desert Rock V, Operation Upshot-Knothole.

    DTIC Science & Technology

    1981-04-28

on initial doses. Residual doses are determined through an automated procedure that utilizes raw data in regression analyses to fit space-time models...show their relationship to the observer positions. The computer-calculated doses do not reflect the presence of the human body in the radiological

  2. Grape stalks biomass as raw material for activated carbon production: synthesis, characterization and adsorption ability

    NASA Astrophysics Data System (ADS)

    Hashemi Shahraki, Zahra; Sharififard, Hakimeh; Lashanizadegan, Asghar

    2018-05-01

In order to produce activated carbon from grape stalks, this biomass was chemically activated with KOH. Characterization methods including FTIR, BET, SEM, Boehm titration and pHzpc measurement were applied to the produced carbon. The ability of the produced activated carbon to remove cadmium from aqueous solution was evaluated using Central Composite Design methodology; the effects of the process parameters were analysed, and the optimum processing conditions were determined using statistical methods. To characterize the equilibrium behaviour of the adsorption process, the equilibrium data were analysed with the Langmuir, Freundlich and R-D isotherm models. The results indicated that the adsorption process is a monolayer process, and the adsorption capacity of the prepared activated carbon was 140.84 mg L-1. Analysis of the kinetics data showed that the pseudo-second-order and Elovich models fitted the kinetics results well, suggesting that chemical adsorption dominates. The regenerability results showed that the prepared activated carbon retains a reasonable adsorption capacity for cadmium after five adsorption/desorption cycles.
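The isotherm-fitting step such studies describe can be sketched with a nonlinear least-squares fit of the Langmuir model, q_e = q_max·K_L·C_e / (1 + K_L·C_e), to equilibrium data. The data points below are synthetic, generated from assumed parameters with a little noise; they are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    """Langmuir isotherm: monolayer uptake q_e as a function of C_e."""
    return q_max * k_l * ce / (1 + k_l * ce)

# synthetic equilibrium data: C_e (mg/L) and q_e, assumed q_max=140, K_L=0.05
ce = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)
q_true = langmuir(ce, 140.0, 0.05)
rng = np.random.default_rng(0)
qe = q_true * (1 + rng.normal(0, 0.02, ce.size))   # 2% measurement noise

(q_max_fit, k_l_fit), _ = curve_fit(langmuir, ce, qe, p0=(100.0, 0.01))
print(f"q_max ~ {q_max_fit:.1f}, K_L ~ {k_l_fit:.3f} L/mg")
```

A Freundlich fit would swap in q_e = K_F·C_e^(1/n); comparing the two fits' residuals is what supports the monolayer (Langmuir) interpretation.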

  3. Effectiveness of the Flipped Classroom Model in Anatomy and Physiology Laboratory Courses at a Hispanic Serving Institution

    NASA Astrophysics Data System (ADS)

    Sanchez, Gerardo

A flipped laboratory model involves significant preparation by the students on lab material prior to entering the laboratory, which allows laboratory time to be focused on active learning through experiments. The aim of this study was to observe changes in student performance through the transition from a traditional laboratory format to a flipped format. The data showed that for both Anatomy and Physiology (I and II) laboratories, a more normal distribution of grades was observed once labs were flipped, and lecture grade averages increased. Chi-square and analysis of variance tests showed the grade changes to be statistically significant, with a p value of less than 0.05 in both analyses. Regression analyses showed decreasing values after the flipped labs were introduced, with an r² value of .485 for A&P I and .564 for A&P II. The results indicate improved scores for the lecture part of the A&P course, fewer outlying scores above 100, and score distributions approaching a more normal distribution.

  4. A multidisciplinary selection model for youth soccer: the Ghent Youth Soccer Project

    PubMed Central

    Vaeyens, R; Malina, R M; Janssens, M; Van Renterghem, B; Bourgois, J; Vrijens, J; Philippaerts, R M

    2006-01-01

    Objectives To determine the relationships between physical and performance characteristics and level of skill in youth soccer players aged 12–16 years. Methods Anthropometry, maturity status, functional and sport‐specific parameters were assessed in elite, sub‐elite, and non‐elite youth players in four age groups: U13 (n = 117), U14 (n = 136), U15 (n = 138) and U16 (n = 99). Results Multivariate analyses of covariance by age group with maturity status as the covariate showed that elite players scored better than the non‐elite players on strength, flexibility, speed, aerobic endurance, anaerobic capacity and several technical skills (p<0.05). Stepwise discriminant analyses showed that running speed and technical skills were the most important characteristics in U13 and U14 players, while cardiorespiratory endurance was more important in U15 and U16 players. The results suggest that discriminating characteristics change with competitive age levels. Conclusions Characteristics that discriminate youth soccer players vary by age group. Talent identification models should thus be dynamic and provide opportunities for changing parameters in a long‐term developmental context. PMID:16980535

  5. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
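The core idea of ODE-constrained mixture modelling can be shown in a few lines: each subpopulation obeys the same ODE with its own rate, so the component means of the snapshot mixture are pinned to the ODE solution rather than fitted freely. The one-state model dx/dt = k(1 - x), the rates, weights and noise level below are all invented for illustration; the paper's pathway model is far richer.

```python
import math
import random

def ode_solution(k, t, x0=0.0):
    # closed-form solution of dx/dt = k*(1 - x)
    return 1.0 - (1.0 - x0) * math.exp(-k * t)

def simulate_snapshot(t, n=2000, w=0.4, k1=0.2, k2=2.0, sigma=0.05, seed=1):
    """Draw a snapshot from a two-subpopulation mixture (assumed parameters)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        k = k1 if rng.random() < w else k2   # pick a subpopulation
        data.append(ode_solution(k, t) + rng.gauss(0.0, sigma))
    return data

def mixture_loglik(data, t, w, k1, k2, sigma=0.05):
    """Mixture log-likelihood with component means fixed by the ODE."""
    m1, m2 = ode_solution(k1, t), ode_solution(k2, t)
    const = 1.0 / (sigma * math.sqrt(2 * math.pi))
    ll = 0.0
    for x in data:
        p1 = const * math.exp(-0.5 * ((x - m1) / sigma) ** 2)
        p2 = const * math.exp(-0.5 * ((x - m2) / sigma) ** 2)
        ll += math.log(w * p1 + (1 - w) * p2)
    return ll

data = simulate_snapshot(t=1.0)
# the generating parameters should outscore a mis-specified alternative
ll_true = mixture_loglik(data, 1.0, w=0.4, k1=0.2, k2=2.0)
ll_bad = mixture_loglik(data, 1.0, w=0.4, k1=1.0, k2=1.2)
print(ll_true > ll_bad)
```

Maximizing this constrained likelihood over (w, k1, k2) simultaneously recovers the subpopulation structure and the kinetic rates, which is what distinguishes the approach from fitting an unconstrained Gaussian mixture.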

  6. MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    PubMed Central

    Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
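A Pareto front between two cellular objectives, as computed by MultiMetEval, can be sketched with an epsilon-constraint loop over linear programs: fix one objective's flux, maximize the other, repeat. The two-reaction "network" below is invented; only the multi-objective recipe mirrors the idea.

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: biomass and product fluxes compete for a shared precursor,
# v_biomass + 2*v_product <= 10 (entirely hypothetical stoichiometry).
A_ub = [[1.0, 2.0]]
b_ub = [10.0]
bounds = [(0, None), (0, None)]   # v = [v_biomass, v_product]

front = []
for v_prod in np.linspace(0.0, 5.0, 6):
    # epsilon constraint: fix the product flux, maximise biomass
    res = linprog(c=[-1.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[0.0, 1.0]], b_eq=[v_prod], bounds=bounds)
    front.append((v_prod, -res.fun))

for v_prod, v_bio in front:
    print(f"product flux {v_prod:.1f} -> max biomass {v_bio:.1f}")
```

In this linear toy the trade-off is a straight line; in genome-scale models the front can show the discrete "metabolic switches" the abstract describes.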

  7. Implications of cellular models of dopamine neurons for disease

    PubMed Central

    Evans, Rebekah C.; Oster, Andrew M.; Pissadaki, Eleftheria K.; Drion, Guillaume; Kuznetsov, Alexey S.; Gutkin, Boris S.

    2016-01-01

    This review addresses the present state of single-cell models of the firing pattern of midbrain dopamine neurons and the insights that can be gained from these models into the underlying mechanisms for diseases such as Parkinson's, addiction, and schizophrenia. We will explain the analytical technique of separation of time scales and show how it can produce insights into mechanisms using simplified single-compartment models. We also use morphologically realistic multicompartmental models to address spatially heterogeneous aspects of neural signaling and neural metabolism. Separation of time scale analyses are applied to pacemaking, bursting, and depolarization block in dopamine neurons. Differences in subpopulations with respect to metabolic load are addressed using multicompartmental models. PMID:27582295

  8. Dynamics and spatial structure of ENSO from re-analyses versus CMIP5 models

    NASA Astrophysics Data System (ADS)

    Serykh, Ilya; Sonechkin, Dmitry

    2016-04-01

Based on the mathematical idea of the so-called strange nonchaotic attractor (SNA) in quasi-periodically forced dynamical systems, the currently available re-analysis data are considered. It is found that the El Niño - Southern Oscillation (ENSO) is driven not only by the seasonal heating, but also by three more external periodicities (incommensurate with the annual period) associated with the ~18.6-year lunar-solar nutation of the Earth's rotation axis, the ~11-year sunspot activity cycle and the ~14-month Chandler wobble in the Earth's pole motion. Because their periods are incommensurate, the four forcings never act on the system at mutually aligned moments. As a result, the ENSO time series look very complex (strange in mathematical terms) but nonchaotic. The power spectra of ENSO indices reveal numerous peaks located at periods that are multiples of the above periodicities, as well as at their sub- and super-harmonics. In spite of this complexity, a mutual order seems to be inherent to the ENSO time series and their spectra. This order reveals itself in a scaling of the power-spectrum peaks, and of the respective rhythms in the ENSO dynamics, that resembles the power spectrum and dynamics of an SNA. This means that, in principle, there is no limit to forecasting ENSO; in practice, it opens the possibility of forecasting ENSO several years ahead. Global spatial structures of anomalies during El Niño and power spectra of ENSO indices from re-analyses are compared with the respective output quantities of the CMIP5 climate models (the Historical experiment). It is found that the models reproduce global spatial structures of near-surface temperature and sea-level pressure anomalies during El Niño very similar to these fields in the re-analyses considered, but the power spectra of the ENSO indices from the CMIP5 models show no peaks at the same periods as the re-analysis spectra. We suppose that it is possible to improve the modeled rhythms if the aforementioned external periodicities are taken into explicit consideration in the models.

  9. New substitution models for rooting phylogenetic trees.

    PubMed

    Williams, Tom A; Heaps, Sarah E; Cherlin, Svetlana; Nye, Tom M W; Boys, Richard J; Embley, T Martin

    2015-09-26

    The root of a phylogenetic tree is fundamental to its biological interpretation, but standard substitution models do not provide any information on its position. Here, we describe two recently developed models that relax the usual assumptions of stationarity and reversibility, thereby facilitating root inference without the need for an outgroup. We compare the performance of these models on a classic test case for phylogenetic methods, before considering two highly topical questions in evolutionary biology: the deep structure of the tree of life and the root of the archaeal radiation. We show that all three alignments contain meaningful rooting information that can be harnessed by these new models, thus complementing and extending previous work based on outgroup rooting. In particular, our analyses exclude the root of the tree of life from the eukaryotes or Archaea, placing it on the bacterial stem or within the Bacteria. They also exclude the root of the archaeal radiation from several major clades, consistent with analyses using other rooting methods. Overall, our results demonstrate the utility of non-reversible and non-stationary models for rooting phylogenetic trees, and identify areas where further progress can be made. © 2015 The Authors.

  10. Prediction on fracture risk of femur with Osteogenesis Imperfecta using finite element models: Preliminary study

    NASA Astrophysics Data System (ADS)

    Wanna, S. B. C.; Basaruddin, K. S.; Mat Som, M. H.; Mohamad Hashim, M. S.; Daud, R.; Majid, M. S. Abdul; Sulaiman, A. R.

    2017-10-01

Osteogenesis imperfecta (OI) is a genetic disease that affects bone geometry and, in severe cases, can be fatal. A key clinical issue is the prediction of bone fracture by orthopaedic surgeons: the ability of the bone to withstand force before fracturing is often the main concern. Therefore, the objective of the present preliminary study was to investigate the fracture risk associated with OI bone, particularly the femur, when subjected to self-weight. Finite element analysis (FEA) was employed to reconstruct the OI bone model and analyse the mechanical stress response of the femur before fracture. Ten deformed models with different severities of OI were developed, and a force representing the patient's self-weight was applied to the reconstructed models in a static analysis. Stress and fracture risk were observed and analysed throughout the simulation. None of the deformed models experienced fracture, although the fracture risk increased with the severity of the deformation. The results showed that all deformed femur models were able to bear the force without fracture when subjected to self-weight alone.
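The fracture-risk comparison behind such FE studies reduces to a stress-versus-strength ratio. Here is a deliberately crude beam-theory version for a femoral shaft idealized as a hollow cylinder under offset body-weight loading; every geometry, load and strength number is an illustrative assumption, not a value from the study's models.

```python
import math

body_mass = 30.0           # kg, a child patient (assumed)
force = body_mass * 9.81   # N, self-weight carried by one femur (worst case)

r_out, r_in = 0.010, 0.006  # m, outer/inner shaft radius (assumed)
offset = 0.04               # m, load offset producing bending (assumed)

area = math.pi * (r_out**2 - r_in**2)               # cross-sectional area
second_moment = math.pi / 4 * (r_out**4 - r_in**4)  # area moment of inertia

axial = force / area                                # Pa, compression
bending = force * offset * r_out / second_moment    # Pa, peak bending stress
peak_stress = axial + bending

strength = 70e6  # Pa, reduced cortical strength assumed for OI bone
risk = peak_stress / strength   # > 1 would indicate predicted fracture
print(f"peak stress {peak_stress/1e6:.1f} MPa, risk factor {risk:.2f}")
```

With these assumed numbers the risk factor stays well below 1, consistent with the abstract's finding that self-weight alone did not fracture any of the deformed models; an FE model replaces the closed-form stress with a full 3D field over the deformed geometry.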

  11. SaLEM (v1.0) - the Soil and Landscape Evolution Model (SaLEM) for simulation of regolith depth in periglacial environments

    NASA Astrophysics Data System (ADS)

    Bock, Michael; Conrad, Olaf; Günther, Andreas; Gehrt, Ernst; Baritz, Rainer; Böhner, Jürgen

    2018-04-01

    We propose the implementation of the Soil and Landscape Evolution Model (SaLEM) for the spatiotemporal investigation of soil parent material evolution following a lithologically differentiated approach. Relevant parts of the established Geomorphic/Orogenic Landscape Evolution Model (GOLEM) have been adapted for an operational Geographical Information System (GIS) tool within the open-source software framework System for Automated Geoscientific Analyses (SAGA), thus taking advantage of SAGA's capabilities for geomorphometric analyses. The model is driven by palaeoclimatic data (temperature, precipitation) representative of periglacial areas in northern Germany over the last 50 000 years. The initial conditions have been determined for a test site by a digital terrain model and a geological model. Weathering, erosion and transport functions are calibrated using extrinsic (climatic) and intrinsic (lithologic) parameter data. First results indicate that our lithologically differentiated SaLEM approach can predict important soil parent material properties (particularly regolith depth) in space and time. Future research will focus on validating the results against field data and on evaluating the influence of discrete events (mass movements, floods) on soil parent material formation.
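
    The abstract does not reproduce SaLEM's calibrated weathering functions; a generic exponential soil-production law of the kind common in landscape evolution modelling can nevertheless sketch how regolith depth evolves over the 50 000-year window. All parameter values below are assumptions:

```python
import math

# Illustrative regolith-depth evolution under a generic exponential
# soil-production law: dh/dt = P0 * exp(-h / h0) - E.
# This is a common landscape-evolution formulation, not SaLEM's
# calibrated weathering/erosion functions; all values are assumed.
P0 = 0.1e-3      # assumed maximum weathering rate on bare bedrock (m/yr)
h0 = 0.5         # assumed e-folding depth (m)
E = 0.02e-3      # assumed erosion rate (m/yr)
dt = 100.0       # time step (yr)

h = 0.0          # initial regolith depth (m)
for _ in range(int(50_000 / dt)):          # the 50 000-year window
    dh = (P0 * math.exp(-h / h0) - E) * dt
    h = max(h + dh, 0.0)

print(f"regolith depth after 50 kyr: {h:.2f} m")
```

    The depth approaches the equilibrium h0 · ln(P0/E), at which production balances erosion.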

  12. [A cost-benefit analysis of different therapeutic methods in menorrhagia].

    PubMed

    Kirschner, R

    1995-02-20

    When deciding on the appropriate treatment for various medical conditions, it has been usual to consider medical knowledge, norms and experience. Increasingly, economic factors and principles are being introduced by management in the form of health-economic and pharmaco-economic analyses, enforced as budgetary cuts and as demands for rationalisation and increased efficiency. Economic evaluations require the construction of analytical models. We used DRG information, National Health reimbursements and pharmaceutical retail prices to carry out a cost-efficiency analysis of treatments for menorrhagia. The analysis showed better cost-efficiency for certain pharmacological treatments than for surgery.

  13. Performance of time-varying predictors in multilevel models under an assumption of fixed or random effects.

    PubMed

    Baird, Rachel; Maxwell, Scott E

    2016-06-01

    Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
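
    The fixed- versus random-effect choice discussed above can be sketched with statsmodels on simulated longitudinal data (the simulation design, variable names and effect sizes below are illustrative assumptions, not the paper's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of treating a time-varying predictor's effect as fixed vs. random,
# on simulated data in which the slope truly varies across subjects.
rng = np.random.default_rng(0)
n_subj, n_obs = 100, 8
subj = np.repeat(np.arange(n_subj), n_obs)
x = rng.normal(size=n_subj * n_obs)            # time-varying predictor
b = rng.normal(0.0, 0.5, size=n_subj)          # true random intercepts
u = rng.normal(0.0, 0.5, size=n_subj)          # true random slopes
y = b[subj] + 1.0 * x + u[subj] * x + rng.normal(0.0, 1.0, size=n_subj * n_obs)
df = pd.DataFrame({"y": y, "x": x, "subj": subj})

# Model A: effect of x treated as fixed (random intercept only).
fixed = smf.mixedlm("y ~ x", df, groups="subj").fit()
# Model B: effect of x allowed to vary randomly across subjects.
random_slope = smf.mixedlm("y ~ x", df, groups="subj", re_formula="~x").fit()

print("fixed-effect-only slope:", round(fixed.fe_params["x"], 2))
print("random-slope model slope:", round(random_slope.fe_params["x"], 2))
```

    Both fits recover the average slope here, but as the paper shows, the fixed-effect model's standard errors (and hence coverage) are misleading when the slope is truly random; the two specifications could then be compared with a likelihood ratio test.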

  14. Long-Branch Attraction Bias and Inconsistency in Bayesian Phylogenetics

    PubMed Central

    Kolaczkowski, Bryan; Thornton, Joseph W.

    2009-01-01

    Bayesian inference (BI) of phylogenetic relationships uses the same probabilistic models of evolution as its precursor maximum likelihood (ML), so BI has generally been assumed to share ML's desirable statistical properties, such as largely unbiased inference of topology given an accurate model and increasingly reliable inferences as the amount of data increases. Here we show that BI, unlike ML, is biased in favor of topologies that group long branches together, even when the true model and prior distributions of evolutionary parameters over a group of phylogenies are known. Using experimental simulation studies and numerical and mathematical analyses, we show that this bias becomes more severe as more data are analyzed, causing BI to infer an incorrect tree as the maximum a posteriori phylogeny with asymptotically high support as sequence length approaches infinity. BI's long branch attraction bias is relatively weak when the true model is simple but becomes pronounced when sequence sites evolve heterogeneously, even when this complexity is incorporated in the model. This bias—which is apparent under both controlled simulation conditions and in analyses of empirical sequence data—also makes BI less efficient and less robust to the use of an incorrect evolutionary model than ML. Surprisingly, BI's bias is caused by one of the method's stated advantages—that it incorporates uncertainty about branch lengths by integrating over a distribution of possible values instead of estimating them from the data, as ML does. Our findings suggest that trees inferred using BI should be interpreted with caution and that ML may be a more reliable framework for modern phylogenetic analysis. PMID:20011052

  15. Long-branch attraction bias and inconsistency in Bayesian phylogenetics.

    PubMed

    Kolaczkowski, Bryan; Thornton, Joseph W

    2009-12-09

    Bayesian inference (BI) of phylogenetic relationships uses the same probabilistic models of evolution as its precursor maximum likelihood (ML), so BI has generally been assumed to share ML's desirable statistical properties, such as largely unbiased inference of topology given an accurate model and increasingly reliable inferences as the amount of data increases. Here we show that BI, unlike ML, is biased in favor of topologies that group long branches together, even when the true model and prior distributions of evolutionary parameters over a group of phylogenies are known. Using experimental simulation studies and numerical and mathematical analyses, we show that this bias becomes more severe as more data are analyzed, causing BI to infer an incorrect tree as the maximum a posteriori phylogeny with asymptotically high support as sequence length approaches infinity. BI's long branch attraction bias is relatively weak when the true model is simple but becomes pronounced when sequence sites evolve heterogeneously, even when this complexity is incorporated in the model. This bias--which is apparent under both controlled simulation conditions and in analyses of empirical sequence data--also makes BI less efficient and less robust to the use of an incorrect evolutionary model than ML. Surprisingly, BI's bias is caused by one of the method's stated advantages--that it incorporates uncertainty about branch lengths by integrating over a distribution of possible values instead of estimating them from the data, as ML does. Our findings suggest that trees inferred using BI should be interpreted with caution and that ML may be a more reliable framework for modern phylogenetic analysis.

  16. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    NASA Astrophysics Data System (ADS)

    Mehran, A.; AghaKouchak, A.; Phillips, T. J.

    2014-02-01

    The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire distributions and their upper tails. The results of the volumetric hit index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas but that their replication of observed precipitation over arid regions and certain subcontinental regions (e.g., northern Eurasia, eastern Russia, and central Australia) is problematical. Overall, the VHI of the multimodel ensemble mean and median are also superior to that of the individual CMIP5 models. However, at high quantiles of reference data (75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and Central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, intermodel variations in bias over Australia and Amazonia are considerable. The quantile bias analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. It is found that a simple mean field bias removal improves the overall B and VHI values but does not make a significant improvement at high quantiles of precipitation.
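
    The bias diagnostics can be illustrated on synthetic data. The formulations below (overall bias as a ratio of totals, upper-tail bias above a reference quantile, and mean-field bias removal) are plausible stand-ins for the paper's exact metric definitions:

```python
import numpy as np

# Sketch of bias diagnostics on synthetic "observed" and "model" precipitation.
# These formulations are plausible stand-ins, not the paper's exact metrics.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=30.0, size=5000)        # reference field
sim = 1.2 * rng.gamma(shape=2.0, scale=30.0, size=5000)  # biased model field

def total_bias(sim, obs):
    """Ratio of totals; 1.0 means unbiased overall."""
    return sim.sum() / obs.sum()

def quantile_bias(sim, obs, q):
    """Upper-tail bias above the q-th quantile of each field."""
    return sim[sim >= np.quantile(sim, q)].sum() / obs[obs >= np.quantile(obs, q)].sum()

corrected = sim * obs.mean() / sim.mean()   # simple mean-field bias removal

print("total bias before/after:",
      round(total_bias(sim, obs), 2), round(total_bias(corrected, obs), 2))
print("90th-percentile bias before/after:",
      round(quantile_bias(sim, obs, 0.9), 2), round(quantile_bias(corrected, obs, 0.9), 2))
```

    In this toy the bias is purely multiplicative, so the mean-field correction also repairs the tail; the paper's point is that real model biases vary with intensity and location, so the correction does not help at high quantiles.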

  17. Forecasting and prediction of scorpion sting cases in Biskra province, Algeria, using a seasonal autoregressive integrated moving average model.

    PubMed

    Selmane, Schehrazad; L'Hadj, Mohamed

    2016-01-01

    The aims of this study were to highlight some epidemiological aspects of scorpion envenomations, to analyse and interpret the available data for Biskra province, Algeria, and to develop a forecasting model for scorpion sting cases in Biskra province, which records the highest number of scorpion stings in Algeria. In addition to analysing the epidemiological profile of scorpion stings that occurred throughout the year 2013, we used the Box-Jenkins approach to fit a seasonal autoregressive integrated moving average (SARIMA) model to the monthly recorded scorpion sting cases in Biskra from 2000 to 2012. The epidemiological analysis revealed that scorpion stings were reported continuously throughout the year, with peaks in the summer months. The most affected age group was 15 to 49 years old, with a male predominance. The most prone human body areas were the upper and lower limbs. The majority of cases (95.9%) were classified as mild envenomations. The time series analysis showed that a SARIMA(5,1,0)×(0,1,1)₁₂ model offered the best fit to the scorpion sting surveillance data. This model was used to predict scorpion sting cases for the year 2013, and the fitted data showed considerable agreement with the actual data. SARIMA models are useful for monitoring scorpion sting cases, and provide an estimate of the variability to be expected in future scorpion sting cases. This knowledge is helpful in predicting whether an unusual situation is developing or not, and could therefore assist decision-makers in strengthening the province's prevention and control measures and in initiating rapid response measures.

  18. Method for the technical, financial, economic and environmental pre-feasibility study of geothermal power plants by RETScreen - Ecuador's case study.

    PubMed

    Moya, Diego; Paredes, Juan; Kaparaju, Prasad

    2018-01-01

    RETScreen provides a proven methodology focused on pre-feasibility studies. Although this tool has been used to carry out a number of pre-feasibility studies of solar, wind and hydropower projects, that is not the case for geothermal developments. This method paper proposes a systematic methodology to cover all the necessary inputs of the RETScreen-International Geothermal Project Model. As a case study, geothermal power plant developments in the Ecuadorian context were analysed with the RETScreen-International Geothermal Project Model. Three scenarios were considered for the analyses. Scenarios I and II included incentives of 132.1 USD/MWh for electricity generation and grants of 3 million USD. Scenario III considered the geothermal project with an electricity export price of 49.3 USD/MWh and was further divided into cases IIIA and IIIB: Scenario IIIA included a 3 million USD grant, while Scenario IIIB included an income of 8.9 USD/MWh from selling heat in direct applications. Modelling results showed that the binary power cycle was the most suitable geothermal technology to produce electricity, along with aquaculture and greenhouse heating as direct-use applications, in all scenarios. Financial analyses showed that the debt payment would be 5.36 million USD/year in Scenarios I and III; the corresponding value for Scenario II was 7.06 million USD/year. The Net Present Value (NPV) was positive for all studied scenarios except Scenario IIIA. Overall, Scenario II was identified as the most feasible project due to its positive NPV and short payback period. Scenario IIIB could become financially attractive by selling heat for direct applications. The total initial investment for a 22 MW geothermal power plant was 114.3 million USD (at 2017 costs). Economic analysis showed annual savings of 24.3 million USD by avoiding fossil-fuel electricity generation. More than 184,000 tCO2 eq. could be avoided annually.
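
    The scenario comparison rests on a standard net-present-value calculation, sketched below. Only the 114.3 million USD investment figure comes from the study; the annual net income, project lifetime and discount rate are illustrative assumptions:

```python
# Hedged sketch of the NPV calculation behind the scenario comparison.
# Only the initial investment figure is taken from the study; the annual
# net income, lifetime and discount rate are illustrative assumptions,
# not RETScreen outputs.
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (investment) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

initial_investment = -114.3e6          # 22 MW plant (from the study, 2017 USD)
annual_net_income = 18.0e6             # assumed net revenue per year
lifetime_years = 25                    # assumed project lifetime
discount_rate = 0.10                   # assumed

flows = [initial_investment] + [annual_net_income] * lifetime_years
print(f"NPV: {npv(discount_rate, flows) / 1e6:.1f} million USD")
```

    A scenario is financially attractive when its NPV is positive at the chosen discount rate, which is the criterion the study applies to Scenarios I-III.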

  19. Evidence of a Major Gene From Bayesian Segregation Analyses of Liability to Osteochondral Diseases in Pigs

    PubMed Central

    Kadarmideen, Haja N.; Janss, Luc L. G.

    2005-01-01

    Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC records, and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of the MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384–37.81) compared to the polygenic variance (σ_u²). Consequently, heritabilities for a mixed inheritance (range 0.65–0.90) were much higher than the heritabilities from the polygenes alone. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The variant of the MITM with an informative prior on σ_u² showed significant improvement in the marginal distributions and in the accuracy of the parameters. The MITM with a "reduced polygenic model" for the parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for the fixed effects avoided unidentifiability of these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for investigating the genetic inheritance of this disease in other animals as well as in humans. PMID:16020792

  20. Microstructural characterization, petrophysics and upscaling - from porous media to fractural media

    NASA Astrophysics Data System (ADS)

    Liu, J.; Liu, K.; Regenauer-Lieb, K.

    2017-12-01

    We present an integrated study for the characterization of complex geometry, fluid transport features and mechanical deformation at the micro-scale, and for the upscaling of properties, using microtomographic data: We show how to integrate microstructural characterization by the volume fraction, specific surface area, connectivity (percolation), and the shape and orientation of microstructures with the identification of individual fractures from a 3D fractural network. In a first step we use stochastic analyses of microstructures to determine the geometric RVE (representative volume element) of samples. We proceed by determining the size of a thermodynamic RVE by computing upper/lower bounds of entropy production through Finite Element (FE) analyses on a series of models of increasing size. The minimum size for a thermodynamic RVE is identified on the basis of the convergence criteria of the FE simulations. Petrophysical properties (permeability and mechanical parameters, including plastic strength) are then computed numerically if the thermodynamic convergence criteria are fulfilled. Upscaling of properties is performed by means of percolation theory. The percolation threshold is detected by using a shrinking/expanding algorithm on static micro-CT images of rocks. Parameters of the scaling laws can be extracted from quantitative analyses and/or numerical simulations on a series of models with similar structures but different porosities close to the percolation threshold. Different rock samples are analyzed, and the characterizing parameters of porous/fractural rocks are obtained. Synthetic derivative models of the microstructure are used to estimate the relationships between porosity and mechanical properties. Results obtained from synthetic sandstones show that yield stress, cohesion and the angle of friction are linearly proportional to porosity. Our integrated study shows that digital rock technology can provide meaningful parameters for effective upscaling if thermodynamic volume averaging satisfies the convergence criteria. For strongly heterogeneous rocks, however, the thermodynamic convergence criteria may not be met; a continuum approach cannot be justified in this case.
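
    The spanning-cluster test underlying percolation-threshold detection can be sketched as follows; the random volume stands in for a segmented micro-CT image, and the threshold sweep stands in for the shrinking/expanding of the pore phase:

```python
import numpy as np
from scipy import ndimage

# Sketch of a percolation (spanning-cluster) check of the kind used to
# locate the percolation threshold on segmented micro-CT volumes.
# The random volume below is a stand-in for a real binarized rock image.
def percolates(pores):
    """True if the pore phase connects the first and last slice along axis 0."""
    labels, _ = ndimage.label(pores)
    spanning = (set(np.unique(labels[0])) & set(np.unique(labels[-1]))) - {0}
    return bool(spanning)

rng = np.random.default_rng(1)
volume = rng.random((40, 40, 40))

# Sweep the pore threshold (a stand-in for shrinking/expanding the pore
# phase) and report where a spanning cluster first appears.
for level in np.linspace(0.05, 0.60, 12):
    if percolates(volume < level):      # porosity is approximately `level`
        print(f"pore phase first percolates near porosity {level:.2f}")
        break
```

    For uncorrelated 3D site percolation this transition is expected near a porosity of about 0.31; real rock microstructures are spatially correlated, which is why the threshold must be measured on the images themselves.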

  1. Finite difference time domain (FDTD) method for modeling the effect of switched gradients on the human body in MRI.

    PubMed

    Zhao, Huawei; Crozier, Stuart; Liu, Feng

    2002-12-01

    Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model. Copyright 2002 Wiley-Liss, Inc.
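
    A minimal normalized 1D FDTD loop, shown below, marks where source injection happens; this toy free-space sketch is not the authors' low-frequency scheme or body model, but it locates the step at which the derivative of the source waveform would be injected instead of the waveform itself:

```python
import numpy as np

# Minimal normalized 1D free-space FDTD sketch with a soft source.
# This toy illustration is not the authors' scheme or body model; it only
# marks the injection step where, per the paper, using the derivative of
# the source waveform can improve low-frequency accuracy.
nz, nt, src = 200, 300, 50
ez = np.zeros(nz)
hy = np.zeros(nz)

t = np.arange(nt)
pulse = np.exp(-((t - 40.0) / 12.0) ** 2)   # Gaussian input waveform
d_pulse = np.gradient(pulse)                # derivative-form alternative

for n in range(nt):
    ez[1:] += 0.5 * (hy[:-1] - hy[1:])      # E-field update (Courant number 0.5)
    ez[src] += pulse[n]                     # soft source; or d_pulse[n] for
                                            # derivative-form injection
    hy[:-1] += 0.5 * (ez[:-1] - ez[1:])     # H-field update

# The injected pulse has propagated away from the source location.
print("peak |Ez| away from source:", float(np.abs(ez[120:]).max()))
```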

  2. The role of photorespiration during the evolution of C4 photosynthesis in the genus Flaveria

    PubMed Central

    Mallmann, Julia; Heckmann, David; Bräutigam, Andrea; Lercher, Martin J; Weber, Andreas PM; Westhoff, Peter; Gowik, Udo

    2014-01-01

    C4 photosynthesis represents a most remarkable case of convergent evolution of a complex trait, which includes the reprogramming of the expression patterns of thousands of genes. Anatomical, physiological, and phylogenetic analyses as well as computational modeling indicate that the establishment of a photorespiratory carbon pump (termed C2 photosynthesis) is a prerequisite for the evolution of C4. However, a mechanistic model explaining the tight connection between the evolution of C4 and C2 photosynthesis is currently lacking. Here we address this question through comparative transcriptomic and biochemical analyses of closely related C3, C3–C4, and C4 species, combined with Flux Balance Analysis constrained through a mechanistic model of carbon fixation. We show that C2 photosynthesis creates a misbalance in nitrogen metabolism between bundle sheath and mesophyll cells. Rebalancing nitrogen metabolism requires anaplerotic reactions that resemble at least parts of a basic C4 cycle. Our findings thus show how C2 photosynthesis represents a pre-adaptation for the C4 system, where the evolution of the C2 system establishes important C4 components as a side effect. DOI: http://dx.doi.org/10.7554/eLife.02478.001 PMID:24935935
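
    The flux balance analysis step can be illustrated with a toy three-reaction network solved as a linear program; the network is invented for illustration and is not the authors' model of bundle sheath and mesophyll metabolism:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA): maximize a target flux subject to the
# steady-state mass balance S @ v = 0. The three-reaction network below is
# invented for illustration; it is not the authors' carbon-fixation model.
# Reactions: R1 uptake -> A, R2 A -> B, R3 B -> product
S = np.array([
    [1, -1, 0],    # metabolite A balance
    [0, 1, -1],    # metabolite B balance
])
bounds = [(0, 10), (0, None), (0, None)]   # uptake flux capped at 10 units

# linprog minimizes, so negate the objective to maximize v3 (product flux).
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x.round(3))   # steady state forces v1 = v2 = v3
```

    Real FBA models use the same structure with thousands of reactions, which is how a mechanistic carbon-fixation constraint can be layered onto a genome-scale network.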

  3. [Partial nucleotomy of the ovine disc as an in vivo model for disc degeneration].

    PubMed

    Guder, E; Hill, S; Kandziora, F; Schnake, K J

    2009-01-01

    The aim of this study was to develop a suitable animal model for the clinical situation of progressive disc degeneration after microsurgical nucleotomy. Twenty sheep underwent standardised partial anterolateral nucleotomy at lumbar segment 3/4. After randomisation, 10 animals were sacrificed after 12 weeks (group 1). The remainder was sacrificed after 48 weeks (group 2). For radiological examination X-rays, MRI and post-mortem CT scans were performed. Lumbar discs L 3/4 with adjacent subchondral trabecular bone were harvested and analysed macroscopically and histologically. An image-analysing computer program was used to measure histomorphometric indices of bone structure. 17 segments could be evaluated. After 12 weeks (group 1) histological and radiological degenerative disc changes were noted. After 48 weeks (group 2), radiological signs in MRI reached statistical significance. Furthermore, group 2 showed significantly more osteophyte formations in CT scans. Histomorphometric changes of the disc and the adjacent vertebral bone structure suggest a significant progressive degenerative remodelling. The facet joints did not show any osteoarthrosis after 48 weeks. Partial nucleotomy of the ovine lumbar disc leads to radiological and histological signs of disc degeneration similar to those seen in humans after microsurgical nucleotomy. The presented in vivo model may be useful to evaluate new orthopaedic treatment strategies.

  4. Systemically Transplanted Bone Marrow-derived Cells Contribute to Dental Pulp Regeneration in a Chimeric Mouse Model.

    PubMed

    Xu, Wenan; Jiang, Shan; Chen, Qiuyue; Ye, Yanyan; Chen, Jiajing; Heng, Boon Chin; Jiang, Qianli; Wu, Buling; Ding, Zihai; Zhang, Chengfei

    2016-02-01

    Migratory cells via blood circulation or cells adjacent to the root apex may potentially participate in dental pulp tissue regeneration or renewal. This study investigated whether systemically transplanted bone marrow cells can contribute to pulp regeneration in a chimeric mouse model. A chimeric mouse model was created through the injection of bone marrow cells from green fluorescent protein (GFP) transgenic C57BL/6 mice into the tail veins of recipient wild-type C57BL/6 mice that had been irradiated with a lethal dose of 8.5 Gy from a high-frequency linear accelerator. These mice were subjected to pulpectomy and pulp revascularization. At 1, 4, and 8 weeks after surgery, in vivo animal imaging and histologic analyses were conducted. In vivo animal imaging showed that the green biofluorescence signal from the transplanted GFP+ cells increased significantly and was maintained at a high level during the first 4 weeks after surgery. Immunofluorescence analyses of tooth specimens collected at 8 weeks postsurgery showed the presence of nestin+/GFP+, α smooth muscle actin (α-SMA)/GFP+, and NeuN/GFP+ cells within the regenerated pulplike tissue. These data confirm that transplanted bone marrow-derived cells can contribute to dental pulp regeneration. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  5. Statistical assessment of changes in extreme maximum temperatures over Saudi Arabia, 1985-2014

    NASA Astrophysics Data System (ADS)

    Raggad, Bechir

    2018-05-01

    In this study, two statistical approaches were adopted in the analysis of observed maximum temperature data collected from fifteen stations over Saudi Arabia during the period 1985-2014. In the first step, the behavior of extreme temperatures was analyzed and their changes were quantified using the Expert Team on Climate Change Detection, Monitoring and Indices definitions. The results showed a general warming trend over most stations in the maximum temperature-related indices during the period of analysis. In the second step, stationary and non-stationary extreme-value analyses were conducted for the temperature data. The results revealed that the non-stationary model with an increasing linear trend in its location parameter outperforms the other models for two-thirds of the stations. Additionally, the 10-, 50-, and 100-year return levels were found to change considerably with time, so that a given maximum temperature can recur with a different T-year return period for most stations. This analysis shows the importance of taking into account the change over time in the estimation of return levels and therefore justifies the use of the non-stationary generalized extreme value distribution model to describe most of the data. Furthermore, these findings are in line with the significant warming trends found in the climate indices analyses.
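
    The return-level estimates can be sketched with a stationary GEV fit in SciPy (a non-stationary fit would additionally let the location parameter trend linearly in time). The synthetic annual maxima below stand in for the station data:

```python
import numpy as np
from scipy.stats import genextreme

# Hedged sketch of a stationary GEV fit and return-level estimation.
# The synthetic annual maxima below are a stand-in for the station data,
# which are not reproduced here.
annual_max = genextreme.rvs(c=0.1, loc=44.0, scale=1.5, size=30, random_state=42)
c, loc, scale = genextreme.fit(annual_max)

# The T-year return level is the value exceeded with probability 1/T per year.
levels = {T: genextreme.isf(1.0 / T, c, loc, scale) for T in (10, 50, 100)}
for T, level in levels.items():
    print(f"{T}-year return level: {level:.1f} °C")
```

    In the non-stationary case the location parameter becomes μ(t) = μ0 + μ1·t, so the return levels themselves shift with time, which is the effect the study reports.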

  6. Modelling of intermittent microwave convective drying: parameter sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food and is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the intermittent microwave power process shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have a significant effect on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change in value; a noticeable effect appears only when it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve; however, the water saturation at the medium surface and in the centre shows different behaviour. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
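
    The one-at-a-time ±20% perturbation scheme can be sketched on a toy first-order drying model; the model and its parameter names are assumptions for illustration, far simpler than the multiphase COMSOL model:

```python
import math

# Generic one-at-a-time ±20% sensitivity sketch on a toy first-order drying
# model M(t) = M0 * exp(-k * A * t). The toy model and parameter names are
# illustrative assumptions; the actual multiphase model is far richer.
base = {"k": 0.05, "A": 1.0, "M0": 0.8}

def final_moisture(p, t=60.0):
    return p["M0"] * math.exp(-p["k"] * p["A"] * t)

reference = final_moisture(base)
for name in base:
    for factor in (0.8, 1.2):                      # the ±20% perturbation
        p = dict(base, **{name: base[name] * factor})
        change = (final_moisture(p) - reference) / reference
        print(f"{name} x{factor}: {change:+.1%} change in final moisture")
```

    Ranking parameters by the size of these relative changes is the essence of the sensitivity screening described above.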

  7. Comparison of Prediction Model for Cardiovascular Autonomic Dysfunction Using Artificial Neural Network and Logistic Regression Analysis

    PubMed Central

    Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo

    2013-01-01

    Background This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and to compare the prediction models derived using the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. The performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed a statistically significant association with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, and a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values between the LR and ANN prediction models. Conclusion Prediction models for CA dysfunction were developed using ANN and LR; both are effective tools for developing prediction models based on our dataset. PMID:23940593
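
    The LR-versus-ANN comparison can be sketched with scikit-learn on synthetic data with 14 predictors (echoing the 14 significant risk factors); the study's population sample is not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Sketch of the LR-vs-ANN comparison on synthetic data with 14 predictors;
# the dataset and model settings are stand-ins, not the study's.
X, y = make_classification(n_samples=2000, n_features=14, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)

for name, model in (("LR", lr), ("ANN", ann)):
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name} area under ROC curve: {auc:.3f}")
```

    Comparing areas under the receiver-operating curve on a held-out set mirrors the validation-set comparison the study performs.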

  8. Gene Expression Analysis to Assess the Relevance of Rodent Models to Human Lung Injury.

    PubMed

    Sweeney, Timothy E; Lofgren, Shane; Khatri, Purvesh; Rogers, Angela J

    2017-08-01

    The relevance of animal models to human diseases is an area of intense scientific debate. The degree to which mouse models of lung injury recapitulate human lung injury has never been assessed. Integrating data from both human and animal expression studies allows for increased statistical power and identification of conserved differential gene expression across organisms and conditions. We sought comprehensive integration of gene expression data in experimental acute lung injury (ALI) in rodents compared with humans. We performed two separate gene expression multicohort analyses to determine differential gene expression in experimental animal and human lung injury. We used correlational and pathway analyses combined with external in vitro gene expression data to identify both potential drivers of underlying inflammation and therapeutic drug candidates. We identified 21 animal lung tissue datasets and three human lung injury bronchoalveolar lavage datasets. We show that the metasignatures of animal and human experimental ALI are significantly correlated despite these widely varying experimental conditions. The gene expression changes among mice and rats across diverse injury models (ozone, ventilator-induced lung injury, LPS) are significantly correlated with human models of lung injury (Pearson r = 0.33–0.45, P < 1E-16). Neutrophil signatures are enriched in both animal and human lung injury. Predicted therapeutic targets, peptide ligand signatures, and pathway analyses are also all highly overlapping. Gene expression changes are similar in animal and human experimental ALI, and provide several physiologic and therapeutic insights to the disease.

  9. The value of information for woodland management: Updating a state–transition model

    USGS Publications Warehouse

    Morris, William K.; Runge, Michael C.; Vesk, Peter A.

    2017-01-01

    Value of information (VOI) analyses reveal the expected benefit of reducing uncertainty to a decision maker. Most ecological VOI analyses have focused on population models rarely addressing more complex community models. We performed a VOI analysis for a complex state–transition model of Box-Ironbark Forest and Woodland management. With three management alternatives (limited harvest/firewood removal (HF), ecological thinning (ET), and no management), managing the system optimally (for 150 yr) with the original information would, on average, increase the amount of forest in a desirable state from 19% to 35% (a 16-percentage point increase). Resolving all uncertainty would, on average, increase the final percentage to 42% (a 19-percentage point increase). However, only resolving the uncertainty for a single parameter was worth almost two-thirds the value of resolving all uncertainty. We found the VOI to depend on the number of management options, increasing as the management flexibility increased. Our analyses show it is more cost-effective to monitor low-density regrowth forest than other states and more cost-effective to experiment with the no-management alternative than the other management alternatives. Importantly, the most cost-effective strategies did not include either the most desired forest states or the least understood management strategy, ET. This implies that managers cannot just rely on intuition to tell them where the most VOI will lie, as critical uncertainties in a complex system are sometimes cryptic.
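
    The value-of-information logic can be sketched as an expected-value-of-perfect-information (EVPI) calculation; the payoff table and prior below are invented for illustration, not the study's estimates:

```python
import numpy as np

# Toy expected-value-of-perfect-information (EVPI) calculation in the spirit
# of the analysis above. The payoff table (percent of forest in the desired
# state) and the prior are invented for illustration, not the study's values.
payoff = np.array([
    [35.0, 20.0],   # harvest/firewood removal (HF)
    [30.0, 40.0],   # ecological thinning (ET)
    [19.0, 19.0],   # no management
])
prior = np.array([0.6, 0.4])            # probability of each state of nature

ev_actions = payoff @ prior             # expected value of each alternative
ev_no_info = ev_actions.max()           # act optimally under uncertainty
ev_perfect = (payoff.max(axis=0) * prior).sum()   # best action in each state
evpi = ev_perfect - ev_no_info          # worth of resolving all uncertainty

print(f"EVPI = {evpi:.1f} percentage points")   # 3.0 with these numbers
```

    Partial value of information for a single parameter is computed the same way, resolving only that parameter's uncertainty, which is how the study finds one parameter worth nearly two-thirds of the total.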

  10. Angular Baryon Acoustic Oscillation measure at z=2.225 from the SDSS quasar survey

    NASA Astrophysics Data System (ADS)

    de Carvalho, E.; Bernui, A.; Carvalho, G. C.; Novaes, C. P.; Xavier, H. S.

    2018-04-01

    Following a quasi model-independent approach we measure the transversal BAO mode at high redshift using the two-point angular correlation function (2PACF). The analyses done here are only possible now with the quasar catalogue from the twelfth data release (DR12Q) from the Sloan Digital Sky Survey, because it is spatially dense enough to allow the measurement of the angular BAO signature with moderate statistical significance and acceptable precision. Our analyses with quasars in the redshift interval z in [2.20,2.25] produce the angular BAO scale θBAO = 1.77° ± 0.31° with a statistical significance of 2.12 σ (i.e., 97% confidence level), calculated through a likelihood analysis performed using the theoretical covariance matrix sourced by the analytical power spectra expected in the ΛCDM concordance model. Additionally, we show that the BAO signal is robust—although with less statistical significance—under diverse bin-size choices and under small displacements of the quasars' angular coordinates. Finally, we also performed cosmological parameter analyses comparing the θBAO predictions for wCDM and w(a)CDM models with angular BAO data available in the literature, including the measurement obtained here, jointly with CMB data. The constraints on the parameters ΩM, w0 and wa are in excellent agreement with the ΛCDM concordance model.
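    A likelihood analysis of the kind described can be sketched as a χ² scan of a BAO template against the 2PACF data vector using a covariance matrix; the template form, covariance, and all numbers below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

# Angular separations (degrees) at which the 2PACF is "measured".
theta = np.linspace(1.0, 2.5, 7)

def template(theta_bao, amp=0.05, width=0.15):
    """Toy BAO bump template for the angular correlation function."""
    return amp * np.exp(-0.5 * ((theta - theta_bao) / width) ** 2)

cov = 0.0004 * np.eye(theta.size)   # toy diagonal covariance matrix
data = template(1.75)               # noiseless mock signal at 1.75 deg

def chi2(theta_bao):
    r = data - template(theta_bao)
    return float(r @ np.linalg.solve(cov, r))

# Likelihood scan: the best-fit BAO angle minimises chi-squared.
grid = np.linspace(1.5, 2.0, 101)
best = grid[np.argmin([chi2(t) for t in grid])]
```

    In a real analysis the covariance would be the full theoretical matrix from the ΛCDM power spectra, and the uncertainty on θBAO would come from the width of the likelihood around the minimum.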

  11. Anisotropic singularities in modified gravity models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueiro, Michele Ferraz; Saa, Alberto; Departamento de Matematica Aplicada, IMECC-UNICAMP, C.P. 6065, 13083-859 Campinas, SP

    2009-09-15

    We show that the common singularities present in generic modified gravity models governed by actions of the type S = ∫ d⁴x √(−g) f(R, φ, X), with X = −(1/2) gᵃᵇ ∂_a φ ∂_b φ, are essentially the same anisotropic instabilities associated with the hypersurface F(φ) = 0 in the case of a nonminimal coupling of the type F(φ)R, elucidating the physical origin of such singularities, which typically arise in rather complex and cumbersome inhomogeneous perturbation analyses. We show, moreover, that such anisotropic instabilities typically give rise to dynamically unavoidable singularities, completely precluding the possibility of physically viable models for which the hypersurface ∂f/∂R = 0 is attained. Some examples are explicitly discussed.

  12. A comparison of three approaches to modeling leaf gas exchange in annually drought-stressed ponderosa pine forests.

    PubMed

    Misson, Laurent; Panek, Jeanne A; Goldstein, Allen H

    2004-05-01

    We tested, compared and modified three models of stomatal conductance at the leaf level in a forest ecosystem where drought stress is a major factor controlling forest productivity. The models were tested against 2 years (1999 and 2000) of leaf-level measurements on ponderosa pine (Pinus ponderosa Dougl. ex Laws.) growing in the Mediterranean climate of California, USA. The Ball, Woodrow and Berry (1987) (BWB) model was modified to account for soil water stress. Among the models, results of the modified BWB model were in the closest agreement with observations (r2 = 0.71). The Jarvis (1976) model showed systematic simulation errors related to vapor pressure deficit (r2 = 0.65). Results of the Williams, Rastetter, Fernandes et al. (1996) (SPA) model showed the poorest correlation with empirical data, but this model has only one calibration parameter (r2 = 0.60). Sensitivity analyses showed that, in all three models, predictions of stomatal conductance were most responsive to photosynthetically active radiation and soil water content. Stomatal conductance showed little sensitivity to vapor pressure deficit in the Jarvis model, whereas in both the BWB and SPA models, vapor pressure deficit (or relative humidity) was the third most important variable. Parameterization of the SPA model was in accordance with the parameterization of the modified BWB model, although the two models differ greatly. Measured and modeled results indicate that stomatal behavior is not water conservative during spring; however, during summer, when soil water content is low and vapor pressure deficit is high, stomatal conductance decreases and, according to the models, intrinsic water-use efficiency increases.
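    The BWB model referenced above has a well-known closed form, gs = g0 + g1·An·hs/Cs. A minimal sketch with a multiplicative soil-moisture factor standing in for the drought modification (the paper's exact modification is not given here, so this factor is an assumption):

```python
def bwb_conductance(a_n, rh, cs, g0=0.01, g1=9.0, beta=1.0):
    """Ball-Woodrow-Berry stomatal conductance:
    gs = g0 + g1 * A_n * h_s / C_s, scaled here by an empirical
    soil-moisture factor beta in [0, 1] (an assumed stand-in for the
    drought modification described in the abstract).
    a_n: net assimilation (umol m-2 s-1); rh: relative humidity at the
    leaf surface (0-1); cs: CO2 at the leaf surface (umol mol-1)."""
    return g0 + beta * g1 * a_n * rh / cs

# Illustrative parameter values, not calibrated to the study.
g_wet = bwb_conductance(a_n=12.0, rh=0.65, cs=380.0, beta=1.0)
g_dry = bwb_conductance(a_n=12.0, rh=0.65, cs=380.0, beta=0.4)
```

    Lowering beta reproduces the qualitative summer behaviour described above: conductance drops while assimilation per unit conductance (intrinsic water-use efficiency) rises.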

  13. Modeling and Simulations for the High Flux Isotope Reactor Cycle 400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Chandler, David; Ade, Brian J

    2015-03-01

    A concerted effort over the past few years has been focused on enhancing the core model for the High Flux Isotope Reactor (HFIR), as part of a comprehensive study for HFIR conversion from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel. At this time, the core model used to perform analyses in support of HFIR operation is an MCNP model for the beginning of Cycle 400, which was documented in detail in a 2005 technical report. A HFIR core depletion model based on current state-of-the-art methods and nuclear data was needed to serve as a reference for the design of an LEU fuel for HFIR. The recent enhancements in modeling and simulations for HFIR that are discussed in the present report include: (1) revision of the 2005 MCNP model for the beginning of Cycle 400 to improve the modeling data and assumptions as necessary, based on appropriate primary reference sources (HFIR drawings and reports); (2) improvement of the fuel region model, including an explicit representation of the involute fuel plate geometry that is characteristic of HFIR fuel; and (3) revision of the Monte Carlo-based depletion model for HFIR, in use since 2009 but never documented in detail, with the development of a new depletion model for the HFIR explicit fuel plate representation. The new HFIR models for Cycle 400 are used to determine various metrics of relevance to reactor performance and safety assessments. The calculated metrics are compared, where possible, with measurement data from preconstruction critical experiments at HFIR, data included in the current HFIR safety analysis report, and/or data from previous calculations performed with different methods or codes. The results of the analyses show that the models presented in this report provide a robust and reliable basis for HFIR analyses.

  14. Learning strategies and general cognitive ability as predictors of gender-specific academic achievement

    PubMed Central

    Ruffing, Stephanie; Wach, F. -Sophie; Spinath, Frank M.; Brünken, Roland; Karbach, Julia

    2015-01-01

    Recent research has revealed that learning behavior is associated with academic achievement at the college level, but the impact of specific learning strategies on academic success as well as gender differences therein are still not clear. Therefore, the aim of this study was to investigate gender differences in the incremental contribution of learning strategies over general cognitive ability in the prediction of academic achievement. The relationship between these variables was examined by correlation analyses. A set of t-tests was used to test for gender differences in learning strategies, whereas structural equation modeling as well as multi-group analyses were applied to investigate the incremental contribution of learning strategies for male and female students’ academic performance. The sample consisted of 461 students (mean age = 21.2 years, SD = 3.2). Correlation analyses revealed that general cognitive ability as well as the learning strategies effort, attention, and learning environment were positively correlated with academic achievement. Gender differences were found in the reported application of many learning strategies. Importantly, the prediction of achievement in structural equation modeling revealed that only effort explained incremental variance (10%) over general cognitive ability. Results of multi-group analyses showed no gender differences in this prediction model. This finding provides further knowledge regarding gender differences in learning research and the specific role of learning strategies for academic achievement. The incremental assessment of learning strategy use as well as gender differences in their predictive value contributes to the understanding and improvement of successful academic development. PMID:26347698

  16. Pattern formation in superdiffusion Oregonator model

    NASA Astrophysics Data System (ADS)

    Feng, Fan; Yan, Jia; Liu, Fu-Cheng; He, Ya-Feng

    2016-10-01

    Pattern formation in an Oregonator model with superdiffusion is studied in two-dimensional (2D) numerical simulations. Stability analyses are performed by applying Fourier and Laplace transforms to the space-fractional reaction-diffusion system. Antispirals, stable Turing patterns, and travelling patterns are observed by changing the diffusion index of the activator. Analyses of Floquet multipliers show that the limit cycle solution loses stability at the wave number of the primitive vector of the travelling hexagonal pattern. We also observe a transition between antispirals and spirals when changing the diffusion index of the inhibitor. Project supported by the National Natural Science Foundation of China (Grant Nos. 11205044 and 11405042), the Research Foundation of the Education Bureau of Hebei Province, China (Grant Nos. Y2012009 and ZD2015025), the Program for Young Principal Investigators of Hebei Province, China, and the Midwest Universities Comprehensive Strength Promotion Project.

  17. Determination of polyphenolic compounds of red wines by UV-VIS-NIR spectroscopy and chemometrics tools.

    PubMed

    Martelo-Vidal, M J; Vázquez, M

    2014-09-01

    Spectral analysis is a quick and non-destructive method to analyse wine. In this work, trans-resveratrol, oenin, malvin, catechin, epicatechin, quercetin and syringic acid were determined in commercial red wines from DO Rías Baixas and DO Ribeira Sacra (Spain) by UV-VIS-NIR spectroscopy. Calibration models were developed using principal component regression (PCR) or partial least squares (PLS) regression. HPLC was used as the reference method. The results showed that reliable PLS models were obtained to quantify all polyphenols for Rías Baixas wines. For Ribeira Sacra, feasible models were obtained to determine quercetin, epicatechin, oenin and syringic acid. PCR calibration models showed poorer prediction reliability than PLS models. For red wines from mencía grapes, feasible models were obtained for catechin and oenin, regardless of the geographical origin. The results obtained demonstrate that UV-VIS-NIR spectroscopy can be used to determine individual polyphenolic compounds in red wines. Copyright © 2014 Elsevier Ltd. All rights reserved.
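    PCR, one of the two calibration approaches compared above, can be sketched in a few lines of numpy: mean-centre the spectra, project onto the leading principal components, and regress the analyte concentration on the scores. The data below are synthetic, not wine spectra:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: project centred predictors onto
    the leading right singular vectors, regress y on the scores."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                 # loadings (p x k)
    T = Xc @ V                              # scores   (n x k)
    b = np.linalg.lstsq(T, yc, rcond=None)[0]
    coef = V @ b                            # back to original predictors
    intercept = y.mean() - X.mean(axis=0) @ coef
    return coef, intercept

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))                # mock "spectra" (40 samples)
true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
y = X @ true + 0.01 * rng.normal(size=40)   # mock concentrations
coef, b0 = pcr_fit(X, y, n_components=6)
pred = X @ coef + b0
```

    With all components retained, PCR reduces to ordinary least squares; the calibration gain (or loss, as reported above) comes from truncating to fewer components on collinear spectra.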

  18. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by a few fundamental parameters, such as step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model for estimating the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity. Furthermore, it shows that higher cadence implies a lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded-phase ratio in different running-speed ranges.
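    The cadence effect the model predicts can be illustrated with a bare-bones ballistic sketch: at a fixed duty factor, a higher cadence shortens the aerial phase and hence the vertical landing speed. All parameter values below are illustrative, not the paper's calibration:

```python
G = 9.81  # gravitational acceleration, m/s^2

def landing_speed(cadence_spm, duty_factor=0.35):
    """Bouncing-ball sketch of running: each step has a grounded
    fraction (duty_factor) and an aerial fraction; the vertical
    landing speed follows from free fall over half the aerial time.
    cadence_spm: steps per minute."""
    step_time = 60.0 / cadence_spm            # seconds per step
    t_air = (1.0 - duty_factor) * step_time   # aerial time per step
    return G * t_air / 2.0                    # vertical landing speed, m/s

v_slow = landing_speed(cadence_spm=160)
v_fast = landing_speed(cadence_spm=190)
```

    The higher cadence yields the smaller landing speed, consistent with the abstract's claim that higher cadence implies lower impact intensity.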

  19. Photochemical modeling and analysis of meteorological parameters during ozone episodes in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min

    The events of high ozone concentrations and meteorological conditions covering the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering the daytime and nighttime ozone concentrations on the lateral boundary conditions of the model domain. The sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for emission reductions lower than 30-40% VOC and 30-50% NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological parameters show that warm temperature, sufficient sunlight, low wind, and high surface pressure are distinct conditions that tend to trigger ozone episodes in polluted urban areas like Kaohsiung.

  20. [Family and school violence: the mediator role of self-esteem and attitudes towards institutional authority].

    PubMed

    Cava, María Jesús; Musitu, Gonzalo; Murgui, Sergio

    2006-08-01

    This study analyses the influence of family communication and parental valuation of school on adolescent violent behaviour at school. By means of a structural equation model, both its direct influence and its indirect influence, through the adolescent's school and family self-esteem and attitude towards school authority, are analysed. The sample is composed of 665 adolescents whose ages range from 12 to 16 years. The results confirm the existence of an indirect relationship, but not a direct influence, of the family on school violence. The adolescent's attitude towards school authority is the mediator variable showing the strongest direct effect on school violence. Also, the two dimensions of self-esteem considered are significant intermediate variables. These results and their implications are discussed.

  1. An application of adaptive neuro-fuzzy inference system to landslide susceptibility mapping (Klang valley, Malaysia)

    NASA Astrophysics Data System (ADS)

    Sezer, Ebru; Pradhan, Biswajeet; Gokceoglu, Candan

    2010-05-01

    Landslides are one of the recurrent natural hazard problems throughout most of Malaysia. Recently, the Klang Valley area of Selangor state has faced numerous landslide and mudflow events, and much damage has occurred in these areas. However, only little effort has been made to assess or predict these events, which resulted in serious damage. Through scientific analyses of these landslides, one can assess and predict landslide-susceptible areas, and even the events themselves, and thus reduce landslide damage through proper preparation and/or mitigation. For this reason, the purpose of the present paper is to produce landslide susceptibility maps of a part of the Klang Valley area in Malaysia by employing the results of adaptive neuro-fuzzy inference system (ANFIS) analyses. Landslide locations in the study area were identified by interpreting aerial photographs and satellite images, supported by extensive field surveys. Landsat TM satellite imagery was used to map the vegetation index. Maps of topography, lineaments and NDVI were constructed from the spatial datasets. Seven landslide conditioning factors, namely altitude, slope angle, plan curvature, distance from drainage, soil type, distance from faults and NDVI, were extracted from the spatial database. These factors were analyzed using an ANFIS to construct the landslide susceptibility maps. During the model development work, a total of 5 landslide susceptibility models were obtained from the ANFIS results. For verification, the results of the analyses were compared with the field-verified landslide locations. Additionally, the ROC curves for all landslide susceptibility models were drawn and the area-under-curve values were calculated. Landslide locations were used to validate the results of the landslide susceptibility map, and the verification showed 98% accuracy for model 5, which employed all parameters produced in the present study as landslide conditioning factors. The validation results showed sufficient agreement between the obtained susceptibility map and the existing data on landslide areas. Qualitatively, the model yields reasonable results which can be used for preliminary land-use planning purposes. In conclusion, the results obtained from the study showed that ANFIS modeling is a very useful and powerful tool for regional landslide susceptibility assessments. However, the results obtained from ANFIS modeling should be assessed carefully, because overlearning may cause misleading results. To prevent overlearning, the numbers of membership functions of the inputs and the number of training epochs should be selected optimally and carefully.
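    The area-under-curve verification used above can be computed without any plotting, via the rank-sum identity for the ROC AUC; the susceptibility scores and labels below are hypothetical:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: AUC = P(score of a positive > score of a negative),
    with ties counted as one half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical susceptibility scores at landslide (1) and stable (0) sites.
auc = roc_auc([0.9, 0.8, 0.75, 0.4, 0.3, 0.1], [1, 1, 0, 1, 0, 0])
```

    An AUC near 1.0 (such as the 98% reported for model 5) means nearly every mapped landslide site outranks nearly every stable site in predicted susceptibility.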

  2. A comparison of operational and LANDSAT-aided snow water content estimation systems. [Feather River Basin, California

    NASA Technical Reports Server (NTRS)

    Sharp, J. M.; Thomas, R. W.

    1975-01-01

    How LANDSAT imagery can be cost-effectively employed to augment an operational hydrologic model is described. Attention is directed toward the estimation of snow water content, a major predictor variable in the volumetric runoff forecasting model. A stratified double sampling scheme is supplemented with qualitative and quantitative analyses of existing operations to develop a comparison between the existing and satellite-aided approaches to snow water content estimation. Results show a decided advantage for the LANDSAT-aided approach.

  3. Optics of retinal oil droplets: a model of light collection and polarization detection in the avian retina.

    PubMed

    Young, S R; Martin, G R

    1984-01-01

    A wave optical model was used to analyse the scattering properties of avian retinal oil droplets. Computations for the near field region showed that oil droplets perform significant light collection in cone photoreceptors and so enhance outer segment photon capture rates. Scattering by the oil droplet of the principal cone of a double cone pair, combined with accessory cone dichroic absorption under conditions of transverse illumination, may mediate avian polarization sensitivity.

  4. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of unrestricted weighted least squares (UWLS) can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has greater confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with a unique variance adjustment factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.
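    The fixed-effect network meta-analysis that SEM reproduces can also be written directly as weighted least squares on contrast equations. A toy three-treatment sketch with hypothetical study effects and variances (not the cirrhosis dataset):

```python
import numpy as np

# Fixed-effect network meta-analysis as weighted least squares.
# Hypothetical study-level effects (e.g. log odds ratios) and variances
# for direct comparisons among treatments A, B, C, with A as reference.
effects = np.array([-0.5, -0.6, -0.3, -0.1])   # B-A, B-A, C-A, C-B
variances = np.array([0.04, 0.09, 0.05, 0.08])

# Each row encodes a contrast in the basic parameters (dAB, dAC):
X = np.array([
    [1.0, 0.0],     # B vs A estimates dAB
    [1.0, 0.0],
    [0.0, 1.0],     # C vs A estimates dAC
    [-1.0, 1.0],    # C vs B estimates dAC - dAB
])
W = np.diag(1.0 / variances)                   # inverse-variance weights
fisher = X.T @ W @ X
beta = np.linalg.solve(fisher, X.T @ W @ effects)  # pooled [dAB, dAC]
se = np.sqrt(np.diag(np.linalg.inv(fisher)))       # standard errors
```

    The indirect C-vs-B evidence enters through the contrast row [-1, 1], which is exactly the consistency assumption a network meta-analysis imposes.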

  5. SPH numerical investigation of the characteristics of an oscillating hydraulic jump at an abrupt drop

    NASA Astrophysics Data System (ADS)

    De Padova, Diana; Mossa, Michele; Sibilla, Stefano

    2018-02-01

    This paper shows the results of the smoothed particle hydrodynamics (SPH) modelling of the hydraulic jump at an abrupt drop, where the transition from supercritical to subcritical flow is characterised by several flow patterns depending upon the inflow and tailwater conditions. SPH simulations are obtained by a pseudo-compressible XSPH scheme with pressure smoothing; turbulent stresses are represented either by an algebraic mixing-length model or by a two-equation k-ɛ model. The numerical model is applied to analyse the occurrence of oscillatory flow conditions between two different jump types characterised by quasi-periodic oscillation, and the results are compared with experiments performed at the hydraulics laboratory of Bari Technical University. The purpose of this paper is to obtain a deeper understanding of the physical features of a flow which is in general difficult to reproduce numerically, owing to its unstable character: in particular, the vorticity and turbulent kinetic energy fields, the velocity, water depth and pressure spectra downstream of the jump, and velocity and pressure cross-correlations can be computed and analysed.

  6. Elucidating the fate of a mixed toluene, DHM, methanol, and i-propanol plume during in situ bioremediation

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Prommer, H.

    2017-06-01

    Organic pollutants such as solvents or petroleum products are widespread contaminants in soil and groundwater systems. In-situ bioremediation is a commonly used remediation technology to clean up the subsurface and eliminate the risk of toxic substances reaching potential receptors in surface waters or drinking water wells. This study discusses the development of a subsurface model to analyse the performance of an actively operating field-scale enhanced bioremediation scheme. The study site was affected by a mixed toluene, dihydromyrcenol (DHM), methanol, and i-propanol plume. A high-resolution time series of data was used to constrain the model development and calibration. The analysis shows that the observed failure of the treatment system is linked to an inefficient oxygen injection pattern. Moreover, the model simulations suggest that additional contaminant spillages occurred in 2012. Those additional spillages and their associated oxygen demand resulted in a significant increase in contaminant fluxes that remained untreated. The study emphasises the important role that reactive transport modelling can play in data analyses and in enhancing remediation efficiency.

  7. Modelling volatility recurrence intervals in the Chinese commodity futures market

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Wang, Zhengxin; Guo, Haiming

    2016-09-01

    The law of occurrence of extreme events attracts much research. The volatility recurrence intervals of Chinese commodity futures market prices are studied; the results show that the probability distributions of the scaled volatility recurrence intervals follow a uniform scaling curve for different thresholds q, so the probability distribution of extreme events can be deduced from that of normal events. The tail of the scaling curve is well fitted by a Weibull form, which is significance-tested by KS measures. Both short-term and long-term memory are present in the recurrence intervals with different thresholds q, which indicates that the recurrence intervals can be predicted. In addition, similar to volatility, volatility recurrence intervals also have clustering features. Through Monte Carlo simulation, we artificially synthesise ARMA and GARCH-class sequences similar to the original data and find the reason behind the clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence interval characteristics. The results indicate that the FIACD model may provide a method to analyse volatility recurrence intervals.
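    Recurrence intervals of the kind analysed above are simply the waiting times between threshold exceedances. A minimal sketch on synthetic volatility (white noise, hence memoryless, purely illustrative):

```python
import numpy as np

def recurrence_intervals(x, q):
    """Waiting times (in samples) between successive exceedances of
    threshold q in a volatility-like series x."""
    idx = np.flatnonzero(np.asarray(x) > q)
    return np.diff(idx)

rng = np.random.default_rng(1)
vol = np.abs(rng.standard_normal(10_000))   # toy volatility proxy
q = np.quantile(vol, 0.95)                  # threshold for top-5% events
tau = recurrence_intervals(vol, q)
mean_tau = tau.mean()                       # ~ 1 / exceedance probability
```

    For independent data the mean interval is just the reciprocal of the exceedance probability (about 20 samples here); the memory and clustering reported above show up as deviations of the interval distribution and autocorrelation from this baseline.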

  8. Climate change impacts on crop yield in the Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Toreti, Andrea; Ceglar, Andrej; Dentener, Frank; Niemeyer, Stefan; Dosio, Alessandro; Fumagalli, Davide

    2017-04-01

    Agriculture is strongly influenced by climate variability, climate extremes and climate change. Recent studies of past decades have identified and analysed the effects of climate variability and extremes on crop yields in the Euro-Mediterranean region. As these effects could be amplified in a changing climate, it is essential to analyse available climate projections and investigate the possible impacts on European agriculture in terms of crop yield. In this study, five model runs from the Euro-CORDEX initiative under two scenarios (RCP4.5 and RCP8.5) have been used. Climate model data have been bias-corrected and then used to feed a mechanistic crop growth model. The crop model has been run under different settings to better sample the intrinsic uncertainties. Among the main results, it is worth reporting a weak but significant and spatially homogeneous increase in potential wheat yield at mid-century (under a CO2 fertilisation effect scenario), while more complex changes seem to characterise potential maize yield, with large areas in the region showing a weak-to-moderate decrease.

  9. The Vibration Analysis of Tube Bundles Induced by Fluid Elastic Excitation in Shell Side of Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Bao, Minle; Wang, Lu; Li, Wenyao; Gao, Tianze

    2017-09-01

    Fluid elastic excitation in the shell side of a heat exchanger was deduced theoretically in this paper. The model was built using Pro/Engineer software. The finite element model was constructed and imported into the FLUENT module. The flow field simulation adopted the dynamic mesh model, the RNG k-ε model, and no-slip boundary conditions. Vibrations of the tube bundles at different positions were analysed by selecting three regions in the shell side of the heat exchanger. The results show that heat exchanger tube bundles at the inlet of the shell side are more likely to fail due to fluid-induced vibration.

  10. How transfer flights shape the structure of the airline network.

    PubMed

    Ryczkowski, Tomasz; Fronczak, Agata; Fronczak, Piotr

    2017-07-17

    In this paper, we analyse the gravity model in the global passenger air-transport network. We show that in the standard form, the model is inadequate for correctly describing the relationship between passenger flows and typical geo-economic variables that characterize connected countries. We propose a model for transfer flights that allows exploitation of these discrepancies in order to discover hidden subflows in the network. We illustrate its usefulness by retrieving the distance coefficient in the gravity model, which is one of the determinants of the globalization process. Finally, we discuss the correctness of the presented approach by comparing the distance coefficient to several well-known economic events.
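    The standard gravity model is log-linear in the geo-economic variables, so its distance coefficient can be recovered by ordinary least squares on logged data. The sketch below uses synthetic flows with a known coefficient, not real air-transport data:

```python
import numpy as np

# Log-linear gravity model for flows between country pairs:
#   log F_ij = k + a*log(G_i) + b*log(G_j) - c*log(d_ij)
# Synthetic data generated with known coefficients (illustrative only).
rng = np.random.default_rng(2)
n = 500
gdp_i = rng.lognormal(3.0, 1.0, n)     # origin "size" variable
gdp_j = rng.lognormal(3.0, 1.0, n)     # destination "size" variable
dist = rng.lognormal(7.0, 0.5, n)      # pairwise distance

true_k, true_a, true_b, true_c = 1.0, 0.8, 0.7, 1.5
log_flow = (true_k + true_a * np.log(gdp_i) + true_b * np.log(gdp_j)
            - true_c * np.log(dist) + 0.05 * rng.standard_normal(n))

# OLS on the logged variables recovers the coefficients.
A = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j),
                     -np.log(dist)])
k_hat, a_hat, b_hat, c_hat = np.linalg.lstsq(A, log_flow, rcond=None)[0]
```

    The fitted c_hat is the distance coefficient discussed in the abstract; systematic residuals from such a fit are exactly the discrepancies the authors exploit to infer hidden transfer subflows.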

  11. The Impact of Satellite-Derived Land Surface Temperatures on Numerical Weather Prediction Analyses and Forecasts

    NASA Astrophysics Data System (ADS)

    Candy, B.; Saunders, R. W.; Ghent, D.; Bulgin, C. E.

    2017-09-01

    Land surface temperature (LST) observations from a variety of satellite instruments operating in the infrared have been compared to estimates of surface temperature from the Met Office operational numerical weather prediction (NWP) model. The comparisons show that during the day the NWP model can underpredict the surface temperature by up to 10 K in certain regions such as the Sahel and southern Africa. By contrast at night the differences are generally smaller. Matchups have also been performed between satellite LSTs and observations from an in situ radiometer located in Southern England within a region of mixed land use. These matchups demonstrate good agreement at night and suggest that the satellite uncertainties in LST are less than 2 K. The Met Office surface analysis scheme has been adapted to utilize nighttime LST observations. Experiments using these analyses in an NWP model have shown a benefit to the resulting forecasts of near-surface air temperature, particularly over Africa.

  12. Climate sensitivity to the lower stratospheric ozone variations

    NASA Astrophysics Data System (ADS)

    Kilifarska, N. A.

    2012-12-01

    The strong sensitivity of the Earth's radiation balance to variations in lower stratospheric ozone—reported previously—is analysed here using non-linear statistical methods. Our non-linear model of land air temperature (T), driven by the measured Arosa total ozone (TOZ), explains 75% of the total variability of Earth's T variations during the period 1926-2011. We have also analysed the factors that could influence TOZ variability and found that the strongest impact belongs to the multi-decadal variations of galactic cosmic rays. By constructing a statistical model of the ozone variability, we have been able to project the tendency of land air T evolution until the end of the current decade. The results show that the Earth is facing a weak cooling of surface T by 0.05-0.25 K (depending on the ozone model) until the end of the current solar cycle. A new mechanism for O3 influence on climate is proposed.

  13. Developing a Hierarchical Model for the Spatial Analysis of PM10 Pollution Extremes in the Mexico City Metropolitan Area

    PubMed Central

    Aguirre-Salado, Alejandro Ivan; Vaquera-Huerta, Humberto; Aguirre-Salado, Carlos Arturo; Reyes-Mora, Silvia; Olvera-Cervantes, Ana Delia; Lancho-Romero, Guillermo Arturo; Soubervielle-Montalvo, Carlos

    2017-01-01

    We implemented a spatial model for analysing PM10 maxima across the Mexico City metropolitan area during the period 1995–2016. We assumed that these maxima follow non-identical generalized extreme value (GEV) distributions and modeled the trend by introducing multivariate smoothing spline functions into the GEV probability distribution. A flexible, three-stage hierarchical Bayesian approach was developed to analyse the distribution of the PM10 maxima in space and time. We evaluated the statistical model’s performance with a simulation study. The results showed strong evidence of a positive correlation between the PM10 maxima and both longitude and latitude. The relationship between time and the PM10 maxima was negative, indicating a decreasing trend over time. Finally, a high risk of PM10 maxima exceeding 1000 μg/m3 (return period: 25 yr) was observed in the northwestern region of the study area. PMID:28684720
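    A return level such as the 25-year value above comes from inverting the GEV distribution function at exceedance probability 1/T. A sketch with hypothetical PM10 parameters (mu, sigma, xi are illustrative, not the paper's fitted values):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once every T blocks for GEV(mu, sigma, xi)."""
    y = -math.log(1.0 - 1.0 / T)   # reduced variate for exceedance probability 1/T
    if abs(xi) < 1e-9:             # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y ** (-xi) - 1.0)

# Hypothetical PM10 GEV parameters (illustrative only).
z25 = gev_return_level(mu=400.0, sigma=150.0, xi=0.2, T=25)
print(round(z25, 1))
```

    With a positive shape parameter (heavy upper tail), the return level grows quickly with T, which is how maxima above 1000 μg/m3 can be plausible at a 25-year horizon.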

  14. Analysis of Italian regulations on pathways of care for patients in a vegetative or minimally conscious state

    PubMed Central

    Sattin, Davide; De Torres, Laura; Dolce, Giuliano; Arcuri, Francesco; Estraneo, Anna; Cardinale, Viviana; Piperno, Roberto; Zavatta, Elena; Formisano, Rita; D’Ippolito, Mariagrazia; Vassallo, Claudio; Dessi, Barbara; Lamberti, Gianfranco; Antoniono, Elena; Lanzillotti, Crocifissa; Navarro, Jorge; Bramanti, Placido; Marino, Silvia; Zampolini, Mauro; Scarponi, Federico; Avesani, Renato; Salvi, Luca; Ferro, Salvatore; Mazza, Luigi; Fogar, Paolo; Feller, Sandro; De Nigris, Fulvio; Martinuzzi, Andrea; Buffoni, Mara; Pessina, Adriano; Corsico, Paolo; Leonardi, Matilde

    2017-01-01

    Different rehabilitation models for persons diagnosed with disorders of consciousness have been proposed in Europe during the last decade. In Italy, the Ministry of Health has defined a national healthcare model, although, to date, there is a lack of information on how this has been implemented at the regional level. The INCARICO project collected information on the different regional regulations, analysing ethical aspects and mapping care facilities (numbers of beds and medical units) in eleven regional territories. The researchers found a total of 106 laws; differences emerged both among regions and with respect to the national model, showing that patients with the same diagnosis may follow different pathways of care. An ongoing cultural shift from a treatment-oriented medical approach towards a care-oriented, integrated biopsychosocial approach was found in all the welfare and healthcare systems analysed. Future studies are needed to explore the relationship between healthcare systems and the quality of the services provided. PMID:29042005

  15. A zebrafish (Danio rerio) model of infectious spleen and kidney necrosis virus (ISKNV) infection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Xiaopeng; Zhang Lichun; Weng Shaoping

    2008-06-20

    Zebrafish is a model animal for studies of genetics, development, toxicology, oncology, and immunology. In this study, infectious spleen and kidney necrosis virus (ISKNV) was used to infect zebrafish, and the experimental conditions were established and characterized. Mortality of adult zebrafish infected with ISKNV by intraperitoneal (i.p.) injection exceeded 60%. ISKNV could be passaged stably in zebrafish for over ten passages. The ailing zebrafish displayed petechial hemorrhaging and scale protrusion. Histological analysis of moribund fish revealed tissue necrosis and enlarged cells in the kidney and spleen. Real-time RT-PCR analysis of mRNA levels confirmed that ISKNV replicated in zebrafish. Immunohistochemistry and immunofluorescence analyses further confirmed the presence of ISKNV-infected cells in almost all organs of the infected fish. Electron microscope analyses showed that ISKNV particles were present in the infected tissues. The establishment of a zebrafish model of ISKNV infection offers a valuable tool for studying the interactions between ISKNV and its host.

  16. Phylogeny of sipunculan worms: A combined analysis of four gene regions and morphology.

    PubMed

    Schulze, Anja; Cutler, Edward B; Giribet, Gonzalo

    2007-01-01

    The intra-phyletic relationships of sipunculan worms were analyzed based on DNA sequence data from four gene regions and 58 morphological characters. Initially we analyzed the data under direct optimization using parsimony as the optimality criterion. An implied alignment resulting from the direct optimization analysis was subsequently utilized to perform a Bayesian analysis with mixed models for the different data partitions, applying a doublet model for the stem regions of the 18S rRNA. Both analyses support monophyly of Sipuncula and most of the same clades within the phylum. The analyses differ with respect to the relationships among the major groups: whereas the deep nodes in the direct optimization analysis generally show low jackknife support, they receive 100% posterior probability in the Bayesian analysis. Direct optimization has been useful for handling sequences of unequal length and for generating conservative phylogenetic hypotheses, whereas the Bayesian analysis under mixed models provided high resolution in the basal nodes of the tree.

  17. Sediment heterogeneity and mobility in the morphodynamic modelling of gravel-bed braided rivers

    NASA Astrophysics Data System (ADS)

    Singh, Umesh; Crosato, Alessandra; Giri, Sanjay; Hicks, Murray

    2017-06-01

    The effects of sediment heterogeneity and sediment mobility on the morphology of braided rivers are still poorly studied, especially when partial sediment mobility occurs. Nevertheless, increasing bed sediment heterogeneity by coarse sediment supply is becoming a common practice in river restoration and habitat improvement projects all over the world. This research provides a step forward in identifying the effects of sediment sorting on the evolution of sediment bars and the braiding geometry of gravel-bed rivers. A two-dimensional morphodynamic model was used to simulate the long-term development of a hypothetical braided system with a discharge regime and morphodynamic parameters derived from the Waimakariri River, New Zealand. Several scenarios, differing in bed sediment heterogeneity and sediment mobility, were considered. The results agree with the tendencies already identified in linear analyses and experimental studies, showing that greater sediment heterogeneity increases the braiding index and reduces bar length and height. The analyses also allowed identification of the applicability limits of uniform-sediment and variable-discharge modelling approaches.

  18. Control volume analyses of glottal flow using a fully-coupled numerical fluid-structure interaction model

    NASA Astrophysics Data System (ADS)

    Yang, Jubiao; Krane, Michael; Zhang, Lucy

    2013-11-01

    Vocal fold vibrations and the glottal jet are successfully simulated using the modified Immersed Finite Element Method (mIFEM), a fully coupled approach to modeling fluid-structure interactions. A self-sustained, steady vocal fold vibration is captured given a constant pressure input at the glottal entrance. The flow rates at different axial locations in the glottis are calculated, showing small variations among them due to the vocal fold motion and deformation. To further the understanding of the phonation process, two control volume analyses, based on Bernoulli's equation and Newton's 2nd law respectively, are carried out for the glottal flow using the simulation results. A generalized Bernoulli equation is derived to interpret the temporal and spatial correlations between velocity and pressure along the centerline, which is a streamline in the half-space model with a symmetry boundary condition. A specialized form of Newton's 2nd law is developed and separated into terms to help understand the driving mechanism of the glottal flow.
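    The steady part of the Bernoulli balance along the centerline streamline gives the familiar jet-velocity estimate; a one-line sketch with assumed typical-order values for the transglottal pressure drop and air density (not the simulation's inputs):

```python
import math

# Steady Bernoulli along the centerline streamline: v = sqrt(2 * dp / rho).
# dp and rho are assumed typical-order values, not the simulation's inputs.
rho_air = 1.2            # kg/m^3
dp = 800.0               # Pa, transglottal pressure drop
v_jet = math.sqrt(2.0 * dp / rho_air)
print(round(v_jet, 1))   # glottal jet speed in m/s
```

    The generalized equation in the paper adds unsteady and viscous terms to this balance; the steady estimate is the leading-order piece.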

  19. Spinning projectile's attitude measurement with LW infrared radiation under sea-sky background

    NASA Astrophysics Data System (ADS)

    Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu

    2018-05-01

    With the further development of infrared radiation research in sea-sky backgrounds and the requirements of spinning projectile attitude measurement, the sea-sky infrared radiation field is used to determine the spinning projectile's attitude angle in place of inertial sensors. Firstly, the generation mechanism of sea-sky infrared radiation is analysed, and a mathematical model of sea-sky infrared radiation in the LW (long-wave) infrared 8 ∼ 14 μm band is derived by calculating the sea-surface and sky infrared radiation. Secondly, according to the movement characteristics of a spinning projectile, an attitude measurement model for infrared sensors on the projectile's three axes is established, and the feasibility of the model is analysed by simulation. Finally, an attitude calculation algorithm is designed to improve the attitude angle estimation accuracy. The results of semi-physical experiments show that the estimation error of the segmented interactive algorithm for pitch and roll angle is within ±1.5°. The attitude measurement method is effective and feasible, and provides an accurate measurement basis for the guidance of spinning projectiles.

  20. The analysis sensitivity to tropical winds from the Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Paegle, J.; Paegle, J. N.; Baker, W. E.

    1986-01-01

    The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses, while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.
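    The rotational/divergent kinetic-energy split used in these diagnostics can be illustrated with an FFT-based Helmholtz decomposition on an idealized doubly periodic grid (a stand-in for the spherical-harmonic machinery a global analysis would actually use):

```python
import numpy as np

def helmholtz_ke(u, v, L=2 * np.pi):
    """Split the KE of a doubly periodic 2-D wind field into rotational and
    divergent parts via an FFT Helmholtz decomposition."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # wavenumbers (rad per length)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # avoid 0/0; mean mode zeroed below
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    zeta = 1j * kx * vh - 1j * ky * uh              # spectral vorticity
    delta = 1j * kx * uh + 1j * ky * vh             # spectral divergence
    psi, chi = -zeta / k2, -delta / k2              # invert the Laplacian
    psi[0, 0] = chi[0, 0] = 0.0
    u_rot = np.fft.ifft2(-1j * ky * psi).real
    v_rot = np.fft.ifft2(1j * kx * psi).real
    u_div = np.fft.ifft2(1j * kx * chi).real
    v_div = np.fft.ifft2(1j * ky * chi).real
    ke = lambda a, b: 0.5 * np.mean(a**2 + b**2)
    return ke(u_rot, v_rot), ke(u_div, v_div)

# Purely rotational test field from psi = sin(x)cos(y): u = -dpsi/dy, v = dpsi/dx.
x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X) * np.sin(Y), np.cos(X) * np.cos(Y)
ke_rot, ke_div = helmholtz_ke(u, v)
print(round(ke_rot, 3), ke_div < 1e-12)
```

    For the purely rotational test field the divergent kinetic energy vanishes to round-off, which is the sanity check one would apply before comparing the two components across analyses.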

  1. Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands

    PubMed Central

    Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven

    2015-01-01

    Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive to the response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to € 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
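    A Markov cohort model of this general shape underlies the ICER calculation: track the cohort through health states each cycle, accumulate QALYs and costs per arm, then divide incremental cost by incremental QALYs. The sketch below uses two states and invented monthly probabilities, costs and utilities (none are the study's Dutch inputs):

```python
# Two-state Markov cohort sketch: "response" (normal bowel function) vs "no response".
# Monthly cycle, 12 cycles; probabilities, costs and utilities are invented.
p_response_drug, p_response_std = 0.25, 0.10   # monthly probability of attaining response
cost_drug, cost_std = 70.0, 15.0               # monthly treatment cost (EUR, assumed)
u_resp, u_nonresp = 0.85 / 12, 0.75 / 12       # per-cycle QALY weights (assumed)

def run_arm(p_resp, monthly_cost):
    """Accumulate QALYs and costs for one treatment arm over 12 monthly cycles."""
    responders, qalys, cost = 0.0, 0.0, 0.0
    for _ in range(12):
        responders += (1.0 - responders) * p_resp          # non-responders may respond
        qalys += responders * u_resp + (1.0 - responders) * u_nonresp
        cost += monthly_cost
    return qalys, cost

q_drug, c_drug = run_arm(p_response_drug, cost_drug)
q_std, c_std = run_arm(p_response_std, cost_std)
icer = (c_drug - c_std) / (q_drug - q_std)   # incremental cost per QALY gained
print(round(icer))
```

    One-way sensitivity analysis then amounts to re-running this with each input perturbed in turn; the response probability dominates because it drives the QALY denominator.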

  2. Shortwave radiative forcing, rapid adjustment, and feedback to the surface by sulfate geoengineering: analysis of the Geoengineering Model Intercomparison Project G4 scenario

    DOE PAGES

    Kashimura, Hiroki; Abe, Manabu; Watanabe, Shingo; ...

    2017-03-08

    This paper evaluates the forcing, rapid adjustment, and feedback of net shortwave radiation at the surface in the G4 experiment of the Geoengineering Model Intercomparison Project by analysing outputs from six participating models. G4 involves injection of 5 Tg yr⁻¹ of SO₂, a sulfate aerosol precursor, into the lower stratosphere from year 2020 to 2069 against a background scenario of RCP4.5. A single-layer atmospheric model for shortwave radiative transfer is used to estimate the direct forcing of solar radiation management (SRM) and the rapid adjustment and feedbacks from changes in the water vapour amount, cloud amount, and surface albedo (compared with RCP4.5). The analysis shows that the globally and temporally averaged SRM forcing ranges from -3.6 to -1.6 W m⁻², depending on the model. The sum of the rapid adjustments and feedback effects due to changes in the water vapour and cloud amounts increases the downwelling shortwave radiation at the surface by approximately 0.4 to 1.5 W m⁻² and hence weakens the effect of SRM by around 50%. The surface albedo changes decrease the net shortwave radiation at the surface; the decrease is locally strong (~-4 W m⁻²) in snow and sea-ice melting regions but minor for the global average. The analyses show that the results of the G4 experiment, which simulates sulfate geoengineering, include large inter-model variability in both the direct SRM forcing and the shortwave rapid adjustment from changes in cloud amount, implying high uncertainty in the modelled processes of sulfate aerosols and clouds.
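    Global averages like those quoted above require area (cosine-of-latitude) weighting of the lat-lon field; a sketch with a made-up forcing field in place of G4 model output:

```python
import numpy as np

# Area-weighted global mean of a lat-lon field; the field is a made-up stand-in
# for a G4 shortwave-forcing map (W m^-2), not model output.
lat = np.linspace(-89.0, 89.0, 90)
lon = np.linspace(0.0, 358.0, 180)
field = -2.5 + 0.5 * np.cos(np.deg2rad(lat))[:, None] * np.ones((lat.size, lon.size))

w = np.cos(np.deg2rad(lat))
w = w / w.sum()                                   # normalized cos(latitude) weights
global_mean = float((field.mean(axis=1) * w).sum())
print(round(global_mean, 2))
```

    The weighting matters here because the strong albedo signals sit at high latitudes, where grid cells cover little area and so contribute little to the global number.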

  3. Syndromes of Self-Reported Psychopathology for Ages 18–59 in 29 Societies

    PubMed Central

    Achenbach, Thomas M.; Rescorla, Leslie A.; Tumer, Lori V.; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J. Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M.; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R. J.; Funabiki, Yasuko; Guðmundsson, Halldór S.; Harder, Valerie S.; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M.; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C.; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B.; Vazquez, Natalia; Zasepa, Ewa

    2017-01-01

    This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults’ self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18–59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½–18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies. PMID:29805197
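    The primary fit index used here, RMSEA, is a simple function of the model chi-square, its degrees of freedom and the sample size; a sketch with illustrative numbers (not the study's statistics):

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation; <= .05 is conventionally 'good' fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative numbers only (not the study's statistics).
val = rmsea(chi2=250.0, df=120, n=2000)
print(round(val, 3))
```

    Because the excess chi-square is divided by N - 1, a large multi-society sample can hold RMSEA low even when the raw chi-square is statistically significant.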

  4. Confirmatory Factor Analysis of the Minnesota Nicotine Withdrawal Scale

    PubMed Central

    Toll, Benjamin A.; O’Malley, Stephanie S.; McKee, Sherry A.; Salovey, Peter; Krishnan-Sarin, Suchitra

    2008-01-01

    The authors examined the factor structure of the Minnesota Nicotine Withdrawal Scale (MNWS) using confirmatory factor analysis in clinical research samples of smokers trying to quit (n = 723). Three confirmatory factor analytic models, based on previous research, were tested with each of the 3 study samples at multiple points in time. A unidimensional model including all 8 MNWS items was found to be the best explanation of the data. This model produced fair to good internal consistency estimates. Additionally, these data revealed that craving should be included in the total score of the MNWS. Factor scores derived from this single-factor, 8-item model showed that increases in withdrawal were associated with poor smoking outcome for 2 of the clinical studies. Confirmatory factor analyses of change scores showed that the MNWS symptoms cohere as a syndrome over time. Future investigators should report a total score using all of the items from the MNWS. PMID:17563141
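    Internal consistency of a unidimensional scale like the one supported here is conventionally summarized by Cronbach's alpha; a sketch on synthetic data for a hypothetical 8-item scale (not the MNWS data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic stand-in for an 8-item scale: each item = shared trait + item noise.
rng = np.random.default_rng(2)
trait = rng.normal(size=500)
items = trait[:, None] + 0.5 * rng.normal(size=(500, 8))
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```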

  5. LHC signals of radiatively-induced neutrino masses and implications for the Zee-Babu model

    NASA Astrophysics Data System (ADS)

    Alcaide, Julien; Chala, Mikael; Santamaria, Arcadi

    2018-04-01

    Contrary to see-saw models, extended Higgs sectors leading to radiatively-induced neutrino masses do require the extra particles to be at the TeV scale. However, these new states often have exotic decays, to which the experimental LHC searches performed so far, focused on scalars decaying into pairs of same-sign leptons, are not sensitive. In this paper we show that their experimental signatures can start to be tested with current LHC data if dedicated multi-region analyses correlating different observables are used. We also provide high-accuracy estimations of the complicated Standard Model backgrounds involved. For the case of the Zee-Babu model, we show that regions not yet constrained by neutrino data and low-energy experiments can already be probed, while most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC.

  6. Probing the solar corona with very long baseline interferometry.

    PubMed

    Soja, B; Heinkelmann, R; Schuh, H

    2014-06-20

    Understanding and monitoring the solar corona and solar wind is important for many applications like telecommunications or geomagnetic studies. Coronal electron density models have been derived by various techniques over the last 45 years, principally by analysing the effect of the corona on spacecraft tracking. Here we show that recent observational data from very long baseline interferometry (VLBI), a radio technique crucial for astrophysics and geodesy, could be used to develop electron density models of the Sun's corona. The VLBI results agree well with previous models from spacecraft measurements. They also show that the simple spherical electron density model is violated by regional density variations and that on average the electron density in active regions is about three times that of low-density regions. Unlike spacecraft tracking, a VLBI campaign would be possible on a regular basis and would provide highly resolved spatial-temporal samplings over a complete solar cycle.
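    Spherically symmetric electron-density models of the kind tested here are simple power laws in heliocentric distance; a sketch with round illustrative parameters (not the VLBI fit), plus the roughly threefold active-region enhancement reported above:

```python
# Spherically symmetric coronal electron-density model: Ne(r) = Ne0 * r**-beta,
# r in solar radii. Ne0 and beta are round illustrative values, not the VLBI fit.
def n_e(r_rsun, ne0=1.0e12, beta=2.0):
    """Electron density (m^-3) at heliocentric distance r (in solar radii)."""
    return ne0 * r_rsun ** (-beta)

quiet = n_e(5.0)          # a low-density region at 5 solar radii
active = 3.0 * quiet      # abstract: active regions ~3x denser on average
print(f"{quiet:.1e} {active:.1e}")
```

    The paper's point is precisely that regional variations of this size violate the single spherical profile, so a fitted model would need longitude-dependent terms.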

  7. Acoustic test and analyses of three advanced turboprop models

    NASA Technical Reports Server (NTRS)

    Brooks, B. M.; Metzger, F. B.

    1980-01-01

    Results of acoustic tests of three 62.2 cm (24.5 inch) diameter models of the prop-fan (a small diameter, highly loaded. Multi-bladed variable pitch advanced turboprop) are presented. Results show that there is little difference in the noise produced by unswept and slightly swept designs. However, the model designed for noise reduction produces substantially less noise at test conditions simulating 0.8 Mach number cruise speed or at conditions simulating takeoff and landing. In the near field at cruise conditions the acoustically designed. In the far field at takeoff and landing conditions the acoustically designed model is 5 db quieter than unswept or slightly swept designs. Correlation between noise measurement and theoretical predictions as well as comparisons between measured and predicted acoustic pressure pulses generated by the prop-fan blades are discussed. The general characteristics of the pulses are predicted. Shadowgraph measurements were obtained which showed the location of bow and trailing waves.

  8. An evaluation of tannery industry wastewater treatment sludge gasification by artificial neural network modeling.

    PubMed

    Ongen, Atakan; Ozcan, H Kurtulus; Arayıcı, Semiha

    2013-12-15

    This paper reports on the calorific value of synthetic gas (syngas) produced by gasification of dewatered sludge derived from treatment of tannery wastewater. Proximate and ultimate analyses of samples were performed. Thermochemical conversion alters the chemical structure of the waste. Dried air was used as a gasification agent at varying flow rates, which allowed the feedstock to be quickly converted into gas by means of different heterogeneous reactions. A lab-scale updraft fixed-bed steel reactor was used for thermochemical conversion of sludge samples. Artificial neural network (ANN) modeling techniques were used to observe variations in the syngas related to operational conditions. Modeled outputs showed that temporal changes of model predictions were in close accordance with real values. Correlation coefficients (r) showed that the ANN used in this study gave results with high sensitivity. Copyright © 2013 Elsevier B.V. All rights reserved.
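    A one-hidden-layer network trained by gradient descent is the simplest version of the ANN approach described; the sketch below fits synthetic data standing in for gasification conditions and syngas calorific value (architecture, learning rate and the surrogate target function are all arbitrary choices, not the paper's model):

```python
import numpy as np

# One-hidden-layer regression network trained by full-batch gradient descent.
# Synthetic inputs/targets stand in for gasification conditions and syngas
# calorific value; architecture and rates are arbitrary choices.
rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, first_loss = 0.1, None
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    loss = float(np.mean(err ** 2))
    if first_loss is None:
        first_loss = loss
    g_pred = 2.0 * err[:, None] / len(y)     # dLoss/dpred
    g_h = (g_pred @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    W2 -= lr * (H.T @ g_pred); b2 -= lr * g_pred.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

r = float(np.corrcoef(pred, y)[0, 1])        # correlation check, as in the paper
print(loss < first_loss, round(r, 2))
```

    The final correlation coefficient plays the role of the paper's sensitivity check between predicted and measured calorific values.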

  9. The spectra of WC9 stars: evolution and dust formation

    NASA Astrophysics Data System (ADS)

    Williams, P. M.; Crowther, P. A.; van der Hucht, K. A.

    2015-05-01

    We present analyses of new optical spectra of three WC9 stars, WR 88, WR 92 and WR 103 to test the suggestion that they exemplify an evolutionary sequence amongst the WC9 stars. The spectrum of WR 88 shows conspicuous lines of N III and N IV, leading to classification as a transitional WN8o/WC9 star. The three stars show a sequence of increasing O II and O III line strengths, confirming and extending earlier studies. The spectra were analysed using CMFGEN models, finding greater abundances of oxygen and carbon in WR 103 than in WR 92 and, especially, in WR 88. Of the three stars, only WR 103 makes circumstellar dust. We suggest that oxygen itself does not enhance this process but that it is its higher carbon abundance that allows WR 103 to make dust.

  10. Violence against teachers: prevalence and consequences.

    PubMed

    Wilson, Catherine M; Douglas, Kevin S; Lyon, David R

    2011-08-01

    Data collected from 731 teachers were used to examine the consequences of violence directed toward teachers in the workplace. Analyses showed that the majority of respondents (n = 585, 80.0%) had experienced school-related violence—broadly defined—at some point in their careers. Serious violence (actual, attempted, or threatened physical violence) was less common, but still common enough to be of concern (n = 202, 27.6%). Violence predicted physical and emotional effects, as well as teaching-related functioning. In addition, a model with fear as a potential mediator revealed that both fear and violence were independently predictive of these negative outcomes. Finally, analyses showed that, in general, women reported higher levels of physical symptoms than men. We discuss the implications of violence against teachers in terms of personal consequences and the implications for mental health professionals working in educational settings.
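    The mediator analysis described (fear partially carrying the effect of violence on outcomes) follows the usual compare-total-and-direct-effect logic; a sketch on synthetic data with a built-in mediated path (all effect sizes invented, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
# Synthetic data with a built-in mediated path (violence -> fear -> outcome)
# plus a direct path; effect sizes are arbitrary.
violence = rng.binomial(1, 0.3, n).astype(float)
fear = 0.8 * violence + rng.normal(0.0, 1.0, n)
outcome = 0.5 * violence + 0.6 * fear + rng.normal(0.0, 1.0, n)

def ols(y, *cols):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(outcome, violence)[1]        # total effect of violence
direct = ols(outcome, violence, fear)[1] # direct effect, controlling for fear
print(round(total, 2), round(direct, 2))
```

    The direct effect remaining clearly nonzero after controlling for fear mirrors the study's finding that fear and violence predict outcomes independently.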

  11. Validation of High Frequency (HF) Propagation Prediction Models in the Arctic region

    NASA Astrophysics Data System (ADS)

    Athieno, R.; Jayachandran, P. T.

    2014-12-01

    Despite the emergence of modern techniques for long-distance communication, ionospheric communication in the high-frequency (HF) band (3-30 MHz) remains significant to both civilian and military users. However, efficient use of the ever-varying ionosphere as a propagation medium depends on the reliability of ionospheric and HF propagation prediction models. Most available models are empirical, which means the underlying data collection has to be sufficiently large to give good results; the models presented here were developed with little data from the high latitudes, which necessitates their validation. This paper presents the validation of three long-term HF propagation prediction models over a path within the Arctic region. Measurements of the maximum usable frequency for a 3000 km range (MUF(3000)F2) for Resolute, Canada (74.75° N, 265.00° E), are obtained from hand-scaled ionograms generated by the Canadian Advanced Digital Ionosonde (CADI). The observations have been compared with predictions obtained from the Ionospheric Communication Enhanced Profile Analysis Program (ICEPAC), the Voice of America Coverage Analysis Program (VOACAP) and International Telecommunication Union Recommendation 533 (ITU-REC533) for 2009, 2011, 2012 and 2013. A statistical analysis shows that the monthly predictions reproduce the general features of the observations throughout the year, most clearly in the winter and equinox months. Both predictions and observations show diurnal and seasonal variations. The analysed models did not show large differences in overall performance, but there are noticeable differences across seasons for the entire period analysed: REC533 performs better in winter months, while VOACAP performs better in both equinox and summer months. VOACAP also outperforms ICEPAC in the daily predictions, although, in general, the monthly predictions agree more closely with the observations than the daily predictions do.
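    Model rankings of this kind rest on simple skill metrics against the ionosonde observations; a sketch with invented MUF values (the numbers are placeholders, not CADI or VOACAP output):

```python
import math

# Invented MUF(3000)F2 values (MHz); placeholders, not CADI or model output.
observed = [7.2, 8.1, 6.5, 9.0, 10.3, 8.8]
predicted = [6.8, 8.5, 6.9, 8.6, 9.8, 9.1]

n = len(observed)
bias = sum(p - o for p, o in zip(predicted, observed)) / n          # mean error
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
print(round(bias, 3), round(rmse, 3))
```

    Computing these per month and per season is what separates the winter advantage of REC533 from the equinox/summer advantage of VOACAP.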

  12. pyres: a Python wrapper for electrical resistivity modeling with R2

    NASA Astrophysics Data System (ADS)

    Befus, Kevin M.

    2018-04-01

    A Python package, pyres, was written to handle common as well as specialized input and output tasks for the R2 electrical resistivity (ER) modeling program. Input steps, including handling field data, creating quadrilateral or triangular meshes, and data filtering, allow repeatable and flexible ER modeling within a programming environment. pyres includes non-trivial routines and functions for locating and constraining specific known or separately-parameterized regions in both quadrilateral and triangular meshes. Three basic examples of how to run forward and inverse models with pyres are provided. The importance of testing mesh convergence and model sensitivity is also addressed with higher-level examples that show how pyres can facilitate future research-grade ER analyses.

  13. Understanding independence

    NASA Astrophysics Data System (ADS)

    Annan, James; Hargreaves, Julia

    2016-04-01

    In order to perform any Bayesian processing of a model ensemble, we need a prior over the ensemble members. In the case of multimodel ensembles such as CMIP, the historical approach of "model democracy" (i.e. equal weight for all models in the sample) is no longer credible (if it ever was) due to model duplication and inbreeding. The question of "model independence" is central to the question of prior weights. However, although this question has been repeatedly raised, it has not yet been satisfactorily addressed. Here I will discuss the issue of independence and present a theoretical foundation for understanding and analysing the ensemble in this context. I will also present some simple examples showing how these ideas may be applied and developed.
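    One crude illustration of moving beyond model democracy is to downweight each ensemble member by its number of near-duplicates; the similarity threshold and the "outputs" below are arbitrary illustrations, not a recommended scheme:

```python
import numpy as np

# Downweight each ensemble member by its number of "near-duplicates", judged by
# closeness of some scalar output. Threshold and values are arbitrary illustrations.
outputs = np.array([2.9, 3.0, 3.05, 4.1, 5.2])            # e.g. one metric per model
near_dup = np.abs(outputs[:, None] - outputs[None, :]) < 0.2
weights = 1.0 / near_dup.sum(axis=1)                       # 1 / (cluster size)
weights /= weights.sum()                                   # normalize to a prior
print(np.round(weights, 3))
```

    The three clustered members here jointly receive the weight of a single independent member, which is the intuition a proper theory of independence would need to formalize.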

  14. New software for cervical vertebral geometry assessment and its relationship to skeletal maturation—a pilot study

    PubMed Central

    Cunha, A R; Júnior, G C; Fernandes, N; Campos, M J S; Costa, L F M; Vitral, R W F; Bolognese, A M

    2014-01-01

    Objectives: In the present study, we developed new software for quantitative analysis of cervical vertebral maturation and evaluated its applicability through a multinomial logistic regression model (MLRM). Methods: Digitized images of the bodies of the second (C2), third (C3) and fourth (C4) cervical vertebrae were analysed in cephalometric radiographs of 236 subjects (116 boys and 120 girls) using software developed for digitized vertebrae analysis. The sample was initially distributed into 11 categories according to Fishman's skeletal maturity indicators and was then grouped into four stages for quantitative cervical maturational changes (QCMC) analysis (QCMC I, II, III and IV). Seven variables of interest were measured and analysed to identify morphologic alterations of the vertebral bodies in each QCMC category. Results: Statistically significant differences (p < 0.05) were observed among all QCMC categories for the variables analysed. The MLRM used to calculate the probability that an individual belonged to each of the four cervical vertebral maturation categories was constructed by taking into account gender, chronological age and four variables determined by digitized vertebrae analysis (Ang_C3, MP_C3, MP_C4 and SP_C4). The MLRM presented a predictability of 81.4%. The weighted κ test showed almost perfect agreement (κ = 0.832) between the categories defined initially by the method of Fishman and those allocated by the MLRM. Conclusions: Significant alterations in the morphologies of the C2, C3 and C4 vertebral bodies analysed with the digitized vertebrae analysis software occur during the different stages of skeletal maturation. The model combining the four parameters measured on the vertebral bodies, age and gender showed excellent predictive ability. PMID:24319125
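    An MLRM of this kind assigns each subject a probability for the four QCMC stages via a softmax over per-category linear scores; a sketch with made-up coefficients and predictor values (the fitted model's weights are not given in the abstract):

```python
import math

# Softmax form of the MLRM: category probabilities from linear scores. The
# coefficients, predictor names and values below are made up for illustration.
def mlrm_probs(features, coef):
    scores = [c["intercept"] + sum(c[k] * v for k, v in features.items())
              for c in coef]
    m = max(scores)                              # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

coef = [
    {"intercept": 0.0,   "age": 0.00, "Ang_C3": 0.00},  # QCMC I (baseline)
    {"intercept": -6.0,  "age": 0.45, "Ang_C3": 0.02},  # QCMC II
    {"intercept": -12.0, "age": 0.80, "Ang_C3": 0.05},  # QCMC III
    {"intercept": -20.0, "age": 1.20, "Ang_C3": 0.08},  # QCMC IV
]
p = mlrm_probs({"age": 14.0, "Ang_C3": 95.0}, coef)
print([round(x, 3) for x in p])
```

    Classifying each subject into the highest-probability category and comparing against Fishman's staging is what the weighted κ agreement statistic then summarizes.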

  15. New software for cervical vertebral geometry assessment and its relationship to skeletal maturation--a pilot study.

    PubMed

    Santiago, R C; Cunha, A R; Júnior, G C; Fernandes, N; Campos, M J S; Costa, L F M; Vitral, R W F; Bolognese, A M

    2014-01-01

    In the present study, we developed new software for quantitative analysis of cervical vertebral maturation, and we evaluated its applicability through a multinomial logistic regression model (MLRM). Digitized images of the bodies of the second (C2), third (C3) and fourth (C4) cervical vertebrae were analysed in cephalometric radiographs of 236 subjects (116 boys and 120 girls) using software developed for digitized vertebrae analysis. The sample was initially distributed into 11 categories according to Fishman's skeletal maturity indicators and was then grouped into four stages for quantitative cervical maturational changes (QCMC) analysis (QCMC I, II, III and IV). Seven variables of interest were measured and analysed to identify morphologic alterations of the vertebral bodies in each QCMC category. Statistically significant differences (p < 0.05) were observed among all QCMC categories for the variables analysed. The MLRM used to calculate the probability that an individual belonged to each of the four cervical vertebral maturation categories was constructed by taking into account gender, chronological age and four variables determined by digitized vertebrae analysis (Ang_C3, MP_C3, MP_C4 and SP_C4). The MLRM presented a predictability of 81.4%. The weighted κ test showed almost perfect agreement (κ = 0.832) between the categories defined initially by Fishman's method and those allocated by the MLRM. Significant alterations in the morphologies of the C2, C3 and C4 vertebral bodies analysed through the digitized vertebrae analysis software occur during the different stages of skeletal maturation. The model combining the four vertebral-body parameters, age and gender showed excellent predictive performance.
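
    The core of the method — a multinomial model emitting a probability for each maturation stage, validated against the reference staging with a weighted kappa — can be sketched on synthetic data. The predictors below merely stand in for Ang_C3, MP_C3, MP_C4, SP_C4, age and gender; nothing here reproduces the authors' software or data.

```python
# Hedged sketch: multinomial logistic regression over 4 ordinal categories,
# agreement with the reference labels measured by a weighted kappa.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 400
# Six synthetic predictors standing in for the four vertebral measurements,
# age and gender; labels 0-3 stand in for QCMC I-IV.
X = rng.normal(size=(n, 6))
true_coef = rng.normal(size=(4, 6))
logits = X @ true_coef.T
y = np.array([rng.choice(4, p=np.exp(l) / np.exp(l).sum()) for l in logits])

mlrm = LogisticRegression(max_iter=1000).fit(X, y)
proba = mlrm.predict_proba(X)   # P(category | predictors); each row sums to 1
pred = mlrm.predict(X)

# Weighted kappa penalises disagreements by how far apart the ordinal
# categories are (the paper reports kappa = 0.832 against Fishman staging).
kappa = cohen_kappa_score(y, pred, weights="linear")
print(proba.shape, round(float(kappa), 3))
```

    The per-category probabilities, rather than a single hard label, are what make such a model useful as a continuous maturation index.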

  16. Confronting ‘confounding by health system use’ in Medicare Part D: Comparative effectiveness of propensity score approaches to confounding adjustment

    PubMed Central

    Polinski, Jennifer M.; Schneeweiss, Sebastian; Glynn, Robert J.; Lii, Joyce; Rassen, Jeremy

    2012-01-01

    Purpose Under Medicare Part D, patient characteristics influence plan choice, which in turn influences Part D coverage gap entry. We compared pre-defined propensity score (PS) and high-dimensional propensity score (hdPS) approaches to address such ‘confounding by health system use’ in assessing whether coverage gap entry is associated with cardiovascular events or death. Methods We followed 243,079 Medicare patients aged 65+ with linked prescription, medical, and plan-specific data in 2005–2007. Patients reached the coverage gap and were followed until an event or year’s end. Exposed patients were responsible for drug costs in the gap; unexposed patients (patients with non-Part D drug insurance and Part D patients receiving a low-income subsidy (LIS)) received financial assistance. Exposed patients were 1:1 PS- or hdPS-matched to unexposed patients. The PS model included 52 predefined covariates; the hdPS model added 400 empirically identified covariates. Hazard ratios for death and any of five cardiovascular outcomes were compared. In sensitivity analyses, we explored residual confounding using only LIS patients in the unexposed group. Results In unadjusted analyses, exposed patients had no greater hazard of death (HR=1.00; 95% CI, 0.84–1.20) or other outcomes. PS- (HR=1.29;0.99–1.66) and hdPS- (HR=1.11;0.86–1.42) matched analyses showed elevated but non-significant hazards of death. In sensitivity analyses, the PS analysis showed a protective effect (HR=0.78;0.61–0.98), while the hdPS analysis (HR=1.06;0.82–1.37) confirmed the main hdPS findings. Conclusion Although the PS-matched analysis suggested elevated though non-significant hazards of death among patients with no financial assistance during the gap, the hdPS analysis produced lower estimates that were stable across sensitivity analyses. PMID:22552984
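
    The propensity-score half of the comparison can be illustrated with a toy sketch on synthetic covariates (not the Medicare claims data; an hdPS analysis would differ mainly in adding hundreds of empirically identified covariates to the PS model before matching):

```python
# Hedged sketch: fit a propensity-score model, then greedily 1:1 match each
# exposed patient to the nearest-PS unexposed patient without replacement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))                         # stand-ins for the covariates
p_exposed = 1.0 / (1.0 + np.exp(-(X[:, 0] - 1.0)))  # exposure depends on X
exposed = rng.binomial(1, p_exposed)

# Propensity score: modelled probability of exposure given the covariates.
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]

treated = np.where(exposed == 1)[0]
controls = list(np.where(exposed == 0)[0])
pairs = []
for t in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))  # nearest-PS control
    pairs.append((t, j))
    controls.remove(j)                                   # 1:1, without replacement

print(len(pairs))
```

    Outcome contrasts (here, hazard ratios) are then estimated within the matched pairs only.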

  17. Atlantic Tropical Cyclogenetic Processes during SOP-3 NAMMA in the GEOS-5 Global Data Assimilation and Forecast System

    NASA Technical Reports Server (NTRS)

    Reale, Oreste; Lau, William K.; Kim, Kyu-Myong; Brin, Eugenia

    2009-01-01

    This article investigates the role of the Saharan Air Layer (SAL) in tropical cyclogenetic processes associated with a non-developing and a developing African easterly wave observed during the Special Observation Period (SOP-3) phase of the 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA). The two waves are chosen because both interact heavily with Saharan air. A global data assimilation and forecast system, the NASA GEOS-5, is run to produce a set of high-quality global analyses, inclusive of all observations used operationally but with denser satellite information. In particular, following previous work by the same authors, the quality-controlled data from the Atmospheric Infrared Sounder (AIRS) used to produce these analyses have better coverage than that adopted by operational centers. From these improved analyses, two sets of 31 5-day high-resolution forecasts, at horizontal resolutions of both half and quarter degrees, are produced. Results show that very steep moisture gradients are associated with the SAL in forecasts and analyses even at great distances from the Sahara. In addition, a thermal dipole (warm above, cool below) is present in the non-developing case. Moderate Resolution Imaging Spectroradiometer (MODIS) observations show that aerosol optical thickness is higher in the non-developing case. Altogether, results suggest that the radiative effect of dust may play some role in producing a thermal structure less favorable to cyclogenesis. Results also indicate that only global horizontal resolutions on the order of 20-30 kilometers can capture the large-scale transport and the fine thermal structure of the SAL, inclusive of the sharp moisture gradients, reproducing the effect of tropical cyclone suppression that has been hypothesized by previous authors from observational and regional modeling perspectives. These effects cannot be fully represented at lower resolutions. A global resolution of a quarter of a degree is a minimum critical threshold for investigating Atlantic tropical cyclogenesis from a global modeling perspective.

  18. Ecological and physiological thermal niches to understand distribution of Chagas disease vectors in Latin America.

    PubMed

    DE LA Vega, G J; Schilman, P E

    2018-03-01

    In order to assess how triatomines (Hemiptera, Reduviidae), Chagas disease vectors, are distributed through Latin America, we analysed the relationship between the ecological niche and the limits of the physiological thermal niche in seven species of triatomines. We combined two methodological approaches: species distribution models, and physiological tolerances. First, we modelled the ecological niche and identified the most important abiotic factor for their distribution. Then, thermal tolerance limits were analysed by measuring maximum and minimum critical temperatures, upper lethal temperature, and 'chill-coma recovery time'. Finally, we used phylogenetic independent contrasts to analyse the link between limiting factors and the thermal tolerance range for the assessment of ecological hypotheses that provide a different outlook for the geo-epidemiology of Chagas disease. In triatomines, the thermo-tolerance range increases with increasing latitude, mainly due to better cold tolerance, suggesting an effect of thermal selection. In turn, physiological analyses show that species reaching the southernmost areas have a higher thermo-tolerance than those with tropical distributions, indicating that thermo-tolerance limits the southern distribution. Understanding the latitudinal range of disease vectors, together with their physiological limits, may prove useful for testing ecological hypotheses and improving the strategies and efficiency of vector control at local and regional levels. © 2017 The Royal Entomological Society.

  19. Individual participant data meta-analyses should not ignore clustering

    PubMed Central

    Abo-Zaid, Ghada; Guo, Boliang; Deeks, Jonathan J.; Debray, Thomas P.A.; Steyerberg, Ewout W.; Moons, Karel G.M.; Riley, Richard David

    2013-01-01

    Objectives Individual participant data (IPD) meta-analyses often analyze their IPD as if coming from a single study. We compare this approach with analyses that rather account for clustering of patients within studies. Study Design and Setting Comparison of effect estimates from logistic regression models in real and simulated examples. Results The estimated prognostic effect of age in patients with traumatic brain injury is similar, regardless of whether clustering is accounted for. However, a family history of thrombophilia is found to be a diagnostic marker of deep vein thrombosis [odds ratio, 1.30; 95% confidence interval (CI): 1.00, 1.70; P = 0.05] when clustering is accounted for but not when it is ignored (odds ratio, 1.06; 95% CI: 0.83, 1.37; P = 0.64). Similarly, the treatment effect of nicotine gum on smoking cessation is severely attenuated when clustering is ignored (odds ratio, 1.40; 95% CI: 1.02, 1.92) rather than accounted for (odds ratio, 1.80; 95% CI: 1.29, 2.52). Simulations show models accounting for clustering perform consistently well, but downwardly biased effect estimates and low coverage can occur when ignoring clustering. Conclusion Researchers must routinely account for clustering in IPD meta-analyses; otherwise, misleading effect estimates and conclusions may arise. PMID:23651765
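
    The cost of ignoring clustering can be seen in a stdlib-only toy example with invented counts: two simulated studies each have a within-study odds ratio of exactly 2.0, yet naively pooling them into a single 2x2 table distorts the estimate, while a stratified Mantel-Haenszel estimator recovers it.

```python
# Hedged sketch (invented counts, not the paper's IPD): pooled vs stratified
# odds ratios across two studies with different baseline risks.
tables = [
    # (events_trt, no_events_trt, events_ctl, no_events_ctl)
    (200, 100, 50, 50),   # study 1: high baseline risk
    (20, 90, 30, 270),    # study 2: low baseline risk, different case mix
]

# Naive analysis: collapse the studies into one 2x2 table, as if one study.
a, b, c, d = (sum(t[i] for t in tables) for i in range(4))
or_pooled = (a * d) / (b * c)

# Mantel-Haenszel: combine stratum-specific contributions instead.
num = sum(ai * di / (ai + bi + ci + di) for ai, bi, ci, di in tables)
den = sum(bi * ci / (ai + bi + ci + di) for ai, bi, ci, di in tables)
or_mh = num / den

print(round(or_pooled, 2), round(or_mh, 2))  # pooled OR drifts; MH OR stays 2.0
```

    Random-effects or study-stratified regression models generalise the same idea to adjusted analyses.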

  20. A non-equilibrium neutral model for analysing cultural change.

    PubMed

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
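
    A minimal sketch of such a non-equilibrium neutral copying process (illustrative parameter values, not the paper's model or the Neolithic pottery data) looks like:

```python
# Hedged sketch: random copying proportional to variant frequency, plus
# innovation, with a population size that changes partway through.
import random

random.seed(42)
pop = list(range(20))          # 20 individuals, initially all variants distinct
next_variant = 20              # label for the next innovation
mu = 0.05                      # per-copy innovation (mutation) rate
sizes = [20] * 50 + [60] * 50  # non-equilibrium: the population grows at t = 50

for n in sizes:
    new_pop = []
    for _ in range(n):
        if random.random() < mu:
            new_pop.append(next_variant)        # introduce a new variant
            next_variant += 1
        else:
            new_pop.append(random.choice(pop))  # copy; proportional to frequency
    pop = new_pop

k = len(set(pop))  # number of distinct variants present at the end
print(len(pop), k)
```

    Comparing the observed variant count against the distribution of `k` over many such runs is the kind of departure-from-neutrality check the abstract describes.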

  1. Occupational value and relationships to meaning and health: elaborations of the ValMO-model.

    PubMed

    Erlandsson, Lena-Karin; Eklund, Mona; Persson, Dennis

    2011-03-01

    This study investigates the theoretical assumptions of the Value and Meaning in Occupations (ValMO) model. The aim was to explore the relationship between occupational value, perceived meaning, and subjective health in a sample of individuals of working age, 50 men and 250 women. The frequency of experienced values in occupations was assessed through the Occupational Value instrument with pre-defined items. Perceived meaning was operationalized and assessed by the Sense of Coherence measure. Subjective health was estimated by two questions from the SF-36 questionnaire. The analyses comprised descriptive statistics, correlations, and logistic regression analyses in which sociodemographic variables were included. The findings showed highly significant relationships between occupational value and perceived meaning: individuals in the high occupational-value group were three times as likely to belong to the high perceived-meaning group, and those who were married or cohabiting were twice as likely to do so. Although perceived meaning was found to be positively associated with subjective health, working full time was the most important factor in explaining subjective health, compared with working less than full time. The results confirm assumptions of the ValMO model and highlight the importance of focusing on occupational value in clinical practice.

  2. Host Resistance, Population Structure and the Long-Term Persistence of Bubonic Plague: Contributions of a Modelling Approach in the Malagasy Focus

    PubMed Central

    Gascuel, Fanny; Choisy, Marc; Duplantier, Jean-Marc; Débarre, Florence; Brouat, Carine

    2013-01-01

    Although bubonic plague is an endemic zoonosis in many countries around the world, the factors responsible for the persistence of this highly virulent disease remain poorly known. Classically, the endemic persistence of plague is suspected to be due to the coexistence of plague resistant and plague susceptible rodents in natural foci, and/or to a metapopulation structure of reservoirs. Here, we test separately the effect of each of these factors on the long-term persistence of plague. We analyse the dynamics and equilibria of a model of plague propagation, consistent with plague ecology in Madagascar, a major focus where the disease has been endemic in the central highlands since the 1920s. By combining deterministic and stochastic analyses of this model, and including sensitivity analyses, we show that (i) endemicity is favoured by intermediate host population sizes, (ii) in large host populations, the presence of resistant rats is sufficient to explain long-term persistence of plague, and (iii) the metapopulation structure of susceptible host populations alone can also account for plague endemicity, thanks to both subdivision and the subsequent reduction in the size of subpopulations, and extinction-recolonization dynamics of the disease. In the light of these results, we suggest scenarios to explain the localized presence of plague in Madagascar. PMID:23675291

  3. Host resistance, population structure and the long-term persistence of bubonic plague: contributions of a modelling approach in the Malagasy focus.

    PubMed

    Gascuel, Fanny; Choisy, Marc; Duplantier, Jean-Marc; Débarre, Florence; Brouat, Carine

    2013-01-01

    Although bubonic plague is an endemic zoonosis in many countries around the world, the factors responsible for the persistence of this highly virulent disease remain poorly known. Classically, the endemic persistence of plague is suspected to be due to the coexistence of plague resistant and plague susceptible rodents in natural foci, and/or to a metapopulation structure of reservoirs. Here, we test separately the effect of each of these factors on the long-term persistence of plague. We analyse the dynamics and equilibria of a model of plague propagation, consistent with plague ecology in Madagascar, a major focus where the disease has been endemic in the central highlands since the 1920s. By combining deterministic and stochastic analyses of this model, and including sensitivity analyses, we show that (i) endemicity is favoured by intermediate host population sizes, (ii) in large host populations, the presence of resistant rats is sufficient to explain long-term persistence of plague, and (iii) the metapopulation structure of susceptible host populations alone can also account for plague endemicity, thanks to both subdivision and the subsequent reduction in the size of subpopulations, and extinction-recolonization dynamics of the disease. In the light of these results, we suggest scenarios to explain the localized presence of plague in Madagascar.

  4. Low Social Status Markers: Do They Predict Depressive Symptoms in Adolescence?

    PubMed

    Jackson, Benita; Goodman, Elizabeth

    2011-07-01

    Some markers of social disadvantage are associated robustly with depressive symptoms among adolescents: female gender and lower socioeconomic status (SES), respectively. Others are associated equivocally, notably Black v. White race/ethnicity. Few studies examine whether markers of social disadvantage by gender, SES, and race/ethnicity jointly predict self-reported depressive symptoms during adolescence; this was our goal. Secondary analyses were conducted on data from a socioeconomically diverse community-based cohort study of non-Hispanic Black and White adolescents (N = 1,263, 50.4% female). Multivariable general linear models tested whether female gender, Black race/ethnicity, and lower SES (assessed by parent education and household income), and their interactions, predicted greater depressive symptoms reported on the Center for Epidemiological Studies-Depression scale. Models adjusted for age and pubertal status. Univariate analyses revealed more depressive symptoms in females, Blacks, and participants with lower SES. Multivariable models showed females across both racial/ethnic groups reported greater depressive symptoms; Blacks demonstrated more depressive symptoms than did Whites, but when SES was included, this association disappeared. Exploratory analyses suggested Blacks gained less mental health benefit from increased SES. However, there were no statistically significant interactions among gender, race/ethnicity, or SES. Taken together, we conclude that complex patterning among low social status domains within gender, race/ethnicity, and SES predicts depressive symptoms among adolescents.

  5. Characterizing Middle Atmospheric Dynamical Variability and its Impact on the Thermosphere/Ionosphere System During Recent Stratospheric Sudden Warmings

    NASA Astrophysics Data System (ADS)

    McCormack, J. P.; Sassi, F.; Hoppel, K.; Ma, J.; Eckermann, S. D.

    2015-12-01

    We investigate the evolution of neutral atmospheric dynamics in the 10-100 km altitude range before, during, and after recent stratospheric sudden warmings (SSWs) using a prototype high-altitude version of the Navy Global Environmental Model (NAVGEM), which combines a 4-dimensional variational (4DVAR) data assimilation system with a 3-time-level semi-Lagrangian semi-implicit global forecast model. In addition to assimilating conventional meteorological observations, NAVGEM also assimilates middle atmospheric temperature and constituent observations from both operational and research satellite platforms to provide global synoptic meteorological analyses of winds, temperatures, ozone, and water vapor from the surface to ~90 km. In this study, NAVGEM analyses are used to diagnose the spatial and temporal evolution of the main dynamical drivers in the mesosphere and lower thermosphere (MLT) before, during, and after specific SSW events during the 2009-2013 period when large disturbances were observed in the thermosphere/ionosphere (TI) region. Preliminary findings show strong modulation of the semidiurnal tide in the MLT during the onset of an SSW. To assess the impact of the neutral atmosphere dynamical variability on the TI system, NAVGEM analyses are used to constrain simulations of select SSW events using the specified dynamics (SD) configuration of the extended Whole Atmosphere Community Climate Model (WACCM-X).

  6. Linkage Analysis of a Model Quantitative Trait in Humans: Finger Ridge Count Shows Significant Multivariate Linkage to 5q14.1

    PubMed Central

    Medland, Sarah E; Loesch, Danuta Z; Mdzewski, Bogdan; Zhu, Gu; Montgomery, Grant W; Martin, Nicholas G

    2007-01-01

    The finger ridge count (a measure of pattern size) is one of the most heritable complex traits studied in humans and has been considered a model human polygenic trait in quantitative genetic analysis. Here, we report the results of the first genome-wide linkage scan for finger ridge count in a sample of 2,114 offspring from 922 nuclear families. Both univariate linkage to the absolute ridge count (a sum of all the ridge counts on all ten fingers), and multivariate linkage analyses of the counts on individual fingers, were conducted. The multivariate analyses yielded significant linkage to 5q14.1 (Logarithm of odds [LOD] = 3.34, pointwise empirical p-value = 0.00025) that was predominantly driven by linkage to the ring, index, and middle fingers. The strongest univariate linkage was to 1q42.2 (LOD = 2.04, pointwise p-value = 0.002, genome-wide p-value = 0.29). In summary, the combination of univariate and multivariate results was more informative than simple univariate analyses alone. Patterns of quantitative trait loci factor loadings consistent with developmental fields were observed, and the simple pleiotropic model underlying the absolute ridge count was not sufficient to characterize the interrelationships between the ridge counts of individual fingers. PMID:17907812

  7. The Use of Linear Instrumental Variables Methods in Health Services Research and Health Economics: A Cautionary Note

    PubMed Central

    Terza, Joseph V; Bradford, W David; Dismuke, Clara E

    2008-01-01

    Objective To investigate potential bias in the use of the conventional linear instrumental variables (IV) method for the estimation of causal effects in inherently nonlinear regression settings. Data Sources Smoking Supplement to the 1979 National Health Interview Survey, National Longitudinal Alcohol Epidemiologic Survey, and simulated data. Study Design Potential bias from the use of the linear IV method in nonlinear models is assessed via simulation studies and real-world data analyses in two commonly encountered regression settings: (1) models with a nonnegative outcome (e.g., a count) and a continuous endogenous regressor; and (2) models with a binary outcome and a binary endogenous regressor. Principal Findings The simulation analyses show that substantial bias in the estimation of causal effects can result from applying the conventional IV method in inherently nonlinear regression settings. Moreover, the bias is not attenuated as the sample size increases. This point is further illustrated in the survey data analyses, in which IV-based estimates of the relevant causal effects diverge substantially from those obtained with appropriate nonlinear estimation methods. Conclusions We offer this research as a cautionary note to those who would opt for the use of linear specifications in inherently nonlinear settings involving endogeneity. PMID:18546544

  8. Developmental Trajectories of Motivation in Physical Education: Course, Demographic Differences, and Antecedents

    ERIC Educational Resources Information Center

    Ntoumanis, Nikos; Barkoukis, Vassilis; Thogersen-Ntoumani, Cecilie

    2009-01-01

    This study investigated changes in student motivation to participate in physical education and some determinants of these changes over a period of 3 years. Measures were taken twice a year, from age 13 until age 15, from a sample of Greek junior high school students. Multilevel modeling analyses showed significant decreases in task-involving…

  9. The Curriculum Design and Development in MOOCs Environment

    ERIC Educational Resources Information Center

    Li, Fei; Du, Jing; Li, Bin

    2014-01-01

    The paper selects over 20 online courses and analyses the subjects, organization, the way to show the content of the courses, the use of media, and design of the teaching in the case study of Chinese popular MOOC platform. On this basis, the paper summarizes the principles of curriculum design and design models in MOOC environment, such as…

  10. Students' Silent Messages: Can Teacher Verbal and Nonverbal Immediacy Moderate Student Use of Text Messaging in Class?

    ERIC Educational Resources Information Center

    Wei, Fang-Yi Flora; Wang, Y. Ken

    2010-01-01

    This study investigated the relationship between teacher immediacy and college students' use of text messaging in class. Using a cross-sectional survey sample (N=228), structural equation model analyses showed that students' learning motivation does not mediate the potential effects of teacher immediacy and students' use of text messaging in…

  11. A Model for Teaching Literary Analysis Using Systemic Functional Grammar

    ERIC Educational Resources Information Center

    McCrocklin, Shannon; Slater, Tammy

    2017-01-01

    This article introduces an approach that middle-school teachers can follow to help their students carry out linguistic-based literary analyses. As an example, it draws on Systemic Functional Grammar (SFG) to show how J.K. Rowling used language to characterize Hermione as an intelligent female in "Harry Potter and the Deathly Hallows."…

  12. Orderly Change in a Stable World: The Antisocial Trait as a Chimera.

    ERIC Educational Resources Information Center

    Patterson, Gerald R.

    1993-01-01

    Used longitudinal data from Oregon Youth Study to examine quantitative and qualitative change. Used latent growth models to demonstrate changes in form and systematic changes in mean level for subgroup of boys. Factor analyses carried out at three ages showed that, over time, changes in form and addition of new problems were quantifiable and thus…

  13. Parenting Styles and Bullying at School: The Mediating Role of Locus of Control

    ERIC Educational Resources Information Center

    Georgiou, Stelios N.; Ioannou, Myria; Stavrinides, Panayiotis

    2017-01-01

    The current study examined the mediating role of children's locus of control in the relation between parenting styles and bully-victim experiences at school. Participants were 447 students aged 10 and 11 years old from 13 different elementary, urban, and rural schools in Cyprus. Analyses using structural equation modeling showed that parenting…

  14. Changes in Health Outcomes among Older Husband Caregivers: A One-Year Longitudinal Study

    ERIC Educational Resources Information Center

    Ducharme, Francine; Levesque, Louise; Zarit, Steven H.; Lachance, Lise; Giroux, Francine

    2007-01-01

    This one-year longitudinal study carried out on a sample of 232 older husband caregivers sought to describe changes in psychological distress and self-perceived health, and to examine relationships between factors drawn primarily from Pearlin's model of caregiving and changes in these two health outcomes. Prediction analyses show that nearly two…

  15. Hydrothermal contamination of public supply wells in Napa and Sonoma Valleys, California

    USGS Publications Warehouse

    Forrest, Matthew J.; Kulongoski, Justin T.; Edwards, Matthew S.; Farrar, Christopher D.; Belitz, Kenneth; Norris, Richard D.

    2013-01-01

    Groundwater chemistry and isotope data from 44 public supply wells in the Napa and Sonoma Valleys, California were determined to investigate mixing of relatively shallow groundwater with deeper hydrothermal fluids. Multivariate analyses including Cluster Analyses, Multidimensional Scaling (MDS), Principal Components Analyses (PCA), Analysis of Similarities (ANOSIM), and Similarity Percentage Analyses (SIMPER) were used to elucidate constituent distribution patterns, determine which constituents are significantly associated with these hydrothermal systems, and investigate hydrothermal contamination of local groundwater used for drinking water. Multivariate statistical analyses were essential to this study because traditional methods, such as mixing tests involving single species (e.g. Cl or SiO2) were incapable of quantifying component proportions due to mixing of multiple water types. Based on these analyses, water samples collected from the wells were broadly classified as fresh groundwater, saline waters, hydrothermal fluids, or mixed hydrothermal fluids/meteoric water wells. The Multivariate Mixing and Mass-balance (M3) model was applied in order to determine the proportion of hydrothermal fluids, saline water, and fresh groundwater in each sample. Major ions, isotopes, and physical parameters of the waters were used to characterize the hydrothermal fluids as Na–Cl type, with significant enrichment in the trace elements As, B, F and Li. Five of the wells from this study were classified as hydrothermal, 28 as fresh groundwater, two as saline water, and nine as mixed hydrothermal fluids/meteoric water wells. The M3 mixing-model results indicated that the nine mixed wells contained between 14% and 30% hydrothermal fluids. Further, the chemical analyses show that several of these mixed-water wells have concentrations of As, F and B that exceed drinking-water standards or notification levels due to contamination by hydrothermal fluids.
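
    A numerical sketch of a three-end-member mixing calculation in the spirit of the M3 approach (all concentrations below are invented, not the Napa/Sonoma data): conservative-tracer mass balance, augmented with the closure condition that the fractions sum to one, is solved by least squares.

```python
# Hedged sketch: recover the fractions of fresh, saline and hydrothermal
# water in a mixed sample from conservative tracers.
import numpy as np

# Rows: tracers (e.g. Cl, B, Li in mg/L); columns: end-member compositions.
end_members = np.array([
    [10.0, 19000.0, 900.0],   # Cl
    [0.05, 4.5, 8.0],         # B
    [0.01, 0.2, 5.0],         # Li
])
fractions_true = np.array([0.70, 0.10, 0.20])
sample = end_members @ fractions_true      # synthetic mixed-water sample

# Augment with the closure condition f_fresh + f_saline + f_hydro = 1.
A = np.vstack([end_members, np.ones(3)])
b = np.append(sample, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(fractions, 3))
```

    Using several tracers at once is what lets a multivariate approach succeed where a single-species mixing test cannot separate three or more water types.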

  16. Computational Results for the KTH-NASA Wind-Tunnel Model Used for Acquisition of Transonic Nonlinear Aeroelastic Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Chwalowski, Pawel; Wieseman, Carol D.; Eller, David; Ringertz, Ulf

    2017-01-01

    A status report is provided on the collaboration between the Royal Institute of Technology (KTH) in Sweden and the NASA Langley Research Center regarding the aeroelastic analyses of a full-span fighter configuration wind-tunnel model. This wind-tunnel model was tested in the Transonic Dynamics Tunnel (TDT) in the summer of 2016. Large amounts of data were acquired including steady/unsteady pressures, accelerations, strains, and measured dynamic deformations. The aeroelastic analyses presented include linear aeroelastic analyses, CFD steady analyses, and analyses using CFD-based reduced-order models (ROMs).

  17. Investigation of voltage swell mitigation using STATCOM

    NASA Astrophysics Data System (ADS)

    Razak, N. A. Abdul; Jaafar, I. S.

    2013-06-01

    STATCOM is one of the best applications of a self-commutated FACTS device for controlling power quality problems in the distribution system. This project proposed a STATCOM model with a voltage control mechanism. A DQ transformation was implemented in the controller to achieve better estimation. The model was then used to investigate and analyse the voltage swell problem in the distribution system. The simulation results show that voltage swells can contaminate the distribution network with unwanted harmonic frequencies, and negative-sequence components have harmful effects on the network. Simulations of the system connected with the proposed STATCOM model show that it can mitigate these problems efficiently.

  18. Bayesian techniques for analyzing group differences in the Iowa Gambling Task: A case study of intuitive and deliberate decision-makers.

    PubMed

    Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D

    2018-06-01

    The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.

  19. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    PubMed

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters had a strong effect on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.
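
    The core loop of a quantitative-resistance simulation can be sketched with the breeder's equation: each season the herbicide truncates the trait distribution, and the offspring mean shifts by the heritability times the selection differential. Everything here (the threshold kill rule, all parameter values) is an illustrative toy, not the published framework:

```python
import numpy as np

def simulate_resistance(generations=10, pop_size=10_000, h2=0.5,
                        trait_sd=1.0, kill_threshold=0.5, seed=1):
    """Toy quantitative-resistance simulation: resistance is a normally
    distributed trait; each generation the herbicide kills individuals
    below a trait threshold, and the offspring mean moves toward the
    survivors' mean according to the heritability h2 (breeder's
    equation). Returns the trait mean per generation."""
    rng = np.random.default_rng(seed)
    mean = 0.0
    means = [mean]
    for _ in range(generations):
        traits = rng.normal(mean, trait_sd, pop_size)
        survivors = traits[traits > kill_threshold]   # herbicide selection
        if survivors.size == 0:
            break
        # response to selection = h2 * (selected mean - population mean)
        mean = mean + h2 * (survivors.mean() - mean)
        means.append(mean)
    return means
```

    The per-generation gain shrinks as the population mean climbs past the kill threshold, mirroring the declining selection differential of the truncation rule.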

  20. Multiple Imputation of Cognitive Performance as a Repeatedly Measured Outcome

    PubMed Central

    Rawlings, Andreea M.; Sang, Yingying; Sharrett, A. Richey; Coresh, Josef; Griswold, Michael; Kucharska-Newton, Anna M.; Palta, Priya; Wruck, Lisa M.; Gross, Alden L.; Deal, Jennifer A.; Power, Melinda C.; Bandeen-Roche, Karen

    2016-01-01

    Background Longitudinal studies of cognitive performance are sensitive to dropout, as participants experiencing cognitive deficits are less likely to attend study visits, which may bias estimated associations between exposures of interest and cognitive decline. Multiple imputation is a powerful tool for handling missing data; however, its use for missing cognitive outcome measures in longitudinal analyses remains limited. Methods We use multiple imputation by chained equations (MICE) to impute cognitive performance scores of participants who did not attend the 2011-2013 exam of the Atherosclerosis Risk in Communities Study. We examined the validity of imputed scores using observed and simulated data under varying assumptions. We examined differences in the estimated association between diabetes at baseline and 20-year cognitive decline with and without imputed values. Lastly, we discuss how different analytic methods (mixed models and models fit using generalized estimating equations) and the choice of whom to impute for yield different estimands. Results Validation using observed data showed MICE produced unbiased imputations. Simulations showed a substantial reduction in the bias of the 20-year association between diabetes and cognitive decline comparing MICE (3-4% bias) to analyses of available data only (16-23% bias) in a construct where missingness was strongly informative but realistic. Associations between diabetes and 20-year cognitive decline were substantially stronger with MICE than in available-case analyses. Conclusions Our study suggests that when informative data are available for non-examined participants, MICE can be an effective tool for imputing cognitive performance and improving assessment of cognitive decline, though careful thought should be given to the target imputation population and the analytic model chosen, as they may yield different estimands. PMID:27619926
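
    The chained-equations idea behind MICE — iteratively regress each incomplete variable on the others and draw imputations from the resulting predictive distribution — can be sketched for a two-column case. This is a toy stand-in for the idea, not the MICE software used in the study:

```python
import numpy as np

def mice_two_vars(X, n_iter=10, seed=0):
    """Minimal chained-equations imputation for a 2-column array with
    missing values (NaN): each column is regressed on the other, and
    missing entries are redrawn from the predictive distribution on
    every sweep. Illustrative sketch of the MICE idea only."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    miss = np.isnan(X)
    for j in range(2):                    # initialize with column means
        X[miss[:, j], j] = np.nanmean(X[:, j])
    for _ in range(n_iter):
        for j in range(2):
            other = 1 - j
            obs = ~miss[:, j]
            A = np.column_stack([np.ones(len(X)), X[:, other]])
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            resid_sd = np.std(X[obs, j] - A[obs] @ beta)
            # draw imputations rather than plugging in the point prediction
            X[miss[:, j], j] = (A[miss[:, j]] @ beta
                                + rng.normal(0, resid_sd, miss[:, j].sum()))
    return X
```

    Drawing from the predictive distribution (rather than imputing the regression mean) is what preserves between-imputation variability when the procedure is repeated to create multiple completed data sets.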

  1. Cost-Effectiveness of Treating Hepatitis C with Sofosbuvir/Ledipasvir in Germany.

    PubMed

    Stahmeyer, Jona T; Rossol, Siegbert; Liersch, Sebastian; Guerra, Ines; Krauth, Christian

    2017-01-01

    Infections with the hepatitis C virus (HCV) are a global public health problem. Long-term consequences are the development of liver cirrhosis and hepatocellular carcinoma. Newly introduced direct-acting antivirals, especially interferon-free regimens, have improved rates of sustained virologic response above 90% in most patient groups and allow treating patients who were ineligible for treatment in the past. These new regimens have replaced former treatments and are recommended by current guidelines. However, there is an ongoing discussion on high pharmaceutical prices. Our aim was to assess the long-term cost-effectiveness of treating hepatitis C genotype 1 patients with sofosbuvir/ledipasvir (SOF/LDV) in Germany. We used a Markov cohort model to simulate disease progression and assess cost-effectiveness. The model calculates lifetime costs and outcomes (quality-adjusted life years, QALYs) of SOF/LDV and other strategies. Patients were stratified by treatment status (treatment-naive and treatment-experienced) and absence/presence of cirrhosis. Different treatment strategies were compared to the prior standard of care. Sensitivity analyses were performed to evaluate model robustness. Base-case analysis results show that in treatment-naive non-cirrhotic patients treatment with SOF/LDV dominates the prior standard of care (it is more effective and less costly); in cirrhotic patients an incremental cost-effectiveness ratio (ICER) of 3,383 €/QALY was estimated. In treatment-experienced patients, ICERs were 26,426 €/QALY and 1,397 €/QALY for non-cirrhotic and cirrhotic patients, respectively. Robustness of the results was confirmed in sensitivity analyses. Our analysis shows that treatment with SOF/LDV is cost-effective compared to the prior standard of care in all patient groups, considering international cost-per-QALY thresholds.
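
    A Markov cohort model of this kind advances a state-occupancy vector through a transition matrix cycle by cycle, accumulating discounted costs and QALYs; an ICER then compares two strategies. A minimal sketch with entirely hypothetical states, probabilities, and prices — not the published German model:

```python
import numpy as np

def markov_cohort(P, costs, utilities, start, cycles=40, discount=0.03):
    """Minimal Markov cohort model: P is a row-stochastic transition
    matrix over health states; costs/utilities are per-cycle values per
    state. Returns discounted lifetime cost and QALYs for the cohort."""
    state = np.asarray(start, dtype=float)
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = (1.0 + discount) ** -t            # discount factor for cycle t
        total_cost += df * state @ costs
        total_qaly += df * state @ utilities
        state = state @ P                      # advance the cohort one cycle
    return total_cost, total_qaly

# hypothetical states: SVR (cured), chronic HCV, cirrhosis, dead
P = np.array([[1.00, 0.00, 0.00, 0.00],
              [0.00, 0.93, 0.05, 0.02],
              [0.00, 0.00, 0.95, 0.05],
              [0.00, 0.00, 0.00, 1.00]])
costs = np.array([200.0, 1500.0, 8000.0, 0.0])    # per cycle, per state
utilities = np.array([0.95, 0.75, 0.55, 0.0])     # per cycle, per state

# treated cohort: assumed 95% SVR plus an upfront drug cost; untreated: chronic
cost_tx, qaly_tx = markov_cohort(P, costs, utilities, [0.95, 0.05, 0, 0])
cost_tx += 50_000.0                               # hypothetical drug price
cost_no, qaly_no = markov_cohort(P, costs, utilities, [0, 1.0, 0, 0])
icer = (cost_tx - cost_no) / (qaly_tx - qaly_no)  # cost per QALY gained
```

    "Dominance" in the abstract corresponds to the case where the new strategy's cost difference is negative while its QALY difference is positive, so no ICER needs to be quoted.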

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dionne, B.; Tzanos, C. P.

    To support the safety analyses required for the conversion of the Belgian Reactor 2 (BR2) from highly-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, the simulation of a number of loss-of-flow tests, with or without loss of pressure, has been undertaken. These tests were performed at BR2 in 1963 and used instrumented fuel assemblies (FAs) with thermocouples (TC) embedded in the cladding as well as probes to measure the FAs' power on the basis of their coolant temperature rise. The availability of experimental data for these tests offers an opportunity to better establish the credibility of the RELAP5-3D model and methodology used in the conversion analysis. Preliminary analyses showed that the conservative power distributions used historically in the BR2 RELAP model resulted in a significant overestimation of the peak cladding temperature during the transient. Therefore, it was concluded that better estimates of the steady-state and decay power distributions were needed to accurately predict the cladding temperatures measured during the tests and establish the credibility of the RELAP model and methodology. The new approach ('best estimate' methodology) uses the MCNP5, ORIGEN-2 and BERYL codes to obtain steady-state and decay power distributions for the BR2 core during the tests A/400/1, C/600/3 and F/400/1. This methodology can easily be extended to simulate any BR2 core configuration. Comparisons with measured peak cladding temperatures showed a much better agreement when power distributions obtained with the new methodology are used.

  3. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    PubMed

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected from 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsolateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. NAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventromedial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the NAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development. Copyright © 2015. Published by Elsevier Inc.

  4. Personality compensates for impaired quality of life and social functioning in patients with psychotic disorders who experienced traumatic events.

    PubMed

    Boyette, Lindy-Lou; van Dam, Daniëlla; Meijer, Carin; Velthorst, Eva; Cahn, Wiepke; de Haan, Lieuwe; Kahn, René; de Haan, Lieuwe; van Os, Jim; Wiersma, Durk; Bruggeman, Richard; Cahn, Wiepke; Meijer, Carin; Myin-Germeys, Inez

    2014-11-01

    Patients with psychotic disorders who experienced childhood trauma show more social dysfunction than patients without traumatic experiences. However, this may not hold for all patients with traumatic experiences. Little is known about the potential compensating role of Five-Factor Model personality traits within this group, despite their strong predictive value for social functioning and well-being in the general population. Our sample consisted of 195 patients with psychotic disorders (74% diagnosed with schizophrenia) and 132 controls. Cluster analyses were conducted to identify and validate distinct personality profiles. General linear model analyses were conducted to examine whether patients with different profiles differed in social functioning and quality of life (QoL), while controlling for possible confounders. Mediation models were tested to assess potential causal links. In general, patients with higher levels of self-reported traumatic experiences (PT+) showed lower QoL and more social withdrawal compared with patients with lower traumatic experiences (PT-). Two clusters reflecting personality profiles were identified. PT+ with the first profile (lower neuroticism and higher extraversion, openness, agreeableness, and conscientiousness) presented higher levels of QoL and better social functioning in several areas, including less withdrawal, compared with both PT+ and PT- with the second profile. PT+ and PT- with the first personality profile did not differ in QoL and social functioning. Mediation analyses suggested that personality traits mediate the relation between traumatic experiences and QoL and social withdrawal. Our findings indicate that personality may "buffer" the impact of childhood traumatic experiences on functional outcome in patients with psychotic disorders. © The Author 2014. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. 
For permissions, please email: journals.permissions@oup.com.

  5. Molecular markers of carcinogenesis for risk stratification of individuals with colorectal polyps: a case-control study.

    PubMed

    Gupta, Samir; Sun, Han; Yi, Sang; Storm, Joy; Xiao, Guanghua; Balasubramanian, Bijal A; Zhang, Song; Ashfaq, Raheela; Rockey, Don C

    2014-10-01

    Risk stratification using number, size, and histology of colorectal adenomas is currently suboptimal for identifying patients at increased risk for future colorectal cancer. We hypothesized that molecular markers of carcinogenesis in adenomas, measured via immunohistochemistry, may help identify high-risk patients. To test this hypothesis, we conducted a retrospective, 1:1 matched case-control study (n = 216; 46% female) in which cases were patients with colorectal cancer and a synchronous adenoma and controls were patients with adenoma but no colorectal cancer at baseline or within 5 years of follow-up. In phase I of the analyses, we compared expression of molecular markers of carcinogenesis in case and control adenomas, blind to case status. In phase II of the analyses, patients were randomly divided into independent training and validation groups to develop a model for predicting case status. We found that seven markers [p53, p21, Cox-2, β-catenin (BCAT), DNA-dependent protein kinase (DNApkcs), survivin, and O6-methylguanine-DNA methyltransferase (MGMT)] were significantly associated with case status on unadjusted analyses, as well as analyses adjusted for age and advanced adenoma status (P < 0.01 for at least one marker component). When applied to the validation set, a predictive model using these seven markers showed substantial accuracy for identifying cases [area under the receiver operating characteristic curve (AUC), 0.83; 95% confidence interval (CI), 0.74-0.92]. A parsimonious model using three markers performed similarly to the seven-marker model (AUC, 0.84). In summary, we found that molecular markers of carcinogenesis distinguished the adenomas of patients with colorectal cancer from those of patients without. Furthermore, we speculate that prospective studies using molecular markers to identify individuals with polyps at risk for future neoplasia are warranted. ©2014 American Association for Cancer Research.

  6. Performance and Stability Analyses of Rocket Combustion Devices Using Liquid Oxygen/Liquid Methane Propellants

    NASA Technical Reports Server (NTRS)

    Hulka, James R.; Jones, G. W.

    2010-01-01

    Liquid rocket engines using oxygen and methane propellants are being considered by the National Aeronautics and Space Administration (NASA) for in-space vehicles. This propellant combination has not been previously used in flight-qualified engine systems, so limited test data and analysis results are available at this stage of early development. NASA has funded several hardware-oriented programs with oxygen and methane propellants over the past several years with the Propulsion and Cryogenic Advanced Development (PCAD) project, under the Exploration Technology Development Program. As part of this effort, NASA Marshall Space Flight Center has conducted combustion, performance, and combustion stability analyses of several of the configurations on these programs. This paper summarizes these analyses. Test and analysis results of impinging and coaxial element injectors using liquid oxygen and liquid methane propellants are included. Several cases with gaseous methane are included for reference. Several different thrust chamber configurations have been modeled, including thrust chambers with multi-element like-on-like and swirl coax element injectors tested at NASA MSFC, and a unielement chamber with shear and swirl coax injectors tested at The Pennsylvania State University. Configurations were modeled with two one-dimensional liquid rocket combustion analysis codes, the Rocket Combustor Interaction Design and Analysis (ROCCID), and the Coaxial Injector Combustion Model (CICM). Significant effort was applied to show how these codes can be used to model combustion and performance with oxygen/methane propellants a priori, and what anchoring or calibrating features need to be applied or developed in the future. This paper describes the test hardware configurations, presents the results of all the analyses, and compares the results from the two analytical methods.

  7. World Meteorological Organization's model simulations of the radionuclide dispersion and deposition from the Fukushima Daiichi nuclear power plant accident.

    PubMed

    Draxler, Roland; Arnold, Dèlia; Chino, Masamichi; Galmarini, Stefano; Hort, Matthew; Jones, Andrew; Leadbetter, Susan; Malo, Alain; Maurer, Christian; Rolph, Glenn; Saito, Kazuo; Servranckx, René; Shimbori, Toshiki; Solazzo, Efisio; Wotawa, Gerhard

    2015-01-01

    Five different atmospheric transport and dispersion models' (ATDMs) deposition and air concentration results for atmospheric releases from the Fukushima Daiichi nuclear power plant accident were evaluated over Japan using regional (137)Cs deposition measurements and (137)Cs and (131)I air concentration time series at one location about 110 km from the plant. Some of the ATDMs used the same meteorological data and others different data, consistent with their normal operating practices. There were four global meteorological analysis data sets and two regional high-resolution analyses available. Not all of the ATDMs were able to use all of the meteorological data combinations. The ATDMs were configured as identically as possible with respect to the release duration, release height, concentration grid size, and averaging time. However, each ATDM retained its unique treatment of the vertical velocity field and of wet and dry deposition, one of the largest uncertainties in these calculations. There were 18 ATDM-meteorology combinations available for evaluation. The deposition results showed that, even when using the same meteorological analysis, each ATDM can produce quite different deposition patterns. The better calculations in terms of both deposition and air concentration were associated with the smoother ATDM deposition patterns. The best model with respect to deposition was not always the best with respect to air concentrations. The use of high-resolution mesoscale analyses improved ATDM performance; however, high-resolution precipitation analyses did not improve ATDM predictions. Although some ATDMs could be identified as better performers for either deposition or air concentration calculations, overall, the ensemble mean of a subset of better-performing members provided more consistent results for both types of calculations. Published by Elsevier Ltd.

  8. Measuring trends of outpatient antibiotic use in Europe: jointly modelling longitudinal data in defined daily doses and packages.

    PubMed

    Bruyndonckx, Robin; Hens, Niel; Aerts, Marc; Goossens, Herman; Molenberghs, Geert; Coenen, Samuel

    2014-07-01

    To complement analyses of the linear trend and seasonal fluctuation of European outpatient antibiotic use expressed in defined daily doses (DDD) by analyses of data in packages, to assess the agreement between both measures and to study changes in the number of DDD per package over time. Data on outpatient antibiotic use, aggregated at the level of the active substance (WHO version 2011) were collected from 2000 to 2007 for 31 countries and expressed in DDD and packages per 1000 inhabitants per day (DID and PID, respectively). Data expressed in DID and PID were analysed separately using non-linear mixed models while the agreement between these measurements was analysed through a joint non-linear mixed model. The change in DDD per package over time was studied with a linear mixed model. Total outpatient antibiotic and penicillin use in Europe and their seasonal fluctuation significantly increased in DID, but not in PID. The use of combinations of penicillins significantly increased in DID and in PID. Broad-spectrum penicillin use did not increase significantly in DID and decreased significantly in PID. For all but one subgroup, country-specific deviations moved in the same direction whether measured in DID or PID. The correlations are not perfect. The DDD per package increased significantly over time for all but one subgroup. Outpatient antibiotic use in Europe shows contrasting trends, depending on whether DID or PID is used as the measure. The increase of the DDD per package corroborates the recommendation to adopt PID to monitor outpatient antibiotic use in Europe. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Supersymmetry without prejudice at the LHC

    NASA Astrophysics Data System (ADS)

    Conley, John A.; Gainer, James S.; Hewett, JoAnne L.; Le, My Phuong; Rizzo, Thomas G.

    2011-07-01

    The discovery and exploration of Supersymmetry in a model-independent fashion will be a daunting task due to the large number of soft-breaking parameters in the MSSM. In this paper, we explore the capability of the ATLAS detector at the LHC (√s = 14 TeV, 1 fb⁻¹) to find SUSY within the 19-dimensional pMSSM subspace of the MSSM using their standard missing transverse energy and long-lived particle searches that were essentially designed for mSUGRA. To this end, we employ a set of ~71k previously generated model points in the 19-dimensional parameter space that satisfy all of the existing experimental and theoretical constraints. Employing ATLAS-generated SM backgrounds and following their approach in each of 11 missing energy analyses as closely as possible, we explore all of these 71k model points for a possible SUSY signal. To test our analysis procedure, we first verify that we faithfully reproduce the published ATLAS results for the signal distributions for their benchmark mSUGRA model points. We then show that, requiring all sparticle masses to lie below 1(3) TeV, almost all (two-thirds) of the pMSSM model points are discovered with a significance S > 5 in at least one of these 11 analyses, assuming a 50% systematic error on the SM background. If this systematic error can be reduced to only 20%, then this parameter-space coverage is increased. These results indicate that the ATLAS SUSY search strategy is robust under a broad class of Supersymmetric models. We then explore in detail the properties of the kinematically accessible model points which remain unobservable by these search analyses in order to ascertain problematic cases which may arise in general SUSY searches.
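
    The role of the background systematic can be seen in a simple counting-experiment significance, where a fractional uncertainty on the background is folded into the denominator in quadrature. This is a rough textbook approximation for illustration, not the ATLAS prescription:

```python
import numpy as np

def significance(s, b, sys_frac):
    """Counting-experiment significance for s signal events over b
    background events, with a fractional systematic uncertainty on the
    background folded in quadrature (illustrative approximation)."""
    return s / np.sqrt(b + (sys_frac * b) ** 2)
```

    With s = 50 and b = 100, the significance falls from 5.0 with no systematic to about 2.2 at 20% and below 1 at 50%, which is why shrinking the background systematic directly increases the fraction of model points passing an S > 5 requirement.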

  10. Regional coseismic landslide hazard assessment without historical landslide inventories: A new approach

    NASA Astrophysics Data System (ADS)

    Kritikos, Theodosios; Robinson, Tom R.; Davies, Tim R. H.

    2015-04-01

    Currently, regional coseismic landslide hazard analyses require comprehensive historical landslide inventories as well as detailed geotechnical data. Consequently, such analyses have not been possible where these data are not available. A new approach is proposed herein to assess coseismic landslide hazard at regional scale for specific earthquake scenarios in areas without historical landslide inventories. The proposed model employs fuzzy logic and geographic information systems to establish relationships between causative factors and coseismic slope failures in regions with well-documented and substantially complete coseismic landslide inventories. These relationships are then utilized to estimate the relative probability of landslide occurrence in regions with neither historical landslide inventories nor detailed geotechnical data. Statistical analyses of inventories from the 1994 Northridge and 2008 Wenchuan earthquakes reveal that shaking intensity, topography, and distance from active faults and streams are the main controls on the spatial distribution of coseismic landslides. Average fuzzy memberships for each factor are developed and aggregated to model the relative coseismic landslide hazard for both earthquakes. The predictive capabilities of the models are assessed and show good-to-excellent model performance for both events. These memberships are then applied to the 1999 Chi-Chi earthquake, using only a digital elevation model, active fault map, and isoseismal data, replicating prediction of a future event in a region lacking historic inventories and/or geotechnical data. This similarly results in excellent model performance, demonstrating the model's predictive potential and confirming it can be meaningfully applied in regions where previous methods could not. 
For such regions, this method may enable a greater ability to analyze coseismic landslide hazard from specific earthquake scenarios, allowing for mitigation measures and emergency response plans to be better informed of earthquake-related hazards.
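
    The fuzzy-logic aggregation step can be sketched with ramp membership functions combined by the fuzzy gamma operator, a common compromise between the fuzzy algebraic product and the fuzzy algebraic sum. The ramp ranges and gamma value here are illustrative placeholders, not the paper's calibrated memberships:

```python
import numpy as np

def ramp(x, lo, hi):
    """Linear fuzzy membership: 0 at or below lo, 1 at or above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma aggregation of factor memberships: a compromise
    between the fuzzy algebraic product (pessimistic) and the fuzzy
    algebraic sum (optimistic), controlled by gamma in [0, 1]."""
    m = np.asarray(memberships, dtype=float)
    prod = np.prod(m, axis=0)
    alg_sum = 1.0 - np.prod(1.0 - m, axis=0)
    return prod ** (1.0 - gamma) * alg_sum ** gamma
```

    Applied per grid cell to memberships for, say, shaking intensity, slope, and distance to faults and streams, the aggregate gives a relative coseismic landslide likelihood between the product and the sum of the individual factors.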

  11. Modeling the Self-organized Critical Behavior of Earth's Plasma Sheet Reconnection Dynamics

    NASA Technical Reports Server (NTRS)

    Klimas, Alexander J.

    2006-01-01

    Analyses of Polar UVI auroral image data show that bright night-side high-latitude UV emissions exhibit so many of the key properties of systems in self-organized criticality that an alternate interpretation has become virtually impossible. These analyses will be reviewed. It is now necessary to find and model the source of this behavior. We note that the most common models of self-organized criticality are numerical sandpiles. These are, at root, models that govern the transport of some quantity from a region where it is loaded to another where it is unloaded. Transport is enabled by the excitation of a local threshold instability; it is intermittent and bursty, and it exhibits a number of scale-free statistical properties. Searching for a system in the magnetosphere that is analogous and that, in addition, is known to produce auroral signatures, we focus on the reconnection dynamics of the magnetotail plasma sheet. In our previous work, a driven reconnection model has been constructed and has been under study. The transport of electromagnetic (primarily magnetic) energy carried by the Poynting flux into the reconnection region of the model has been examined. All of the analysis techniques (and more) that have been applied to the auroral image data have also been applied to this Poynting flux. New results will be presented showing that this model also exhibits so many of the key properties of systems in self-organized criticality that an alternate interpretation is implausible. A strong correlation between these key properties of the model and those of the auroral UV emissions will be demonstrated. We suggest that, in general, the driven reconnection model is an important step toward a realistic plasma physical model of self-organized criticality and we conclude, more specifically, that it is also a step in the right direction toward modeling the multiscale reconnection dynamics of the magnetotail.
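
    The canonical numerical sandpile referred to above is the Bak-Tang-Wiesenfeld model, which produces exactly the intermittent, bursty, scale-free avalanching described: grains accumulate until a local threshold instability topples, possibly triggering neighbours. A minimal implementation (grid size and grain count are arbitrary choices):

```python
import numpy as np

def btw_sandpile(n=16, grains=3000, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile on an n x n grid with open
    boundaries: drop grains at random sites; any site reaching height 4
    topples, sending one grain to each neighbour (grains crossing the
    boundary are lost). Returns the avalanche size per dropped grain."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, n, 2)
        z[i, j] += 1
        size = 0
        while (z >= 4).any():
            for a, b in np.argwhere(z >= 4):   # topple all unstable sites
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < n and 0 <= nb < n:
                        z[na, nb] += 1
        sizes.append(size)
    return sizes
```

    Once the grid self-organizes to its critical state, the avalanche-size distribution is broad and approximately power-law: most drops cause nothing, while occasional drops trigger system-spanning cascades, the same loading-unloading phenomenology the abstract attributes to the driven reconnection model.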

  12. Potential fitting biases resulting from grouping data into variable width bins

    NASA Astrophysics Data System (ADS)

    Towers, S.

    2014-07-01

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experimenters have made good-faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time-consuming to carry out, so if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to accept the model hypothesis too often when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes be constant, but we show that fitting to data grouped into variable-width bins is particularly prone to producing biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible.
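
    The note's recommended cross-check can be applied mechanically: fit once with an unbinned likelihood and once with equal-width bins, and verify that the estimates agree. A toy exponential-lifetime example, where the sample size, fit range, and parameter grid are arbitrary choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
tau_true = 2.0
x = rng.exponential(tau_true, 50_000)

# unbinned maximum likelihood: for an exponential, the MLE of the mean
# lifetime is simply the sample mean -- no binning choices to make
tau_unbinned = x.mean()

# equal-width binned Poisson likelihood on [0, 10], maximized on a grid
edges = np.linspace(0.0, 10.0, 51)
counts, _ = np.histogram(x, edges)
n_in = counts.sum()

def binned_nll(tau):
    """Negative Poisson log-likelihood of the binned counts, with the
    exponential model truncated and normalized to the fit range."""
    cdf = lambda t: 1.0 - np.exp(-t / tau)
    p = (cdf(edges[1:]) - cdf(edges[:-1])) / cdf(edges[-1])
    mu = n_in * p                       # expected counts per bin
    return -(counts * np.log(mu) - mu).sum()

taus = np.linspace(1.5, 2.5, 1001)
tau_binned = taus[np.argmin([binned_nll(t) for t in taus])]
```

    With equal-width bins the two estimates agree closely; disagreement between an unbinned fit and a binned one is exactly the symptom of a binning-induced bias that the note warns about.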

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollingsworth, LaWen T.; Kurth, Laurie; Parresol, Bernard R.

    Landscape-scale fire behavior analyses are important to inform decisions on resource management projects that meet land management objectives and protect values from adverse consequences of fire. Deterministic and probabilistic geospatial fire behavior analyses are conducted with various modeling systems including FARSITE, FlamMap, FSPro, and Large Fire Simulation System. The fundamental fire intensity algorithms in these systems require surface fire behavior fuel models and canopy cover to model surface fire behavior. Canopy base height, stand height, and canopy bulk density are required in addition to surface fire behavior fuel models and canopy cover to model crown fire activity. Several surface fuel and canopy classification efforts have used various remote sensing and ecological relationships as core methods to develop the spatial layers. All of these methods depend upon consistent and temporally constant interpretations of crown attributes and their ecological conditions to estimate surface fuel conditions. This study evaluates modeled fire behavior for an 80,000 ha tract of land in the Atlantic Coastal Plain of the southeastern US using three different data sources. The Fuel Characteristic Classification System (FCCS) was used to build fuelbeds from intensive field sampling of 629 plots. Custom fire behavior fuel models were derived from these fuelbeds. LANDFIRE developed surface fire behavior fuel models and canopy attributes for the US using satellite imagery informed by field data. The Southern Wildfire Risk Assessment (SWRA) developed surface fire behavior fuel models and canopy cover for the southeastern US using satellite imagery. Differences in modeled fire behavior, data development, and data utility are summarized to assist in determining which data source may be most applicable for various land management activities and required analyses.
    Characterizing fire behavior under different fuel relationships provides insights for natural ecological processes, management strategies for fire mitigation, and positive and negative features of different modeling systems. A comparison of flame length, rate of spread, crown fire activity, and burn probabilities modeled with FlamMap shows some similar patterns across the landscape from all three data sources, but there are potentially important differences. All data sources showed an expected range of fire behavior. Average flame lengths ranged between 1 and 1.4 m. Rate of spread varied the most, with a range of 2.4-5.7 m min⁻¹. Passive crown fire was predicted for 5% of the study area using FCCS and LANDFIRE, while passive crown fire was not predicted using SWRA data. No active crown fire was predicted regardless of the data source. Burn probability patterns across the landscape were similar, but probability was highest using SWRA and lowest using FCCS.

  14. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
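The dispersion and quasi-AIC computations mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of the general formulas, not the article's annotated R syntax: the toy counts, the intercept-only "fit", and the dropped factorial constant in the log-likelihood are all assumptions for the example.

```python
import numpy as np

def pearson_dispersion(y, mu, n_params):
    """Pearson dispersion estimate c-hat = sum((y - mu)^2 / mu) / (n - k)."""
    return float(np.sum((y - mu) ** 2 / mu) / (len(y) - n_params))

def quasi_aic(loglik, c_hat, n_params):
    """QAIC = -2 * logLik / c-hat + 2k, the quasi-likelihood analogue of AIC."""
    return -2.0 * loglik / c_hat + 2 * n_params

# toy single-case count data whose variance exceeds the Poisson mean
y = np.array([0, 2, 5, 9, 4, 12, 3, 8, 15, 6], dtype=float)
mu = np.full_like(y, y.mean())          # intercept-only fitted means, k = 1
c_hat = pearson_dispersion(y, mu, n_params=1)
# Poisson log-likelihood at the fitted means (constant y! term omitted)
loglik = float(np.sum(y * np.log(mu) - mu))
qaic = quasi_aic(loglik, c_hat, n_params=1)
```

A c-hat well above 1 signals overdispersion, in which case QAIC rather than AIC is the appropriate basis for comparing candidate models.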

  15. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
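The principle of comparing models by held-out predictive performance can be illustrated outside the phylogenetic setting. Below is a minimal k-fold sketch with two toy Gaussian "models" (free mean versus mean fixed at zero) standing in for competing clock or demographic models; the data, fold count, and model pair are assumptions for the example, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_loglik(x, mu, sigma):
    """Pointwise Gaussian log-density."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def cv_score(data, fit, k=5):
    """Mean held-out log-likelihood over k folds; higher = better predictions."""
    folds = np.array_split(data, k)
    scores = []
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        mu, sigma = fit(train)
        scores.append(gauss_loglik(folds[i], mu, sigma).mean())
    return float(np.mean(scores))

# synthetic data centred away from zero, so the free-mean model should win
data = rng.normal(loc=2.0, scale=1.0, size=200)
free_mean = cv_score(data, lambda d: (d.mean(), d.std(ddof=1)))
zero_mean = cv_score(data, lambda d: (0.0, d.std(ddof=1)))
best = "free-mean" if free_mean > zero_mean else "zero-mean"
```

The same skeleton applies whenever per-datum predictive log-likelihoods are available for each candidate model.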

  16. Micromechanical investigation of sand migration in gas hydrate-bearing sediments

    NASA Astrophysics Data System (ADS)

    Uchida, S.; Klar, A.; Cohen, E.

    2017-12-01

    Past field gas production tests from hydrate-bearing sediments have indicated that sand migration is an important phenomenon that needs to be considered for successful long-term gas production. The authors previously developed a continuum-based analytical thermo-hydro-mechanical sand migration model that can be applied to predict wellbore responses during gas production. However, the parameters involved in the model still need to be calibrated and studied thoroughly, and it remains a challenge to conduct well-defined laboratory experiments of sand migration, especially in hydrate-bearing sediments. Taking advantage of the capabilities of the micromechanical modelling approach through the discrete element method (DEM), this work presents a first step towards quantifying one of the model parameters, which governs stress reduction due to grain detachment. Grains represented by DEM particles are randomly removed from an isotropically loaded DEM specimen, and statistical analyses reveal that a linear proportionality exists between the normalized volume of detached solids and the normalized reduced stresses. DEM specimens with different porosities (different packing densities) are also considered, and statistical analyses show that there is a clear transition between loose-sand behavior and dense-sand behavior, characterized by the relative density.
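The reported linear proportionality between detached volume and stress reduction amounts to fitting a slope through the origin across repeated removal experiments. A minimal sketch on synthetic data follows; the slope value and noise level are hypothetical, not the study's calibrated parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic 'experiments': normalized detached-solid volume vs. normalized
# stress reduction, assuming the linear relation reported in the abstract
true_slope = 0.9                                   # hypothetical value
v = rng.uniform(0.0, 0.1, 50)                      # normalized detached volume
s = true_slope * v + rng.normal(0.0, 0.004, 50)    # normalized stress reduction

# least-squares slope constrained through the origin: k = sum(v*s) / sum(v*v)
k = float(np.sum(v * s) / np.sum(v * v))
```

Constraining the fit through the origin encodes the physical requirement that removing no grains produces no stress reduction.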

  17. Role of aerosols on the Indian Summer Monsoon variability, as simulated by state-of-the-art global climate models

    NASA Astrophysics Data System (ADS)

    Cagnazzo, Chiara; Biondi, Riccardo; D'Errico, Miriam; Cherchi, Annalisa; Fierli, Federico; Lau, William K. M.

    2016-04-01

    Recent observational and modeling analyses have explored the interaction between aerosols and the Indian summer monsoon precipitation on seasonal-to-interannual time scales. By using global scale climate model simulations, we show that when increased aerosol loading is found on the Himalayan slopes in the premonsoon period (April-May), intensification of early monsoon rainfall over India and increased low-level westerly flow follow, in agreement with the elevated-heat-pump (EHP) mechanism. The increase in rainfall during the early monsoon season has a cooling effect on the land surface that may also be amplified through solar dimming (SD) by more cloudiness and aerosol loading, with a subsequent reduction in monsoon rainfall over India. We extend this analysis to a subset of CMIP5 climate model simulations. Our results suggest that 1) absorbing aerosols, by influencing the seasonal variability of the Indian summer monsoon with the discussed time-lag, may act as a source of predictability for the Indian Summer Monsoon and 2) if the EHP and SD effects also operate in a number of state-of-the-art climate models, their inclusion could potentially improve seasonal forecasts.

  18. Time Reversal Method for Pipe Inspection with Guided Wave

    NASA Astrophysics Data System (ADS)

    Deng, Fei; He, Cunfu; Wu, Bin

    2008-02-01

    The temporal-spatial focusing effect of the time reversal method on guided wave inspection in pipes is investigated. A steel pipe model with an outer diameter of 70 mm and a wall thickness of 3.5 mm is numerically built to analyse the reflection coefficient of the L(0,2) mode when the time reversal method is applied in the model. The calculated results show that a synthetic time reversal array method is effective in improving the signal-to-noise ratio of a guided wave inspection system. As the intercepting window is widened, more energy can be included in the re-emitted signal, which leads to a larger reflection coefficient of the L(0,2) mode. It is also shown that when a time-reversed signal is reapplied in the pipe model, a defect can be identified by analysing the motion of the time-reversed wave propagating along the pipe model. The time reversal method can therefore be used to locate the circumferential position of a defect in a pipe. Finally, an experiment corresponding to the pipe model confirms that the method is valid for pipe inspection.

  19. Toward a Model of Social Influence that Explains Minority Student Integration into the Scientific Community

    PubMed Central

    Estrada, Mica; Woodcock, Anna; Hernandez, Paul R.; Schultz, P. Wesley

    2010-01-01

    Students from several ethnic minority groups are underrepresented in the sciences, such that minority students more frequently drop out of the scientific career path than non-minority students. Viewed from a perspective of social influence, this pattern suggests that minority students do not integrate into the scientific community at the same rate as non-minority students. Kelman (1958, 2006) describes a tripartite integration model of social influence (TIMSI) by which a person orients to a social system. To test if this model predicts integration into the scientific community, we conducted analyses of data from a national panel of minority science students. A structural equation model framework showed that self-efficacy (operationalized consistent with Kelman’s ‘rule-orientation’) predicted student intentions to pursue a scientific career. However, when identification as a scientist and internalization of values are added to the model, self-efficacy becomes a poorer predictor of intention. Additional mediation analyses support the conclusion that while having scientific self-efficacy is important, identifying with and endorsing the values of the social system reflect a deeper integration and more durable motivation to persist as a scientist. PMID:21552374

  20. 77 FR 26444 - Revisions to Final Response To Petition From New Jersey Regarding SO2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ... modeling or other technical analyses and no new analyses were necessary to make the revisions. III. Public... this modeling approach. Therefore, no new technical analyses or any changes to the modeling are...) modeling analysis submitted with the September 2010 petition identified NAAQS violations at receptors in...

  1. 75 FR 70623 - Airworthiness Directives; DORNIER LUFTFAHRT GmbH Models Dornier 228-100, Dornier 228-101, Dornier...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... measurements as well as finite element modelling and fatigue analyses to better understand the stress... include strain measurements as well as finite element modeling and fatigue analyses to better understand... finite element modelling and fatigue analyses to better understand the stress distribution onto the frame...

  2. Effect of increasing disorder on domains of the 2d Coulomb glass.

    PubMed

    Bhandari, Preeti; Malik, Vikas

    2017-12-06

    We have studied a two dimensional lattice model of Coulomb glass for a wide range of disorders at [Formula: see text]. The system was first annealed using Monte Carlo simulation. Further minimization of the total energy of the system was done using an algorithm developed by Baranovskii et al, followed by cluster flipping to obtain the pseudo-ground states. We have shown that the energy required to create a domain of linear size L in d dimensions is proportional to [Formula: see text]. Using the Imry-Ma arguments given for the random field Ising model, one gets critical dimension [Formula: see text] for the Coulomb glass. The investigation of domains in the transition region shows a discontinuity in staggered magnetization, which is an indication of a first-order type transition from the charge-ordered phase to the disordered phase. The structure and nature of random field fluctuations of the second largest domain in the Coulomb glass are inconsistent with the assumptions of Imry and Ma, as was also reported for the random field Ising model. The study of domains showed that in the transition region there were mostly two large domains, and that as disorder was increased the two large domains remained, but a large number of small domains also opened up. We have also studied the properties of the second largest domain as a function of disorder. We furthermore analysed the effect of disorder on the density of states, and showed a transition from a hard gap at low disorders to a soft gap at higher disorders. At [Formula: see text], we analysed the soft gap in detail, and found that the density of states deviates slightly ([Formula: see text]) from the linear behaviour in two dimensions. Analysis of local minima shows that the pseudo-ground states have similar structure.

  3. Postural instability and gait are associated with severity and prognosis of Parkinson disease.

    PubMed

    van der Heeden, Jorine F; Marinus, Johan; Martinez-Martin, Pablo; Rodriguez-Blazquez, Carmen; Geraedts, Victor J; van Hilten, Jacobus J

    2016-06-14

    Differences in disease progression in Parkinson disease (PD) have variously been attributed to 2 motor subtypes: tremor-dominant (TD) and postural instability and gait difficulty (PIGD)-dominant (PG). We evaluated the role of these phenotypic variants in severity and progression of nondopaminergic manifestations of PD and motor complications. Linear mixed models were applied to data from the Profiling Parkinson's disease (PROPARK) cohort (n = 396) to evaluate the effect of motor subtype on severity and progression of cognitive impairment (Scales for Outcomes in Parkinson's disease [SCOPA]-Cognition [SCOPA-COG]), depression (Hospital Anxiety and Depression Scale [HADS]), autonomic dysfunction (SCOPA-Autonomic [SCOPA-AUT]), excessive daytime sleepiness, psychotic symptoms (SCOPA-Psychiatric Complications [SCOPA-PC]), and motor complications. In first analyses, subtype as determined by the commonly used ratio of tremor over PIGD score was entered as a factor, whereas in second analyses separate tremor and PIGD scores were used. Results were verified in an independent cohort (Estudio Longitudinal de Pacientes con Enfermedad de Parkinson [ELEP]; n = 365). The first analyses showed that PG subtype patients had worse SCOPA-COG, HADS, SCOPA-AUT, SCOPA-PC, and motor complications scores, and exhibited faster progression on the SCOPA-COG. The second analyses showed that only higher PIGD scores were associated with worse scores for these variables; tremor score was not associated with severity or progression of any symptom. Analyses in the independent cohort yielded similar results. In contrast to PIGD, which consistently was associated with greater severity of nondopaminergic symptoms, there was no evidence of a benign effect of tremor. Our findings do not support the use of the TD subtype as a prognostic trait in PD. The results showed that severity of PIGD is a useful indicator of severity and prognosis in PD by itself. © 2016 American Academy of Neurology.

  4. The global abundance and size distribution of lakes, ponds, and impoundments

    USGS Publications Warehouse

    Downing, J.A.; Prairie, Y.T.; Cole, J.J.; Duarte, C.M.; Tranvik, L.J.; Striegl, Robert G.; McDowell, W.H.; Kortelainen, Pirkko; Caraco, N.F.; Melack, J.M.; Middelburg, J.J.

    2006-01-01

    One of the major impediments to the integration of lentic ecosystems into global environmental analyses has been fragmentary data on the extent and size distribution of lakes, ponds, and impoundments. We use new data sources, enhanced spatial resolution, and new analytical approaches to provide new estimates of the global abundance of surface-water bodies. A global model based on the Pareto distribution shows that the global extent of natural lakes is twice as large as previously known (304 million lakes; 4.2 million km² in area) and is dominated in area by millions of water bodies smaller than 1 km². Similar analyses of impoundments based on inventories of large, engineered dams show that impounded waters cover approximately 0.26 million km². However, construction of low-tech farm impoundments is estimated to be between 0.1% and 6% of farm area worldwide, dependent upon precipitation, and represents >77,000 km² globally, at present. Overall, about 4.6 million km² of the earth's continental "land" surface (>3%) is covered by water. These analyses underscore the importance of explicitly considering lakes, ponds, and impoundments, especially small ones, in global analyses of rates and processes. © 2006, by the American Society of Limnology and Oceanography, Inc.
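The Pareto scaling that underlies the abundance estimate can be sketched directly. The exponent, reference area, and reference count below are hypothetical placeholders for illustration, not the paper's fitted values.

```python
import numpy as np

def pareto_count(area, n_ref, a_ref, c):
    """Number of lakes with area >= `area` under a Pareto law:
    N(>=A) = n_ref * (A / a_ref)^-c."""
    return n_ref * (area / a_ref) ** (-c)

C_EXP = 1.06        # assumed Pareto exponent (illustrative)
A_REF = 0.001       # km^2, smallest size class in the sketch
N_REF = 3.0e8       # assumed count of lakes at least A_REF in area

areas = np.array([0.001, 0.01, 0.1, 1.0, 10.0])   # km^2
counts = pareto_count(areas, N_REF, A_REF, C_EXP)
# each decade of area reduces the count by a factor 10**-1.06 (~0.087),
# so small water bodies dominate the total by number
```

This power-law extrapolation is exactly why inventories restricted to large, mapped lakes miss most of the world's water bodies.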

  5. Cloning, characterisation, and comparative quantitative expression analyses of receptor for advanced glycation end products (RAGE) transcript forms.

    PubMed

    Sterenczak, Katharina A; Willenbrock, Saskia; Barann, Matthias; Klemke, Markus; Soller, Jan T; Eberle, Nina; Nolte, Ingo; Bullerdiek, Jörn; Murua Escobar, Hugo

    2009-04-01

    RAGE is a member of the immunoglobulin superfamily of cell surface molecules playing key roles in pathophysiological processes, e.g. immune/inflammatory disorders, Alzheimer's disease, diabetic arteriosclerosis and tumourigenesis. In humans, 19 naturally occurring RAGE splicing variants resulting in either N-terminally or C-terminally truncated proteins have been identified and have lately been discussed as mechanisms of receptor regulation. Accordingly, deregulation of sRAGE levels has been associated with several diseases, e.g. Alzheimer's disease, Type 1 diabetes, and rheumatoid arthritis. Administration of recombinant sRAGE to animal models of cancer blocked tumour growth successfully. In spite of its obvious relationship to cancer and metastasis, data focusing on sRAGE deregulation in tumours are rare. In this study we screened a set of tumours, healthy tissues and various cancer cell lines for RAGE splicing variants and analysed their structure. Additionally, we analysed the ratio of the main transcript variants found using quantitative Real-Time PCR. In total we characterised 24 previously undescribed canine and 4 human RAGE splicing variants, analysed their structure, classified their characteristics, and derived their respective protein forms. Interestingly, the healthy and neoplastic tissue samples predominantly showed RAGE transcripts coding for the complete receptor, as well as transcripts with insertions of intron 1.

  6. Associations between DSM-5 section III personality traits and the Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scales in a psychiatric patient sample.

    PubMed

    Anderson, Jaime L; Sellbom, Martin; Ayearst, Lindsay; Quilty, Lena C; Chmielewski, Michael; Bagby, R Michael

    2015-09-01

    Our aim in the current study was to evaluate the convergence between Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) Section III dimensional personality traits, as operationalized via the Personality Inventory for DSM-5 (PID-5), and Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scale scores in a psychiatric patient sample. We used a sample of 346 (171 men, 175 women) patients who were recruited through a university-affiliated psychiatric facility in Toronto, Canada. We estimated zero-order correlations between the PID-5 and MMPI-2-RF substantive scale scores, as well as a series of exploratory structural equation modeling (ESEM) analyses to examine how these scales converged in multivariate latent space. Results generally showed thematically meaningful empirical convergence between the scales of these two measures, in accordance with conceptual expectations. Correlation analyses showed significant associations between conceptually expected scales, and the highest associations tended to be between scales that were theoretically related. ESEM analyses generated evidence for distinct internalizing, externalizing, and psychoticism factors across all analyses. These findings indicate convergence between these two measures and help further elucidate the associations between dysfunctional personality traits and general psychopathology. © 2015 APA, all rights reserved.

  7. A new high-resolution 3-D quantitative method for analysing small morphological features: an example using a Cambrian trilobite.

    PubMed

    Esteve, Jorge; Zhao, Yuan-Long; Maté-González, Miguel Ángel; Gómez-Heras, Miguel; Peng, Jin

    2018-02-12

    Taphonomic processes play an important role in the preservation of small morphological features such as granulation or pits. However, the assessment of these features may face the issue of the small size of the specimens and, sometimes, the destructiveness of the analyses, which makes it impossible to carry them out on singular specimens such as holotypes or lectotypes. This paper takes a new approach to analysing small morphological features, by using an optical surface roughness (OSR) meter to create a high-resolution three-dimensional digital elevation model (DEM). This non-destructive technique allows the DEM to be analysed quantitatively using geometric morphometric methods (GMM). We created a number of DEMs from three populations putatively belonging to the same species of trilobite (Oryctocephalus indicus) that present the same cranidial outline, but differ in the presence or absence of the second and third transglabellar furrows. Profile analyses of the DEMs demonstrate that all three populations show similar preservational variation in the glabellar furrows and lobes. The GMM shows that all populations exhibit the same range of variation. Differences in preservation are a consequence of different degrees of cementation and rates of dissolution. Fast cementation enhances the preservation of glabellar furrows and lobes, while fast dissolution hampers preservation of the same structures.

  8. Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model

    NASA Technical Reports Server (NTRS)

    Lusardi, Jeff A.; Blanken, Chris L.; Tischeler, Mark B.

    2002-01-01

    A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.

  9. An algebraic turbulence model for three-dimensional viscous flows

    NASA Technical Reports Server (NTRS)

    Chima, R. V.; Giel, P. W.; Boyle, R. J.

    1993-01-01

    An algebraic turbulence model is proposed for use with three-dimensional Navier-Stokes analyses. It incorporates features of both the Baldwin-Lomax and Cebeci-Smith models. The Baldwin-Lomax model uses the maximum of a function f(y) to determine length and velocity scales. An analysis of the Baldwin-Lomax model shows that f(y) can have a spurious maximum close to the wall, causing numerical problems and non-physical results. The proposed model uses integral relations to determine the quantities δ*·u_e and δ used in the Cebeci-Smith model. It eliminates a constant in the Baldwin-Lomax model and determines the two remaining constants by comparison to the Cebeci-Smith formulation. Pressure gradient effects, a new wake model, and the implementation of these features in a three-dimensional Navier-Stokes code are also described. Results are shown for a flat plate boundary layer, an annular turbine cascade, and endwall heat transfer in a linear turbine cascade. The heat transfer results agree well with experimental data, which show large variations in endwall Stanton number contours with Reynolds number.
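The Baldwin-Lomax length-scale search that the abstract critiques can be sketched as follows. The damping form f(y) = y·|ω|·(1 − exp(−y⁺/A⁺)) with A⁺ = 26 is the standard Baldwin-Lomax construction, but the vorticity profile and wall scaling below are synthetic stand-ins, not data from any of the cases in the paper.

```python
import numpy as np

A_PLUS = 26.0  # van Driest damping constant used by Baldwin-Lomax

def baldwin_lomax_f(y, vorticity, y_plus):
    """f(y) = y * |omega| * (1 - exp(-y+/A+)); its peak sets the outer scales."""
    return y * np.abs(vorticity) * (1.0 - np.exp(-y_plus / A_PLUS))

# synthetic boundary-layer-like profiles (illustrative, not from a real flow)
y = np.linspace(1e-4, 0.05, 500)            # wall-normal distance, m
vorticity = np.exp(-y / 0.01) / (y + 1e-3)  # large near the wall, decaying outward
y_plus = 2.0e4 * y                          # assumed inner scaling y+ ~ y*u_tau/nu
f = baldwin_lomax_f(y, vorticity, y_plus)
i_max = int(np.argmax(f))                   # location of the peak of f(y)
y_max, f_max = float(y[i_max]), float(f[i_max])
```

Because the model's length and velocity scales come from this single argmax, a spurious near-wall peak in f(y), as the abstract notes, corrupts the whole eddy-viscosity profile.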

  10. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data has transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data has allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent has allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national scale risk analyses still use `constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations. 
We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in space. We undertake a number of quality checks of the stochastic model and compare real and simulated footprints to show that the method is able to re-create realistic patterns even at continental scales where there is large variation in flood generating mechanisms. We then show how these patterns can be used to drive a large-scale 2D hydraulic model to predict regional-scale flooding.
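The core idea of an event footprint with spatially varying return periods can be sketched with a simple correlated-sampling scheme. This is a toy Gaussian-copula stand-in, not the conditional multivariate-extremes method the abstract describes; the three-gauge correlation matrix is hypothetical.

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(1)

# hypothetical gauge-to-gauge correlation of annual-maximum flows (3 gauges)
corr = np.array([[1.0, 0.8, 0.5],
                 [0.8, 1.0, 0.6],
                 [0.5, 0.6, 1.0]])
L = np.linalg.cholesky(corr)

def simulate_footprints(n_events):
    """Correlated normal scores -> per-gauge non-exceedance probabilities ->
    return periods T = 1/(1-p). Within each simulated event the return
    period differs across gauges, giving a spatially varying 'footprint'."""
    z = rng.standard_normal((n_events, 3)) @ L.T
    p = np.vectorize(std_normal_cdf)(z)
    return 1.0 / (1.0 - p)

footprints = simulate_footprints(1000)
```

Nearby (highly correlated) gauges then tend to experience similar return periods in the same event, while distant gauges diverge, which is exactly the behaviour that constant-in-space hazard layers cannot represent.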

  11. Comparing apples with apples: Using spatially distributed time series of monitoring data for model evaluation

    NASA Astrophysics Data System (ADS)

    Solazzo, E.; Galmarini, S.

    2015-07-01

    A more sensible use of monitoring data for the evaluation and development of regional-scale atmospheric models is proposed. The motivation stems from observing current practices in this realm, where the quality of monitoring data is seldom questioned and model-to-data deviation is uniquely attributed to model deficiency. Efforts are spent to quantify the uncertainty intrinsic to the measurement process, but aspects connected to model evaluation and development have recently emerged that remain obscure, such as the spatial representativeness and the homogeneity of signals, which are the subjects of our investigation. Using time series of hourly records of ozone for a whole year (2006) collected by the European AirBase network, the area of representativeness is first analysed, showing, for similar classes of stations (urban, suburban, rural), large heterogeneity and high sensitivity to the density of the network and to the noise of the signal, suggesting that station classification alone is not a suitable criterion for selecting the pool of stations used in model evaluation. Therefore a novel, more robust technique is developed based on the spatial properties of the associativity of the spectral components of the ozone time series, in an attempt to determine the level of homogeneity. The spatial structure of the associativity among stations is informative of the spatial representativeness of that specific component and automatically reveals spatial anisotropy. Time series of ozone data from North American networks have also been analysed to support the methodology. We find that the low energy components (especially the intra-day signal) suffer from a too strong influence of country-level network set-up in Europe, and of different networks in North America, showing spatial heterogeneity exactly at the administrative borders that separate countries in Europe and at areas separating different networks in North America.
For model evaluation purposes these elements should be treated as purely stochastic and discarded, while retaining the portion of the signal useful to the evaluation process. Trans-boundary discontinuity of the intra-day signal along with cross-network grouping has been found to be predominant. Skills of fifteen regional chemical-transport modelling systems have been assessed in light of this result, finding an improved accuracy of up to 5% when the intra-day signal is removed with respect to the case where all components are analysed.
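Discarding the intra-day band before computing model skill can be illustrated with a crude FFT band filter. This is a stand-in for the spectral decomposition used in the study, not its actual method; the synthetic record, the 11-hour cutoff, and the component amplitudes are all assumptions.

```python
import numpy as np

def remove_intraday(series, dt_hours=1.0, cutoff_hours=11.0):
    """Zero out spectral components with periods shorter than cutoff_hours
    (the 'intra-day' band) and reconstruct the remaining signal."""
    spec = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(len(series), d=dt_hours)   # cycles per hour
    spec[freqs > 1.0 / cutoff_hours] = 0.0
    return np.fft.irfft(spec, len(series))

# synthetic hourly 'ozone' record: diurnal cycle + 8-h cycle + slow baseline + noise
t = np.arange(24 * 60, dtype=float)                    # 60 days of hourly samples
rng = np.random.default_rng(2)
series = (40.0 + 10.0 * np.sin(2 * np.pi * t / 24.0)   # diurnal cycle, retained
          + 6.0 * np.sin(2 * np.pi * t / 8.0)          # intra-day cycle, removed
          + 4.0 * np.sin(2 * np.pi * t / 360.0)        # slow baseline, retained
          + rng.normal(0.0, 1.0, t.size))
filtered = remove_intraday(series)
```

Model-to-observation comparisons would then be computed on `filtered`, so that skill scores are not penalised by the network-dependent intra-day component.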

  12. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156

  13. A population genetics analysis in clinical isolates of Sporothrix schenckii based on calmodulin and calcium/calmodulin-dependent kinase partial gene sequences.

    PubMed

    Rangel-Gamboa, Lucia; Martinez-Hernandez, Fernando; Maravilla, Pablo; Flisser, Ana

    2018-02-02

    Sporotrichosis is a subcutaneous mycosis that is caused by diverse species of Sporothrix. High levels of genetic diversity in Sporothrix isolates have been reported, but few population genetics analyses have been documented. Our aim was to analyse the genetic variability and population genetics relations of Sporothrix schenckii Mexican clinical isolates and to compare them with other reported isolates. We studied the partial sequences of calmodulin and calcium/calmodulin-dependent kinase genes in 24 isolates: 22 from Mexico, one from Colombia, and one ATCC® 6331™ strain, which was used as a positive control. Phylogenetic, haplotype and population genetic analyses were performed with the 24 sequences obtained by us and 345 sequences obtained from GenBank. The frequency of S. schenckii sensu stricto was 81% in the 22 Mexican isolates, while the remaining 19% were Sporothrix globosa. Mexican S. schenckii sensu stricto had high genetic diversity and was related to isolates from South America. In contrast, S. globosa showed one haplotype related to isolates from Asia, Brazil, Spain and the USA. In S. schenckii sensu stricto, S. brasiliensis and S. globosa, haplotype polymorphism (θ) values were higher than the nucleotide diversity data (π). In addition, Tajima's D and Fu and Li's tests displayed negative values, suggesting directional selection and arguing against the model of neutral evolution in these populations. The analyses also showed that calcium/calmodulin-dependent kinase is a suitable genetic marker to discriminate between common Sporothrix species. © 2018 Blackwell Verlag GmbH.

  14. High-flow oxygen therapy: pressure analysis in a pediatric airway model.

    PubMed

    Urbano, Javier; del Castillo, Jimena; López-Herce, Jesús; Gallardo, José A; Solana, María J; Carrillo, Ángel

    2012-05-01

    The mechanism of high-flow oxygen therapy and the pressures reached in the airway have not been defined. We hypothesized that the flow would generate a low continuous positive pressure, and that elevated flow rates in this model could produce moderate pressures. The objective of this study was to analyze the pressure generated by a high-flow oxygen therapy system in an experimental model of the pediatric airway. An experimental in vitro study was performed. A high-flow oxygen therapy system was connected to 3 types of interface (nasal cannulae, nasal mask, and oronasal mask) and applied to 2 types of pediatric manikin (infant and neonatal). The pressures generated in the circuit, in the airway, and in the pharynx were measured at different flow rates (5, 10, 15, and 20 L/min). The experiment was conducted with and without a leak (mouth sealed and unsealed). Linear regression analyses were performed for each set of measurements. The pressures generated with the different interfaces were very similar. The maximum pressure recorded was 4 cm H2O with a flow of 20 L/min via nasal cannulae or nasal mask. When the mouth of the manikin was held open, the pressures reached in the airway and pharynx were undetectable. Linear regression analyses showed a similar linear relationship between flow and the pressures measured in the pharynx (pressure = -0.375 + 0.138 × flow) and in the airway (pressure = -0.375 + 0.158 × flow) under the closed-mouth condition. Consistent with our hypothesis, high-flow oxygen therapy systems produced a low-level CPAP in an experimental pediatric model, even with the use of very high flow rates. This finding suggests that, at least in part, the effects may be due to other mechanisms.
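
    As a minimal sketch, the two reported closed-mouth regression relationships can be evaluated directly at the tested flow rates (pressure in cm H2O, flow in L/min; coefficients taken from the abstract):

```python
# Closed-mouth regression relationships reported above.
def pharynx_pressure(flow_l_min):
    return -0.375 + 0.138 * flow_l_min

def airway_pressure(flow_l_min):
    return -0.375 + 0.158 * flow_l_min

# Evaluate at the four tested flow rates.
for flow in (5, 10, 15, 20):
    print(f"{flow} L/min: pharynx {pharynx_pressure(flow):.2f}, "
          f"airway {airway_pressure(flow):.2f} cm H2O")
```

    Even at 20 L/min the fitted pressures remain in the low single digits of cm H2O, consistent with the low-level CPAP conclusion.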

  15. Application of a transmission model to estimate performance objectives for Salmonella in the broiler supply chain.

    PubMed

    van der Fels-Klerx, H J; Tromp, S; Rijgersberg, H; van Asselt, E D

    2008-11-30

    The aim of the present study was to demonstrate how Performance Objectives (POs) for Salmonella at various points in the broiler supply chain can be estimated, starting from pre-set levels of the PO in finished products. The estimations were performed using an analytical transmission model based on prevalence data collected throughout the chain in The Netherlands. In the baseline (current) situation, the end PO was set at 2.5% of the finished products (at the end of processing) being contaminated with Salmonella. Scenario analyses were performed by reducing this baseline end PO to 1.5% and 0.5%. The results showed that the end PO could be reduced by spreading the POs over the various stages of the broiler supply chain. Sensitivity analyses were performed by changing the values of the model parameters. Results indicated that, in general, decreasing Salmonella contamination between points in the chain is more effective in reducing the baseline PO than increasing the reduction of the pathogen, implying that contamination should be prevented rather than treated. Applying both approaches at the same time proved to be most effective in reducing the end PO, especially at the abattoir and during processing. The modelling approach of this study proved useful for estimating the implications of an end-of-chain PO for preceding stages of the chain, as well as for evaluating the effectiveness of potential interventions in reducing the end PO. The model estimations may support policy-makers in their decision-making process with regard to microbiological food safety.
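
    The chain logic can be sketched as a stage-wise prevalence calculation (all factors below are hypothetical placeholders, not the Dutch surveillance estimates): each stage multiplies the incoming prevalence by a between-stage contamination factor and a within-stage reduction factor, and the end PO is the prevalence after the last stage.

```python
# Hypothetical (stage, contamination factor, reduction factor) triples.
STAGES = [
    ("farm", 1.2, 0.8),
    ("transport", 1.1, 0.9),
    ("abattoir", 1.3, 0.5),
    ("processing", 1.1, 0.6),
]

def end_prevalence(start_prevalence):
    """Propagate Salmonella prevalence through the chain stage by stage."""
    p = start_prevalence
    for _name, contamination, reduction in STAGES:
        p = min(1.0, p * contamination * reduction)
    return p
```

    Scenario analyses amount to re-running this propagation with altered factors and comparing the resulting end PO against targets such as 2.5%, 1.5% and 0.5%.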

  16. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    PubMed

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values yielded the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with an RMSE < 10.3%. The external data evaluation showed that the models predicted AUCinf with an RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy of using Cmax (i.e. end of the 30-min infusion) is amenable as a prospective tool for predicting AUCinf of dalbavancin in patients.
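
    As a sketch of one of the named approaches, a power model AUCinf = a·Cmax^b can be fitted as a straight line in log-log space (synthetic pairs below, not the 21 subject pairs from the study):

```python
import numpy as np

# Synthetic (Cmax, AUCinf) pairs generated from an assumed power law,
# for illustration only.
cmax = np.array([200.0, 250.0, 300.0, 350.0, 400.0])
auc = 55.0 * cmax ** 1.02

# Fit log(AUC) = log(a) + b*log(Cmax).
b, log_a = np.polyfit(np.log(cmax), np.log(auc), 1)
a = np.exp(log_a)

predicted = a * cmax ** b
fold_difference = auc / predicted
rmse_pct = 100.0 * np.sqrt(np.mean((auc - predicted) ** 2)) / np.mean(auc)
```

    With real data, the fold difference and percentage RMSE quantify how tightly the single Cmax time point constrains AUCinf.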

  17. Flexibility in community pharmacy: a qualitative study of business models and cognitive services.

    PubMed

    Feletto, Eleonora; Wilson, Laura K; Roberts, Alison S; Benrimoj, Shalom I

    2010-04-01

    To identify the capacity of current pharmacy business models, and the dimensions of organisational flexibility within them, to integrate products and services as well as the perceptions of viability of these models. Fifty-seven semi-structured interviews were conducted with community pharmacy owners or managers and support staff in 30 pharmacies across Australia. A framework of organisational flexibility was used to analyse their capacity to integrate services and perceptions of viability. Data were analysed using the method of constant comparison by two independent researchers. The study found that Australian community pharmacies have used the four types of flexibility to build capacity in distinct ways and react to changes in the local environment. This capacity building was manifested in four emerging business models which integrate services to varying degrees: classic community pharmacy, retail destination pharmacy, health care solution pharmacy and networked pharmacy. The perception of viability is less focused on dispensing medications and more focused on differentiating pharmacies through either a retail or services focus. Strategic flexibility appeared to offer pharmacies the ability to integrate and sustainably deliver services more successfully than other types, as exhibited by health care solution and networked pharmacies. Active support and encouragement to transition from being dependent on dispensing to implementing services is needed. The study showed that pharmacies where services were implemented and showed success are those strategically differentiating their businesses to become focused health care providers. This holistic approach should inevitably influence the sustainability of services.

  18. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository, to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed, with a range of two orders of magnitude. The plutonium-239 results, however, were not lognormally distributed and spanned less than one order of magnitude. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and spanned less than one order of magnitude. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations and showed little sensitivity to parameters related to cask inspections.
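
    The complementary cumulative distribution functions can be sketched directly from Monte Carlo samples (lognormal toy samples below, chosen only to mimic the reported spread of roughly two orders of magnitude):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical offsite-dose samples from a Monte Carlo simulation.
doses = rng.lognormal(mean=0.0, sigma=1.15, size=10_000)

def ccdf(samples, level):
    """P(dose > level), estimated as the fraction of samples above level."""
    return float((np.asarray(samples) > level).mean())

# The CCDF evaluated over a grid of dose levels traces the compliance curve.
levels = np.logspace(-2, 2, 9)
curve = [ccdf(doses, lv) for lv in levels]
```

    Reasonable assurance is then argued by showing that the CCDF falls below the regulatory limit at the dose of interest.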

  19. Weather-Driven Variation in Dengue Activity in Australia Examined Using a Process-Based Modeling Approach

    PubMed Central

    Bannister-Tyrrell, Melanie; Williams, Craig; Ritchie, Scott A.; Rau, Gina; Lindesay, Janette; Mercer, Geoff; Harley, David

    2013-01-01

    The impact of weather variation on dengue transmission in Cairns, Australia, was determined by applying a process-based dengue simulation model (DENSiM) that incorporated local meteorologic, entomologic, and demographic data. Analysis showed that inter-annual weather variation is one of the significant determinants of dengue outbreak receptivity. Cross-correlation analyses showed that DENSiM simulated epidemics of similar relative magnitude and timing to those historically recorded in reported dengue cases in Cairns during 1991–2009 (r = 0.372, P < 0.01). The DENSiM model can now be used to study the potential impacts of future climate change on dengue transmission. Understanding the impact of climate variation on the geographic range, seasonality, and magnitude of dengue transmission will enhance development of adaptation strategies to minimize future disease burden in Australia. PMID:23166197
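
    The epidemic comparison reduces to a Pearson correlation between simulated and reported case series; a minimal sketch with synthetic counts (not the Cairns data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic monthly case counts and imperfect model output.
reported = rng.poisson(lam=20.0, size=228).astype(float)
simulated = reported + rng.normal(0.0, 15.0, size=228)

r = np.corrcoef(reported, simulated)[0, 1]
```

    A modest but significant r, such as the reported 0.372, indicates the model reproduces relative magnitude and timing despite substantial residual variability.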

  20. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    NASA Astrophysics Data System (ADS)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, covering different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. The sensitivity of the arctangent transformation is tested, showing that transformations targeting Gaussian kurtosis are also acceptable in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our automated Box-Cox and curvature transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
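
    The skewness-minimising Box-Cox selection can be sketched with a small grid search (gamma-distributed toy slopes below, not DEM data; the published tool's lambda-selection details may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical right-skewed slope gradients (degrees).
slope = rng.gamma(shape=2.0, scale=4.0, size=5_000)

def skewness(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

def boxcox(x, lam):
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

# Choose the lambda on a grid that minimises |skewness|.
grid = np.linspace(-1.0, 1.0, 81)
best_lam = min(grid, key=lambda lam: abs(skewness(boxcox(slope, lam))))

# Arctangent transform for curvatures, with a tunable scale parameter c.
def arctan_transform(curvature, c=1.0):
    return np.arctan(np.asarray(curvature) / c)
```

    The arctangent bounds arbitrarily large curvatures to (-π/2, π/2), which is what suppresses the long tails before parametric analysis.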

  1. Advanced Behavioral Analyses Show that the Presence of Food Causes Subtle Changes in C. elegans Movement.

    PubMed

    Angstman, Nicholas B; Frank, Hans-Georg; Schmitz, Christoph

    2016-01-01

    As a widely used and studied model organism, Caenorhabditis elegans offers the ability to investigate the implications of behavioral change. Although investigation of C. elegans behavioral traits is well established, analysis is often narrowed down to measurements based on a single point and thus cannot pick up on subtle behavioral and morphological changes. In the present study, videos were captured of four different C. elegans strains grown in liquid cultures and transferred to NGM-agar plates with an E. coli lawn or with no lawn. Using advanced tracking software (WormLab), the full skeleton and outline of the worms were tracked to determine whether the presence of food affects behavioral traits. In all seven investigated parameters, statistically significant differences were found in worm behavior between worms moving on NGM-agar plates with an E. coli lawn and those on NGM-agar plates with no lawn. Furthermore, multiple test groups showed differences in the interaction between variables, as the parameters that correlated significantly with speed of locomotion varied. In the present study, we demonstrate the validity of a model for analyzing C. elegans behavior beyond simple speed of locomotion. The need to account for a nested design while performing statistical analyses in similar studies is also demonstrated. With extended analyses, C. elegans behavioral change can be investigated with greater sensitivity, which could have wide utility in fields such as toxicology, drug discovery, and RNAi screening.

  2. Modelling cholera epidemics: the role of waterways, human mobility and sanitation.

    PubMed

    Mari, L; Bertuzzo, E; Righetto, L; Casagrandi, R; Gatto, M; Rodriguez-Iturbe, I; Rinaldo, A

    2012-02-07

    We investigate the role of human mobility as a driver for long-range spreading of cholera infections, which primarily propagate through hydrologically controlled ecological corridors. Our aim is to build a spatially explicit model of a disease epidemic, which is relevant to both social and scientific issues. We present a two-layer network model that accounts for the interplay between epidemiological dynamics, hydrological transport and long-distance dissemination of the pathogen Vibrio cholerae owing to host movement, described here by means of a gravity-model approach. We test our model against epidemiological data recorded during the extensive cholera outbreak that occurred in the KwaZulu-Natal province of South Africa during 2000-2001. We show that long-range human movement is fundamental in quantifying otherwise unexplained inter-catchment transport of V. cholerae, thus playing a key role in the formation of regional patterns of cholera epidemics. We also show quantitatively how heterogeneously distributed drinking water supplies and sanitation conditions may affect large-scale cholera transmission, and analyse the effects of different sanitation policies.
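
    The gravity-model coupling can be sketched as a mobility kernel in which the flux between two communities grows with their population sizes and decays with distance (the exponential functional form and deterrence parameter below are illustrative assumptions, not the paper's calibrated kernel):

```python
import math

def gravity_flux(pop_i, pop_j, distance_km, deterrence_km=50.0):
    """Relative human-mobility flux between communities i and j."""
    return pop_i * pop_j * math.exp(-distance_km / deterrence_km)

# Long-range coupling: a large distant city can outweigh a small nearby town.
near_small = gravity_flux(1_000, 2_000, 10.0)
far_large = gravity_flux(1_000, 500_000, 120.0)
```

    Layering this kernel on top of the hydrological network is what lets the model account for inter-catchment pathogen transport.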

  3. Preliminary assessment of soil moisture over vegetation

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1986-01-01

    Modeling of surface energy fluxes was combined with in-situ measurement of surface parameters, specifically the surface sensible heat flux and the substrate soil moisture. A vegetation component was incorporated into the atmospheric/substrate model, and subsequent simulations showed that fluxes over vegetation can be very different from those over bare soil for a given surface-air temperature difference. The temperature signatures measured by a satellite or airborne radiometer should therefore be interpreted in conjunction with surface measurements of modeled parameters. Paradoxically, analyses of the large-scale distribution of soil moisture availability show that there is a very high correlation between antecedent precipitation and inferred surface moisture availability, even when no specific vegetation parameterization is used in the boundary layer model. Preparatory work was begun on streamlining the present boundary layer model, developing better algorithms for relating surface temperatures to substrate moisture, preparing for participation in the French HAPEX experiment, and analyzing aircraft microwave and radiometric surface temperature data from the 1983 French Beauce experiments.

  4. Test of the efficiency of three storm water quality models with a rich set of data.

    PubMed

    Ahyerre, M; Henry, F O; Gogien, F; Chabanel, M; Zug, M; Renaudet, D

    2005-01-01

    The objective of this article is to test the efficiency of three different storm water quality models (SWQMs) on the same data set (34 rain events with suspended solids measurements) sampled on a 42 ha watershed in the center of Paris. The models were calibrated at the scale of the rain event. Considering the mass of pollution calculated per event, the results of the models are satisfactory, but they are of the same order of magnitude as those of a simple hydraulic approach combined with a constant concentration. Secondly, the mass of pollutant at the outlet of the catchment was calculated at the global scale of the 34 events. This approach shows that the simple hydraulic calculation gives better results than the SWQMs. Finally, the pollutographs are analysed, showing that storm water quality models are interesting tools for representing the shape of the pollutographs and the dynamics of the phenomenon, which can be useful to managers in some projects.

  5. Using variable rate models to identify genes under selection in sequence pairs: their validity and limitations for EST sequences.

    PubMed

    Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H

    2007-02-01

    Using likelihood-based variable selection models, we determined whether positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs because of the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show that four of five of the positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for the identification of positively selected genes.
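
    The LRT at the heart of this comparison can be sketched for one nested pair of models (hypothetical log-likelihoods below; df = 1, as when the richer model adds a single selection parameter):

```python
import math

def lrt_pvalue_df1(loglik_null, loglik_alt):
    """P-value of the likelihood ratio test with one degree of freedom."""
    stat = 2.0 * (loglik_alt - loglik_null)
    # chi-squared(1) survival function expressed via erfc
    return math.erfc(math.sqrt(max(stat, 0.0) / 2.0))

p = lrt_pvalue_df1(-1204.7, -1198.2)  # hypothetical model fits
```

    In the actual analyses, the LRT compares codon models with and without a class of positively selected sites for each sequence pair.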

  6. Assessment of human thermal comfort and mitigation measures in different urban climatotopes

    NASA Astrophysics Data System (ADS)

    Müller, N.; Kuttler, W.

    2012-04-01

    This study analyses thermal comfort in the model city of Oberhausen as an example of the densely populated Ruhr metropolitan region, Germany. As thermal loads increase due to climate change, negative impacts will arise, especially for city dwellers. Mitigation strategies should therefore be developed and considered in urban planning today to prevent future thermal stress. The method combines in-situ measurements and numerical model simulations: in a first step the actual thermal situation is determined, and possible mitigation strategies are then derived. A measuring network was installed in eight climatotopes for a one-year period, recording air temperature, relative humidity, wind speed and wind direction. Based on these parameters, human thermal comfort in terms of the physiological equivalent temperature (PET) was calculated with the RayMan Pro software, and the thermal comfort of the different climatotopes was determined. Heat stress varies with land use, so excess thermal loads in urban areas could be detected. Based on the measuring results, mitigation strategies were developed, such as increasing areas with high evaporation capacity (green areas and water bodies). These strategies were implemented as different planning scenarios in the microscale urban climate model ENVI-met, and the best measure was identified by comparing the range and effect of these scenarios. Simulations were run in three of the eight climatotopes (city center, suburban and open land sites) to analyse the effectiveness of the mitigation strategies in several land use structures. These cover the range of values of all eight climatotopes and therefore provide representative results. In the model area of 21 ha in total, the modified section in the different planning scenarios was 1 ha; thus the effect of small-scale changes could be analysed. Such areas can arise due to population decline and structural change and hold conversion potential.
Emphasis was also laid on analysing the effectiveness of water bodies, which need further research in contrast to well-analysed vegetation areas. Results show different thermal loads in the various climatotopes due to their land use structures. Both measurements and model simulations demonstrate the positive effect on thermal comfort of augmenting areas with high evaporation capacity. These effects are especially well detected in summer, when heat stress is most pronounced. The measurement-based PET calculations show a maximum difference of 4 K PET between the inner city and the open land site on summer nights. Simulation results overall show a PET reduction of 1-3 K. The average PET reduction at the city center site is about 2 K, while the maximum reduction at the suburban site can exceed 5 K. In urban areas, parks are particularly advisable as a mitigation measure, because they reduce thermal stress both by tree shading and by evapotranspiration.

  7. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
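
    The trend test can be sketched as a weighted linear regression of the cumulative summary estimate against publication rank (synthetic estimates below; the weighting scheme is an illustrative assumption, not the study's):

```python
import numpy as np

# Synthetic cumulative summary sensitivities by publication rank.
rank = np.arange(1, 11, dtype=float)
summary_sens = np.array([0.95, 0.92, 0.90, 0.89, 0.88, 0.88, 0.87, 0.87, 0.86, 0.86])

# Weight later estimates more heavily (they pool more studies).
weights = rank / rank.sum()

slope, intercept = np.polyfit(rank, summary_sens, deg=1, w=weights)
# A clearly negative slope flags a downward drift in summary sensitivity.
```

    Repeating this fit per meta-analysis and testing each slope against zero yields the counts of significant time trends reported above.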

  8. Type Ia supernovae, standardizable candles, and gravity

    NASA Astrophysics Data System (ADS)

    Wright, Bill S.; Li, Baojiu

    2018-04-01

    Type Ia supernovae (SNe Ia) are generally accepted to act as standardizable candles, and their use in cosmology led to the first confirmation of the as yet unexplained accelerated cosmic expansion. Many of the theoretical models to explain the cosmic acceleration assume modifications to Einsteinian general relativity which accelerate the expansion, but the question of whether such modifications also affect the ability of SNe Ia to be standardizable candles has rarely been addressed. This paper is an attempt to answer this question. For this we adopt a semianalytical model to calculate SNe Ia light curves in non-standard gravity. We use this model to show that the average rescaled intrinsic peak luminosity—a quantity that is assumed to be constant with redshift in standard analyses of Type Ia supernova (SN Ia) cosmology data—depends on the strength of gravity in the supernova's local environment, because the latter determines the Chandrasekhar mass—the mass of the SN Ia's white dwarf progenitor right before the explosion. This means that SNe Ia are no longer standardizable candles in scenarios where the strength of gravity evolves over time, and therefore the cosmology implied by the existing SN Ia data will be different when analysed in the context of such models. As an example, we show that the observational SN Ia cosmology data can be fitted both with a model where (Ω_M, Ω_Λ) = (0.62, 0.38) and Newton's constant G varies as G(z) = G_0(1+z)^(-1/4), and with the standard model where (Ω_M, Ω_Λ) = (0.3, 0.7) and G is constant, when the Universe is assumed to be flat.
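
    The key scaling can be made explicit: the Chandrasekhar mass goes as G^(-3/2), so under the abstract's example G(z) = G_0(1+z)^(-1/4) the progenitor mass, and with it the intrinsic peak luminosity, drifts with redshift. A minimal sketch (overall normalisation arbitrary):

```python
def g_ratio(z):
    """G(z)/G0 for the example evolving-gravity model."""
    return (1.0 + z) ** -0.25

def chandrasekhar_ratio(z):
    """M_Ch(z)/M_Ch(0), using M_Ch proportional to G^(-3/2)."""
    return g_ratio(z) ** -1.5
```

    At z = 1 the Chandrasekhar mass in this model is roughly 30% larger than today, which is what breaks the constant-peak-luminosity assumption.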

  9. Bee Venom Alleviates Motor Deficits and Modulates the Transfer of Cortical Information through the Basal Ganglia in Rat Models of Parkinson's Disease.

    PubMed

    Maurice, Nicolas; Deltheil, Thierry; Melon, Christophe; Degos, Bertrand; Mourre, Christiane; Amalric, Marianne; Kerkerian-Le Goff, Lydia

    2015-01-01

    Recent evidence points to a neuroprotective action of bee venom on nigral dopamine neurons in animal models of Parkinson's disease (PD). Here we examined whether bee venom also displays a symptomatic action by acting on the pathological functioning of the basal ganglia in rat PD models. Bee venom effects were assessed by combining motor behavior analyses and in vivo electrophysiological recordings in the substantia nigra pars reticulata (SNr, the basal ganglia output structure) in pharmacological (neuroleptic treatment) and lesional (unilateral intranigral 6-hydroxydopamine injection) PD models. In the hemi-parkinsonian 6-hydroxydopamine lesion model, subchronic bee venom treatment significantly alleviates contralateral forelimb akinesia and apomorphine-induced rotations. Moreover, a single injection of bee venom reverses haloperidol-induced catalepsy, a pharmacological model reminiscent of the parkinsonian akinetic deficit. This effect is mimicked by apamin, a blocker of small-conductance Ca2+-activated K+ (SK) channels, and blocked by CyPPA, a positive modulator of these channels, suggesting the involvement of SK channels in the antiparkinsonian action of bee venom. In vivo electrophysiological recordings in the SNr showed no significant effect of bee venom on the mean neuronal discharge frequency or pathological bursting activity. In contrast, analyses of the neuronal responses evoked by motor cortex stimulation show that bee venom reverses the 6-OHDA- and neuroleptic-induced biases in the influence exerted by the direct inhibitory and indirect excitatory striatonigral circuits. These data provide the first evidence for a beneficial action of bee venom on the pathological functioning of the cortico-basal ganglia circuits underlying motor PD symptoms, with potential relevance to the symptomatic treatment of this disease.

  10. Should adhesive debonding be simulated for intra-radicular post stress analyses?

    PubMed

    Caldas, Ricardo A; Bacchi, Atais; Barão, Valentim A R; Versluis, Antheunis

    2018-06-23

    To elucidate the influence of debonding on stress distribution and maximum stresses for intra-radicular restorations, five intra-radicular restorations were analyzed by finite element analysis (FEA): MP = metallic cast post and core; GP = glass fiber post and core; PP = pre-fabricated metallic post and core; RE = resin endocrown; CE = single-piece ceramic endocrown. Two cervical preparations were considered: no ferrule (f0) and a 2 mm ferrule (f1). The simulation was conducted in three steps: (1) intact bonds at all contacts; (2) bond failure between crown and tooth; (3) bond failure among tooth, post and crown interfaces. Frictional contact with separation between interfaces was modeled where bond failure occurred. Mohr-Coulomb stress ratios (σMC ratio) and fatigue safety factors (SF) for the dentin structure were compared with published strength values, fatigue life, and fracture patterns of teeth with intra-radicular restorations. The σMC ratio showed no differences among the models at the first step. The second step increased the σMC ratio at the ferrule compared to step 1. At the third step, the σMC ratio and SF for the f0 models were highly influenced by the post material: the CE and RE models had the highest σMC ratios and lower SF values, while MP had the lowest σMC ratio and higher SF. The f1 models showed no relevant differences among them at the third step. FEA most closely predicted the failure performance of intra-radicular posts when frictional contact was modeled. Results of analyses in which all interfaces are assumed to be perfectly bonded should be considered with caution. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
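
    A Mohr-Coulomb stress ratio of the kind used to rank the models can be sketched in its modified-Mohr form, combining the maximum and minimum principal stresses with the tensile and compressive strengths (the exact formulation in the paper may differ; all numeric values below are hypothetical, not the study's dentin data):

```python
def mohr_coulomb_ratio(sigma_max, sigma_min, tensile_strength, compressive_strength):
    """Modified Mohr-Coulomb equivalent-stress ratio (failure predicted at >= 1)."""
    return sigma_max / tensile_strength - sigma_min / compressive_strength

# Hypothetical principal stresses (MPa) at a ferrule location.
ratio = mohr_coulomb_ratio(80.0, -60.0, 100.0, 300.0)
```

    Ranking restorations by this ratio is what allows the FEA results to be compared against published strength values and fracture patterns.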

  11. Forecasting and prediction of scorpion sting cases in Biskra province, Algeria, using a seasonal autoregressive integrated moving average model

    PubMed Central

    2016-01-01

    OBJECTIVES The aims of this study were to highlight some epidemiological aspects of scorpion envenomations, to analyse and interpret the available data for Biskra province, Algeria, and to develop a forecasting model for scorpion sting cases in Biskra province, which records the highest number of scorpion stings in Algeria. METHODS In addition to analysing the epidemiological profile of scorpion stings that occurred throughout the year 2013, we used the Box-Jenkins approach to fit a seasonal autoregressive integrated moving average (SARIMA) model to the monthly recorded scorpion sting cases in Biskra from 2000 to 2012. RESULTS The epidemiological analysis revealed that scorpion stings were reported continuously throughout the year, with peaks in the summer months. The most affected age group was 15 to 49 years old, with a male predominance. The most prone human body areas were the upper and lower limbs. The majority of cases (95.9%) were classified as mild envenomations. The time series analysis showed that a (5,1,0)×(0,1,1)_12 SARIMA model offered the best fit to the scorpion sting surveillance data. This model was used to predict scorpion sting cases for the year 2013, and the fitted data showed considerable agreement with the actual data. CONCLUSIONS SARIMA models are useful for monitoring scorpion sting cases, and provide an estimate of the variability to be expected in future scorpion sting cases. This knowledge is helpful in predicting whether an unusual situation is developing or not, and could therefore assist decision-makers in strengthening the province’s prevention and control measures and in initiating rapid response measures. PMID:27866407
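
    The model structure can be sketched without a full SARIMA fitter: apply the first difference (d = 1) and the seasonal difference (D = 1, s = 12), then estimate the non-seasonal AR(5) part by least squares. Synthetic monthly counts are used below, and the least-squares step is a simplified stand-in for full maximum-likelihood SARIMA estimation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic monthly sting counts with an annual cycle (13 years of data).
months = np.arange(156)
y = 200.0 + 80.0 * np.sin(2.0 * np.pi * months / 12.0) + rng.normal(0.0, 10.0, 156)

# Differencing for SARIMA(5,1,0)x(0,1,1)_12:
dy = np.diff(y)            # d = 1
sdy = dy[12:] - dy[:-12]   # D = 1, s = 12

# Least-squares fit of the AR(5) part on the fully differenced series.
p = 5
T = len(sdy)
X = np.column_stack([sdy[p - k - 1:T - k - 1] for k in range(p)])
ar_coef, *_ = np.linalg.lstsq(X, sdy[p:], rcond=None)
```

    Forecasts are produced on the differenced scale and then un-differenced back to monthly counts, which is how the 2013 predictions above would be generated.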

  12. Diagnostic accuracy of a bayesian latent group analysis for the detection of malingering-related poor effort.

    PubMed

    Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina

    2013-01-01

    In the last decade, different statistical techniques have been introduced to improve assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this decrement, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients of the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(z_i|D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing for effort in medico-legal settings.
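
    The latent-group idea can be illustrated with a toy two-component binomial mixture (hypothetical group success rates, not the published model): the posterior probability p(z_i|D) that examinee i belongs to the poor-effort component, given a k-of-n score on a forced-choice task.

    ```python
    from math import comb

    def posterior_malingering(k, n, p_honest=0.95, p_maling=0.40, prior=0.5):
        """Posterior probability of the poor-effort component for k/n correct,
        assuming binomial likelihoods with hypothetical group success rates."""
        like_h = comb(n, k) * p_honest**k * (1 - p_honest)**(n - k)
        like_m = comb(n, k) * p_maling**k * (1 - p_maling)**(n - k)
        return (prior * like_m) / (prior * like_m + (1 - prior) * like_h)

    # An examinee scoring 24/50 on a 2AFC task (near chance level) is
    # assigned almost entirely to the poor-effort component, whereas a
    # 48/50 score is assigned to the honest component.
    print(posterior_malingering(24, 50), posterior_malingering(48, 50))
    ```

    The published model additionally places priors over the group rates themselves and infers them jointly; the fixed rates here are purely illustrative.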

  13. From local hydrological process analysis to regional hydrological model application in Benin: Concept, results and perspectives

    NASA Astrophysics Data System (ADS)

    Bormann, H.; Faß, T.; Giertz, S.; Junge, B.; Diekkrüger, B.; Reichert, B.; Skowronek, A.

    This paper presents the concept, first results and perspectives of the hydrological sub-project of the IMPETUS-Benin project, which is part of the GLOWA program funded by the German Ministry of Education and Research. In addition to the research concept, first results on field hydrology, pedology, hydrogeology and hydrological modelling are presented, focusing on the understanding of the actual hydrological processes. For analysing the processes, a 30 km² catchment acting as a super test site was chosen, which is assumed to be representative of the entire catchment of about 15,000 km². First results of the field investigations show that infiltration, runoff generation and soil erosion strongly depend on land cover and land use, which in turn significantly influence the soil properties. A conceptual hydrogeological model has been developed summarising the process knowledge on runoff generation and subsurface hydrological processes. This conceptual model shows a dominance of fast runoff components (surface runoff and interflow), groundwater recharge along preferential flow paths, temporary interaction between surface water and groundwater, and separate groundwater systems at different scales (shallow, temporary groundwater at the local scale and permanent, deep groundwater at the regional scale). The findings of intensive measurement campaigns on soil hydrology, groundwater dynamics and soil erosion have been integrated into different, scale-dependent hydrological modelling concepts applied at different scales in the target region (upper Ouémé catchment in Benin, about 15,000 km²). The models have been applied and successfully validated. They will be used for integrated scenario analyses in the forthcoming project phase to assess the impacts of global change on the regional water cycle and on typical problem complexes such as food security in West African countries.

  14. Bee Venom Alleviates Motor Deficits and Modulates the Transfer of Cortical Information through the Basal Ganglia in Rat Models of Parkinson’s Disease

    PubMed Central

    Maurice, Nicolas; Deltheil, Thierry; Melon, Christophe; Degos, Bertrand; Mourre, Christiane

    2015-01-01

    Recent evidence points to a neuroprotective action of bee venom on nigral dopamine neurons in animal models of Parkinson’s disease (PD). Here we examined whether bee venom also displays a symptomatic action by acting on the pathological functioning of the basal ganglia in rat PD models. Bee venom effects were assessed by combining motor behavior analyses and in vivo electrophysiological recordings in the substantia nigra pars reticulata (SNr, basal ganglia output structure) in pharmacological (neuroleptic treatment) and lesional (unilateral intranigral 6-hydroxydopamine injection) PD models. In the hemi-parkinsonian 6-hydroxydopamine lesion model, subchronic bee venom treatment significantly alleviates contralateral forelimb akinesia and apomorphine-induced rotations. Moreover, a single injection of bee venom reverses haloperidol-induced catalepsy, a pharmacological model reminiscent of parkinsonian akinetic deficit. This effect is mimicked by apamin, a blocker of small conductance Ca2+-activated K+ (SK) channels, and blocked by CyPPA, a positive modulator of these channels, suggesting the involvement of SK channels in the bee venom antiparkinsonian action. In vivo electrophysiological recordings in the SNr showed no significant effect of bee venom on the mean neuronal discharge frequency or pathological bursting activity. In contrast, analyses of the neuronal responses evoked by motor cortex stimulation showed that bee venom reverses the 6-OHDA- and neuroleptic-induced biases in the influence exerted by the direct inhibitory and indirect excitatory striatonigral circuits. These data provide the first evidence for a beneficial action of bee venom on the pathological functioning of the cortico-basal ganglia circuits underlying motor PD symptoms with potential relevance to the symptomatic treatment of this disease. PMID:26571268

  15. 75 FR 9411 - Official Release of the MOVES2010 Motor Vehicle Emissions Model for Emissions Inventories in SIPs...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-02

    ... hot-spot analyses at this time; the Agency will approve the model for such analyses in the near future...-Level Conformity Hot-Spot Analyses Availability of MOVES2010 and Support Materials Copies of the... for regional emissions analyses and CO hot-spot analyses for transportation conformity (73 FR 3464...

  16. Characterization of pigments and colors used in ancient Egyptian boat models

    NASA Astrophysics Data System (ADS)

    Hühnerfuß, Katja; von Bohlen, Alex; Kurth, Dieter

    2006-11-01

    The analyses of pigments originating from well-dated ancient boat models found in Egyptian graves were used for characterization and for dating tasks of unknown objects. A nearly destruction-free sampling technique using cotton buds was applied to sample these valuable artifacts for subsequent Total Reflection X-Ray Fluorescence Spectrometry (TXRF) analysis. Two relevant collections of Egyptian objects of art were at our disposal, one from the Ägyptisches Museum Berlin and the second from the British Museum, London. Three groups of colors were studied; they originate from white, red and blue/green paints, respectively. The results of the analyses performed on micro-amounts of paint (< 1 μg) show that some artifacts were misclassified and belong to other epochs. Some others were retouched with modern colors. In general, it can be stated that results obtained by TXRF may resolve some uncertainties that arise when applying classical archaeological dating methods.

  17. Selecting risk factors: a comparison of discriminant analysis, logistic regression and Cox's regression model using data from the Tromsø Heart Study.

    PubMed

    Brenn, T; Arnesen, E

    1985-01-01

    For comparative evaluation, discriminant analysis, logistic regression and Cox's model were used to select risk factors for total and coronary deaths among 6595 men aged 20-49 followed for 9 years. Groups with mortality between 5 and 93 per 1000 were considered. Discriminant analysis selected variable sets only marginally different from those of the logistic and Cox methods, which always selected the same sets. A time-saving option, offered for both the logistic and Cox selection, showed no advantage compared with discriminant analysis. When analysing more than 3800 subjects, the logistic and Cox methods consumed, respectively, 80 and 10 times more computer time than discriminant analysis. When including the same set of variables in non-stepwise analyses, all methods estimated coefficients that in most cases were almost identical. In conclusion, discriminant analysis is advocated for preliminary or stepwise analysis; otherwise, Cox's method should be used.
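
    The computational gap reported above stems from the fitting procedures: the two-group linear discriminant has a closed-form solution, whereas logistic and Cox fits are iterative. A minimal pure-Python sketch of the closed-form Fisher discriminant direction w = Sw⁻¹(m₁ − m₀) on invented two-variable data (not the Tromsø data):

    ```python
    def mean(rows):
        n = len(rows)
        return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

    def within_scatter(rows, m):
        """2x2 within-group scatter matrix of deviations from the mean."""
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    def fisher_direction(group0, group1):
        """Closed-form discriminant direction w = Sw^-1 (m1 - m0)."""
        m0, m1 = mean(group0), mean(group1)
        s0, s1 = within_scatter(group0, m0), within_scatter(group1, m1)
        sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
        det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
        inv = [[sw[1][1] / det, -sw[0][1] / det],
               [-sw[1][0] / det, sw[0][0] / det]]
        diff = [m1[0] - m0[0], m1[1] - m0[1]]
        return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
                inv[1][0] * diff[0] + inv[1][1] * diff[1]]

    # Toy (age-like, blood-pressure-like) measurements for two outcome groups.
    survivors = [[30, 120], [35, 125], [40, 118], [33, 130]]
    deaths = [[45, 150], [50, 160], [48, 155], [52, 145]]
    w = fisher_direction(survivors, deaths)
    score = lambda x: w[0] * x[0] + w[1] * x[1]
    # Projected scores separate the groups without any iterative fitting.
    print(all(score(d) > max(score(s) for s in survivors) for d in deaths))
    ```

    Logistic regression would reach a similar separating direction here, but only after several Newton iterations per candidate variable set, which is the source of the runtime ratios quoted in the abstract.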

  18. The SZ-5 Spaceship Orbit Changes During The 2003 "Halloween Storm"

    NASA Astrophysics Data System (ADS)

    Huang, C.; Liu, D.; Guo, J.

    2017-12-01

    We analyse the daily semi-major axis variations of the SZ-5 (Shenzhou 5) spaceship from Oct. 20 to Dec. 30, 2003, a period that includes the 2003 "Halloween Storm". Significant orbital decay was observed in late October due to the great solar flares and severe geomagnetic storms. From the equation for the air-drag force on a spacecraft and the SZ-5 orbital decay information, we derive the relative changes in thermospheric density during the 2003 "Halloween Storm" and compare the results with the Naval Research Laboratory Mass Spectrometer Incoherent Scatter Radar Extended Model (NRLMSISE-00). The analyses show that the thermospheric density (at the altitude of SZ-5, about 350 km) during storm time increases to approximately three times its quiet-time value, and that the empirical model may underestimate the thermospheric density changes during this severe storm.
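
    For a near-circular orbit, the drag-induced decay rate of the semi-major axis is da/dt ≈ −√(μa)·(C_D·A/m)·ρ, so the ratio of storm-time to quiet-time density follows directly from the ratio of observed decay rates at roughly the same altitude, with the ballistic coefficient cancelling. A sketch with hypothetical decay values (the actual SZ-5 tracking data are not reproduced here):

    ```python
    from math import sqrt

    MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

    def density_from_decay(da_dt, a, ballistic_coeff):
        """Mean density implied by a semi-major axis decay rate da/dt (m/s)
        for a near-circular orbit: da/dt = -sqrt(mu*a) * (Cd*A/m) * rho."""
        return -da_dt / (sqrt(MU * a) * ballistic_coeff)

    a = 6371e3 + 350e3   # ~350 km altitude orbit
    bc = 0.005           # hypothetical Cd*A/m in m^2/kg

    # Hypothetical decay rates (m per day, converted to m/s):
    quiet = density_from_decay(-50.0 / 86400, a, bc)
    storm = density_from_decay(-150.0 / 86400, a, bc)
    # The density ratio is independent of the assumed ballistic coefficient.
    print(storm / quiet)
    ```

    With these invented numbers a tripled decay rate implies a tripled density, which is the form of inference the abstract describes; the real analysis would also account for altitude change and averaging over each day.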

  19. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    USGS Publications Warehouse

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
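
    In its simplest form, the coupling described above reduces to arithmetic on Hazus-style loss estimates: losses avoided = unmitigated loss − mitigated loss, and return on mitigation investment = (expected losses avoided − mitigation cost) / mitigation cost. A toy illustration with invented figures (not the LUPM's actual formulation, which also propagates uncertainty):

    ```python
    def losses_avoided(loss_unmitigated, loss_mitigated):
        """Hazus-style loss estimates with and without the mitigation action."""
        return loss_unmitigated - loss_mitigated

    def mitigation_roi(avoided, event_probability, mitigation_cost):
        """Expected return on mitigation investment over the analysis horizon."""
        expected_benefit = event_probability * avoided
        return (expected_benefit - mitigation_cost) / mitigation_cost

    avoided = losses_avoided(120e6, 45e6)     # $ losses avoided if event occurs
    roi = mitigation_roi(avoided, 0.2, 10e6)  # 20% event chance, $10M retrofit
    print(avoided, roi)
    ```

    A positive ROI indicates the mitigation policy is expected to pay for itself under the scenario's event probability, which is the kind of comparison the paper makes across its three policies.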

  20. Transcriptional Pathways Altered in Response to Vibration in a Model of Hand-Arm Vibration Syndrome.

    PubMed

    Waugh, Stacey; Kashon, Michael L; Li, Shengqiao; Miller, Gerome R; Johnson, Claud; Krajnak, Kristine

    2016-04-01

    The aim of this study was to use an established model of vibration-induced injury to assess frequency-dependent changes in transcript expression in skin, artery, and nerve tissues. Transcript expression in tissues from control and vibration-exposed rats (4 h/day for 10 days at 62.5, 125, or 250 Hz; 49 m/s², rms) was measured. Transcripts affected by vibration were used in bioinformatics analyses to identify molecular- and disease-related pathways associated with exposure to vibration. Analyses revealed that cancer-related pathways showed frequency-dependent changes in activation or inhibition. Most notably, the breast cancer 1 (BRCA1) pathway was affected. Other pathways associated with breast cancer type 1 susceptibility protein-related signaling, or associated with cancer and cell cycle/cell survivability, were also affected. Occupational exposure to vibration may result in DNA damage and alterations in cell signaling pathways that have significant effects on cellular division.
