Sample records for mixed models analyses

  1. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
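The reparameterization described above can be sketched with a truncated power basis, in which continuity (and, for a quadratic term, first-derivative smoothness) at each knot is implicit in the basis itself rather than imposed by explicit side conditions. The coefficients below are hypothetical, and this is Python rather than the SAS/S-PLUS code the authors describe:

```python
# Sketch (not the authors' code): a fixed-knot regression spline built from a
# truncated power basis. Continuity at the knots is implicit in the basis, so
# no explicit side conditions are needed.

def truncated_power_basis(t, knots, order=2):
    """Design-matrix row for time t: global polynomial terms plus one
    truncated term (t - k)_+^order per knot."""
    row = [t ** p for p in range(order + 1)]          # 1, t, ..., t^order
    row += [max(t - k, 0.0) ** order for k in knots]  # (t - k)_+^order
    return row

def spline_value(coefs, t, knots, order=2):
    return sum(c * b for c, b in zip(coefs, truncated_power_basis(t, knots, order)))

# A quadratic spline with one knot at t = 1: the two pieces join continuously,
# and with a continuous first derivative, since the truncated term is quadratic.
coefs = [0.5, 1.0, -0.3, 0.8]  # hypothetical fitted fixed-effect coefficients
left  = spline_value(coefs, 1.0 - 1e-9, [1.0])
right = spline_value(coefs, 1.0 + 1e-9, [1.0])
print(abs(left - right) < 1e-6)  # smooth join at the knot
```

In the mixed-model setting, the same basis columns can enter the random-effects design matrix, which is what makes the implicit-constraint formulation convenient to program in standard software.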

  2. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also the model's software implementation affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three meta-analysis sizes (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. In contrast, proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma et al., shows convergence problems in some scenarios. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
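As a rough illustration of what these bivariate implementations estimate, the sketch below pools study-level sensitivity and specificity on the logit scale. The 2x2 counts are invented, and the unweighted pooling is a crude stand-in for the random-effects fitting that proc nlmixed or glmer actually perform:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical 2x2 counts per study: (TP, FN, TN, FP)
studies = [(90, 10, 80, 20), (45, 5, 40, 10), (170, 30, 150, 50)]

# Simple unweighted pooling on the logit scale -- a crude stand-in for the
# bivariate random-effects pooling done by glmer / proc nlmixed.
sens = expit(sum(logit(tp / (tp + fn)) for tp, fn, tn, fp in studies) / len(studies))
spec = expit(sum(logit(tn / (tn + fp)) for tp, fn, tn, fp in studies) / len(studies))
print(round(sens, 3), round(spec, 3))
```

A real bivariate model additionally weights studies by size and models the between-study correlation of logit-sensitivity and logit-specificity, which is exactly where the implementations compared in this study differ.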

  3. Mixing in the shear superposition micromixer: three-dimensional analysis.

    PubMed

    Bottausci, Frederic; Mezić, Igor; Meinhart, Carl D; Cardonne, Caroline

    2004-05-15

In this paper, we analyse mixing in an active chaotic advection micromixer. The micromixer consists of a main rectangular channel and three cross-stream secondary channels that provide the ability for time-dependent actuation of the flow stream in the direction orthogonal to the main stream. Three-dimensional motion in the mixer is studied. Numerical simulations and modelling of the flow are pursued in order to understand the experiments. It is shown that for some parameter values a simple model can be derived that clearly represents the nature of the flow. Particle image velocimetry measurements of the flow are compared with numerical simulations and the analytical model. A measure for mixing, the mixing variance coefficient (MVC), is analysed. It is shown that mixing is substantially improved with multiple side channels with oscillatory flows whose frequencies increase downstream. The optimization of MVC results for single side-channel mixing is presented. It is shown that the dependence of MVC on frequency is not monotonic, and a local minimum is found. Residence time distributions derived from the analytical model are analysed. It is shown that, while the average Lagrangian velocity profile is flattened relative to the steady flow, Taylor-dispersion effects are still present for the current micromixer configuration.
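A coefficient-of-variation style mixing measure in the spirit of the MVC can be sketched as follows; the paper's exact normalisation may differ, and the concentration profiles are invented:

```python
import statistics

def mixing_variance_coefficient(conc):
    """Coefficient-of-variation style mixing measure: 0 for a perfectly
    mixed (uniform) concentration field, larger when unmixed.
    (The paper's exact MVC normalisation may differ -- this is a sketch.)"""
    mean = statistics.fmean(conc)
    return statistics.pstdev(conc) / mean

unmixed = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]   # dye only on one side of the channel
mixed   = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]   # fully homogenised
print(mixing_variance_coefficient(unmixed), mixing_variance_coefficient(mixed))
```

Tracking such a measure along the channel length gives a downstream mixing profile, which is how a non-monotonic dependence on actuation frequency can be detected.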

  4. Hydrothermal contamination of public supply wells in Napa and Sonoma Valleys, California

    USGS Publications Warehouse

    Forrest, Matthew J.; Kulongoski, Justin T.; Edwards, Matthew S.; Farrar, Christopher D.; Belitz, Kenneth; Norris, Richard D.

    2013-01-01

    Groundwater chemistry and isotope data from 44 public supply wells in the Napa and Sonoma Valleys, California were determined to investigate mixing of relatively shallow groundwater with deeper hydrothermal fluids. Multivariate analyses including Cluster Analyses, Multidimensional Scaling (MDS), Principal Components Analyses (PCA), Analysis of Similarities (ANOSIM), and Similarity Percentage Analyses (SIMPER) were used to elucidate constituent distribution patterns, determine which constituents are significantly associated with these hydrothermal systems, and investigate hydrothermal contamination of local groundwater used for drinking water. Multivariate statistical analyses were essential to this study because traditional methods, such as mixing tests involving single species (e.g. Cl or SiO2) were incapable of quantifying component proportions due to mixing of multiple water types. Based on these analyses, water samples collected from the wells were broadly classified as fresh groundwater, saline waters, hydrothermal fluids, or mixed hydrothermal fluids/meteoric water wells. The Multivariate Mixing and Mass-balance (M3) model was applied in order to determine the proportion of hydrothermal fluids, saline water, and fresh groundwater in each sample. Major ions, isotopes, and physical parameters of the waters were used to characterize the hydrothermal fluids as Na–Cl type, with significant enrichment in the trace elements As, B, F and Li. Five of the wells from this study were classified as hydrothermal, 28 as fresh groundwater, two as saline water, and nine as mixed hydrothermal fluids/meteoric water wells. The M3 mixing-model results indicated that the nine mixed wells contained between 14% and 30% hydrothermal fluids. Further, the chemical analyses show that several of these mixed-water wells have concentrations of As, F and B that exceed drinking-water standards or notification levels due to contamination by hydrothermal fluids.
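The idea behind an M3-style mixing calculation can be sketched as a mass balance over end members: with two conservative tracers plus the constraint that the fractions sum to one, a three-end-member sample yields a 3x3 linear system. The end-member concentrations below are hypothetical, and this is a much-simplified stand-in for the actual M3 model:

```python
# Hypothetical end-member tracer concentrations (two conservative tracers
# per water type); a simplified stand-in for the M3 model.
#              Cl (mg/L)  B (mg/L)
fresh        = (20.0,     0.05)
saline       = (1500.0,   0.60)
hydrothermal = (600.0,    8.00)

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule (stdlib only)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        out.append(det(Aj) / d)
    return out

def mixing_fractions(sample_cl, sample_b):
    # Rows: Cl balance, B balance, fractions sum to 1.
    A = [[fresh[0], saline[0], hydrothermal[0]],
         [fresh[1], saline[1], hydrothermal[1]],
         [1.0,      1.0,       1.0]]
    return solve3(A, [sample_cl, sample_b, 1.0])

# A sample that is 70% fresh, 10% saline, 20% hydrothermal:
cl = 0.7 * fresh[0] + 0.1 * saline[0] + 0.2 * hydrothermal[0]
b  = 0.7 * fresh[1] + 0.1 * saline[1] + 0.2 * hydrothermal[1]
print([round(f, 3) for f in mixing_fractions(cl, b)])  # fractions recovered
```

The real M3 approach works with many constituents simultaneously via principal components, which is why the multivariate treatment succeeds where single-species mixing tests fail.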

  5. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

Most models of mixed methods research design place equal emphasis on qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  6. Applications of MIDAS regression in analysing trends in water quality

    NASA Astrophysics Data System (ADS)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets comprising different classes of variables, including water quality, hydrological and meteorological variables. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency of the data collection: typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of using MIDAS regression lies in the flexible and parsimonious modelling of the influence of rainfall and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency sampling nature of the data.
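The core MIDAS device, collapsing many high-frequency observations into one regressor through a parametric lag polynomial, can be sketched with exponential Almon weights. The theta values and the rainfall series are hypothetical; in practice the weights are estimated jointly with the regression:

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights used in MIDAS regression:
    w_j proportional to exp(theta1*j + theta2*j**2), normalised to sum to 1."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(n_lags)]
    s = sum(raw)
    return [w / s for w in raw]

def midas_regressor(daily_rain, theta1=-0.1, theta2=-0.01):
    """Collapse len(daily_rain) daily rainfall values into one weighted
    regressor aligned with a fortnightly water-quality sample.
    (The theta values are hypothetical; in practice they are estimated.)"""
    w = exp_almon_weights(len(daily_rain), theta1, theta2)
    return sum(wj * r for wj, r in zip(w, daily_rain))

rain_last_14_days = [0, 0, 5, 12, 3, 0, 0, 1, 0, 0, 8, 2, 0, 0]
print(round(midas_regressor(rain_last_14_days), 3))
```

Because the whole lag profile is governed by two parameters rather than fourteen free coefficients, the specification stays parsimonious no matter how fine the high-frequency sampling is.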

  7. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  8. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263

  9. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intra-spot correlation. A new separate-channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses.
The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
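The (M, A) transformation at the heart of this reformulation is easy to state: it is an invertible change of coordinates on the two channel log-intensities, so keeping both M and A preserves everything the separate channels contain, whereas a log-ratio-only analysis discards A. A minimal sketch:

```python
import math

def to_MA(red, green):
    """Transform two-channel intensities to the M-value (log-ratio) and
    A-value (average log-expression) used in two-colour microarray analysis."""
    r, g = math.log2(red), math.log2(green)
    return r - g, (r + g) / 2          # M, A

def from_MA(M, A):
    """Invert the transform: the (M, A) pair carries exactly the same
    information as the two separate channel log-intensities."""
    return 2 ** (A + M / 2), 2 ** (A - M / 2)   # red, green

red, green = 1024.0, 256.0
M, A = to_MA(red, green)
print(M, A)                 # 2.0 9.0
print(from_MA(M, A))        # (1024.0, 256.0)
```

Modelling M and A jointly, with a common intra-spot correlation, is what lets the mixed model collapse into an ordinary linear model in the article's approach.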

  10. MANOVA vs nonlinear mixed effects modeling: The comparison of growth patterns of female and male quail

    NASA Astrophysics Data System (ADS)

    Gürcan, Eser Kemal

    2017-04-01

The most commonly used methods for analyzing time-dependent data are multivariate analysis of variance (MANOVA) and nonlinear regression models. The aim of this study was to compare some MANOVA techniques with a nonlinear mixed modeling approach for investigating growth differentiation in female and male Japanese quail. Weekly individual body weight data of 352 male and 335 female quail from hatch to 8 weeks of age were used for the analyses. When all the analyses are evaluated together, nonlinear mixed modeling is superior to the other techniques because it also reveals the individual variation. In addition, the profile analysis also provides important information.
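A nonlinear mixed model of the kind compared here typically combines a growth curve with bird-level random effects. The sketch below uses a Gompertz curve with a random asymptotic weight; all parameter values are illustrative, not the study's estimates:

```python
import math
import random

def gompertz(t, A, b, k):
    """Gompertz growth curve: asymptotic weight A, shape b, rate k."""
    return A * math.exp(-b * math.exp(-k * t))

# Nonlinear *mixed* modelling lets the asymptotic weight vary by bird
# (a random effect) around a sex-specific mean. Values are illustrative.
random.seed(1)
mean_A_female, sd_A = 210.0, 15.0   # grams (hypothetical)
birds = [random.gauss(mean_A_female, sd_A) for _ in range(5)]
weights_at_8_weeks = [round(gompertz(8.0, A, b=4.5, k=0.5), 1) for A in birds]
print(weights_at_8_weeks)   # individual variation around the mean curve
```

It is exactly this per-individual parameter variation that MANOVA-style mean comparisons cannot expose, which is the study's argument for the mixed approach.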

  11. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
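The downward bias described above is easy to reproduce. In the simulation below, neurons are clustered within animals, and the naive standard error (treating all 40 neurons as independent) comes out smaller than the cluster-aware standard error based on animal means; the variance components are invented:

```python
import random
import statistics

random.seed(42)
# Simulated Sholl-style data: 4 animals, 10 neurons each. Neurons within an
# animal share an animal-level effect, so they are not independent.
animals = []
for _ in range(4):
    animal_effect = random.gauss(0, 3)            # between-animal variation
    animals.append([10 + animal_effect + random.gauss(0, 1) for _ in range(10)])

all_neurons = [x for a in animals for x in a]
naive_se = statistics.stdev(all_neurons) / len(all_neurons) ** 0.5   # pretends n = 40
animal_means = [statistics.fmean(a) for a in animals]
cluster_se = statistics.stdev(animal_means) / len(animal_means) ** 0.5  # effective n = 4

# The naive SE is biased downwards, inflating false positives.
print(round(naive_se, 3), round(cluster_se, 3))
```

A mixed effects model generalises this animal-means idea by estimating the between-animal and within-animal variance components explicitly, which restores valid p-values.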

  12. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…

  13. Global analysis of fermion mixing with exotics

    NASA Technical Reports Server (NTRS)

    Nardi, Enrico; Roulet, Esteban; Tommasini, Daniele

    1991-01-01

Limits on deviations of the lepton and quark weak couplings from their standard model values are analyzed in a general class of models where the known fermions are allowed to mix with new heavy particles carrying exotic SU(2) x U(1) quantum number assignments (left-handed singlets or right-handed doublets). These mixings appear in many extensions of the electroweak theory, such as models with mirror fermions, E(sub 6) models, etc. The results update previous analyses and considerably improve the existing bounds.

  14. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS: A REPLY TO ROBBINS, HILDERBRAND AND FARLEY (2002)

    EPA Science Inventory

    Phillips & Koch (2002) outlined a new stable isotope mixing model which incorporates differences in elemental concentrations in the determinations of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...

  15. “SNP Snappy”: A Strategy for Fast Genome-Wide Association Studies Fitting a Full Mixed Model

    PubMed Central

    Meyer, Karin; Tier, Bruce

    2012-01-01

    A strategy to reduce computational demands of genome-wide association studies fitting a mixed model is presented. Improvements are achieved by utilizing a large proportion of calculations that remain constant across the multiple analyses for individual markers involved, with estimates obtained without inverting large matrices. PMID:22021386

  16. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that the vine copula model can improve on the trivariate generalized linear mixed model in fit to the data, and makes the argument for moving to vine copula random-effects models, especially because of their richness - including reflection-asymmetric tail dependence - and their computational feasibility despite being three-dimensional.

  17. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    NASA Astrophysics Data System (ADS)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high temperature reactors (VHTR), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly-diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.

  18. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    PubMed

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
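A mixed-effects SDE of this general flavour can be sketched with an Euler-Maruyama scheme in which each patient draws an individual parameter around a population mean. The Ornstein-Uhlenbeck dynamics and all values below are generic stand-ins, not the paper's actual model:

```python
import math
import random

def simulate_ou(theta, sigma, x0=1.0, dt=0.01, n_steps=500, rng=None):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck SDE
    dX = -theta*X dt + sigma dW -- a generic stand-in for coarseness
    dynamics, not the paper's model."""
    rng = rng or random.Random(0)
    x = x0
    path = [x]
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        path.append(x)
    return path

# Mixed effects: each patient gets an individual decay rate theta_i drawn
# around a population mean (values are illustrative).
rng = random.Random(7)
population_theta, between_patient_sd = 1.5, 0.4
patients = [abs(rng.gauss(population_theta, between_patient_sd)) for _ in range(3)]
paths = [simulate_ou(th, sigma=0.1, rng=rng) for th in patients]
print([round(p[-1], 2) for p in paths])
```

Estimating the population parameters from such paths, with CPR quality measurements entering the drift as covariates, is the kind of problem the mixed-effects SDE machinery solves.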

  19. Evidence of a major gene from Bayesian segregation analyses of liability to osteochondral diseases in pigs.

    PubMed

    Kadarmideen, Haja N; Janss, Luc L G

    2005-11-01

Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC records, and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81) compared to the polygenic variance (sigma_u^2). Consequently, heritabilities under mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The variant MITM with an informative prior on sigma_u^2 showed significant improvement in marginal distributions and accuracy of parameters. MITM with a "reduced polygenic model" for parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability of these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans.

  20. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
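The probabilistic machinery here is, at its core, Monte Carlo propagation of input uncertainty through a transport model. The sketch below uses a made-up monotone function in place of the actual MWL model and invented input distributions, and reports a median and 95th percentile for the performance metric:

```python
import random

def peak_dose(velocity, retardation, inventory):
    """Toy performance metric: a made-up monotone function standing in for
    the landfill transport model (not the actual MWL model)."""
    return inventory * velocity / retardation

random.seed(0)
samples = []
for _ in range(10_000):
    v = random.lognormvariate(0.0, 0.5)     # uncertain seepage velocity
    R = random.uniform(2.0, 10.0)           # uncertain retardation factor
    Q = random.gauss(1.0, 0.1)              # uncertain inventory (normalised)
    samples.append(peak_dose(v, R, Q))

samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(round(median, 3), round(p95, 3))   # uncertainty band on the metric
```

Sensitivity analysis then asks which input distribution drives most of the spread between the median and the upper percentile, which is how monitoring triggers can be prioritised.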

  1. Machine learning to construct reduced-order models and scaling laws for reactive-transport applications

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.

    2017-12-01

The efficiency of many hydrogeological applications such as reactive-transport and contaminant remediation depends strongly on the macroscopic mixing occurring in the aquifer. In remediation activities, it is fundamental to enhance and control this mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied, partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations across many subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors, so they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need for computationally efficient models that accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning (ML). These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; only the method by which ROMs are constructed differs. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, support vector machines (SVMs) are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield. The dependence of the scaling-law parameters on model inputs is evaluated using cluster analysis. We demonstrate the application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable to analyses of alternative site remediation scenarios.
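Since the abstract likens ROMs to lookup tables, the cheapest possible sketch is a tabulated surrogate: a handful of "expensive" runs (here replaced by an invented analytic stand-in) interpolated piecewise-linearly. The authors use SVMs; this is only meant to show the offline-build / online-query pattern:

```python
from bisect import bisect_left

def high_fidelity(perturbation):
    """Stand-in for an expensive simulation of degree of mixing as a
    function of one model input (purely illustrative)."""
    return 1.0 - 1.0 / (1.0 + 5.0 * perturbation)

# Build the ROM offline from a handful of "expensive" runs...
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [high_fidelity(x) for x in xs]

def rom(x):
    """Reduced-order model: piecewise-linear interpolation in the lookup
    table, evaluated cheaply instead of rerunning the simulator."""
    i = min(max(bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# ...then query it cheaply online.
print(round(rom(0.6), 4), round(high_fidelity(0.6), 4))
```

An SVM (or any regression) surrogate plays the same role in higher dimensions, where a dense lookup table would be infeasible; the input-importance screening step decides which dimensions the surrogate needs at all.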

  2. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of the sources to the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentration in sources on the accuracy of source-contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, both with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial-mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions, based on the aggregated FA concentration of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentration on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Inclusion of the FA concentrations of the sources in the IMM formulation should therefore be standard procedure for accurate estimation of source contributions; the post-model correction approach that currently dominates CSSI fingerprinting causes bias, especially when the FA concentrations of the sources differ substantially.
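The concentration-dependent mixing rule at issue weights each source's isotopic signature by f_i * C_i rather than by f_i alone. The sketch below shows how the two assumptions give different mixture δ13C values for the same source contributions; the concentrations and signatures are invented:

```python
def mixture_delta(fractions, concentrations, deltas):
    """Concentration-dependent mixing: each source contributes to the
    mixture's delta-13C in proportion to f_i * C_i, not f_i alone."""
    weights = [f * c for f, c in zip(fractions, concentrations)]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, deltas)) / total

# Hypothetical FA concentrations (mg/g soil) and delta-13C signatures
# for three land-use sources:
conc   = [2.0, 0.5, 4.0]
delta  = [-30.0, -27.0, -22.0]
f_true = [0.5, 0.3, 0.2]

obs = mixture_delta(f_true, conc, delta)
naive = sum(f * d for f, d in zip(f_true, delta))   # concentration-independent
print(round(obs, 2), round(naive, 2))  # the two assumptions disagree
```

When source concentrations differ severalfold, as here, inverting the naive rule against a concentration-dependent reality is exactly what biases the estimated fractions.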

  3. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

Developmental data on juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets don't take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provides regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  4. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models.
Although method development for structural identifiability in mixed-effects models has received very little attention, despite the wide use of such models, the three methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  6. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    PubMed

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.

  7. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations is important in sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses require forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost, and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) from multiple starting points, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovered the known initial condition of the reference data even when the optimization started far from the true solution, whereas the inverse analysis using the uniform grain-size model required starting parameters within a quite narrow range near the solution. The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
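The inverse-analysis loop itself (forward model, misfit, optimization over initial conditions) can be sketched in a few lines. The forward model below is an arbitrary toy stand-in for the shallow-water turbidity-current model, and exhaustive grid search stands in for the multi-start Simplex optimizer; only the reference condition h = 2.0 m, U = 5.0 m/s is taken from the abstract.

```python
def forward(h, u):
    # hypothetical deposit response at three downstream stations (toy model)
    return [h * u + k * u for k in range(3)]

reference = forward(2.0, 5.0)  # "observed" data from the known true condition

def misfit(h, u):
    # sum of squared differences between predicted and observed deposits
    return sum((p - o) ** 2 for p, o in zip(forward(h, u), reference))

# search h in 0.1..5.0 m and u in 0.1..10.0 m/s (step 0.1)
best = min(((misfit(h / 10, u / 10), h / 10, u / 10)
            for h in range(1, 51) for u in range(1, 101)),
           key=lambda t: t[0])
print(best)
```

Because this toy forward model is identifiable, the search recovers (h, U) = (2.0, 5.0) with zero misfit; a richer (e.g. mixed grain-size) forward response is what lets a real inversion discriminate conditions that a uniform grain-size model cannot.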

  8. Impact of tree priors in species delimitation and phylogenetics of the genus Oligoryzomys (Rodentia: Cricetidae).

    PubMed

    da Cruz, Marcos de O R; Weksler, Marcelo

    2018-02-01

The use of genetic data and tree-based algorithms to delimit evolutionary lineages is becoming an important practice in taxonomic identification, especially in morphologically cryptic groups. The effects of different phylogenetic and/or coalescent models in the analyses of species delimitation, however, are not clear. In this paper, we assess the impact of different evolutionary priors in phylogenetic estimation, species delimitation, and molecular dating of the genus Oligoryzomys (Mammalia: Rodentia), a group with complex taxonomy and morphological cryptic species. Phylogenetic and coalescent analyses included 20 of the 24 recognized species of the genus, comprising 416 Cytochrome b sequences, 26 Cytochrome c oxidase I sequences, and 27 Beta-Fibrinogen Intron 7 sequences. For species delimitation, we employed the General Mixed Yule Coalescent (GMYC) and Bayesian Poisson tree processes (bPTP) analyses, and contrasted four genealogical and phylogenetic models: Pure-birth (Yule), Constant Population Size Coalescent, Multiple Species Coalescent, and a mixed Yule-Coalescent model. GMYC analyses of trees from different genealogical models resulted in similar species delimitation and phylogenetic relationships, with incongruence restricted to areas of poor nodal support. bPTP results, however, significantly differed from GMYC for five taxa. Oligoryzomys early diversification was estimated to have occurred in the Early Pleistocene, between 0.7 and 2.6 MYA. The mixed Yule-Coalescent model, however, recovered younger dating estimates for Oligoryzomys diversification, and for the threshold for the speciation-coalescent horizon in GMYC. Eight of the 20 included Oligoryzomys species were identified as having two or more independent evolutionary units, indicating that the current taxonomy of Oligoryzomys is still unsettled. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and, being observable, can be plotted. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation to, and visual representation of, these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing interrelated outcomes with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in mean and covariance parameter estimates that differ from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, when computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
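The idea of an empirical Bayes predictor, a subject-level estimate shrunk toward the population mean, can be sketched for the simplest random-intercept case. The variance components are assumed known here and all data are hypothetical; a real mixed model estimates both jointly.

```python
def eb_predictor(subject_means, n_per_subject, sigma2_between, sigma2_within):
    """Shrink each subject's observed mean toward the grand mean; the shrinkage
    weight grows with between-subject variance and with per-subject sample size."""
    grand = sum(subject_means) / len(subject_means)
    preds = []
    for m, n in zip(subject_means, n_per_subject):
        shrink = sigma2_between / (sigma2_between + sigma2_within / n)
        preds.append(grand + shrink * (m - grand))
    return preds

# three hypothetical subjects; the third has more observations, so it is
# shrunk less toward the grand mean of 8.0
preds = eb_predictor([4.0, 8.0, 12.0], [2, 2, 8], 1.0, 4.0)
print(preds)
```

These predicted random effects are the observable quantities that the article's second-stage association analyses correlate and plot.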

  10. Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling

    PubMed Central

    Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.

    2010-01-01

    Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
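The intrafamilial correlation the article accounts for can be estimated with a standard ANOVA-style moment estimator; a sketch for balanced dyads (two members per family) with toy data, not the article's linear mixed model itself.

```python
def dyad_icc(pairs):
    """Intraclass correlation for dyads: between-family variance over total,
    from within- and between-family mean squares (balanced, 2 members/family)."""
    n = len(pairs)
    grand = sum(v for p in pairs for v in p) / (2 * n)
    msw = sum((a - b) ** 2 / 2 for a, b in pairs) / n            # within families
    msb = sum(2 * ((a + b) / 2 - grand) ** 2 for a, b in pairs) / (n - 1)
    s2_between = (msb - msw) / 2
    return s2_between / (s2_between + msw)

# three hypothetical families whose members score similarly to each other
icc = dyad_icc([(10, 12), (20, 22), (30, 28)])
print(round(icc, 3))
```

An ICC near 1 means family membership explains most of the variation, which is exactly the situation in which treating dyad members as independent observations misstates standard errors.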

  11. A Poisson approach to the validation of failure time surrogate endpoints in individual patient data meta-analyses.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan

    2017-01-01

    Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).

  12. Metrics to quantify the importance of mixing state for CCN activity

    DOE PAGES

    Ching, Joseph; Fast, Jerome; West, Matthew; ...

    2017-06-21

It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75%. For more externally mixed populations (χ below 20%) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40% to about 150%, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.
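The mixing state index χ of Riemer and West (2013) cited above can be computed directly from per-particle species masses: χ = (D_α − 1)/(D_γ − 1), where D_α is the mass-weighted average per-particle species diversity and D_γ the diversity of the bulk composition (diversities being exponentials of Shannon entropies). A minimal sketch with two limiting toy populations:

```python
import math

def entropy(fractions):
    return -sum(p * math.log(p) for p in fractions if p > 0)

def mixing_state_index(particles):
    """particles: per-species masses for each particle (toy populations below)."""
    masses = [sum(p) for p in particles]
    total = sum(masses)
    # alpha diversity: exp of the mass-weighted mean per-particle entropy
    h_alpha = sum((m / total) * entropy([x / m for x in p])
                  for p, m in zip(particles, masses))
    d_alpha = math.exp(h_alpha)
    # gamma diversity: diversity of the bulk (population-average) composition
    bulk = [sum(p[s] for p in particles) / total
            for s in range(len(particles[0]))]
    d_gamma = math.exp(entropy(bulk))
    return (d_alpha - 1.0) / (d_gamma - 1.0)

fully_internal = [[1.0, 1.0], [2.0, 2.0]]  # every particle, same composition
fully_external = [[1.0, 0.0], [0.0, 1.0]]  # each particle a pure species
print(mixing_state_index(fully_internal), mixing_state_index(fully_external))
```

χ = 1 (100%) for the fully internally mixed population and χ = 0 for the fully external one, matching the 75%/20% thresholds quoted in the abstract.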

  14. MILP model for integrated balancing and sequencing mixed-model two-sided assembly line with variable launching interval and assignment restrictions

    NASA Astrophysics Data System (ADS)

    Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.

    2017-09-01

This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL), which poses two interrelated problems: line balancing and model sequencing. Previous studies mostly considered these problems separately, and only a few studied them simultaneously, for one-sided lines. In this study, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is formulated, considering a variable launching interval and assignment restriction constraints. The problem is analysed using small-size test cases to validate the integrated model. The numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the CPLEX solver. Experimental results indicate that integrating model sequencing and line balancing helps to minimise the proposed objective functions.
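The MILP itself needs a solver such as CPLEX, but the flavor of the line-balancing objective can be shown by brute force on a toy instance; the task durations and the single left/right station pair are hypothetical, and this is not the paper's formulation.

```python
from itertools import product

# five tasks with hypothetical durations, to be split between the left (0)
# and right (1) side of one station pair; objective: minimize the larger
# side load (idle time is then that load minus the smaller one)
tasks = [4, 3, 3, 2, 2]

best = min(
    max(sum(t for t, s in zip(tasks, a) if s == 0),   # left-side load
        sum(t for t, s in zip(tasks, a) if s == 1))   # right-side load
    for a in product([0, 1], repeat=len(tasks))
)
print(best)
```

For this instance the optimum is a perfectly balanced 7/7 split; a MILP encodes the same choice with binary assignment variables plus the sequencing, launching-interval and assignment-restriction constraints that make enumeration infeasible at realistic sizes.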

  15. Multivariate statistical approach to estimate mixing proportions for unknown end members

    USGS Publications Warehouse

    Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.

    2012-01-01

    A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
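Once end-member compositions are fixed, mixing proportions follow from simple algebra; a sketch for the two-end-member case with a few conservative tracers (toy concentrations, and a crude stand-in for the PCA/optimization machinery of the full method, which also estimates the end members themselves).

```python
def mixing_fraction(sample, end_a, end_b):
    """Fraction of end member A in a sample: per-tracer estimate
    f = (c_sample - c_B) / (c_A - c_B), averaged over tracers."""
    ests = [(s - b) / (a - b) for s, a, b in zip(sample, end_a, end_b)]
    return sum(ests) / len(ests)

end_a = [100.0, 10.0, 50.0]   # hypothetical tracer concentrations, end member A
end_b = [20.0, 2.0, 10.0]     # end member B
sample = [60.0, 6.0, 30.0]    # a 50:50 mixture of A and B

f = mixing_fraction(sample, end_a, end_b)
print(f)
```

With noisy field data the per-tracer estimates disagree, which is why the full method solves a least-squares problem over all tracers and end members simultaneously.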

  16. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges.

    PubMed

    Phillips, Charles D

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges.

  18. Analyzing Longitudinal Data with Multilevel Models: An Example with Individuals Living with Lower Extremity Intra-articular Fractures

    PubMed Central

    Kwok, Oi-Man; Underhill, Andrea T.; Berry, Jack W.; Luo, Wen; Elliott, Timothy R.; Yoon, Myeongsun

    2008-01-01

    The use and quality of longitudinal research designs has increased over the past two decades, and new approaches for analyzing longitudinal data, including multi-level modeling (MLM) and latent growth modeling (LGM), have been developed. The purpose of this paper is to demonstrate the use of MLM and its advantages in analyzing longitudinal data. Data from a sample of individuals with intra-articular fractures of the lower extremity from the University of Alabama at Birmingham’s Injury Control Research Center is analyzed using both SAS PROC MIXED and SPSS MIXED. We start our presentation with a discussion of data preparation for MLM analyses. We then provide example analyses of different growth models, including a simple linear growth model and a model with a time-invariant covariate, with interpretation for all the parameters in the models. More complicated growth models with different between- and within-individual covariance structures and nonlinear models are discussed. Finally, information related to MLM analysis such as online resources is provided at the end of the paper. PMID:19649151

  19. Measuring trends of outpatient antibiotic use in Europe: jointly modelling longitudinal data in defined daily doses and packages.

    PubMed

    Bruyndonckx, Robin; Hens, Niel; Aerts, Marc; Goossens, Herman; Molenberghs, Geert; Coenen, Samuel

    2014-07-01

    To complement analyses of the linear trend and seasonal fluctuation of European outpatient antibiotic use expressed in defined daily doses (DDD) by analyses of data in packages, to assess the agreement between both measures and to study changes in the number of DDD per package over time. Data on outpatient antibiotic use, aggregated at the level of the active substance (WHO version 2011) were collected from 2000 to 2007 for 31 countries and expressed in DDD and packages per 1000 inhabitants per day (DID and PID, respectively). Data expressed in DID and PID were analysed separately using non-linear mixed models while the agreement between these measurements was analysed through a joint non-linear mixed model. The change in DDD per package over time was studied with a linear mixed model. Total outpatient antibiotic and penicillin use in Europe and their seasonal fluctuation significantly increased in DID, but not in PID. The use of combinations of penicillins significantly increased in DID and in PID. Broad-spectrum penicillin use did not increase significantly in DID and decreased significantly in PID. For all but one subgroup, country-specific deviations moved in the same direction whether measured in DID or PID. The correlations are not perfect. The DDD per package increased significantly over time for all but one subgroup. Outpatient antibiotic use in Europe shows contrasting trends, depending on whether DID or PID is used as the measure. The increase of the DDD per package corroborates the recommendation to adopt PID to monitor outpatient antibiotic use in Europe. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Environmental effects of interstate power trading on electricity consumption mixes.

    PubMed

    Marriott, Joe; Matthews, H Scott

    2005-11-15

    Although many studies of electricity generation use national or state average generation mix assumptions, in reality a great deal of electricity is transferred between states with very different mixes of fossil and renewable fuels, and using the average numbers could result in incorrect conclusions in these studies. We create electricity consumption profiles for each state and for key industry sectors in the U.S. based on existing state generation profiles, net state power imports, industry presence by state, and an optimization model to estimate interstate electricity trading. Using these "consumption mixes" can provide a more accurate assessment of electricity use in life-cycle analyses. We conclude that the published generation mixes for states that import power are misleading, since the power consumed in-state has a different makeup than the power that was generated. And, while most industry sectors have consumption mixes similar to the U.S. average, some of the most critical sectors of the economy--such as resource extraction and material processing sectors--are very different. This result does validate the average mix assumption made in many environmental assessments, but it is important to accurately quantify the generation methods for electricity used when doing life-cycle analyses.
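The notion of a consumption mix, a state's generation mix adjusted for imports, reduces to a weighted average once trading flows are fixed; a toy sketch (the fuel shares and single-exporter setup are hypothetical, whereas the article estimates interstate flows with an optimization model).

```python
def consumption_mix(own_gen, own_mix, imports, import_mix):
    """Fuel shares of power consumed in-state: blend of own generation
    and imported power, weighted by energy amounts."""
    total = own_gen + imports
    return {fuel: (own_gen * own_mix.get(fuel, 0.0)
                   + imports * import_mix.get(fuel, 0.0)) / total
            for fuel in set(own_mix) | set(import_mix)}

# a state generating 80 TWh (90% hydro, 10% gas) that imports 20 TWh of coal power
mix = consumption_mix(80.0, {"hydro": 0.9, "gas": 0.1}, 20.0, {"coal": 1.0})
print(mix)
```

The consumed power is 20% coal even though the state generates none, which is exactly why using the published generation mix in a life-cycle analysis of in-state electricity use would be misleading.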

  1. Different Trophic Tracers Give Different Answers for the Same Bugs - Comparing a Stable Isotope and Fatty Acid Based Analysis of Resource Utilization in a Marine Isopod

    NASA Astrophysics Data System (ADS)

    Galloway, A. W. E.; Eisenlord, M. E.; Brett, M. T.

    2016-02-01

Stable isotope (SI) based mixing models are the most common approach used to infer resource pathways in consumers. However, SI-based analyses are often underdetermined, and consumer SI fractionation is usually unknown. The use of fatty acid (FA) tracers in mixing models offers an alternative approach that can resolve the underdetermined constraint. A limitation of both methods is the considerable uncertainty about consumer `trophic modification' (TM) of dietary FA or SI, which occurs as consumers transform dietary resources into tissues. We tested the utility of the SI and FA approaches for inferring the diets of the marine benthic isopod (Idotea wosnesenskii) fed various marine macroalgae in controlled feeding trials. Our analyses quantified how the accuracy and precision of Bayesian mixing models were influenced by the choice of algorithm (SIAR vs MixSIR), fractionation (assumed or known), and whether the model was under- or overdetermined (seven sources with two vs 26 tracers) for cases where isopods were fed an exclusive diet of one of the seven different macroalgae. Using the conventional approach (i.e., two SI tracers with assumed TM) resulted in average model outputs, i.e., the contribution from the exclusive resource = 0.20 ± 0.23 (0.00-0.79), mean ± SD (95% credible interval), that differed only slightly from the prior assumption. Using the FA-based approach with known TM greatly improved model performance, i.e., the contribution from the exclusive resource = 0.91 ± 0.10 (0.58-0.99). The choice of algorithm only made a difference when fractionation was known and the model was overdetermined (FA approach); in this case SIAR and MixSIR had outputs of 0.86 ± 0.11 (0.48-0.96) and 0.96 ± 0.05 (0.79-1.00), respectively. This analysis shows that the choice of dietary tracers and the assumption of consumer trophic modification greatly influence the performance of mixing-model dietary reconstructions, and ultimately our understanding of what resources actually support aquatic consumers.

  2. Who Is Overeducated and Why? Probit and Dynamic Mixed Multinomial Logit Analyses of Vertical Mismatch in East and West Germany

    ERIC Educational Resources Information Center

    Boll, Christina; Leppin, Julian Sebastian; Schömann, Klaus

    2016-01-01

    Overeducation potentially signals a productivity loss. With Socio-Economic Panel data from 1984 to 2011 we identify drivers of educational mismatch for East and West medium and highly educated Germans. Addressing measurement error, state dependence and unobserved heterogeneity, we run dynamic mixed multinomial logit models for three different…

  3. Genetic overlap between diagnostic subtypes of ischemic stroke.

    PubMed

    Holliday, Elizabeth G; Traylor, Matthew; Malik, Rainer; Bevan, Steve; Falcone, Guido; Hopewell, Jemma C; Cheng, Yu-Ching; Cotlarciuc, Ioana; Bis, Joshua C; Boerwinkle, Eric; Boncoraglio, Giorgio B; Clarke, Robert; Cole, John W; Fornage, Myriam; Furie, Karen L; Ikram, M Arfan; Jannes, Jim; Kittner, Steven J; Lincz, Lisa F; Maguire, Jane M; Meschia, James F; Mosley, Thomas H; Nalls, Mike A; Oldmeadow, Christopher; Parati, Eugenio A; Psaty, Bruce M; Rothwell, Peter M; Seshadri, Sudha; Scott, Rodney J; Sharma, Pankaj; Sudlow, Cathie; Wiggins, Kerri L; Worrall, Bradford B; Rosand, Jonathan; Mitchell, Braxton D; Dichgans, Martin; Markus, Hugh S; Levi, Christopher; Attia, John; Wray, Naomi R

    2015-03-01

    Despite moderate heritability, the phenotypic heterogeneity of ischemic stroke has hampered gene discovery, motivating analyses of diagnostic subtypes with reduced sample sizes. We assessed evidence for a shared genetic basis among the 3 major subtypes: large artery atherosclerosis (LAA), cardioembolism, and small vessel disease (SVD), to inform potential cross-subtype analyses. Analyses used genome-wide summary data for 12 389 ischemic stroke cases (including 2167 LAA, 2405 cardioembolism, and 1854 SVD) and 62 004 controls from the Metastroke consortium. For 4561 cases and 7094 controls, individual-level genotype data were also available. Genetic correlations between subtypes were estimated using linear mixed models and polygenic profile scores. Meta-analysis of a combined LAA-SVD phenotype (4021 cases and 51 976 controls) was performed to identify shared risk alleles. High genetic correlation was identified between LAA and SVD using linear mixed models (rg=0.96, SE=0.47, P=9×10(-4)) and profile scores (rg=0.72; 95% confidence interval, 0.52-0.93). Between LAA and cardioembolism and SVD and cardioembolism, correlation was moderate using linear mixed models but not significantly different from zero for profile scoring. Joint meta-analysis of LAA and SVD identified strong association (P=1×10(-7)) for single nucleotide polymorphisms near the opioid receptor μ1 (OPRM1) gene. Our results suggest that LAA and SVD, which have been hitherto treated as genetically distinct, may share a substantial genetic component. Combined analyses of LAA and SVD may increase power to identify small-effect alleles influencing shared pathophysiological processes. © 2015 American Heart Association, Inc.

  4. Modelling exhaust plume mixing in the near field of an aircraft

    NASA Astrophysics Data System (ADS)

    Garnier, F.; Brunet, S.; Jacquin, L.

    1997-11-01

A simplified approach has been applied to analyse the mixing and entrainment processes of the engine exhaust through their interaction with the vortex wake of an aircraft. Our investigation is focused on the near field, extending from the exit nozzle until about 30 s after the wake is generated, in the vortex phase. This study was performed by using an integral model and a numerical simulation for two large civil aircraft: a two-engine Airbus 330 and a four-engine Boeing 747. The influence of the wing-tip vortices on the dilution ratio (defined as a tracer concentration) is shown. The mixing process is also affected by the buoyancy effect, but only after the jet regime, when the trapping in the vortex core has occurred. In the early wake, the engine jet location (i.e. inboard or outboard engine jet) has an important influence on the mixing rate. The plume streamlines inside the vortices are subject to distortion and stretching, and the role of the descent of the vortices on the maximum tracer concentration is discussed. Qualitative comparison with a contrail photograph shows similar features. Finally, tracer concentrations along the inboard engine centreline of the B-747 are compared with other theoretical analyses and measured data.

  5. Case study of flexure and shear strengthening of RC beams by CFRP using FEA

    NASA Astrophysics Data System (ADS)

    Jankowiak, Iwona

    2018-01-01

In this paper, preliminary results of a study on strengthening RC beams with CFRP materials under mixed shear-flexural loading conditions are presented. The finite element analyses were performed using numerical models proposed and verified earlier against laboratory test results [4, 5] for estimating the effectiveness of CFRP strengthening of RC beams under flexure. The current analyses deal with 3D models of RC beams under mixed shear-flexural loading conditions; the symmetry of the analysed beams was exploited in both directions. The Concrete Damage Plasticity (CDP) model of the RC beam allowed prediction of the layout and propagation of cracks leading to failure. Different strengthening cases were analysed: CFRP strips, CFRP closed hoops, and a combination of both. The first results of this preliminary study are presented.

  6. Privatization and environmental pollution in an international mixed Cournot model

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.

    2016-06-01

In this paper, we consider competition between a domestic public firm and a foreign private firm, supposing that the production process generates environmental pollution. Introducing the residents' environmental preference into the public firm's objective function, we analyse its economic impacts. We also analyse the economic impacts of privatization.

  7. Observing and Simulating Diapycnal Mixing in the Canadian Arctic Archipelago

    NASA Astrophysics Data System (ADS)

    Hughes, K.; Klymak, J. M.; Hu, X.; Myers, P. G.; Williams, W. J.; Melling, H.

    2016-12-01

High-spatial-resolution observations in the central Canadian Arctic Archipelago are analysed in conjunction with process-oriented modelling to estimate the flow pathways among the constricted waterways, understand the nature of the hydraulic control(s), and assess the influence of smaller scale (metres to kilometres) phenomena such as internal waves and topographically induced eddies. The observations repeatedly display isopycnal displacements of 50 m as dense water plunges over a sill. Depth-averaged turbulent dissipation rates near the sill estimated from these observations are typically 10⁻⁶-10⁻⁵ W kg⁻¹, a range that is three orders of magnitude larger than that for the open ocean. These and other estimates are compared against a 1/12° basin-scale model from which we estimate diapycnal mixing rates using a volume-integrated advection-diffusion equation. Much of the mixing in this simulation is concentrated near constrictions within Barrow Strait and Queens Channel, the latter being our observational site. This suggests the model is capable of capturing topographically induced mixing. However, such mixing is expected to be enhanced in the presence of tides, a process not included in our basin-scale simulation or other similar models. Quantifying this enhancement is another objective of our process-oriented modelling.

  8. Computational Analyses of Pressurization in Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry

    2010-01-01

A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally, CFD modeling of pressurization and mixing in cryogenic tanks has been difficult, primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant, including heat transfer and phase change effects, and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first approach is based on the HBMS model developed by Hirschfelder, Buehler, McGee and Sutton and (b) the second approach is based on a cubic equation of state developed by Soave, Redlich and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as the supercritical states. Both models were rigorously tested, and properties for common fluids such as oxygen, nitrogen, hydrogen, etc., were compared against NIST data in both the sub-critical as well as supercritical regimes.
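
    The SRK cubic equation of state mentioned above can be written as P = RT/(Vm − b) − aα/(Vm(Vm + b)). A minimal sketch of that formula in Python, using textbook critical constants for nitrogen quoted here as assumptions (the paper's own implementation and property tables are not reproduced):

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def srk_pressure(T, Vm, Tc, Pc, omega):
    """Soave-Redlich-Kwong pressure [Pa] at temperature T [K] and molar volume Vm [m^3/mol]."""
    a = 0.42748 * R**2 * Tc**2 / Pc          # attraction parameter
    b = 0.08664 * R * Tc / Pc                # co-volume
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (Vm - b) - a * alpha / (Vm * (Vm + b))

# Nitrogen critical constants (textbook values, assumptions for this sketch):
Tc, Pc, omega = 126.2, 3.396e6, 0.037
# Sanity check: at low density the SRK pressure should approach the ideal-gas value.
T, Vm = 300.0, 0.1   # 100 L/mol, well into the dilute regime
p_srk = srk_pressure(T, Vm, Tc, Pc, omega)
p_ideal = R * T / Vm
print(p_srk, p_ideal)
```

    A production implementation would also solve the cubic for Vm at given (T, P) and select the gas or liquid root, which is where the sub-critical/supercritical bookkeeping described in the abstract comes in.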

  9. A necessarily complex model to explain the biogeography of the amphibians and reptiles of Madagascar.

    PubMed

    Brown, Jason L; Cameron, Alison; Yoder, Anne D; Vences, Miguel

    2014-10-09

    Pattern and process are inextricably linked in biogeographic analyses, though we can observe pattern, we must infer process. Inferences of process are often based on ad hoc comparisons using a single spatial predictor. Here, we present an alternative approach that uses mixed-spatial models to measure the predictive potential of combinations of hypotheses. Biodiversity patterns are estimated from 8,362 occurrence records from 745 species of Malagasy amphibians and reptiles. By incorporating 18 spatially explicit predictions of 12 major biogeographic hypotheses, we show that mixed models greatly improve our ability to explain the observed biodiversity patterns. We conclude that patterns are influenced by a combination of diversification processes rather than by a single predominant mechanism. A 'one-size-fits-all' model does not exist. By developing a novel method for examining and synthesizing spatial parameters such as species richness, endemism and community similarity, we demonstrate the potential of these analyses for understanding the diversification history of Madagascar's biota.
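
    The core idea above, comparing a single spatial predictor against a combination of predictors by their explanatory power, can be sketched with ordinary least squares on synthetic data (the predictors and response here are hypothetical stand-ins, not the Madagascar dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Two hypothetical spatially explicit predictors (e.g. rescaled climate and topography layers)
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# Synthetic "species richness" driven by both processes plus noise
y = 1.0 * x1 + 0.8 * x2 + rng.normal(scale=1.0, size=n)

def r_squared(X, y):
    """Fit OLS with an intercept and return the coefficient of determination."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_single = r_squared(x1[:, None], y)                    # one hypothesis alone
r2_mixed = r_squared(np.column_stack([x1, x2]), y)       # mixed-predictor model
print(r2_single, r2_mixed)
```

    By construction the mixed model explains more variance, which mirrors the paper's conclusion that combinations of diversification hypotheses outperform any single predictor.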

  10. Axisymmetric magnetic modes of neutron stars having mixed poloidal and toroidal magnetic fields

    NASA Astrophysics Data System (ADS)

    Lee, Umin

    2018-05-01

    We calculate axisymmetric magnetic modes of a neutron star possessing a mixed poloidal and toroidal magnetic field, where the toroidal field is assumed to be proportional to a dimensionless parameter ζ0. Here, we assume an isentropic structure for the neutron star and consider no effects of rotation. Ignoring the equilibrium deformation due to the magnetic field, we employ a polytrope of the index n = 1 as the background model for our modal analyses. For the mixed poloidal and toroidal magnetic field with ζ _0\

  11. Environmental effects of interstate power trading on electricity consumption mixes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe Marriott; H. Scott Matthews

    2005-11-15

Although many studies of electricity generation use national or state average generation mix assumptions, in reality a great deal of electricity is transferred between states with very different mixes of fossil and renewable fuels, and using the average numbers could result in incorrect conclusions in these studies. The authors create electricity consumption profiles for each state and for key industry sectors in the U.S. based on existing state generation profiles, net state power imports, industry presence by state, and an optimization model to estimate interstate electricity trading. Using these 'consumption mixes' can provide a more accurate assessment of electricity use in life-cycle analyses. It is concluded that the published generation mixes for states that import power are misleading, since the power consumed in-state has a different makeup than the power that was generated. And, while most industry sectors have consumption mixes similar to the U.S. average, some of the most critical sectors of the economy - such as resource extraction and material processing sectors - are very different. This result does validate the average mix assumption made in many environmental assessments, but it is important to accurately quantify the generation methods for electricity used when doing life-cycle analyses. 16 refs., 7 figs., 2 tabs.

  12. Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.

    PubMed

    Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J

    2015-02-01

    The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. 
Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  13. The role of CSP in the electricity system of South Africa - technical operation, grid constraints, market structure and economics

    NASA Astrophysics Data System (ADS)

    Kost, Christoph; Friebertshäuser, Chris; Hartmann, Niklas; Fluri, Thomas; Nitz, Peter

    2017-06-01

This paper analyses the role of solar technologies (CSP and PV) and their interaction in the South African electricity system by using a fundamental electricity system model (ENTIGRIS-SouthAfrica). The model is used to analyse the South African long-term electricity generation portfolio mix, optimized site selection and required transmission capacities until the year 2050. In particular, the location and grid integration of solar (PV and CSP) and wind power plants are analysed, based on a detailed resource assessment of both technologies. A cluster approach is presented to reduce complexity by integrating the data into an optimization model.

  14. Population pharmacokinetics of caffeine in healthy male adults using mixed-effects models.

    PubMed

    Seng, K-Y; Fun, C-Y; Law, Y-L; Lim, W-M; Fan, W; Lim, C-L

    2009-02-01

Caffeine has been shown to maintain or improve the performance of individuals, but its pharmacokinetic profile for Asians has not been well characterized. In this study, a population pharmacokinetic model for describing the pharmacokinetics of caffeine in Singapore males was developed. The data were also analysed using non-compartmental models. Data gathered from 59 male volunteers, who each ingested a single caffeine capsule in two clinical trials (3 or 5 mg/kg), were analysed via non-linear mixed-effects modelling. The participants' covariates, including age, body weight, and regularity of caffeinated-beverage consumption or smoking, were analysed in a stepwise fashion to identify their potential influence on caffeine pharmacokinetics. The final pharmacostatistical model was then subjected to stochastic simulation to predict the plasma concentrations of caffeine after oral (204, 340 and 476 mg) dosing regimens (repeated dosing every 6, 8 or 12 h) over a hypothetical 3-day period. The data were best described by a one-compartmental model with first-order absorption and first-order elimination. Smoking status was an influential covariate for clearance: clearance (mL/min) = 110*SMOKE + 114, where SMOKE was 0 and 1 for the non-smoker and the smoker respectively. Interoccasion variability was smaller compared to interindividual variability in clearance, volume and absorption rate (27% vs. 33%, 10% vs. 15% and 23% vs. 51% respectively). The extrapolated elimination half-lives of caffeine in the non-smokers and the smokers were 4.3 ± 1.5 and 3.0 ± 0.7 h respectively. Dosing simulations indicated that dosing regimens of 340 mg (repeated every 8 h) and 476 mg (repeated every 6 h) should achieve population-averaged caffeine concentrations within the reported beneficial range (4.5-9 µg/mL) in the non-smokers and the smokers respectively over 72 h. 
The population pharmacokinetic model satisfactorily described the disposition and variability of caffeine in the data. Mixed-effects modelling showed that the dose of caffeine depended on cigarette smoking status.
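
    A minimal sketch of the final model described above: a one-compartment model with first-order absorption and elimination, using the reported clearance covariate model and half-lives. The absorption rate constant ka and complete bioavailability (F = 1) are hypothetical assumptions, so the numbers are illustrative only:

```python
import math

def caffeine_conc(t_h, dose_mg, smoker, ka=3.0):
    """Plasma caffeine concentration (mg/L, i.e. ug/mL) t_h hours after a single
    oral dose. ka (1/h) and F = 1 are hypothetical; CL and half-life come from
    the abstract's reported values."""
    cl_ml_min = 110 * (1 if smoker else 0) + 114        # covariate model from the abstract
    cl_l_h = cl_ml_min * 60 / 1000.0                    # convert mL/min -> L/h
    t_half = 3.0 if smoker else 4.3                     # reported half-lives (h)
    ke = math.log(2) / t_half                           # elimination rate constant
    v = cl_l_h / ke                                     # implied volume of distribution (L)
    return dose_mg * ka / (v * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

# A 340 mg dose in a non-smoker: concentration at 2 h should sit near the
# cited beneficial window of 4.5-9 ug/mL.
c2 = caffeine_conc(2.0, 340, smoker=False)
print(round(c2, 2))
```

    With these assumptions the smoker's faster clearance yields visibly lower concentrations at the same dose, which is why the abstract's simulations assign smokers the larger, more frequent regimen.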

  15. Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.

    PubMed

    Smith, Everett V; Ying, Yuping; Brown, Scott W

    2012-01-01

In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model-expected responses, and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of three BAMS subscales, a two latent class solution was identified as fitting the mixed Rasch rating scale model best. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed, along with the implications of this study.
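
    The CAIC used for class enumeration penalizes the log-likelihood by k(ln n + 1), where k is the number of parameters and n the sample size. A small sketch with hypothetical log-likelihoods (the two-class minimum here is by construction, not the BAMS result):

```python
import math

def caic(log_lik, n_params, n_obs):
    """Consistent Akaike information criterion (Bozdogan): lower is better."""
    return -2.0 * log_lik + n_params * (math.log(n_obs) + 1.0)

# Hypothetical fits for 1-3 latent class MRM solutions: class count -> (logL, k)
fits = {1: (-5200.0, 20), 2: (-5050.0, 41), 3: (-5030.0, 62)}
n = 600
scores = {c: caic(ll, k, n) for c, (ll, k) in fits.items()}
best = min(scores, key=scores.get)
print(best, {c: round(s, 1) for c, s in scores.items()})
```

    Note that the extra ln n term makes CAIC stricter than AIC, so it tends to favour fewer latent classes when the likelihood gain from another class is modest.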

  16. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) derived images against plaster models for mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analyses along with Student's t-test were performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on comparison between CBCT-derived images and plaster models; the means for Moyer's analysis in the left and right lower arch were 21.2 mm and 21.1 mm for CBCT versus 22.5 mm and 22.5 mm for the plaster model, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
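
    The Tanaka-Johnston analysis used above predicts the combined mesiodistal width of the unerupted canine and premolars in one quadrant from the sum of the four mandibular incisor widths: half that sum plus 10.5 mm (mandibular) or 11.0 mm (maxillary). A sketch of the standard formula (the 23.0 mm incisor sum below is a made-up example, not study data):

```python
def tanaka_johnston(sum_lower_incisors_mm, arch):
    """Predicted combined width (mm) of the unerupted canine and two premolars
    in one quadrant, from the summed widths of the four mandibular incisors."""
    half = sum_lower_incisors_mm / 2.0
    if arch == "mandibular":
        return half + 10.5
    if arch == "maxillary":
        return half + 11.0
    raise ValueError("arch must be 'mandibular' or 'maxillary'")

print(tanaka_johnston(23.0, "mandibular"))  # → 22.0
```

    A prediction of about 22 mm per lower quadrant is in the same range as the plaster-model means reported in the abstract.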

  17. Manpower Mix for Health Services

    PubMed Central

    Shuman, Larry J.; Young, John P.; Naddor, Eliezer

    1971-01-01

    A model is formulated to determine the mix of manpower and technology needed to provide health services of acceptable quality at a minimum total cost to the community. Total costs include both the direct costs associated with providing the services and with developing additional manpower and the indirect costs (shortage costs) resulting from not providing needed services. The model is applied to a hypothetical neighborhood health center, and its sensitivity to alternative policies is investigated by cost-benefit analyses. Possible extensions of the model to include dynamic elements in health delivery systems are discussed, as is its adaptation for use in hospital planning, with a changed objective function. PMID:5095652

  18. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the initially known mixing percentages. 
Given the experimental nature of the work and dry mixing of materials, conservative geochemical behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values not exceeding 6.7% and goodness-of-fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end-member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
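
    Assuming conservative linear mixing, the unmixing step can be sketched as a least-squares problem: the mixture's tracer concentrations are a proportion-weighted sum of the source signatures. The source signatures and proportions below are synthetic, and the GOF measure is a simple illustrative definition rather than necessarily the one used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_tracers = 3, 9
# Hypothetical source geochemical signatures (rows: tracers, cols: sources)
A = rng.uniform(10, 100, size=(n_tracers, n_sources))
true_p = np.array([0.5, 0.3, 0.2])        # known laboratory mixing proportions
mixture = A @ true_p                      # conservative linear mixing, no noise

# Unconstrained least squares, then clip and renormalize so proportions sum to 1
p_hat, *_ = np.linalg.lstsq(A, mixture, rcond=None)
p_hat = np.clip(p_hat, 0, None)
p_hat /= p_hat.sum()

# Illustrative goodness-of-fit: 100% minus the mean relative misfit
gof = 100.0 * (1.0 - np.abs(mixture - A @ p_hat).sum() / mixture.sum())
print(np.round(p_hat, 3), round(gof, 1))
```

    With noise-free data and more tracers than sources the proportions are recovered exactly; real applications add constrained optimization, tracer-specific weighting and Monte Carlo uncertainty propagation on top of this core.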

  19. Combined Recirculatory-compartmental Population Pharmacokinetic Modeling of Arterial and Venous Plasma S(+) and R(-) Ketamine Concentrations.

    PubMed

    Henthorn, Thomas K; Avram, Michael J; Dahan, Albert; Gustafsson, Lars L; Persson, Jan; Krejcie, Tom C; Olofsen, Erik

    2018-05-16

The pharmacokinetics of infused drugs have been modeled without regard for recirculatory or mixing kinetics. We used a unique ketamine dataset with simultaneous arterial and venous blood sampling, during and after separate S(+) and R(-) ketamine infusions, to develop a simplified recirculatory model of arterial and venous plasma drug concentrations. S(+) or R(-) ketamine was infused over 30 min on two occasions to 10 healthy male volunteers. Frequent, simultaneous arterial and forearm venous blood samples were obtained for up to 11 h. A multicompartmental pharmacokinetic model with front-end arterial mixing and venous blood components was developed using nonlinear mixed effects analyses. A three-compartment base pharmacokinetic model with additional arterial mixing and arm venous compartments and with shared S(+)/R(-) distribution kinetics proved superior to standard compartmental modeling approaches. Total pharmacokinetic flow was estimated to be 7.59 ± 0.36 l/min (mean ± standard error of the estimate), and S(+) and R(-) elimination clearances were 1.23 ± 0.04 and 1.06 ± 0.03 l/min, respectively. The arm-tissue link rate constant was 0.18 ± 0.01 min⁻¹ and the fraction of arm blood flow estimated to exchange with arm tissue was 0.04 ± 0.01. Arterial drug concentrations measured during drug infusion have two kinetically distinct components: partially or lung-mixed drug and fully mixed-recirculated drug. Front-end kinetics suggest the partially mixed concentration is proportional to the ratio of infusion rate and total pharmacokinetic flow. This simplified modeling approach could lead to more generalizable models for target-controlled infusions and improved methods for analyzing pharmacokinetic-pharmacodynamic data.

  20. Mixed-method research protocol: defining and operationalizing patient-related complexity of nursing care in acute care hospitals.

    PubMed

    Huber, Evelyn; Kleinknecht-Dolf, Michael; Müller, Marianne; Kugler, Christiane; Spirig, Rebecca

    2017-06-01

To define the concept of patient-related complexity of nursing care in acute care hospitals and to operationalize it in a questionnaire. The concept of patient-related complexity of nursing care in acute care hospitals has not been conclusively defined in the literature. The operationalization in a corresponding questionnaire is necessary, given the increased significance of the topic, due to shortened lengths of stay and increased patient morbidity. Hybrid model of concept development and embedded mixed-methods design. The theoretical phase of the hybrid model involved a literature review and the development of a working definition. In the fieldwork phase of 2015 and 2016, an embedded mixed-methods design was applied with complexity assessments of all patients at five Swiss hospitals using our newly operationalized questionnaire 'Complexity of Nursing Care' over 1 month. These data will be analysed with structural equation modelling. Twelve qualitative case studies will be embedded. They will be analysed using a structured process of constructing case studies and content analysis. In the final analytic phase, the quantitative and qualitative data will be merged and added to the results of the theoretical phase for a common interpretation. The Cantonal Ethics Committee Zurich judged the research programme as unproblematic in December 2014 and May 2015. Following the phases of the hybrid model and using an embedded mixed-methods design can yield an in-depth understanding of patient-related complexity of nursing care in acute care hospitals, a final version of the questionnaire and an acknowledged definition of the concept. © 2016 John Wiley & Sons Ltd.

  1. Development of a Reduced-Order Three-Dimensional Flow Model for Thermal Mixing and Stratification Simulation during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    2017-09-03

Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles for the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel between parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.

  2. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    PubMed

    Wienke, B R; O'Leary, T R

    2008-05-01

Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
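
    The abstract mentions fitting with a modified Levenberg-Marquardt routine under an L2 error norm. A minimal generic LM sketch on a toy risk-shaped model (this is not the LANL RGBM itself; the model form and data are synthetic):

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, x, y, n_iter=50, lam=1e-3):
    """Minimal damped least-squares (Levenberg-Marquardt) fit, L2 norm.
    A sketch of the class of routine described, not the LANL implementation."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x, p)
        J = jac(x, p)
        H = J.T @ J
        # Marquardt scaling: damp along the diagonal of the Gauss-Newton Hessian
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ r)
        p_new = p + step
        if np.sum((y - f(x, p_new))**2) < np.sum(r**2):
            p, lam = p_new, lam * 0.5    # accept step, trust the model more
        else:
            lam *= 2.0                   # reject step, damp harder
    return p

# Toy saturating "risk" model: r(t) = a * (1 - exp(-b t))
f = lambda t, p: p[0] * (1.0 - np.exp(-p[1] * t))
jac = lambda t, p: np.column_stack([1.0 - np.exp(-p[1] * t),
                                    p[0] * t * np.exp(-p[1] * t)])
t = np.linspace(0.1, 10, 40)
y = f(t, [0.05, 0.4])                    # synthetic, noise-free data
p_fit = levenberg_marquardt(f, jac, [0.01, 1.0], t, y)
print(np.round(p_fit, 4))
```

    The damping parameter interpolates between Gauss-Newton (small lam, fast near the optimum) and gradient descent (large lam, robust far from it), which is what makes LM a standard choice for nonlinear L2 fits like the one described.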

  3. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
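
    The simulation setup can be sketched as follows, simplified to OLS with fixed period effects rather than the full mixed-effect fit; the cluster counts and effect sizes are hypothetical. With common-to-all period and intervention effects, the intervention estimate averages close to the true effect:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_and_fit(effect=0.5, n_clusters=9, n_per=20,
                     sd_cluster=0.5, period_effect=0.3):
    """One stepped-wedge-like dataset: 3 groups x 2 periods, group 0 treated in
    both periods and groups 1-2 only in period 2, as in the abstract's design.
    Analysed by OLS with a fixed period effect (a simplification of the paper's
    mixed-model analysis). Returns the estimated intervention effect."""
    X, y = [], []
    for c in range(n_clusters):
        u = rng.normal(scale=sd_cluster)     # cluster intercept (common-effects case)
        group = c % 3
        for period in (0, 1):
            treated = 1 if (group == 0 or period == 1) else 0
            mu = u + period_effect * period + effect * treated
            for obs in rng.normal(mu, 1.0, size=n_per):
                X.append([1.0, period, treated])
                y.append(obs)
    X, y = np.array(X), np.array(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]

estimates = [simulate_and_fit() for _ in range(200)]
print(round(float(np.mean(estimates)), 3))
```

    Replacing the common cluster intercept with cluster-varying period or intervention effects, while still fitting them as fixed and shared, is the kind of misspecification the paper shows can bias this estimate by up to 50%.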

  4. A brief measure of attitudes toward mixed methods research in psychology.

    PubMed

    Roberts, Lynne D; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research along with validation measures was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher order four factor model provided the best fit to the data. The four factors; 'Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component'; each have acceptable internal reliability. Known groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs.

  5. Phylogeny of sipunculan worms: A combined analysis of four gene regions and morphology.

    PubMed

    Schulze, Anja; Cutler, Edward B; Giribet, Gonzalo

    2007-01-01

    The intra-phyletic relationships of sipunculan worms were analyzed based on DNA sequence data from four gene regions and 58 morphological characters. Initially we analyzed the data under direct optimization using parsimony as optimality criterion. An implied alignment resulting from the direct optimization analysis was subsequently utilized to perform a Bayesian analysis with mixed models for the different data partitions. For this we applied a doublet model for the stem regions of the 18S rRNA. Both analyses support monophyly of Sipuncula and most of the same clades within the phylum. The analyses differ with respect to the relationships among the major groups but whereas the deep nodes in the direct optimization analysis generally show low jackknife support, they are supported by 100% posterior probability in the Bayesian analysis. Direct optimization has been useful for handling sequences of unequal length and generating conservative phylogenetic hypotheses whereas the Bayesian analysis under mixed models provided high resolution in the basal nodes of the tree.

  6. Free energy of mixing of acetone and methanol: a computer simulation investigation.

    PubMed

    Idrissi, Abdenacer; Polok, Kamil; Barj, Mohammed; Marekha, Bogdan; Kiselev, Mikhail; Jedlovszky, Pál

    2013-12-19

    The change of the Helmholtz free energy, internal energy, and entropy accompanying the mixing of acetone and methanol is calculated in the entire composition range by the method of thermodynamic integration using three different potential model combinations of the two compounds. In the first system, both molecules are described by the OPLS, and in the second system, both molecules are described by the original TraPPE force field, whereas in the third system a modified version of the TraPPE potential is used for acetone in combination with the original TraPPE model of methanol. The results reveal that, in contrast with the acetone-water system, all of these three model combinations are able to reproduce the full miscibility of acetone and methanol, although the thermodynamic driving force of this mixing is very small. It is also seen, in accordance with the finding of former structural analyses, that the mixing of the two components is driven by the entropy term corresponding to the ideal mixing, which is large enough to overcompensate the effect of the energy increase and entropy loss due to the interaction of the unlike components in the mixtures. Among the three model combinations, the use of the original TraPPE model of methanol and modified TraPPE model of acetone turns out to be clearly the best in this respect, as it is able to reproduce the experimental free energy, internal energy, and entropy of mixing values within 0.15 kJ/mol, 0.2 kJ/mol, and 1 J/(mol K), respectively, in the entire composition range. The success of this model combination originates from the fact that the use of the modified TraPPE model of acetone instead of the original one in these mixtures improves the reproduction of the entropy of mixing, while it retains the ability of the original model of excellently reproducing the internal energy of mixing.
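
    In practice, thermodynamic integration reduces to numerically integrating ensemble averages of dU/dλ over the coupling parameter λ from 0 to 1. A minimal sketch with a toy analytic integrand standing in for the simulated averages (in a real calculation each grid value would come from a separate molecular dynamics run):

```python
def thermodynamic_integration(dudl, lambdas):
    """Trapezoidal estimate of ΔF = ∫_0^1 <dU/dλ> dλ from ensemble
    averages of dU/dλ sampled at a grid of coupling values λ."""
    total = 0.0
    for i in range(len(lambdas) - 1):
        h = lambdas[i + 1] - lambdas[i]
        total += 0.5 * h * (dudl[i] + dudl[i + 1])
    return total

lambdas = [i / 10 for i in range(11)]
dudl = [3 * l ** 2 for l in lambdas]   # toy averages; exact integral is 1
delta_f = thermodynamic_integration(dudl, lambdas)
print(round(delta_f, 3))
```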

  7. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  8. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
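
    A failure index of this kind is typically formed by comparing the total strain energy release rate against a mixed-mode toughness criterion; the Benzeggagh-Kenane (B-K) form is one widely used choice, sketched below. The toughness values and release rates are hypothetical graphite/epoxy-like numbers, not values from the paper:

```python
def bk_fracture_toughness(g1c, g2c, mode_ratio, eta):
    """Benzeggagh-Kenane mixed-mode toughness:
    Gc = GIc + (GIIc - GIc) * (GII/GT)**eta."""
    return g1c + (g2c - g1c) * mode_ratio ** eta

def failure_index(g1, g2, g1c, g2c, eta):
    """Failure index = GT / Gc; delamination growth is predicted at >= 1."""
    gt = g1 + g2                       # total strain energy release rate
    ratio = g2 / gt if gt > 0 else 0.0  # mode mixity GII/GT
    return gt / bk_fracture_toughness(g1c, g2c, ratio, eta)

# hypothetical values in J/m^2; eta is a fitted material parameter
fi = failure_index(g1=120.0, g2=80.0, g1c=200.0, g2c=800.0, eta=2.0)
print(round(fi, 3))  # GT = 200, Gc = 296, so fi ≈ 0.676: no growth predicted
```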

  9. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
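
    A two-component mixture density of the kind used above can be written down directly: each cohort contributes a weighted component density. The weight and Weibull shape/scale parameters below are hypothetical, chosen only to mimic a younger and an older cohort of DBH values:

```python
import math

def weibull_pdf(x, shape, scale):
    """Two-parameter Weibull density."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def mixture_pdf(x, w, comp1, comp2):
    """Two-component Weibull mixture: w*f1(x) + (1-w)*f2(x),
    one component per cohort of the stand."""
    return w * weibull_pdf(x, *comp1) + (1 - w) * weibull_pdf(x, *comp2)

# hypothetical DBH components (shape, scale in cm): young vs old cohort
w, young, old = 0.6, (2.0, 15.0), (4.0, 40.0)

# sanity check: the mixture density integrates to ~1 over a wide DBH range
grid = [i * 0.1 for i in range(1, 1200)]
area = sum(mixture_pdf(x, w, young, old) * 0.1 for x in grid)
print(round(area, 2))
```

    In applications the weight and component parameters would be estimated from data, e.g. by maximum likelihood or an EM algorithm.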

  10. Mixed ethnicity and behavioural problems in the Millennium Cohort Study

    PubMed Central

    Zilanawala, Afshin; Sacker, Amanda; Kelly, Yvonne

    2018-01-01

    Background The population of mixed ethnicity individuals in the UK is growing. Despite this demographic trend, little is known about mixed ethnicity children and their problem behaviours. We examine trajectories of behavioural problems among non-mixed and mixed ethnicity children from early to middle childhood using nationally representative cohort data in the UK. Methods Data from 16 330 children from the Millennium Cohort Study with total difficulties scores were analysed. We estimated trajectories of behavioural problems by mixed ethnicity using growth curve models. Results White mixed (mean total difficulties score: 8.3), Indian mixed (7.7), Pakistani mixed (8.9) and Bangladeshi mixed (7.2) children had fewer problem behaviours than their non-mixed counterparts at age 3 (9.4, 10.1, 13.1 and 11.9, respectively). White mixed, Pakistani mixed and Bangladeshi mixed children had growth trajectories in problem behaviours significantly different from that of their non-mixed counterparts. Conclusions Using a detailed mixed ethnic classification revealed diverging trajectories between some non-mixed and mixed children across the early life course. Future studies should investigate the mechanisms, which may influence increasing behavioural problems in mixed ethnicity children. PMID:26912571
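
    The growth-curve idea — child-specific intercepts and slopes varying around a population trajectory — can be sketched with a toy simulation. The parameters below are made up, and the per-child OLS fit is only an illustration; the models in the paper are fitted jointly as mixed models:

```python
import random

random.seed(7)

def mean_slope(n_children=500, ages=(3, 5, 7, 11), base=9.0, slope=0.4):
    """Simulate child-specific difficulty-score trajectories (random
    intercept and random slope around a population line), fit an OLS
    slope per child, and average the fitted slopes."""
    slopes = []
    for _ in range(n_children):
        a = base + random.gauss(0, 1.0)      # random intercept
        b = slope + random.gauss(0, 0.1)     # random slope
        ys = [a + b * t + random.gauss(0, 0.5) for t in ages]
        mt = sum(ages) / len(ages)
        my = sum(ys) / len(ys)
        num = sum((t - mt) * (y - my) for t, y in zip(ages, ys))
        den = sum((t - mt) ** 2 for t in ages)
        slopes.append(num / den)             # per-child OLS slope
    return sum(slopes) / len(slopes)

est = mean_slope()
print(round(est, 2))  # recovers the population slope of about 0.4
```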

  11. Perceived Risk of Burglary and Fear of Crime: Individual- and Country-Level Mixed Modeling.

    PubMed

    Chon, Don Soo; Wilson, Mary

    2016-02-01

    Given the scarcity of prior studies, the current research introduced country-level variables, along with individual-level ones, to test how they are related to an individual's perceived risk of burglary (PRB) and fear of crime (FC), separately, by using mixed-level logistic regression analyses. The analyses of 104,218 individuals, residing in 50 countries, showed that country-level poverty was positively associated with FC only. However, individual-level variables, such as prior property crime victimization and female gender, had consistently positive relationships with both PRB and FC. In contrast, age group and socioeconomic status were inconsistent between those two models, suggesting that PRB and FC are two different concepts. Finally, no significant difference in the pattern of PRB and FC was found between a highly developed group of countries and a less developed one. © The Author(s) 2014.

  12. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
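
    One of the diagnostics mentioned above, the Pearson dispersion statistic, is straightforward to compute: for a Poisson-family fit it is the sum of squared Pearson residuals divided by the residual degrees of freedom, and values well above 1 signal the overdispersion that motivates the quasi-Poisson/quasibinomial fits discussed. The counts and fitted means below are hypothetical:

```python
def pearson_dispersion(observed, fitted, n_params):
    """Pearson dispersion for a Poisson-family GLM/GAM:
    sum((y - mu)^2 / mu) / (n - p)."""
    resid = sum((y - mu) ** 2 / mu for y, mu in zip(observed, fitted))
    return resid / (len(observed) - n_params)

# hypothetical session counts and fitted means from a single-case phase
y = [0, 3, 1, 7, 2, 9, 4, 12]
mu = [1.0, 2.0, 2.5, 4.0, 4.5, 6.0, 6.5, 8.0]
phi = pearson_dispersion(y, mu, n_params=2)
print(round(phi, 2))  # > 1, suggesting overdispersion
```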

  13. Lepton masses and mixings in orbifold models with three Higgs families

    NASA Astrophysics Data System (ADS)

    Escudero, N.; Muñoz, C.; Teixeira, A. M.

    2007-12-01

    We analyse the phenomenological viability of heterotic Z3 orbifolds with two Wilson lines, which naturally predict three supersymmetric families of matter and Higgs fields. Given that these models can accommodate realistic scenarios for the quark sector avoiding potentially dangerous flavour-changing neutral currents, we now address the leptonic sector, finding that viable orbifold configurations can in principle be obtained. In particular, it is possible to accommodate present data on charged lepton masses, while avoiding conflict with lepton flavour-violating decays. Concerning the generation of neutrino masses and mixings, we find that Z3 orbifolds offer several interesting possibilities.

  14. Elucidating the fate of a mixed toluene, DHM, methanol, and i-propanol plume during in situ bioremediation

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Prommer, H.

    2017-06-01

    Organic pollutants such as solvents or petroleum products are widespread contaminants in soil and groundwater systems. In situ bioremediation is a commonly used remediation technology for cleaning up the subsurface and eliminating the risk that toxic substances reach potential receptors such as surface waters or drinking water wells. This study discusses the development of a subsurface model to analyse the performance of an actively operating field-scale enhanced bioremediation scheme. The study site was affected by a mixed toluene, dihydromyrcenol (DHM), methanol, and i-propanol plume. A high-resolution time series of data was used to constrain the model development and calibration. The analysis shows that the observed failure of the treatment system is linked to an inefficient oxygen injection pattern. Moreover, the model simulations also suggest that additional contaminant spillages occurred in 2012. Those additional spillages and their associated additional oxygen demand resulted in a significant increase in contaminant fluxes that remained untreated. The study emphasises the important role that reactive transport modelling can play in data analyses and for enhancing remediation efficiency.

  15. Theoretical and experimental investigation of turbulent mixing on ejector configuration and performance in a solar-driven organic-vapor ejector cycle chiller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucha, E.I.

    1984-01-01

    A general method was developed to calculate two dimensional (axisymmetric) mixing of a compressible jet in a variable cross-sectional area mixing channel of the ejector. The analysis considers mixing of the primary and secondary fluids at constant pressure and incorporates finite difference approximations to the conservation equations. The flow model is based on the mixing length approximations. A detailed study and modeling of the flow phenomenon determines the best (optimum) mixing channel geometry of the ejector. The detailed ejector performance characteristics are predicted by incorporating the flow model into a solar-powered ejector cycle cooling system computer model. Freon-11 is used as both the primary and secondary fluids. Performance evaluation of the cooling system is examined for its coefficient of performance (COP) under a variety of operating conditions. A study is also conducted on a modified ejector cycle in which a secondary pump is introduced at the exit of the evaporator. Results show a significant improvement in the overall performance over that of the conventional ejector cycle (without a secondary pump). Comparison between one and two-dimensional analyses indicates that the two-dimensional ejector fluid flow analysis predicts a better overall system performance. This is true for both the conventional and modified ejector cycles.

  16. In-depth study of 16CygB using inversion techniques

    NASA Astrophysics Data System (ADS)

    Buldgen, G.; Salmon, S. J. A. J.; Reese, D. R.; Dupret, M. A.

    2016-12-01

    Context. The 16Cyg binary system hosts the solar-like Kepler targets with the most stringent observational constraints. Indeed, we benefit from very high quality oscillation spectra, as well as spectroscopic and interferometric observations. Moreover, this system is particularly interesting since both stars are very similar in mass but the A component is orbited by a red dwarf, whereas the B component is orbited by a Jovian planet and thus could have formed a more complex planetary system. In our previous study, we showed that seismic inversions of integrated quantities could be used to constrain microscopic diffusion in the A component. In this study, we analyse the B component in the light of a more regularised inversion. Aims: We wish to analyse independently the B component of the 16Cyg binary system using the inversion of an indicator dedicated to analyse core conditions, denoted tu. Using this independent determination, we wish to analyse any differences between both stars due to the potential influence of planetary formation on stellar structure and/or their respective evolution. Methods: First, we recall the observational constraints for 16CygB and the method we used to generate reference stellar models of this star. We then describe how we improved the inversion and how this approach could be used for future targets with a sufficient number of observed frequencies. The inversion results were then used to analyse the differences between the A and B components. Results: The inversion of the tu indicator for 16CygB shows a disagreement with models including microscopic diffusion and sharing the chemical composition previously derived for 16CygA. We show that small changes in chemical composition are insufficient to solve the problem but that extra mixing can account for the differences seen between both stars. We use a parametric approach to analyse the impact of extra mixing in the form of turbulent diffusion on the behaviour of the tu values. 
We conclude that further investigations are needed, using models with a physically motivated implementation of extra mixing processes and additional constraints, to further improve the accuracy with which the fundamental parameters of this system are determined.

  17. Ill-posedness in modeling mixed sediment river morphodynamics

    NASA Astrophysics Data System (ADS)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also in aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment for which we show that ill-posedness occurs in a wider range of conditions than the active layer model.
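
    The mathematical characterization referred to above comes down to the eigenvalues of the system matrix of the quasi-linear model: a complex-conjugate pair makes the initial value problem elliptic in time, so short-wave perturbations grow without bound and the problem is ill-posed. A generic sketch for a 2x2 system follows; the coefficient matrices are illustrative stand-ins, not the actual Hirano model coefficients:

```python
def is_ill_posed(a11, a12, a21, a22):
    """For a 2x2 first-order system dq/dt + A dq/dx = 0, the problem is
    hyperbolic (well-posed) when A has real eigenvalues and elliptic in
    time (ill-posed) when they form a complex pair; the discriminant of
    the characteristic polynomial distinguishes the two cases."""
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = trace ** 2 - 4 * det
    return disc < 0  # complex eigenvalue pair -> ill-posed

# rotation-like matrix: complex eigenvalues, hence ill-posed
assert is_ill_posed(0.0, 1.0, -1.0, 0.0)
# upper-triangular matrix: real eigenvalues, hence well-posed
assert not is_ill_posed(2.0, 1.0, 0.0, 3.0)
print("ok")
```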

  18. Updated constraints on self-interacting dark matter from Supernova 1987A

    NASA Astrophysics Data System (ADS)

    Mahoney, Cameron; Leibovich, Adam K.; Zentner, Andrew R.

    2017-08-01

    We revisit SN1987A constraints on light, hidden sector gauge bosons ("dark photons") that are coupled to the standard model through kinetic mixing with the photon. These constraints are realized because excessive bremsstrahlung radiation of the dark photon can lead to rapid cooling of the SN1987A progenitor core, in contradiction to the observed neutrinos from that event. The models we consider are of interest as phenomenological models of strongly self-interacting dark matter. We clarify several possible ambiguities in the literature and identify errors in prior analyses. We find constraints on the dark photon mixing parameter that are in rough agreement with the early estimates of Dent et al. [arXiv:1201.2683], but only because significant errors in their analyses fortuitously canceled. Our constraints are in good agreement with subsequent analyses by Rrapaj & Reddy [Phys. Rev. C 94, 045805 (2016), 10.1103/PhysRevC.94.045805] and Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33, 10.1007/JHEP02(2017)033]. We estimate the dark photon bremsstrahlung rate using one-pion exchange (OPE), while Rrapaj & Reddy use a soft radiation approximation (SRA) to exploit measured nuclear scattering cross sections. We find that the differences between mixing parameter constraints obtained through the OPE approximation or the SRA approximation are roughly a factor of ˜2-3. Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33, 10.1007/JHEP02(2017)033] include plasma effects in their calculations, finding significantly weaker constraints on dark photon mixing for dark photon masses below ˜10 MeV. We do not consider plasma effects. Lastly, we point out that the properties of the SN1987A progenitor core remain somewhat uncertain and that this uncertainty alone causes uncertainty of at least a factor of ˜2-3 in the excluded values of the dark photon mixing parameter.
Further refinement of these estimates is unwarranted until either the interior of the SN1987A progenitor is better understood or additional, large, and heretofore neglected effects, such as the plasma interactions studied by Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33, 10.1007/JHEP02(2017)033], are identified.

  19. Is mixed-handedness a marker of treatment response in posttraumatic stress disorder?: a pilot study.

    PubMed

    Forbes, David; Carty, Jessica; Elliott, Peter; Creamer, Mark; McHugh, Tony; Hopwood, Malcolm; Chemtob, Claude M

    2006-12-01

    Recent studies suggest that mixed-handedness is a risk factor for posttraumatic stress disorder (PTSD). This study examined whether mixed-handed veterans with combat-related PTSD respond more poorly to psychosocial treatment. Consistency of hand preference was assessed in 150 Vietnam combat veterans with PTSD using the Edinburgh Handedness Inventory (R. C. Oldfield, 1971). Growth modeling analyses using Mplus (L. K. Muthén & B. Muthén, 2002) identified that PTSD veterans with mixed-handedness reported significantly less treatment improvement on the PTSD Checklist (F. W. Weathers, B. T. Litz, D. S. Herman, J. A. Huska, & T. M. Keane, 1993) than did veterans with consistent handedness. These data suggest that mixed-handedness is associated with poorer PTSD treatment response. Several possible explanations for this finding are discussed.

  20. Characteristics of Aspergillus fumigatus in Association with Stenotrophomonas maltophilia in an In Vitro Model of Mixed Biofilm

    PubMed Central

    Melloul, Elise; Luiggi, Stéphanie; Anaïs, Leslie; Arné, Pascal; Costa, Jean-Marc; Fihman, Vincent; Briard, Benoit; Dannaoui, Eric; Guillot, Jacques; Decousser, Jean-Winoc; Beauvais, Anne; Botterel, Françoise

    2016-01-01

    Background Biofilms are communal structures of microorganisms that have long been associated with a variety of persistent infections poorly responding to conventional antibiotic or antifungal therapy. Aspergillus fumigatus fungus and Stenotrophomonas maltophilia bacteria are examples of the microorganisms that can coexist to form a biofilm especially in the respiratory tract of immunocompromised patients or cystic fibrosis patients. The aim of the present study was to develop and assess an in vitro model of a mixed biofilm associating S. maltophilia and A. fumigatus by using analytical and quantitative approaches. Materials and Methods An A. fumigatus strain (ATCC 13073) expressing a Green Fluorescent Protein (GFP) and an S. maltophilia strain (ATCC 13637) were used. Fungal and bacterial inocula (105 conidia/mL and 106 cells/mL, respectively) were simultaneously deposited to initiate the development of an in vitro mixed biofilm on polystyrene supports at 37°C for 24 h. The structure of the biofilm was analysed via qualitative microscopic techniques like scanning electron and transmission electron microscopy, and fluorescence microscopy, and by quantitative techniques including qPCR and crystal violet staining. Results Analytic methods revealed typical structures of biofilm with production of an extracellular matrix (ECM) enclosing fungal hyphae and bacteria. Quantitative methods showed a decrease of A. fumigatus growth and ECM production in the mixed biofilm with antibiosis effect of the bacteria on the fungi seen as abortive hyphae, limited hyphal growth, fewer conidia, and thicker fungal cell walls. Conclusion For the first time, a mixed A. fumigatus—S. maltophilia biofilm was validated by various analytical and quantitative approaches and the bacterial antibiosis effect on the fungus was demonstrated. 
The mixed biofilm model is an interesting experimentation field to evaluate efficiency of antimicrobial agents and to analyse the interactions between the biofilm and the airways epithelium. PMID:27870863

  1. Evidence of a Major Gene From Bayesian Segregation Analyses of Liability to Osteochondral Diseases in Pigs

    PubMed Central

    Kadarmideen, Haja N.; Janss, Luc L. G.

    2005-01-01

    Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384–37.81), compared to the polygenic variance (σu²). Consequently, heritabilities for a mixed inheritance (range 0.65–0.90) were much higher than the heritabilities from the polygenes. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The variants, MITM with informative prior on σu², showed significant improvement in marginal distributions and accuracy of parameters. MITM with a “reduced polygenic model” for parameterization of polygenic effects avoided convergence problems and poor mixing encountered in an “individual polygenic model.” In all cases, “shrinkage estimators” for fixed effects avoided unidentifiability for these parameters.
The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans. PMID:16020792
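
    The Markov chain Monte Carlo machinery behind such segregation analyses can be illustrated with a one-parameter random-walk Metropolis sampler on a toy posterior. Everything below is a generic sketch of the algorithm, not the authors' sampler:

```python
import math
import random

random.seed(3)

def metropolis(logpost, start, step, n_iter):
    """Random-walk Metropolis: propose x' = x + N(0, step), accept with
    probability min(1, post(x')/post(x)), and record the chain."""
    x, chain = start, []
    lp = logpost(x)
    for _ in range(n_iter):
        prop = x + random.gauss(0, step)
        lp_prop = logpost(prop)
        if math.log(random.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# toy posterior: N(1.5, 0.2^2), up to an additive constant
logpost = lambda m: -0.5 * ((m - 1.5) / 0.2) ** 2
chain = metropolis(logpost, start=0.0, step=0.5, n_iter=20000)
burned = chain[2000:]          # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 1))          # posterior mean, close to 1.5
```

    Monitoring mixing of such chains is exactly the issue the abstract raises: poorly parameterized models produce slowly mixing chains whose estimates are unreliable.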

  2. A brief measure of attitudes toward mixed methods research in psychology

    PubMed Central

    Roberts, Lynne D.; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research along with validation measures was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher order four factor model provided the best fit to the data. The four factors; ‘Limited Exposure,’ ‘(in)Compatibility,’ ‘Validity,’ and ‘Tokenistic Qualitative Component’; each have acceptable internal reliability. Known groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs. PMID:25429281

  3. Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins

    PubMed Central

    Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.

    2011-01-01

    Stomach content analysis (SCA) and more recently stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may result in difficulties in quantifying inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source, isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
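
    The two-source, one-isotope mixing model referred to above has a closed form: the proportion of one source follows directly from the delta values. The δ15N numbers below are hypothetical, and any trophic discrimination correction is assumed to have been applied already:

```python
def two_source_mixing(d_mix, d_a, d_b):
    """Standard two-source, one-isotope mixing model: returns the
    fraction of source A in the mixture, p = (d_mix - d_b)/(d_a - d_b),
    clamped to the valid proportion range [0, 1]."""
    if d_a == d_b:
        raise ValueError("sources are isotopically indistinguishable")
    p = (d_mix - d_b) / (d_a - d_b)
    return min(1.0, max(0.0, p))

# hypothetical d15N values (per mil): fish = 12.0, krill = 6.0, diet = 8.1
p_fish = two_source_mixing(8.1, 12.0, 6.0)
print(round(p_fish, 2))  # fraction of fish in the diet: 0.35
```

    With more than two sources the system becomes underdetermined, which is why the multi-source case in the paper needed refinement with SCA-derived otolith data.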

  4. Determining the impact of cell mixing on signaling during development.

    PubMed

    Uriu, Koichiro; Morelli, Luis G

    2017-06-01

    Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local, such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors, and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling, drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frames. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.
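    One simple reference-frame-independent observable of the kind described is the mean squared change in separation of initially neighbouring cell pairs: because it uses relative coordinates, any global drift or rotation of the embryo cancels. The sketch below (synthetic trajectories, an arbitrary one-cell-diameter threshold) shows how a mixing timescale could be read off from such an observable; it is an illustration of the idea, not the authors' specific estimator.

```python
# Estimate a mixing timescale from cell trajectories using the mean squared
# relative displacement (MSRD) of initially neighbouring pairs. Relative
# coordinates make the measure independent of the global reference frame.
# Trajectories and threshold below are synthetic illustrations.

def msrd(trajs, pairs, t):
    """Mean squared change in pair separation between time 0 and time t."""
    total = 0.0
    for i, j in pairs:
        dx0 = trajs[i][0][0] - trajs[j][0][0]
        dy0 = trajs[i][0][1] - trajs[j][0][1]
        dxt = trajs[i][t][0] - trajs[j][t][0]
        dyt = trajs[i][t][1] - trajs[j][t][1]
        total += (dxt - dx0) ** 2 + (dyt - dy0) ** 2
    return total / len(pairs)

def mixing_timescale(trajs, pairs, cell_diameter, n_steps):
    """First time step at which the RMS relative displacement exceeds
    one cell diameter, i.e. when neighbour identity is likely to change."""
    for t in range(1, n_steps):
        if msrd(trajs, pairs, t) ** 0.5 > cell_diameter:
            return t
    return None

# Two cells sharing a global drift (+1 per step) while slowly separating
trajs = {
    0: [(float(t), 0.0) for t in range(10)],
    1: [(float(t) + 0.2 * t, 0.0) for t in range(10)],
}
pairs = [(0, 1)]
t_mix = mixing_timescale(trajs, pairs, cell_diameter=1.0, n_steps=10)
```

Comparing `t_mix` with an independently estimated signaling timescale is then the timescale comparison the abstract argues for.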

  5. A Two-Step Approach for Analysis of Nonignorable Missing Outcomes in Longitudinal Regression: an Application to Upstate KIDS Study.

    PubMed

    Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari

    2017-09-01

    Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
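    The two-step logic can be sketched in miniature with standard-library Python. In this toy version, the step-1 GLMM prediction of each subject's missingness random effect is replaced by a crude per-subject empirical logit, and step 2 is an ordinary least-squares outcome model adjusting for that summary. The real method fits generalized linear mixed models for both steps; all data and names here are synthetic.

```python
# Toy two-step adjustment for nonignorable missingness (stdlib only).
# Step 1: summarise each subject's missingness propensity (empirical logit,
# a stand-in for the GLMM-predicted random effect).
# Step 2: fit the outcome model adjusting for the step-1 summary.
import math
import random

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(xtx, xty)

def empirical_logit(n_missing, n_total):
    """Step 1 proxy: smoothed per-subject missingness on the logit scale."""
    p = (n_missing + 0.5) / (n_total + 1.0)
    return math.log(p / (1.0 - p))

# Simulate subjects whose missingness and outcome share a random effect u
random.seed(1)
subjects = []
for sid in range(200):
    u = random.gauss(0, 1)                                  # shared random effect
    n_miss = sum(random.random() < 1 / (1 + math.exp(-u)) for _ in range(5))
    y = 2.0 + 0.5 * u + random.gauss(0, 0.1)                # observed outcome
    subjects.append((empirical_logit(n_miss, 5), y))

# Step 2: outcome model adjusting for the step-1 propensity summary
X = [[1.0, logit] for logit, _ in subjects]
beta = ols(X, [y for _, y in subjects])
```

A positive coefficient on the propensity summary (`beta[1]`) signals that outcome and missingness are linked, i.e. the ignorable-missingness assumption is suspect.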

  6. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model

    PubMed Central

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images compared to plaster models for the assessment of mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant results were obtained on comparison of data between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639
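    The Tanaka-Johnston analysis used above rests on a well-known pair of prediction equations: the combined mesiodistal width of the unerupted canine and premolars in one quadrant is estimated as half the sum of the four mandibular incisor widths, plus 11.0 mm for the maxilla or 10.5 mm for the mandible. A minimal sketch (the 22.0 mm input is an illustrative value, not study data):

```python
# Tanaka-Johnston mixed dentition analysis: predict the combined width of the
# unerupted canine and two premolars in one quadrant from the sum of the
# mesiodistal widths of the four mandibular incisors.

def tanaka_johnston(sum_mand_incisors_mm):
    """Return (maxillary, mandibular) predicted widths per quadrant, in mm."""
    half = sum_mand_incisors_mm / 2.0
    return half + 11.0, half + 10.5

# Example: four mandibular incisors summing to 22.0 mm
maxilla, mandible = tanaka_johnston(22.0)
```

Comparing such predictions computed from CBCT-measured versus caliper-measured incisor widths is essentially what the study's comparison amounts to.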

  7. Calibrating and testing a gap model for simulating forest management in the Oregon Coast Range

    Treesearch

    Robert J. Pabst; Matthew N. Goslin; Steven L. Garman; Thomas A. Spies

    2008-01-01

    The complex mix of economic and ecological objectives facing today's forest managers necessitates the development of growth models with a capacity for simulating a wide range of forest conditions while producing outputs useful for economic analyses. We calibrated the gap model ZELIG to simulate stand level forest development in the Oregon Coast Range as part of a...

  8. A Call for Conducting Multivariate Mixed Analyses

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.

    2016-01-01

    Several authors have written methodological works that provide an introductory- and/or intermediate-level guide to conducting mixed analyses. Although these works have been useful for beginning and emergent mixed researchers, with very few exceptions, works are lacking that describe and illustrate advanced-level mixed analysis approaches. Thus,…

  9. The role of ice nuclei recycling in the maintenance of cloud ice in Arctic mixed-phase stratocumulus

    DOE PAGES

    Solomon, Amy; Feingold, G.; Shupe, M. D.

    2015-09-25

    This study investigates the maintenance of cloud ice production in Arctic mixed-phase stratocumulus in large eddy simulations that include a prognostic ice nuclei (IN) formulation and a diurnal cycle. Balances derived from a mixed-layer model and phase analyses are used to provide insight into buffering mechanisms that maintain ice in these cloud systems. We find that, for the case under investigation, IN recycling through subcloud sublimation considerably prolongs ice production over a multi-day integration. This effective source of IN to the cloud dominates over mixing sources from above or below the cloud-driven mixed layer. Competing feedbacks between dynamical mixing and recycling are found to slow the rate of ice loss from the mixed layer when a diurnal cycle is simulated. Furthermore, the results of this study have important implications for maintaining phase partitioning of cloud ice and liquid that determine the radiative forcing of Arctic mixed-phase clouds.

  10. The role of ice nuclei recycling in the maintenance of cloud ice in Arctic mixed-phase stratocumulus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Amy; Feingold, G.; Shupe, M. D.

    This study investigates the maintenance of cloud ice production in Arctic mixed-phase stratocumulus in large eddy simulations that include a prognostic ice nuclei (IN) formulation and a diurnal cycle. Balances derived from a mixed-layer model and phase analyses are used to provide insight into buffering mechanisms that maintain ice in these cloud systems. We find that, for the case under investigation, IN recycling through subcloud sublimation considerably prolongs ice production over a multi-day integration. This effective source of IN to the cloud dominates over mixing sources from above or below the cloud-driven mixed layer. Competing feedbacks between dynamical mixing and recycling are found to slow the rate of ice loss from the mixed layer when a diurnal cycle is simulated. Furthermore, the results of this study have important implications for maintaining phase partitioning of cloud ice and liquid that determine the radiative forcing of Arctic mixed-phase clouds.

  11. Preliminary Empirical Model of Crucial Determinants of Best Practice for Peer Tutoring on Academic Achievement

    ERIC Educational Resources Information Center

    Leung, Kim Chau

    2015-01-01

    Previous meta-analyses of the effects of peer tutoring on academic achievement have been plagued with theoretical and methodological flaws. Specifically, these studies have not adopted both fixed and mixed effects models for analyzing the effect size; they have not evaluated the moderating effect of some commonly used parameters, such as comparing…

  12. The effects of green areas on air surface temperature of the Kuala Lumpur city using WRF-ARW modelling and Remote Sensing technique

    NASA Astrophysics Data System (ADS)

    Isa, N. A.; Mohd, W. M. N. Wan; Salleh, S. A.; Ooi, M. C. G.

    2018-02-01

    Mature trees contain high concentrations of chlorophyll that support photosynthesis. This process produces oxygen as a by-product, releases it into the atmosphere, and helps lower the ambient temperature. This study attempts to analyse the effect of green areas on the air surface temperature of Kuala Lumpur city. The air surface temperatures on two different dates, in March 2006 and March 2016, were simulated using the Weather Research and Forecasting (WRF) model. The green area in the city was extracted using the Normalized Difference Vegetation Index (NDVI) from two Landsat satellite images. The relationship between the air surface temperature and the green area was analysed using linear regression models. The study found that the green area significantly affected the distribution of air temperature within the city. A strong negative correlation was identified, indicating that areas with higher NDVI values tend to have lower air surface temperatures within the focus study area. It was also found that different urban settings in mixed built-up and vegetated areas resulted in different distributions of air surface temperature. Future studies should focus on analysing the air surface temperature within areas of mixed built-up and vegetated land cover.
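    The NDVI-temperature relationship described above is a simple linear regression; a closed-form fit is enough to recover the slope, intercept and correlation. The data points below are synthetic stand-ins (the study's actual values come from WRF output and Landsat NDVI), but they reproduce the qualitative finding of a strong negative correlation.

```python
# Closed-form simple linear regression of air surface temperature on NDVI.
# Synthetic pixel values chosen to mimic the reported negative relationship.

def linreg(x, y):
    """Return (slope, intercept, r) for y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Hypothetical pixels: higher NDVI, lower air surface temperature (deg C)
ndvi = [0.10, 0.25, 0.40, 0.55, 0.70]
temp = [33.1, 32.0, 31.2, 30.1, 29.3]
slope, intercept, r = linreg(ndvi, temp)
```

A negative `slope` and `r` close to -1 correspond to the "strong negative correlation" reported in the abstract.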

  13. Heavy neutrino mixing and single production at linear collider

    NASA Astrophysics Data System (ADS)

    Gluza, J.; Maalampi, J.; Raidal, M.; Zrałek, M.

    1997-02-01

    We study the single production of heavy neutrinos via the processes e- e+ -> νN and e- γ -> W- N at future linear colliders. As the basis of our considerations we take a wide class of models, with both vanishing and non-vanishing left-handed Majorana neutrino mass matrices mL. We perform model-independent analyses of the existing experimental data and find connections between the characteristics of heavy neutrinos (masses, mixings, CP eigenvalues) and the mL parameters. We show that, given the present experimental constraints, heavy neutrino masses almost up to the collision energy can be tested in future experiments.

  14. Can pair-instability supernova models match the observations of superluminous supernovae?

    NASA Astrophysics Data System (ADS)

    Kozyreva, Alexandra; Blinnikov, S.

    2015-12-01

    An increasing number of so-called superluminous supernovae (SLSNe) are being discovered. It is believed that at least some of them with slowly fading light curves originate in stellar explosions induced by the pair instability mechanism. Recent stellar evolution models naturally predict pair instability supernovae (PISNe) from very massive stars over a wide range of metallicities (up to Z = 0.006, Yusof et al.). In the scope of this study, we analyse whether PISN models can match the observational properties of SLSNe with various light-curve shapes. Specifically, we explore the influence of different degrees of macroscopic chemical mixing in PISN explosive products on the resulting observational properties. We artificially apply mixing to the 250 M⊙ PISN evolutionary model from Kozyreva et al. and explore its supernova evolution with the one-dimensional radiation hydrodynamics code STELLA. The greatest success in matching SLSN observations is achieved in the case of extreme macroscopic mixing, where all radioactive material is ejected into the hydrogen-helium outer layer. Such an extreme macroscopic redistribution of chemicals produces events with faster light curves, high photospheric temperatures and high photospheric velocities. These properties fit a wider range of SLSNe than the non-mixed PISN model does. Our mixed models match the light curves, colour temperature, and photospheric velocity evolution of two well-observed SLSNe, PTF12dam and LSQ12dlf. However, such extreme chemical redistribution may be hard to realize in massive PISNe. Therefore, alternative models such as the magnetar mechanism or wind interaction may still be preferable for interpreting rapidly rising SLSNe.

  15. IMPACT: Investigating the impact of Models of Practice for Allied health Care in subacuTe settings. A protocol for a quasi-experimental mixed methods study of cost effectiveness and outcomes for patients exposed to different models of allied health care.

    PubMed

    Coker, Freya; Williams, Cylie M; Taylor, Nicholas F; Caspers, Kirsten; McAlinden, Fiona; Wilton, Anita; Shields, Nora; Haines, Terry P

    2018-05-10

    This protocol considers three allied health staffing models across public health subacute hospitals. This quasi-experimental mixed-methods study, including qualitative process evaluation, aims to evaluate the impact of additional allied health services in subacute care, in rehabilitation and geriatric evaluation management settings, on patient, health service and societal outcomes. This health services research will analyse outcomes of patients exposed to different allied health models of care at three health services. Each health service will have a control ward (routine care) and an intervention ward (additional allied health). This project has two parts. Part 1: a whole-of-site data extraction for included wards. Outcome measures will include: length of stay, rate of readmissions, discharge destinations, community referrals, patient feedback and staff perspectives. Part 2: Functional Independence Measure scores will be collected every 2-3 days for the duration of 60 patient admissions. Data from part 1 will be analysed by linear regression analysis for continuous outcomes using patient-level data and logistic regression analysis for binary outcomes. Qualitative data will be analysed using a deductive thematic approach. For part 2, a linear mixed model analysis will be conducted using therapy service delivery and days since admission to subacute care as fixed factors in the model and individual participant as a random factor. Graphical analysis will be used to examine the growth curve of the model and transformations. The days since admission factor will be used to examine non-linear growth trajectories to determine if they lead to better model fit. Findings will be disseminated through local reports and to the Department of Health and Human Services Victoria. Results will be presented at conferences and submitted to peer-reviewed journals.
The Monash Health Human Research Ethics committee approved this multisite research (HREC/17/MonH/144 and HREC/17/MonH/547). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Mixed-Mode Decohesion Elements for Analyses of Progressive Delamination

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; deMoura, Marcelo F.

    2001-01-01

    A new 8-node decohesion element with mixed-mode capability is proposed and demonstrated. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a strain softening law to track the damage state of the interface. The method can be used in conjunction with conventional material degradation procedures to account for in-plane and intra-laminar damage modes. The accuracy of the predictions is evaluated in single-mode delamination tests, in the mixed-mode bending test, and in a structural configuration consisting of the debonding of a stiffener flange from its skin.
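    The strain-softening law with a single displacement-based damage parameter can be illustrated with the standard bilinear traction-separation relation used in cohesive/decohesion modelling: traction rises linearly to an onset opening, then softens linearly to zero at final failure while a scalar damage variable grows from 0 to 1. The parameter values below are illustrative, not the paper's material data.

```python
# Bilinear traction-separation law for a decohesion interface (single mode).
# delta0: opening at damage onset; delta_f: opening at complete failure;
# k: initial (undamaged) interface stiffness. Illustrative parameters only.

def bilinear_traction(delta, delta0, delta_f, k):
    """Return (traction, damage) for an opening displacement delta."""
    if delta <= delta0:
        return k * delta, 0.0                  # linear elastic, no damage
    if delta >= delta_f:
        return 0.0, 1.0                        # fully failed interface
    # damage grows monotonically from 0 at delta0 to 1 at delta_f
    d = delta_f * (delta - delta0) / (delta * (delta_f - delta0))
    return (1.0 - d) * k * delta, d            # softened secant response

# Peak traction occurs at onset; traction then decays linearly to zero
t_peak, d_peak = bilinear_traction(0.01, 0.01, 0.05, 1000.0)
t_mid, d_mid = bilinear_traction(0.03, 0.01, 0.05, 1000.0)
```

In an actual element, `d` would be stored per integration point and never allowed to decrease, making the damage irreversible under unloading.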

  17. Using existing case-mix methods to fund trauma cases.

    PubMed

    Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni

    2010-01-01

    Policymakers frequently face the need to increase funding in isolated and frequently heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in the scenario where a case-mix approach is preferable to a flat-rate based payment system. Using as an example the subpopulation of trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.

  18. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    PubMed Central

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  19. To mix or not to mix venous blood samples collected in vacuum tubes?

    PubMed

    Parenmark, Anna; Landberg, Eva

    2011-09-08

    There are recommendations to mix venous blood samples by inverting the tubes immediately after venipuncture. Though mixing allows efficient anticoagulation in plasma tubes and fast initiation of coagulation in serum tubes, the effect on laboratory analyses and the risk of haemolysis have not been thoroughly evaluated. Venous blood samples were collected by venipuncture in vacuum tubes from 50 patients (10 or 20 patients in each group). Four types of tubes and 18 parameters used in routine clinical chemistry were evaluated. For each patient and tube, three types of mixing strategies were used: instant mixing, no mixing and 5 min of rest followed by mixing. Most analyses did not differ significantly between samples subjected to the different mixing strategies. Plasma lactate dehydrogenase and haemolysis index showed a small but significant increase in samples subjected to instant mixing compared to samples without mixing. However, in one out of twenty non-mixed samples, activated partial thromboplastin time was seriously affected. These results indicate that mixing blood samples after venipuncture is not mandatory for all types of tubes. Instant mixing may introduce interference for those analyses susceptible to haemolysis. However, tubes with liquid-based citrate buffer for coagulation testing should be mixed to avoid clotting.

  20. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting grain when it is downgraded in class are the consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach and stability analysis was presented using an AMMI bi-plot on R software. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which were validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD adapted cultivars (53%) compared to that in IC (49%). Significant genetic effects and stability analyses showed some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from IC, and 'Alsen', 'Traverse' and 'Forefront' from SD cultivars could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
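    AMMI itself applies a singular value decomposition to the genotype-by-environment interaction residuals; as a stdlib-only stand-in, the sketch below computes that residual matrix and Wricke's ecovalence (each genotype's share of the interaction sum of squares), a classical stability index that ranks genotypes in the same spirit. The toy LMAA table is hypothetical, not the paper's data.

```python
# Genotype-by-environment stability screening (simplified). AMMI decomposes
# the interaction residuals by SVD; here we compute the residual matrix and
# Wricke's ecovalence as a simpler per-genotype stability index.

def interaction_residuals(table):
    """Remove grand, genotype and environment means from a genotype x env table."""
    g, e = len(table), len(table[0])
    grand = sum(map(sum, table)) / (g * e)
    gmean = [sum(row) / e for row in table]
    emean = [sum(table[i][j] for i in range(g)) / g for j in range(e)]
    return [[table[i][j] - gmean[i] - emean[j] + grand for j in range(e)]
            for i in range(g)]

def ecovalence(table):
    """Per-genotype interaction sum of squares; smaller = more stable."""
    return [sum(v * v for v in row) for row in interaction_residuals(table)]

# Toy LMAA values for 3 genotypes across 4 environments (hypothetical)
lmaa = [
    [1.0, 1.1, 0.9, 1.0],   # consistent across environments
    [0.5, 2.0, 0.2, 1.8],   # erratic: strong G-by-E interaction
    [1.5, 1.6, 1.4, 1.5],
]
w = ecovalence(lmaa)
```

A genotype with low ecovalence (like the first and third rows) corresponds to the "most stable across environments" genotypes identified by the AMMI bi-plot.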

  1. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. 
We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
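    The fixed-RSA comparison at the heart of the abstract can be shown in miniature: compute a representational dissimilarity matrix (RDM) for a model feature space and for a brain response space, then correlate their upper triangles. The patterns below are synthetic two-dimensional stand-ins; real analyses use fMRI response patterns and, e.g., Gabor-wavelet or deep-network features, and mixed RSA additionally fits a linear reweighting of the model features.

```python
# Fixed RSA in miniature: RDMs for model features and "brain" responses are
# correlated without refitting the model feature space. Synthetic data only.

def rdm(patterns):
    """Upper-triangle pairwise Euclidean dissimilarities between conditions."""
    out = []
    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            out.append(sum((a - b) ** 2
                           for a, b in zip(patterns[i], patterns[j])) ** 0.5)
    return out

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

model = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]   # model features per image
brain = [[0.1, 0.0], [1.1, 0.1], [0.0, 2.1]]   # voxel responses per image
fixed_rsa_score = pearson(rdm(model), rdm(brain))
```

A high score means the model's original feature space already predicts the brain's dissimilarity structure, the situation the abstract describes for the GWP model in early visual areas.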

  2. Studies on the detection and identification of the explosives in the terahertz range

    NASA Astrophysics Data System (ADS)

    Zhou, Qing-li; Zhang, Cun-lin; Li, Wei-Wei; Mu, Kai-jun; Feng, Rui-shu

    2008-03-01

    The sensing of explosives and related compounds is very important for homeland security and defense. Based on non-invasive terahertz (THz) technology, we have studied some pure and mixed explosives using THz time-domain spectroscopy and have obtained the absorption spectra of those samples. The results show that those explosives can be identified by their distinct characteristic fingerprints in the terahertz frequency region of 0.2-2.5 THz. Furthermore, the spectral analyses indicate that the shape and peak positions of the spectra for these mixed explosives are mainly determined by their explosive components. In order to identify the different kinds of explosives, we have applied an artificial neural network, which is a mathematical device for modeling complex and non-linear functionalities, to the present work. After repetitive modeling and adequate training with the known input-output data, rough identification of the explosives is realized with a multi-hidden-layer model. It is shown that neural network analyses of the THz spectra could positively identify the explosives and reduce false alarm rates.

  3. Application of CFX-10 to the Investigation of RPV Coolant Mixing in VVER Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moretti, Fabio; Melideo, Daniele; Terzuoli, Fulvio

    2006-07-01

    Coolant mixing phenomena occurring in the pressure vessel of a nuclear reactor constitute one of the main objectives of investigation by researchers concerned with nuclear reactor safety. For instance, mixing plays a relevant role in reactivity-induced accidents initiated by de-boration or boron dilution events, followed by transport of a de-borated slug into the vessel of a pressurized water reactor. Another example is constituted by temperature mixing, which may sensitively affect the consequences of a pressurized thermal shock scenario. Predictive analysis of mixing phenomena is strongly improved by the availability of computational tools able to cope with the inherent three-dimensionality of such problems, like system codes with three-dimensional capabilities, and Computational Fluid Dynamics (CFD) codes. The present paper deals with numerical analyses of coolant mixing in the reactor pressure vessel of a VVER-1000 reactor, performed by the ANSYS CFX-10 CFD code. In particular, the 'swirl' effect that has been observed to take place in the downcomer of such kind of reactor has been addressed, with the aim of assessing the capability of the codes to predict that effect, and to understand the reasons for its occurrence. Results have been compared against experimental data from the V1000CT-2 Benchmark. Moreover, a boron mixing problem has been investigated, in the hypothesis that a de-borated slug, transported by natural circulation, enters the vessel. Sensitivity analyses have been conducted on some geometrical features, model parameters and boundary conditions. (authors)

  4. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

    Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
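    The B-spline basis such random regression models regress on can be evaluated with the Cox-de Boor recursion. The sketch below uses a quadratic basis with interior knots echoing the configuration described above (0, 200, 400, 600 and 821 days, boundary knots repeated for a clamped basis); the evaluation code itself is generic, and building the mixed-model design matrix from these rows is left out.

```python
# Evaluate B-spline basis functions via the Cox-de Boor recursion.
# A row of basis values for one age would form one row of the random
# regression design matrix.

def bspline_basis(i, p, t, knots):
    """Value of the i-th degree-p B-spline basis function at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# Quadratic (degree 2) clamped basis over ages 0-821 days
knots = [0.0] * 3 + [200.0, 400.0, 600.0] + [821.0] * 3
n_basis = len(knots) - 2 - 1          # number of degree-2 basis functions
row = [bspline_basis(i, 2, 300.0, knots) for i in range(n_basis)]
```

The basis functions are non-negative and sum to one at any age inside the knot range (partition of unity), which is part of why spline regressions behave well at ages with sparse data compared with high-order polynomials.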

  5. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan.

    PubMed

    Chang, Hsien-Yen; Weiner, Jonathan P

    2010-01-18

    Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error of five types of expenditure at individual level, and predictive ratio of total expenditure at group level. The more comprehensive models performed better when used for explaining resource utilization. Adjusted R2 of total expenditure in concurrent/prospective analyses were 4.2%/4.4% in the demographic model, 15%/10% in the ACGs or ADGs (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Cluster). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACGs model had the best performance overall. 
Given the widespread availability of claims data and the superior explanatory power of claims-based risk adjustment models over demographics-only models, Taiwan's government should consider using claims-based models for policy-relevant applications. The performance of the ACG case-mix system in Taiwan was comparable to that found in other countries. This suggested that the ACG system could be applied to Taiwan's NHI even though it was originally developed in the USA. Many of the findings in this paper are likely to be relevant to other diagnosis-based risk adjustment methodologies.
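    The individual-level measures (adjusted R2, mean absolute prediction error) and the group-level predictive ratio over expenditure quintiles described above can be sketched on simulated data. Everything below — sample size, risk-adjuster scores, coefficients — is hypothetical, not the NHI sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical enrollee-level data: a few risk-adjuster scores
# partially explain annual health expenditure.
n, p = 1000, 3
X = rng.normal(size=(n, p))
beta = np.array([25.0, 15.0, 60.0])
y = 1000.0 + X @ beta + rng.normal(scale=120.0, size=n)

# Multivariate linear regression (OLS) fit
Xd = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
yhat = Xd @ coef

# Individual-level measures: adjusted R^2 and mean absolute prediction error
ss_res = np.sum((y - yhat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
mape = np.mean(np.abs(y - yhat))

# Group-level measure: predictive ratio (predicted/actual mean expenditure)
# within quintiles formed on actual expenditure
quintile = np.digitize(y, np.quantile(y, [0.2, 0.4, 0.6, 0.8]))
pred_ratio = np.array([yhat[quintile == q].mean() / y[quintile == q].mean()
                       for q in range(5)])
```

    Because quintiles are formed on actual expenditure, regression to the mean makes any such model underpredict the top group (ratio below 1) and overpredict the bottom group, the same pattern the abstract reports.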

  6. Qualitative and numerical analyses of the effects of river inflow variations on mixing diagrams in estuaries

    USGS Publications Warehouse

    Cifuentes, L.A.; Schemel, L.E.; Sharp, J.H.

    1990-01-01

    The effects of river inflow variations on alkalinity/salinity distributions in San Francisco Bay and nitrate/salinity distributions in Delaware Bay are described. One-dimensional, advective-dispersion equations for salinity and the dissolved constituents are solved numerically and are used to simulate mixing in the estuaries. These simulations account for time-varying river inflow, variations in estuarine cross-sectional area, and longitudinally varying dispersion coefficients. The model simulates field observations better than models that use constant hydrodynamic coefficients and uniform estuarine geometry. Furthermore, field observations and model simulations are consistent with theoretical 'predictions' that the curvature of property-salinity distributions depends on the relation between the estuarine residence time and the period of river concentration variation. © 1990.
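    A minimal numerical sketch of the one-dimensional advection-dispersion equation described above, using an explicit finite-difference scheme with constant coefficients (the paper uses time-varying inflow and spatially varying coefficients) and hypothetical parameter values:

```python
import numpy as np

# Explicit finite-difference sketch of dc/dt = -u dc/dx + D d2c/dx2 on a
# hypothetical estuary: c = 0 at the river end, c = 32 at the ocean end.
nx, L = 101, 50_000.0            # grid points, estuary length [m]
dx = L / (nx - 1)
u, D = 0.02, 50.0                # advection [m/s], dispersion [m2/s]
dt = 0.4 * dx ** 2 / (2.0 * D)   # step satisfying the diffusive stability limit

c = np.linspace(0.0, 32.0, nx)   # initial linear salinity profile
for _ in range(2000):
    dcdx = (c[2:] - c[:-2]) / (2.0 * dx)
    d2cdx2 = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
    c[1:-1] += dt * (-u * dcdx + D * d2cdx2)
    c[0], c[-1] = 0.0, 32.0      # boundary conditions

# River inflow (u > 0) pushes fresh water seaward, so the mid-estuary
# salinity falls below the initial linear profile.
```

    Increasing u (higher river inflow) freshens the mid-estuary further, which is the kind of inflow sensitivity the mixing diagrams in the paper describe.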

  7. Uncertainty in Analyzed Water and Energy Budgets at Continental Scales

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, F. R.; Mocko, D.; Chen, J.

    2011-01-01

    Operational analyses and retrospective-analyses provide all the physical terms of water and energy budgets, guided by the assimilation of atmospheric observations. However, there is significant reliance on the numerical models, and so, uncertainty in the budget terms is always present. Here, we use a recently developed data set consisting of a mix of 10 analyses (both operational and retrospective) to quantify the uncertainty of analyzed water and energy budget terms for GEWEX continental-scale regions, following the evaluation of Dr. John Roads using individual reanalyses data sets.
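    Quantifying uncertainty as the spread across an ensemble of analyses can be illustrated with a toy calculation; the regional precipitation values below are invented, not the GEWEX data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical monthly precipitation [mm/day] for one continental-scale
# region as produced by 10 different analyses: a shared seasonal cycle
# plus analysis-specific noise standing in for model differences.
months = np.arange(12)
seasonal = 2.0 + 1.5 * np.sin(2 * np.pi * months / 12)
analyses = seasonal + rng.normal(scale=0.3, size=(10, 12))

ens_mean = analyses.mean(axis=0)           # best-estimate budget term
ens_spread = analyses.std(axis=0, ddof=1)  # across-analysis uncertainty proxy
rel_uncertainty = float(ens_spread.mean() / ens_mean.mean())
```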

  8. A Fatty Acid Based Bayesian Approach for Inferring Diet in Aquatic Consumers

    PubMed Central

    Holtgrieve, Gordon W.; Ward, Eric J.; Ballantyne, Ashley P.; Burns, Carolyn W.; Kainz, Martin J.; Müller-Navarra, Doerthe C.; Persson, Jonas; Ravet, Joseph L.; Strandberg, Ursula; Taipale, Sami J.; Ahlgren, Gunnel

    2015-01-01

    We modified the stable isotope mixing model MixSIR to infer primary producer contributions to consumer diets based on their fatty acid composition. To parameterize the algorithm, we generated a ‘consumer-resource library’ of FA signatures of Daphnia fed different algal diets, using 34 feeding trials representing diverse phytoplankton lineages. This library corresponds to the resource or producer file in classic Bayesian mixing models such as MixSIR or SIAR. Because this library is based on the FA profiles of zooplankton consuming known diets, and not the FA profiles of algae directly, trophic modification of consumer lipids is directly accounted for. To test the model, we simulated hypothetical Daphnia comprised of 80% diatoms, 10% green algae, and 10% cryptophytes and compared the FA signatures of these known pseudo-mixtures to outputs generated by the mixing model. The algorithm inferred these simulated consumers were comprised of 82% (63-92%) [median (2.5th to 97.5th percentile credible interval)] diatoms, 11% (4-22%) green algae, and 6% (0-25%) cryptophytes. We used the same model with published phytoplankton stable isotope (SI) data for δ13C and δ15N to examine how a SI based approach resolved a similar scenario. With SI, the algorithm inferred that the simulated consumer assimilated 52% (4-91%) diatoms, 23% (1-78%) green algae, and 18% (1-73%) cyanobacteria. The accuracy and precision of SI based estimates was extremely sensitive to both resource and consumer uncertainty, as well as the trophic fractionation assumption. These results indicate that when using only two tracers with substantial uncertainty for the putative resources, as is often the case in this class of analyses, the underdetermined constraint in consumer-resource SI analyses may be intractable. 
The FA based approach alleviated the underdetermined constraint because many more FA biomarkers were utilized (n > 20), different primary producers (e.g., diatoms, green algae, and cryptophytes) have very characteristic FA compositions, and the FA profiles of many aquatic primary consumers are strongly influenced by their diets. PMID:26114945
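    The Bayesian mixing logic — a Dirichlet prior over diet proportions, a Gaussian likelihood for the consumer's tracer signature, and importance weighting, as in MixSIR-style samplers — can be sketched as follows. The three sources, four tracers, and all numeric values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical source library: 3 producers x 4 fatty-acid tracers
# (rows: diatoms, green algae, cryptophytes), with tracer SDs.
src_mean = np.array([[10.0, 4.0, 1.0, 6.0],
                     [ 2.0, 8.0, 5.0, 1.0],
                     [ 5.0, 1.0, 9.0, 3.0]])
src_sd = np.full_like(src_mean, 0.5)

true_p = np.array([0.8, 0.1, 0.1])        # simulated consumer diet
consumer = true_p @ src_mean              # observed signature

# Dirichlet prior draws over diet proportions; Gaussian likelihood of the
# consumer signature given the mixture; normalized importance weights.
props = rng.dirichlet(np.ones(3), size=200_000)
mix_mean = props @ src_mean
mix_sd = np.sqrt((props ** 2) @ (src_sd ** 2))
loglik = -0.5 * np.sum(((consumer - mix_mean) / mix_sd) ** 2
                       + np.log(2 * np.pi * mix_sd ** 2), axis=1)
w = np.exp(loglik - loglik.max())
post_mean = (w[:, None] * props).sum(axis=0) / w.sum()
```

    With four informative tracers the posterior concentrates near the simulated 80/10/10 diet, mirroring the pseudo-mixture test in the abstract; with only two noisy tracers the same machinery returns the broad, weakly identified intervals the SI comparison illustrates.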

  9. Characterisation and modelling of mixing processes in groundwaters of a potential geological repository for nuclear wastes in crystalline rocks of Sweden.

    PubMed

    Gómez, Javier B; Gimeno, María J; Auqué, Luis F; Acero, Patricia

    2014-01-15

    This paper presents the mixing modelling results for the hydrogeochemical characterisation of groundwaters in the Laxemar area (Sweden). This area is one of the two sites that have been investigated, under the financial patronage of the Swedish Nuclear Fuel and Waste Management Co. (SKB), as possible candidates for hosting the proposed repository for the long-term storage of spent nuclear fuel. The classical geochemical modelling, interpreted in the light of the palaeohydrogeological history of the system, has shown that the driving process in the geochemical evolution of this groundwater system is the mixing between four end-member waters: a deep and old saline water, a glacial meltwater, an old marine water, and a meteoric water. In this paper we focus on mixing and its effects on the final chemical composition of the groundwaters using a comprehensive methodology that combines principal component analysis with mass balance calculations. This methodology allows us to test several combinations of end-member waters and several combinations of compositional variables in order to find optimal solutions in terms of mixing proportions. We have applied this methodology to a dataset of 287 groundwater samples from the Laxemar area collected and analysed by SKB. The best model found uses four conservative elements (Cl, Br, oxygen-18 and deuterium), and computes mixing proportions with respect to three end-member waters (saline, glacial and meteoric). Once the first-order effect of mixing has been taken into account, water-rock interaction can be used to explain the remaining variability. In this way, the chemistry of each water sample can be obtained by using the mixing proportions for the conservative elements, only affected by mixing, or combining the mixing proportions and the chemical reactions for the non-conservative elements in the system, establishing the basis for predictive calculations. © 2013 Elsevier B.V. All rights reserved.
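    The core mass-balance step — computing the mixing proportions of end-member waters from conservative tracers — can be sketched with a constrained least-squares solve. The end-member compositions below are hypothetical stand-ins, not the Laxemar values:

```python
import numpy as np

# Hypothetical end-member compositions: rows = saline, glacial, meteoric;
# columns = four conservative tracers (Cl, Br, delta-18O, delta-2H).
end_members = np.array([[180.0, 1.20, -12.0,  -90.0],
                        [  0.5, 0.01, -21.0, -158.0],
                        [  5.0, 0.05,  -9.0,  -70.0]])

def mixing_proportions(sample):
    # Least squares over the tracers plus a heavily weighted mass-balance
    # row enforcing that the mixing proportions sum to one.
    A = np.vstack([end_members.T, 100.0 * np.ones(3)])
    b = np.concatenate([sample, [100.0]])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

true_f = np.array([0.2, 0.3, 0.5])
sample = true_f @ end_members      # a perfectly mixed synthetic sample
f = mixing_proportions(sample)
```

    For real samples the residual of this fit is not zero; in the paper's framework that residual is the part attributed to water-rock interaction rather than mixing.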

  10. Simulation of particle diversity and mixing state over Greater Paris: a model-measurement inter-comparison.

    PubMed

    Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C

    2016-07-18

    Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). 
The spatial distribution of the mixing-state index shows that the particles are not well mixed in urban areas, while they are well mixed in rural areas. This indicates that the assumption of internal mixing traditionally used in transport chemistry models is well suited to rural areas, but this assumption is less realistic for urban areas close to emission sources.
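    The particle diversity and mixing-state index of Riemer et al. (2013) are entropy-based quantities and straightforward to compute; the following sketch uses a tiny hypothetical population of three particles and three species (the masses are invented for illustration):

```python
import numpy as np

# Rows = particles, columns = species mass within each particle (arbitrary units)
masses = np.array([[0.8, 0.2, 0.0],
                   [0.1, 0.7, 0.2],
                   [0.3, 0.3, 0.4]])

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

p_i = masses.sum(axis=1) / masses.sum()          # particle mass fractions
p_ia = masses / masses.sum(axis=1, keepdims=True)
p_a = masses.sum(axis=0) / masses.sum()          # bulk species fractions

D_i = np.exp([entropy(row) for row in p_ia])     # per-particle diversity
D_alpha = np.exp((p_i * np.log(D_i)).sum())      # average particle diversity
D_gamma = np.exp(entropy(p_a))                   # bulk population diversity
chi = (D_alpha - 1.0) / (D_gamma - 1.0)          # mixing-state index, 0..1
```

    chi = 0 corresponds to fully external mixing (each particle a single species) and chi = 1 to fully internal mixing (every particle has the bulk composition), which is the scale on which the simulated 69% and measured 59% above are compared.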

  11. Poverty, hunger, education, and residential status impact survival in HIV.

    PubMed

    McMahon, James; Wanke, Christine; Terrin, Norma; Skinner, Sally; Knox, Tamsin

    2011-10-01

    Despite combination antiretroviral therapy (ART), HIV infected people have higher mortality than uninfected people. Lower socioeconomic status (SES) predicts higher mortality in many chronic illnesses, but data in people with HIV are limited. We evaluated 878 HIV infected individuals followed from 1995 to 2005. Cox proportional hazards for all-cause mortality were estimated for SES measures and other factors. Mixed effects analyses examined how SES impacts factors predicting death. The 200 who died were older, had lower CD4 counts, and higher viral loads (VL). Age, transmission category, education, albumin, CD4 counts, VL, hunger, and poverty predicted death in univariate analyses; age, CD4 counts, albumin, VL, and poverty in the multivariable model. Mixed models showed associations between (1) CD4 counts with education and hunger; (2) albumin with education, homelessness, and poverty; and (3) VL with education and hunger. SES contributes to mortality in HIV infected persons directly and indirectly, and should be a target of health policy in this population.

  12. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
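    As a language-neutral counterpart to the SPSS MIXED procedure the article walks through, a random-intercept LMM for longitudinal data can be sketched in Python. The data are simulated, and the moment-based variance-component estimates below are a deliberate simplification of the REML estimation SPSS actually performs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical longitudinal data set: 100 subjects, 5 visits each, with a
# subject-level random intercept (sd 2) and residual noise (sd 1).
n_sub, n_vis = 100, 5
time = np.tile(np.arange(n_vis, dtype=float), n_sub)
subj = np.repeat(np.arange(n_sub), n_vis)
u = rng.normal(scale=2.0, size=n_sub)
y = 10.0 + 1.5 * time + u[subj] + rng.normal(scale=1.0, size=n_sub * n_vis)

# Fixed effects by OLS (consistent here, though not fully efficient)
X = np.column_stack([np.ones_like(time), time])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = (y - X @ beta).reshape(n_sub, n_vis)

# Moment-based variance components: within- and between-subject variance
sigma2_e = float(np.mean(resid.var(axis=1, ddof=1)))
sigma2_u = float(resid.mean(axis=1).var(ddof=1) - sigma2_e / n_vis)
```

    The recovered slope (~1.5), residual variance (~1) and random-intercept variance (~4) correspond to the fixed-effect and covariance-parameter tables that the SPSS LMM procedure reports.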

  13. Estimates of lake trout (Salvelinus namaycush) diet in Lake Ontario using two and three isotope mixing models

    USGS Publications Warehouse

    Colborne, Scott F.; Rush, Scott A.; Paterson, Gordon; Johnson, Timothy B.; Lantry, Brian F.; Fisk, Aaron T.

    2016-01-01

    Recent development of multi-dimensional stable isotope models for estimating both foraging patterns and niches has provided the analytical tools to further assess the food webs of freshwater populations. One approach to refine predictions from these analyses is to add a third isotope to the more common two-isotope carbon and nitrogen mixing models, increasing the power to resolve different prey sources. We compared predictions made with two-isotope carbon and nitrogen mixing models and three-isotope models that also included sulphur (δ34S) for the diets of Lake Ontario lake trout (Salvelinus namaycush). We determined the isotopic compositions of lake trout and potential prey fishes sampled from Lake Ontario and then used quantitative estimates of resource use generated by two- and three-isotope Bayesian mixing models (SIAR) to infer feeding patterns of lake trout. Both two- and three-isotope models indicated that alewife (Alosa pseudoharengus) and round goby (Neogobius melanostomus) were the primary prey items, but the three-isotope models were more consistent with recent measures of prey fish abundances and lake trout diets. The lake trout sampled directly from the hatcheries had isotopic compositions derived from the hatchery food which were distinctively different from those derived from the natural prey sources. Those hatchery signals were retained for months after release, raising the possibility of distinguishing hatchery-reared yearlings from similarly sized naturally reproduced lake trout based on isotopic compositions. Addition of a third isotope produced mixing model results confirming that round goby have become an important component of lake trout diet and may be overtaking alewife as a prey resource.

  14. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  15. Study on processing immiscible materials in zero gravity

    NASA Technical Reports Server (NTRS)

    Reger, J. L.; Mendelson, R. A.

    1975-01-01

    An experimental investigation was conducted to evaluate mixing immiscible metal combinations under several process conditions. Under one gravity, these included thermal processing, thermal plus electromagnetic mixing, and thermal plus acoustic mixing. The same process methods were applied during free fall in the MSFC drop tower facility. The design of the drop tower apparatus providing the electromagnetic and acoustic mixing equipment is included, and a thermal model was prepared to design the specimen and cooling procedure. Materials systems studied were Ca-La, Cd-Ga and Al-Bi; evaluation of the processed samples included morphology and electronic property measurements. The morphology was characterized using optical and scanning electron microscopy and microprobe analyses. Characterization of the superconducting transition temperatures was made using an impedance-change tuned-coil method.

  16. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out in parallel with the development of 3-dimensional process modelling, providing a chain of knowledge that feeds back into phenomenon research. Knowledge gathered in model validation studies, together with up-to-date parameter databases, is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modelling the combustion and the formation of char and volatiles for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behaviour in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and are used together with lateral temperature profiles in the bed and upper furnace to determine solid mixing and combustion model parameters. Modelling of char- and volatile-based NO formation is followed by analysis of the oxidizing and reducing regions formed by the lower-furnace design and the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. 
This paper presents CFB process analysis focused on combustion and NO profiles in pilot and industrial scale bituminous coal combustion.

  17. Socioeconomic Strata, Mobile Technology, and Education: A Comparative Analysis

    ERIC Educational Resources Information Center

    Kim, Paul; Hagashi, Teresita; Carillo, Laura; Gonzales, Irina; Makany, Tamas; Lee, Bommi; Garate, Alberto

    2011-01-01

    Mobile devices are highly portable, easily distributable, substantially affordable, and have the potential to be pedagogically complementary resources in education. This study, incorporating mixed method analyses, discusses the implications of a mobile learning technology-based learning model in two public primary schools near the Mexico-USA…

  18. Modelling the vertical distribution of Prochlorococcus and Synechococcus in the North Pacific Subtropical Ocean.

    PubMed

    Rabouille, Sophie; Edwards, Christopher A; Zehr, Jonathan P

    2007-10-01

    A simple model was developed to examine the vertical distribution of Prochlorococcus and Synechococcus ecotypes in the water column, based on their adaptation to light intensity. Model simulations were compared with a 14-year time series of Prochlorococcus and Synechococcus cell abundances at Station ALOHA in the North Pacific Subtropical Gyre. Data were analysed to examine spatial and temporal patterns in abundances and their ranges of variability in the euphotic zone, the surface mixed layer and the layer in the euphotic zone but below the base of the mixed layer. Model simulations show that the apparent occupation of the whole euphotic zone by a genus can be the result of a co-occurrence of different ecotypes that segregate vertically. The segregation of ecotypes can result simply from differences in light response. A sensitivity analysis of the model, performed on the parameter alpha (initial slope of the light-response curve) and the DIN concentration in the upper water column, demonstrates that the model successfully reproduces the observed range of vertical distributions. Results support the idea that intermittent mixing events may have important ecological and geochemical impacts on the phytoplankton community at Station ALOHA.
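    Vertical segregation driven purely by differences in light response can be illustrated with a toy calculation. The saturating growth function with a photoinhibition term, the parameter values, and the attenuation coefficient below are all assumptions chosen for illustration, not the article's model:

```python
import numpy as np

# Depth grid and exponentially attenuated irradiance (assumed I0 and k)
z = np.linspace(0.0, 200.0, 401)          # depth [m]
I = 1500.0 * np.exp(-0.04 * z)            # irradiance at depth z

def growth(I, mu_max, alpha, inhib):
    # Saturating light response with initial slope alpha (the parameter
    # varied in the article's sensitivity analysis) and a simple
    # photoinhibition factor added for this sketch.
    return mu_max * (1.0 - np.exp(-alpha * I / mu_max)) * np.exp(-inhib * I)

hl = growth(I, 0.6, 0.005, 0.0005)   # hypothetical high-light ecotype
ll = growth(I, 0.4, 0.05, 0.004)     # hypothetical low-light ecotype

z_hl = z[np.argmax(hl)]              # depth of maximum growth, each ecotype
z_ll = z[np.argmax(ll)]
```

    The ecotype with the steeper initial slope but stronger photoinhibition peaks tens of metres deeper, so the two together occupy the whole euphotic zone — the co-occurrence effect described in the abstract.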

  19. Innovative Equipment and Production Method for Mixed Fodder in the Conditions of Agricultural Enterprises

    NASA Astrophysics Data System (ADS)

    Sabiev, U. K.; Demchuk, E. V.; Myalo, V. V.; Soyunov, A. S.

    2017-07-01

    It is recommended to feed cattle and poultry with grain fodder in the form of a feed mixture balanced in its content. Feeding grain fodder in the form of stock feed is inefficient and economically unreasonable. The article is devoted to a topical problem: the preparation of mixed fodder in the conditions of agricultural enterprises. A review and critical analysis of mixed fodder assemblies and aggregates is given. Structural and technical schemes were developed for a small-size mixed fodder aggregate with intensified attachments of vibrating and percussive action for preparing bulk feed mixtures in the conditions of agricultural enterprises. A mixed fodder aggregate is also suggested for preparation at the place of direct consumption, from the enterprise's own grain fodder production together with purchased protein and vitamin supplements. The aggregate produces mixed fodder of high uniformity at low energy cost and production price, which makes it profitable for livestock breeding. A model line-up of the suggested mixed fodder aggregates with different productivity, for both small and big agricultural enterprises, is considered.

  20. Multilevel mixed effects parametric survival models using adaptive Gauss-Hermite quadrature with application to recurrent events and individual participant data meta-analysis.

    PubMed

    Crowther, Michael J; Look, Maxime P; Riley, Richard D

    2014-09-28

    Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
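    The key numerical device — integrating a normally distributed random effect out of the likelihood with Gauss-Hermite quadrature — can be sketched for the simplest member of the model family above, an exponential proportional-hazards model with a cluster-level random intercept. The survival times and rate are hypothetical, and the rule shown is the nonadaptive one:

```python
import numpy as np

# Gauss-Hermite nodes/weights; for b ~ N(0, sigma^2) substitute
# b = sqrt(2)*sigma*x and scale the weights by 1/sqrt(pi).
nodes, weights = np.polynomial.hermite.hermgauss(15)

def cluster_loglik(times, events, log_lam, sigma):
    # Marginal log-likelihood of one cluster under an exponential hazard
    # lam_ij = exp(log_lam + b_i), with the frailty b_i integrated out.
    b = np.sqrt(2.0) * sigma * nodes
    lam = np.exp(log_lam + b[:, None])            # nodes x subjects
    ll = (events * np.log(lam) - lam * times).sum(axis=1)
    return float(np.log((weights / np.sqrt(np.pi)) @ np.exp(ll)))

t = np.array([1.0, 2.0, 0.5])
d = np.array([1.0, 0.0, 1.0])                     # event indicators
ll0 = cluster_loglik(t, d, np.log(0.7), 1e-8)     # sigma -> 0 limit
exact = float((d * np.log(0.7) - 0.7 * t).sum())  # no-frailty log-likelihood
```

    As sigma approaches zero the quadrature collapses to the ordinary fixed-effects likelihood, a useful sanity check; the adaptive variant in the paper recentres and rescales the nodes per cluster to improve accuracy with few quadrature points.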

  1. Modeling condensation with a noncondensable gas for mixed convection flow

    NASA Astrophysics Data System (ADS)

    Liao, Yehong

    2007-05-01

    This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model developed herein comprises three components: a convection regime map; a mixed convection correlation; and a generalized diffusion layer model. These components were developed to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented in MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied the effects of the bulk velocity on a basic natural convection condensation process and set up conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied the effects of the buoyancy force on a basic forced convection condensation process and set up conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes. 
In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface using the Clausius-Clapeyron equation. The model was developed on a mass basis instead of a molar basis to be consistent with general conservation equations. It was found that vapor diffusion is not only driven by a gradient of the molar fraction but also a gradient of the mixture molecular weight at the diffusion layer.

  2. The Divergent Meanings of Life Satisfaction: Item Response Modeling of the Satisfaction with Life Scale in Greenland and Norway

    ERIC Educational Resources Information Center

    Vitterso, Joar; Biswas-Diener, Robert; Diener, Ed

    2005-01-01

    Cultural differences in response to the Satisfaction With Life Scale (SWLS) items is investigated. Data were fit to a mixed Rasch model in order to identify latent classes of participants in a combined sample of Norwegians (N = 461) and Greenlanders (N = 180). Initial analyses showed no mean difference in life satisfaction between the two…

  3. Analytical solution for reactive solute transport considering incomplete mixing within a reference elementary volume

    NASA Astrophysics Data System (ADS)

    Chiogna, Gabriele; Bellin, Alberto

    2013-05-01

    The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. Subsequently, several attempts have been made to model this experiment, considering either spatial segregation of the reactants, non-Fickian transport via a Continuous Time Random Walk (CTRW), or an effective upscaled time-dependent kinetic reaction term. Previous analyses of these experimental results showed that, at the Darcy scale, conservative solute transport is well described by a standard advection dispersion equation, which assumes complete mixing at the pore scale. However, reactive transport is significantly affected by incomplete mixing at smaller scales, i.e., within a reference elementary volume (REV). We consider here the family of equilibrium reactions for which the concentration of the reactants and the product can be expressed as a function of the mixing ratio, the concentration of a fictitious non-reactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration is equal to the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and an instantaneous reaction kinetic. 
The results are obtained applying analytical solutions both for conservative and for reactive solute transport, thereby providing a method to handle the effect of incomplete mixing on multispecies reactive solute transport, which is simpler than other previously developed methods.
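    The central idea — replacing the single Darcy-scale mixing ratio with a Beta distribution inside the REV — can be sketched for an instantaneous bimolecular reaction. The mean, variance, and initial concentrations below are hypothetical:

```python
import numpy as np

# Instantaneous reaction A + B -> C: for a given mixing ratio X the product
# concentration is min(X*A0, (1-X)*B0) (equal inputs assumed here).
A0 = B0 = 1.0

def product(X):
    return np.minimum(X * A0, (1.0 - X) * B0)

# REV-scale mean and variance of the mixing ratio; Beta parameters by the
# method of moments (in the paper the variance decays in time as a power law).
mean, var = 0.5, 0.04
nu = mean * (1.0 - mean) / var - 1.0
a, b = mean * nu, (1.0 - mean) * nu

X = np.random.default_rng(3).beta(a, b, size=200_000)
incomplete = float(product(X).mean())  # expectation over the Beta distribution
complete = float(product(mean))        # complete-mixing (Darcy-scale) value
```

    Because the product function is concave in X, averaging over the Beta distribution always yields less product than evaluating at the mean — exactly the complete-mixing overestimation observed by Gramling et al.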

  4. Precipitation and growth of barite within hydrothermal vent deposits from the Endeavour Segment, Juan de Fuca Ridge

    NASA Astrophysics Data System (ADS)

    Jamieson, John William; Hannington, Mark D.; Tivey, Margaret K.; Hansteen, Thor; Williamson, Nicole M.-B.; Stewart, Margaret; Fietzke, Jan; Butterfield, David; Frische, Matthias; Allen, Leigh; Cousens, Brian; Langer, Julia

    2016-01-01

    Hydrothermal vent deposits form on the seafloor as a result of cooling and mixing of hot hydrothermal fluids with cold seawater. Amongst the major sulfide and sulfate minerals that are preserved at vent sites, barite (BaSO4) is unique because it requires the direct mixing of Ba-rich hydrothermal fluid with sulfate-rich seawater in order for precipitation to occur. Because of its extremely low solubility, barite crystals preserve geochemical fingerprints associated with conditions of formation. Here, we present data from petrographic and geochemical analyses of hydrothermal barite from the Endeavour Segment of the Juan de Fuca Ridge, northeast Pacific Ocean, in order to determine the physical and chemical conditions under which barite precipitates within seafloor hydrothermal vent systems. Petrographic analyses of 22 barite-rich samples show a range of barite crystal morphologies: dendritic and acicular barite forms near the exterior vent walls, whereas larger bladed and tabular crystals occur within the interior of chimneys. A two component mixing model based on Sr concentrations and 87Sr/86Sr of both seawater and hydrothermal fluid, combined with 87Sr/86Sr data from whole rock and laser-ablation ICP-MS analyses of barite crystals indicate that barite precipitates from mixtures containing as low as 17% and as high as 88% hydrothermal fluid component, relative to seawater. Geochemical modelling of the relationship between aqueous species concentrations and degree of fluid mixing indicates that Ba2+ availability is the dominant control on mineral saturation. Observations combined with model results support that dendritic barite forms from fluids of less than 40% hydrothermal component and with a saturation index greater than ∼0.6, whereas more euhedral crystals form at lower levels of supersaturation associated with greater contributions of hydrothermal fluid. 
Fluid inclusions within barite indicate formation temperatures of between ∼120 °C and 240 °C during barite crystallization. The comparison of fluid inclusion formation temperatures to modelled mixing temperatures indicates that conductive cooling of the vent fluid accounts for 60-120 °C reduction in fluid temperature. Strontium zonation within individual barite crystals records fluctuations in the amount of conductive cooling within chimney walls that may result from cyclical oscillations in hydrothermal fluid flux. Barite chemistry and morphology can be used as a reliable indicator for past conditions of mineralization within both extinct seafloor hydrothermal deposits and ancient land-based volcanogenic massive sulfide deposits.
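    The two-component Sr mixing calculation can be sketched as follows; the end-member Sr concentrations and the hydrothermal 87Sr/86Sr value are hypothetical stand-ins for the measured Endeavour end-members (modern seawater 87Sr/86Sr is ~0.70918):

```python
# The isotope ratio of a mixture is the Sr-concentration-weighted average
# of the end-member ratios:
#   R_mix = (f*C_hf*R_hf + (1-f)*C_sw*R_sw) / (f*C_hf + (1-f)*C_sw)
C_sw, R_sw = 7.9, 0.70918    # seawater Sr [ppm] and 87Sr/86Sr
C_hf, R_hf = 12.0, 0.7038    # assumed hydrothermal end-member

def hydrothermal_fraction(R_mix):
    # Invert the mass balance for f, the hydrothermal-fluid fraction
    num = C_sw * (R_sw - R_mix)
    return num / (num + C_hf * (R_mix - R_hf))

f_low = hydrothermal_fraction(0.7088)    # near-seawater barite ratio
f_high = hydrothermal_fraction(0.7042)   # near-vent-fluid barite ratio
```

    Applying this inversion to 87Sr/86Sr measured in barite crystals is what yields the 17-88% range of hydrothermal-fluid contribution reported above.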

  5. Accuracies of univariate and multivariate genomic prediction models in African cassava.

    PubMed

    Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc

    2017-12-04

    Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for a single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound symmetric multi-environment model (uE) parameterized as a univariate multi-kernel model to a multivariate (ME) multi-environment mixed model that accounts for genotype-by-environment interaction for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10-repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, amounting to an average improvement in prediction accuracy of 40%. For Scenario 2, we observed that the ME model had, on average across all locations and traits, a 12% higher prediction accuracy than the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.
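
    The repeated fivefold cross-validation used to score the models can be sketched independently of any particular mixed model. The predictor below is a deliberately naive similarity-weighted average (a stand-in, not the authors' uT/MT/uE/ME models), and all genotype and phenotype data are simulated:

```python
import random, statistics

def pearson(xs, ys):
    """Pearson correlation between predicted and observed values (the accuracy metric)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def similarity(gi, gj):
    # centred-marker dot product: a crude stand-in for a genomic relationship entry
    return sum((a - 1) * (b - 1) for a, b in zip(gi, gj))

def cv_accuracy(genos, phenos, k=5, repeats=10, seed=1):
    """Mean Pearson accuracy over repeated k-fold cross-validation."""
    n, accs = len(phenos), []
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]       # k disjoint validation sets
        for test in folds:
            train = [j for j in range(n) if j not in test]
            preds, obs = [], []
            for t in test:
                w = [max(similarity(genos[t], genos[j]), 0.0) for j in train]
                tot = sum(w) or 1.0
                preds.append(sum(wi * phenos[j] for wi, j in zip(w, train)) / tot)
                obs.append(phenos[t])
            accs.append(pearson(preds, obs))
    return statistics.mean(accs)

# Simulated toy data: 40 genotypes, 50 markers, additive phenotype plus noise
rng = random.Random(0)
genos = [[rng.randint(0, 2) for _ in range(50)] for _ in range(40)]
effects = [rng.gauss(0.0, 1.0) for _ in range(50)]
phenos = [sum(e * g for e, g in zip(effects, gi)) + rng.gauss(0.0, 2.0) for gi in genos]
acc = cv_accuracy(genos, phenos, repeats=2)
```

    Swapping the toy predictor for a fitted mixed model, while keeping the same fold bookkeeping, reproduces the evaluation scheme the paper describes.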

  6. Environmental and international tariffs in a mixed duopoly

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.; Ferreira, Flávio

    2013-10-01

    In this paper, we study the effects of environmental and trade policies in an international mixed duopoly serving two markets, in which the public firm maximizes the sum of consumer surplus and its profit. We also analyse the effects of privatization. The model has two stages. In the first stage, governments choose environmental taxes and import tariffs, simultaneously. Then, the firms engage in a Cournot competition, choosing output levels for the domestic market and to export. We compare the results obtained under three different orders of moves in the firms' decision-making.

  7. MIXING MODELS IN ANALYSES OF DIET USING MULTIPLE STABLE ISOTOPES: A CRITIQUE

    EPA Science Inventory

    Stable isotopes have become widely used in ecology to quantify the importance of different sources based on their isotopic signature. One example of this has been the determination of food webs, where the isotopic signatures of a predator and various prey items can be used to de...

  8. UNCERTAINTY IN SOURCE PARTITIONING USING STABLE ISOTOPES

    EPA Science Inventory

    Stable isotope analyses are often used to quantify the contribution of multiple sources to a mixture, such as proportions of food sources in an animal's diet, C3 vs. C4 plant inputs to soil organic carbon, etc. Linear mixing models can be used to partition two sources with a sin...
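
    For two sources and a single isotope, the linear mixing model has a closed-form solution, and a first-order error-propagation formula in the style of Phillips and Gregg gives the uncertainty in the source proportion. A sketch with illustrative δ13C values (the numbers are hypothetical):

```python
def two_source_fractions(d_mix, d_1, d_2):
    """Two-source, one-isotope linear mixing model: solve for source proportions."""
    f1 = (d_mix - d_2) / (d_1 - d_2)
    return f1, 1.0 - f1

def f1_variance(f1, d_1, d_2, var_mix, var_1, var_2):
    # first-order error propagation for f1 from the signature variances
    return (var_mix + f1 ** 2 * var_1 + (1.0 - f1) ** 2 * var_2) / (d_1 - d_2) ** 2

# Illustrative d13C values: mixture at -24 per mil between sources at -28 and -12
f1, f2 = two_source_fractions(-24.0, -28.0, -12.0)
var_f1 = f1_variance(f1, -28.0, -12.0, 0.04, 0.04, 0.04)  # all variances 0.04
```

    Note how the variance of f1 shrinks as the two source signatures (d_1, d_2) become more distinct, which is the central point of uncertainty analysis in source partitioning.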

  9. Impact of Case Mix Severity on Quality Improvement in a Patient-centered Medical Home (PCMH) in the Maryland Multi-Payor Program.

    PubMed

    Khanna, Niharika; Shaya, Fadia T; Chirikov, Viktor V; Sharp, David; Steffen, Ben

    2016-01-01

    We present data on quality of care (QC) improvement in 35 of 45 National Quality Forum metrics reported annually by 52 primary care practices recognized as patient-centered medical homes (PCMHs) that participated in the Maryland Multi-Payor Program from 2011 to 2013. We assigned QC metrics to (1) chronic, (2) preventive, and (3) mental health care domains. The study used a panel data design with no control group. Using longitudinal fixed-effects regressions, we modeled QC and case mix severity in a PCMH. Overall, 35 of 45 quality metrics reported by 52 PCMHs demonstrated improvement over 3 years, and case mix severity did not affect the achievement of quality improvement. From 2011 to 2012, QC increased by 0.14 (P < .01) for chronic, 0.15 (P < .01) for preventive, and 0.34 (P < .01) for mental health care domains; from 2012 to 2013 these domains increased by 0.03 (P = .06), 0.04 (P = .05), and 0.07 (P = .12), respectively. In univariate analyses, lower National Commission on Quality Assurance PCMH level was associated with higher QC for the mental health care domain, whereas case mix severity did not correlate with QC. In multivariate analyses, higher QC correlated with larger practices, greater proportion of older patients, and readmission visits. Rural practices had higher proportions of Medicaid patients, lower QC, and higher QC improvement in interaction analyses with time. The gains in QC in the chronic disease domain, the preventive care domain, and, most significantly, the mental health care domain were observed over time regardless of patient case mix severity. QC improvement was generally not modified by practice characteristics, except for rurality. © Copyright 2016 by the American Board of Family Medicine.

  10. Conventional Energy and Macronutrient Variables Distort the Accuracy of Children’s Dietary Reports: Illustrative Data from a Validation Study of Effect of Order Prompts

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective: Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed which classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods: 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex-x-order prompt interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results: Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions: Conventional variables overestimated reporting accuracy and masked order prompt and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308

  11. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics, and so the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
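
    A small simulation, with hypothetical numbers, shows the core problem the authors describe: repeated measures on the same host are correlated, so treating them as independent replicates understates the standard error of the group mean:

```python
import random, statistics

rng = random.Random(42)
n_hosts, n_reps = 20, 10

# each host gets its own baseline (a random intercept); its repeated measures share it
host_effects = [rng.gauss(0.0, 2.0) for _ in range(n_hosts)]
data = [(h, host_effects[h] + rng.gauss(0.0, 1.0))
        for h in range(n_hosts) for _ in range(n_reps)]

values = [v for _, v in data]
# naive SE: pretends all 200 points are independent replicates
naive_se = statistics.stdev(values) / len(values) ** 0.5
# host-level SE: collapses to one mean per host, the true effective sample size
host_means = [statistics.mean([v for h, v in data if h == i]) for i in range(n_hosts)]
host_se = statistics.stdev(host_means) / n_hosts ** 0.5
```

    The naive standard error is markedly smaller than the host-level one, which is how ignoring autocorrelation produces overconfident (and potentially incorrect) inference; a mixed effects model handles this partitioning of variance automatically.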

  12. An Assessment of Southern Ocean Water Masses and Sea Ice During 1988-2007 in a Suite of Interannual CORE-II Simulations

    NASA Technical Reports Server (NTRS)

    Downes, Stephanie M.; Farneti, Riccardo; Uotila, Petteri; Griffies, Stephen M.; Marsland, Simon J.; Bailey, David; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne; hide

    2015-01-01

    We characterise the representation of the Southern Ocean water mass structure and sea ice within a suite of 15 global ocean-ice models run with the Coordinated Ocean-ice Reference Experiment Phase II (CORE-II) protocol. The main focus is the representation of the present (1988-2007) mode and intermediate waters, thus framing an analysis of winter and summer mixed layer depths; temperature, salinity, and potential vorticity structure; and temporal variability of sea ice distributions. We also consider the interannual variability over the same 20 year period. Comparisons are made between models as well as to observation-based analyses where available. The CORE-II models exhibit several biases relative to Southern Ocean observations, including an underestimation of the model mean mixed layer depths of mode and intermediate water masses in March (associated with greater ocean surface heat gain), and an overestimation in September (associated with greater high latitude ocean heat loss and a more northward winter sea-ice extent). In addition, the models have cold and fresh/warm and salty water column biases centred near 50 deg S. Over the 1988-2007 period, the CORE-II models consistently simulate spatially variable trends in sea-ice concentration, surface freshwater fluxes, mixed layer depths, and 200-700 m ocean heat content. In particular, sea-ice coverage around most of the Antarctic continental shelf is reduced, leading to a cooling and freshening of the near surface waters. The shoaling of the mixed layer is associated with increased surface buoyancy gain, except in the Pacific where sea ice is also influential. The models are in disagreement, despite the common CORE-II atmospheric state, in their spatial pattern of the 20-year trends in the mixed layer depth and sea-ice.

  13. Mixed-grade rejection and its association with overt aggression, relational aggression, anxious-withdrawal, and psychological maladjustment.

    PubMed

    Bowker, Julie C; Etkin, Rebecca G

    2014-01-01

    The authors examined the associations between mixed-grade rejection (rejection by peers in a different school grade), anxious-withdrawal, aggression, and psychological adjustment in a middle school setting. Participants were 181 seventh-grade and 180 eighth-grade students (M age = 13.20 years, SD = 0.68 years) who completed peer nomination and self-report measures in their classes. Analyses indicated that in general, same- and mixed-grade rejection were related to overt and relational aggression, but neither type was related to anxious-withdrawal. Mixed-grade rejection was associated uniquely and negatively with self-esteem for seventh-grade boys, while increasing the loneliness associated with anxious-withdrawal. The results suggest that school-wide models of peer relations may be promising for understanding the ways in which different peer contexts contribute to adjustment in middle school settings.

  14. Boundary Layer Depth In Coastal Regions

    NASA Astrophysics Data System (ADS)

    Porson, A.; Schayes, G.

    Earlier studies of sea-breeze simulations have shown that the sea breeze is a relevant feature of the planetary boundary layer that atmospheric models still struggle to diagnose properly. Based on observations made during the ESCOMPTE campaign over the Mediterranean Sea, different CBL and SBL height estimation methods have been tested with a meso-scale model, TVM. The aim was to compare the critical points of boundary-layer height determination computed from the turbulent kinetic energy profile against other standard estimates. These results have also been analysed with different mixing-length formulations, and the sensitivity to the formulation is further examined for a simple coastal configuration.

  15. Diet behaviour among young people in transition to adulthood (18-25 year olds): a mixed method study.

    PubMed

    Poobalan, Amudha S; Aucott, Lorna S; Clarke, Amanda; Smith, William Cairns S

    2014-01-01

    Background: Young people (18-25 years) during the adolescence/adulthood transition are vulnerable to weight gain and notoriously hard to reach. Despite increased levels of overweight/obesity in this age group, diet behaviour, a major contributor to obesity, is poorly understood. The purpose of this study was to explore diet behaviour among 18-25 year olds with influential factors including attitudes, motivators and barriers. Methods: An explanatory mixed method study design, based on health Behaviour Change Theories was used. Those at University/college and in the community, including those Not in Education, Employment or Training (NEET) were included. An initial quantitative questionnaire survey underpinned by the Theory of Planned Behaviour and Social Cognitive Theory was conducted and the results from this were incorporated into the qualitative phase. Seven focus groups were conducted among similar young people, varying in education and socioeconomic status. Exploratory univariate analysis was followed by multi-staged modelling to analyse the quantitative data. 'Framework Analysis' was used to analyse the focus groups. Results: 1313 questionnaires were analysed. Self-reported overweight/obesity prevalence was 22%, increasing with age, particularly in males. Based on the survey, 40% of young people reported eating an adequate amount of fruits and vegetables and 59% eating regular meals, but 32% reported unhealthy snacking. Based on the statistical modelling, positive attitudes towards diet and high intention (89%) did not translate into healthy diet behaviour. From the focus group discussions, the main motivators for diet behaviour were 'self-appearance' and having 'variety of food'. There were mixed opinions on 'cost' of food and 'taste'. Conclusion: Elements deemed really important to young people have been identified. This mixed method study is the largest in this vulnerable and neglected group covering a wide spectrum of the community. It provides an evidence base to inform tailored interventions for a healthy diet within this age group.

  16. Diet behaviour among young people in transition to adulthood (18–25 year olds): a mixed method study

    PubMed Central

    Poobalan, Amudha S.; Aucott, Lorna S.; Clarke, Amanda; Smith, William Cairns S.

    2014-01-01

    Background: Young people (18–25 years) during the adolescence/adulthood transition are vulnerable to weight gain and notoriously hard to reach. Despite increased levels of overweight/obesity in this age group, diet behaviour, a major contributor to obesity, is poorly understood. The purpose of this study was to explore diet behaviour among 18–25 year olds with influential factors including attitudes, motivators and barriers. Methods: An explanatory mixed method study design, based on health Behaviour Change Theories was used. Those at University/college and in the community, including those Not in Education, Employment or Training (NEET) were included. An initial quantitative questionnaire survey underpinned by the Theory of Planned Behaviour and Social Cognitive Theory was conducted and the results from this were incorporated into the qualitative phase. Seven focus groups were conducted among similar young people, varying in education and socioeconomic status. Exploratory univariate analysis was followed by multi-staged modelling to analyse the quantitative data. ‘Framework Analysis’ was used to analyse the focus groups. Results: 1313 questionnaires were analysed. Self-reported overweight/obesity prevalence was 22%, increasing with age, particularly in males. Based on the survey, 40% of young people reported eating an adequate amount of fruits and vegetables and 59% eating regular meals, but 32% reported unhealthy snacking. Based on the statistical modelling, positive attitudes towards diet and high intention (89%) did not translate into healthy diet behaviour. From the focus group discussions, the main motivators for diet behaviour were ‘self-appearance’ and having ‘variety of food’. There were mixed opinions on ‘cost’ of food and ‘taste’. Conclusion: Elements deemed really important to young people have been identified. This mixed method study is the largest in this vulnerable and neglected group covering a wide spectrum of the community. It provides an evidence base to inform tailored interventions for a healthy diet within this age group. PMID:25750826

  17. Relevance of workplace social mixing during influenza pandemics: an experimental modelling study of workplace cultures.

    PubMed

    Timpka, T; Eriksson, H; Holm, E; Strömgren, M; Ekberg, J; Spreco, A; Dahlström, Ö

    2016-07-01

    Workplaces are one of the most important regular meeting places in society. The aim of this study was to use simulation experiments to examine the impact of different workplace cultures on influenza dissemination during pandemics. The impact is investigated by experiments with defined social-mixing patterns at workplaces using semi-virtual models based on authentic sociodemographic and geographical data from a North European community (population 136 000). A simulated pandemic outbreak was found to affect 33% of the total population in the community with the reference academic-creative workplace culture; virus transmission at the workplace accounted for 10·6% of the cases. A model with a prevailing industrial-administrative workplace culture generated 11% lower incidence than the reference model, while the model with a self-employed workplace culture (also corresponding to a hypothetical scenario with all workplaces closed) produced 20% fewer cases. The model representing an academic-creative workplace culture with restricted workplace interaction generated 12% lower cumulative incidence compared to the reference model. The results display important theoretical associations between workplace social-mixing cultures and community-level incidence rates during influenza pandemics. Social interaction patterns at workplaces should be taken into consideration when analysing virus transmission patterns during influenza pandemics.
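
    The direction of the workplace-closure effect can be illustrated with a deterministic SIR sketch in which transmission is split into community and workplace components. This is a toy compartmental model with hypothetical rates, not the authors' stochastic semi-virtual population model:

```python
def attack_rate(beta_community, beta_work, gamma=0.2, days=500, n=136000, i0=10):
    """Final epidemic size for a discrete-time SIR model with two transmission routes."""
    s, i, r = float(n - i0), float(i0), 0.0
    beta = beta_community + beta_work   # total per-day transmission rate
    for _ in range(days):
        new_inf = beta * s * i / n      # mass-action incidence
        new_rec = gamma * i             # recoveries
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r / n

# Hypothetical rates: closing all workplaces removes the workplace term entirely,
# loosely mirroring the paper's self-employed / all-workplaces-closed scenario
open_rate = attack_rate(0.25, 0.10)     # workplaces open
closed_rate = attack_rate(0.25, 0.0)    # all workplaces closed
```

    Because the two routes are additive here, removing workplace transmission always lowers the cumulative incidence; the paper's contribution is quantifying how much, for realistic workplace-culture mixing patterns.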

  18. Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.

    PubMed

    Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier

    2011-11-01

    One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
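
    As a crude two-stage stand-in for the unconditional linear growth curve model (a), one can fit a least-squares line to each subject's repeated measures and then average the individual coefficients; a full mixed model additionally pools information across subjects. The awareness scores below are hypothetical:

```python
import statistics

def ols_line(times, ys):
    """Least-squares intercept and slope for one subject's repeated measures."""
    tbar, ybar = statistics.mean(times), statistics.mean(ys)
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return ybar - slope * tbar, slope

# Hypothetical awareness scores for three adolescents over four assessments
subjects = {
    "s1": ([0, 1, 2, 3], [2.0, 2.6, 3.1, 3.9]),
    "s2": ([0, 1, 2, 3], [1.5, 1.9, 2.6, 2.8]),
    "s3": ([0, 1, 2, 3], [2.2, 2.9, 3.3, 4.1]),
}
fits = [ols_line(t, y) for t, y in subjects.values()]
mean_intercept = statistics.mean(i for i, _ in fits)   # average starting level
mean_slope = statistics.mean(s for _, s in fits)       # average growth over time
```

    A positive mean slope corresponds to the paper's finding that awareness increased over time; subject-level covariates such as schooling years would enter models (b) and (c) as predictors of the individual intercepts and slopes.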

  19. Neutrinos and flavor symmetries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanimoto, Morimitsu

    2015-07-15

    We discuss the recent progress of flavor models with non-Abelian discrete symmetry in the lepton sector, focusing on θ13 and the CP-violating phase. In both the direct and the indirect approach to the flavor symmetry, a non-vanishing θ13 is predictable. The flavor symmetry combined with a generalised CP symmetry can also predict the CP-violating phase. We present phenomenological analyses of neutrino mixing for typical flavor models.

  20. Sensitivity of the ocean overturning circulation to wind and mixing: theoretical scalings and global ocean models

    NASA Astrophysics Data System (ADS)

    Nikurashin, Maxim; Gunn, Andrew

    2017-04-01

    The meridional overturning circulation (MOC) is a planetary-scale oceanic flow which is of direct importance to the climate system: it transports heat meridionally and regulates the exchange of CO2 with the atmosphere. The MOC is forced by wind and heat and freshwater fluxes at the surface and turbulent mixing in the ocean interior. A number of conceptual theories for the sensitivity of the MOC to changes in forcing have recently been developed and tested with idealized numerical models. However, the skill of the simple conceptual theories to describe the MOC simulated with higher complexity global models remains largely unknown. In this study, we present a systematic comparison of theoretical and modelled sensitivity of the MOC and associated deep ocean stratification to vertical mixing and southern hemisphere westerlies. The results show that theories that simplify the ocean into a single-basin, zonally-symmetric box are generally in a good agreement with a realistic, global ocean circulation model. Some disagreement occurs in the abyssal ocean, where complex bottom topography is not taken into account by simple theories. Distinct regimes, where the MOC has a different sensitivity to wind or mixing, as predicted by simple theories, are also clearly shown by the global ocean model. The sensitivity of the Indo-Pacific, Atlantic, and global basins is analysed separately to validate the conceptual understanding of the upper and lower overturning cells in the theory.

  1. Posttest Analyses of the Steel Containment Vessel Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costello, J.F.; Hessheimer, M.F.; Ludwigsen, J.S.

    A high pressure test of a scale model of a steel containment vessel (SCV) was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. This test is part of a program to investigate the response of representative models of nuclear containment structures to pressure loads beyond the design basis accident. The posttest analyses of this test focused on three areas where the pretest analysis effort did not adequately predict the model behavior during the test. These areas are the onset of global yielding, the strain concentrations around the equipment hatch, and the strain concentrations that led to a small tear near a weld relief opening that was not modeled in the pretest analysis.

  2. A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2016-01-01

    Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely, the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual, the lack-of-fit of individuals' predicted trajectories, and vice versa.
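
    The PA-versus-SS distinction is easy to demonstrate numerically: for a nonlinear (here logistic) curve, averaging the subject-specific curves over the random-effect distribution does not give the curve evaluated at the mean random effect. A Monte Carlo sketch with an assumed random-intercept SD of 2:

```python
import math, random

def logistic(x, b):
    # subject-specific nonlinear response curve with random intercept b
    return 1.0 / (1.0 + math.exp(-(x + b)))

rng = random.Random(7)
bs = [rng.gauss(0.0, 2.0) for _ in range(10000)]  # assumed random-intercept SD = 2

x = 1.0
subject_specific = logistic(x, 0.0)   # SS: the curve for the typical (mean-zero) subject
population_average = sum(logistic(x, b) for b in bs) / len(bs)  # PA: mean of the curves
```

    The population-average value is pulled toward 0.5 relative to the subject-specific curve, so an estimation method that targets PA quantities (e.g., first-order linearization) and one that targets SS quantities (e.g., Gaussian-Hermite quadrature) yield parameters with genuinely different interpretations.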

  3. An independent component analysis confounding factor correction framework for identifying broad impact expression quantitative trait loci

    PubMed Central

    Ju, Jin Hyun; Crystal, Ronald G.

    2017-01-01

    Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. 
In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL. PMID:28505156

  4. An independent component analysis confounding factor correction framework for identifying broad impact expression quantitative trait loci.

    PubMed

    Ju, Jin Hyun; Shenoy, Sushila A; Crystal, Ronald G; Mezey, Jason G

    2017-05-01

    Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. 
In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL.

  5. CFD analysis of jet mixing in low NOx flametube combustors

    NASA Technical Reports Server (NTRS)

    Talpallikar, M. V.; Smith, C. E.; Lai, M. C.; Holdeman, J. D.

    1991-01-01

    The Rich-burn/Quick-mix/Lean-burn (RQL) combustor was identified as a potential gas turbine combustor concept to reduce NO(x) emissions in High Speed Civil Transport (HSCT) aircraft. To demonstrate reduced NO(x) levels, cylindrical flametube versions of RQL combustors are being tested at NASA Lewis Research Center. A critical technology needed for the RQL combustor is a method of quickly mixing by-pass combustion air with rich-burn gases. Jet mixing in a cylindrical quick-mix section was numerically analyzed. The quick-mix configuration was five inches in diameter and employed twelve radial-inflow slots. The numerical analyses were performed with an advanced, validated 3-D Computational Fluid Dynamics (CFD) code named REFLEQS. Parametric variation of jet-to-mainstream momentum flux ratio (J) and slot aspect ratio was investigated. Both non-reacting and reacting analyses were performed. Results showed mixing and NO(x) emissions to be highly sensitive to J and slot aspect ratio. Lowest NO(x) emissions occurred when the dilution jet penetrated to approximately mid-radius. The viability of using 3-D CFD analyses for optimizing jet mixing was demonstrated.
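
    The jet-to-mainstream momentum flux ratio J varied in the study is conventionally defined as J = (ρ_jet·v_jet²) / (ρ_main·v_main²); a one-line helper with illustrative values (the operating conditions below are assumptions, not the study's test points):

```python
def momentum_flux_ratio(rho_jet, v_jet, rho_main, v_main):
    """Jet-to-mainstream momentum flux ratio J = (rho_j * v_j^2) / (rho_m * v_m^2)."""
    return (rho_jet * v_jet ** 2) / (rho_main * v_main ** 2)

# Illustrative conditions: cool, dense dilution jets entering a hot,
# low-density rich-burn mainstream (hypothetical numbers)
J = momentum_flux_ratio(1.2, 100.0, 0.4, 50.0)
```

    Because jet penetration depth scales with J, sweeping this ratio (together with slot aspect ratio) is how the study locates the mid-radius penetration that minimizes NO(x).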

  6. CFD analysis of jet mixing in low NO(x) flametube combustors

    NASA Technical Reports Server (NTRS)

    Talpallikar, M. V.; Smith, C. E.; Lai, M. C.; Holdeman, J. D.

    1991-01-01

    The Rich-burn/Quick-mix/Lean-burn (RQL) combustor has been identified as a potential gas turbine combustor concept to reduce NO(x) emissions in High Speed Civil Transport (HSCT) aircraft. To demonstrate reduced NO(x) levels, cylindrical flametube versions of RQL combustors are being tested at NASA Lewis Research Center. A critical technology needed for the RQL combustor is a method of quickly mixing by-pass combustion air with rich-burn gases. Jet mixing in a cylindrical quick-mix section was numerically analyzed. The quick-mix configuration was five inches in diameter and employed twelve radial-inflow slots. The numerical analyses were performed with an advanced, validated 3D Computational Fluid Dynamics (CFD) code named REFLEQS. Parametric variation of jet-to-mainstream momentum flux ratio (J) and slot aspect ratio was investigated. Both non-reacting and reacting analyses were performed. Results showed mixing and NO(x) emissions to be highly sensitive to J and slot aspect ratio. Lowest NO(x) emissions occurred when the dilution jet penetrated to approximately mid-radius. The viability of using 3D CFD analyses for optimizing jet mixing was demonstrated.

  7. BeiDou inter-satellite-type bias evaluation and calibration for mixed receiver attitude determination.

    PubMed

    Nadarajah, Nandakumaran; Teunissen, Peter J G; Raziq, Noor

    2013-07-22

The Chinese BeiDou system (BDS), having different types of satellites, is an important addition to the ever-growing family of Global Navigation Satellite Systems (GNSS). It consists of Geostationary Earth Orbit (GEO) satellites, Inclined Geosynchronous Satellite Orbit (IGSO) satellites and Medium Earth Orbit (MEO) satellites. This paper investigates the receiver-dependent bias between these satellite types, for which we coined the name "inter-satellite-type bias" (ISTB), and its impact on mixed receiver attitude determination. Assuming that different receiver types may have different delays/biases for different satellite types, we model the differential ISTBs among the three BeiDou satellite types and investigate their existence and their impact on mixed receiver attitude determination. Our analyses, using real data sets from Curtin's GNSS array consisting of different types of BeiDou-enabled receivers and a series of zero-baseline experiments with BeiDou-enabled receivers, reveal the existence of non-zero ISTBs between different BeiDou satellite types. We then analyse the impact of these biases on BeiDou-only attitude determination using the constrained (C-)LAMBDA method, which exploits the knowledge of baseline length. Results demonstrate that these biases could seriously affect integer ambiguity resolution for attitude determination using mixed receiver types and that a priori correction of these biases will dramatically improve the success rate.

  8. Effects of Morphological Family Size for Young Readers

    ERIC Educational Resources Information Center

    Perdijk, Kors; Schreuder, Robert; Baayen, R. Harald; Verhoeven, Ludo

    2012-01-01

    Dutch children, from the second and fourth grade of primary school, were each given a visual lexical decision test on 210 Dutch monomorphemic words. After removing words not recognized by a majority of the younger group, (lexical) decisions were analysed by mixed-model regression methods to see whether morphological Family Size influenced decision…

  9. The Role of Perceived Maternal Favoritism in Sibling Relations in Midlife

    ERIC Educational Resources Information Center

    Suitor, J. Jill; Sechrist, Jori; Plikuhn, Mari; Pardo, Seth T.; Gilligan, Megan; Pillemer, Karl

    2009-01-01

    Data were collected from 708 adult children nested within 274 later-life families from the Within-Family Differences Study to explore the role of perceived maternal favoritism in the quality of sibling relations in midlife. Mixed-model analyses revealed that regardless of which sibling was favored, perceptions of current favoritism and…

  10. Evidence of major genes affecting stress response in rainbow trout using Bayesian methods of complex segregation analysis

    USDA-ARS?s Scientific Manuscript database

    As a first step towards the genetic mapping of quantitative trait loci (QTL) affecting stress response variation in rainbow trout, we performed complex segregation analyses (CSA) fitting mixed inheritance models of plasma cortisol using Bayesian methods in large full-sib families of rainbow trout. ...

  11. Children's Models about Colours in Nahuatl-Speaking Communities

    ERIC Educational Resources Information Center

    Gallegos-Cázares, Leticia; Flores-Camacho, Fernando; Calderón-Canales, Elena; Perrusquía-Máximo, Elvia; García-Rivera, Beatriz

    2014-01-01

    This paper presents the development and structure of indigenous children's ideas about mixing colours as well as their ideas about each colour, derived from their traditions. The children were interviewed both at school and outside it, and an educational proposal was implemented. Ideas expressed in the school context were analysed using the…

  12. Does H → γγ taste like vanilla new physics?

    NASA Astrophysics Data System (ADS)

    Almeida, L. G.; Bertuzzo, E.; Machado, P. A. N.; Funchal, R. Zukanovich

    2012-11-01

    We analyse the interplay between the Higgs to diphoton rate and electroweak precision measurements constraints in extensions of the Standard Model with new uncolored charged fermions that do not mix with the ordinary ones. We also compute the pair production cross sections for the lightest fermion and compare them with current bounds.

  13. Integrated investigation of the mixed origin of lunar sample 72161,11

    NASA Technical Reports Server (NTRS)

    Basu, A.; Des Marais, D. J.; Hayes, J. M.; Meinschein, W. G.

    1975-01-01

The comminution-agglutination model and the solar-wind implantation-retention model are used to postulate the origins of the particulate components of lunar sample (72161,11), a submillimeter fraction of a surface sample from the dark mantle regolith at LRV-3. Grain-size analysis was performed by wet sieving with liquid argon, and analyses for CO2, CO, CH4, and H2 were carried out by stepwise pyrolysis in a helium atmosphere. The results indicate that the present sample is from a mature regolith, but the agglutinate content is only 30% in the particle-size range between 90 and 177 microns, indicating an apparent departure from steady state. Analyses of the carbon, methane, and hydrogen concentrations in size fractions larger than 149 microns show that the volume-correlated component of these species increases with increased grain size. It is suggested that the observed increase can be explained in terms of mixing of a dominant local population of coarser agglutinates having high carbon and hydrogen concentrations with an imported population of finer agglutinates relatively poor in carbon and hydrogen.

  14. Swarm Intelligence for Urban Dynamics Modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gerard H. E.

    2009-04-16

In this paper, we propose swarm intelligence algorithms to deal with the emergence of dynamical and spatial organization. The goal is to model and simulate the development of spatial centers using multiple criteria. We combine a decentralized approach based on emergent clustering mixed with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive processes. Typically, this model is suitable to analyse and simulate urban dynamics such as gentrification or the dynamics of cultural facilities in urban areas.

  15. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

It is noted that previous investigations into the applicability of the Rayleigh curve model to medium-scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: the uniqueness of the environment studied, the influence of holidays, varying management techniques, and differences in the data studied.

  16. Swarm Intelligence for Urban Dynamics Modelling

    NASA Astrophysics Data System (ADS)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gérard H. E.

    2009-04-01

In this paper, we propose swarm intelligence algorithms to deal with the emergence of dynamical and spatial organization. The goal is to model and simulate the development of spatial centers using multiple criteria. We combine a decentralized approach based on emergent clustering mixed with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive processes. Typically, this model is suitable to analyse and simulate urban dynamics such as gentrification or the dynamics of cultural facilities in urban areas.

  17. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  18. Immersion freezing of internally and externally mixed mineral dust species analyzed by stochastic and deterministic models

    NASA Astrophysics Data System (ADS)

    Wong, B.; Kilthau, W.; Knopf, D. A.

    2017-12-01

Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determination of the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species such as illite, kaolinite and feldspar initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data are recorded at time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic models such as the derivation of the heterogeneous ice nucleation rate coefficient (Jhet), the single contact angle (α) description, the α-PDF model, the active sites representation, and the deterministic model. Parameter sets derived from freezing data of single-component mineral dust samples are evaluated for prediction of cooling-rate-dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
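As a minimal sketch of the stochastic description mentioned above (with assumed parameter values, not the study's data), the heterogeneous ice nucleation rate coefficient Jhet enters isothermal freezing through a survival law, f(t) = 1 - exp(-Jhet * A * t), where A is the immersed particle surface area per droplet:

```python
import math

def frozen_fraction(jhet, area, t):
    """Stochastic frozen fraction after time t (s) at constant temperature,
    for nucleation rate coefficient jhet (cm^-2 s^-1) and immersed
    particle surface area per droplet (cm^2)."""
    return 1.0 - math.exp(-jhet * area * t)

# Assumed values for illustration only:
for t in (1.0, 10.0, 100.0):
    print(round(frozen_fraction(jhet=1e4, area=1e-5, t=t), 3))
```

The frozen fraction grows monotonically with holding time, which is the time dependence the isothermal experiments probe.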

  19. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets

    PubMed Central

    Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.

    2017-01-01

High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data are the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell counts or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787

  20. Quantitative assessment of the flow pattern in the southern Arava Valley (Israel) by environmental tracers and a mixing cell model

    NASA Astrophysics Data System (ADS)

    Adar, E. M.; Rosenthal, E.; Issar, A. S.; Batelaan, O.

    1992-08-01

This paper demonstrates the implementation of a novel mathematical model to quantify subsurface inflows from various sources into the arid alluvial basin of the southern Arava Valley, divided between Israel and Jordan. The model is based on the spatial distribution of environmental tracers and is aimed at basins with complex hydrogeological structure and/or scarce physical hydrologic information. However, a sufficient number of wells and springs is required to allow water sampling for chemical and isotopic analyses. Environmental tracers are used in a multivariable cluster analysis to define potential sources of recharge, and also to delimit homogeneous mixing compartments within the modeled aquifer. Six mixing cells were identified based on 13 constituents. A quantitative assessment of 11 significant subsurface inflows was obtained. Results revealed that the total recharge into the southern Arava basin is around 12.52 × 10^6 m^3 year^-1. The major source of inflow into the alluvial aquifer is the Nubian sandstone aquifer, which comprises 65-75% of the total recharge. Only 19-24% of the recharge, but the most important source of fresh water, originates over the eastern Jordanian mountains and alluvial fans.
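The tracer mass balance underlying mixing-cell models reduces, in the two-endmember special case, to a one-line formula; a sketch with hypothetical tracer concentrations (not the Arava Valley data):

```python
def mixing_fraction(c_obs, c_source1, c_source2):
    """Fraction f contributed by source 1 when the observed tracer
    concentration is a conservative mixture: c_obs = f*c1 + (1 - f)*c2."""
    return (c_obs - c_source2) / (c_source1 - c_source2)

# Hypothetical chloride concentrations in mg/L:
f = mixing_fraction(c_obs=70.0, c_source1=100.0, c_source2=50.0)
print(f)  # 0.4
```

The full mixing-cell model generalizes this to many tracers and many cells, solving an overdetermined set of such balances simultaneously.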

  1. Modelling the effect of environmental factors on resource allocation in mixed plants systems

    NASA Astrophysics Data System (ADS)

    Gayler, Sebastian; Priesack, Eckart

    2010-05-01

In most cases, growth of plants is determined by competition with neighbours for the local resources light, water and nutrients, and by defence against herbivores and pathogens. Consequently, it is important for a plant to grow fast without neglecting defence. However, the plant-internal substrates and energy required to support maintenance, growth and defence are limited, and the total demand for these processes cannot be met in most cases. Therefore, allocation of carbohydrates to growth-related primary metabolism or to defence-related secondary metabolism can be seen as a trade-off between the plant's demand for being competitive against neighbours and for being more resistant against pathogens. A modelling approach is presented which can be used to simulate competition for light, water and nutrients between plant individuals in mixed canopies. The balance of resource allocation between growth processes and the synthesis of secondary compounds is modelled by a concept originating from different plant defence hypotheses. The model is used to analyse the impact of environmental factors such as soil water and nitrogen availability, planting density and atmospheric CO2 concentration on the growth of plant individuals within mixed canopies and on variations in the concentration of carbon-based secondary metabolites in plant tissues.

  2. Constraining Carbonaceous Aerosol Climate Forcing by Bridging Laboratory, Field and Modeling Studies

    NASA Astrophysics Data System (ADS)

    Dubey, M. K.; Aiken, A. C.; Liu, S.; Saleh, R.; Cappa, C. D.; Williams, L. R.; Donahue, N. M.; Gorkowski, K.; Ng, N. L.; Mazzoleni, C.; China, S.; Sharma, N.; Yokelson, R. J.; Allan, J. D.; Liu, D.

    2014-12-01

Biomass and fossil fuel combustion emits black carbon (BC) and brown carbon (BrC) aerosols that absorb sunlight to warm climate and organic carbon (OC) aerosols that scatter sunlight to cool climate. The net forcing depends strongly on the composition, mixing state and transformations of these carbonaceous aerosols. Complexities from the large variability of fuel types, combustion conditions and aging processes have confounded their treatment in models. We analyse recent laboratory and field measurements to uncover the fundamental mechanisms that control the chemical, optical and microphysical properties of carbonaceous aerosols, as elaborated below. The wavelength dependence of absorption and the single scattering albedo (ω) of fresh biomass burning aerosols produced from many fuels during FLAME-4 was analysed to determine the factors that control the variability in ω. Results show that ω varies strongly with fire-integrated modified combustion efficiency (MCEFI): higher MCEFI results in lower ω values and greater spectral dependence of ω (Liu et al GRL 2014). A parameterization of ω as a function of MCEFI for fresh BB aerosols is derived from the laboratory data and is evaluated against field data, including BBOP. Our laboratory studies also demonstrate that BrC production correlates with BC, indicating that they are produced by a common mechanism that is driven by MCEFI (Saleh et al NGeo 2014). We show that BrC absorption is concentrated in the extremely low volatility component that favours long-range transport. We observe substantial absorption enhancement for internally mixed BC from diesel and wood combustion near London during ClearFlo. While the absorption enhancement is due to BC particles coated by co-emitted OC in urban regions, it increases with photochemical age in rural areas and is simulated by core-shell models. We measure BrC absorption that is concentrated in the extremely low volatility components and attribute it to wood burning. Our results support parameterizations of enhanced light absorption by internally mixed BC in models and identify mixed biomass and fossil combustion regions where this effect is large. We unify the treatment of carbonaceous aerosol components and their interactions to simplify and verify their representation in climate models, and re-evaluate their direct radiative forcing.

  3. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model; the method of mixed penalty functions combined with Hooke and Jeeves pattern search was chosen for this specific optimization problem.

  4. Insights into failed lexical retrieval from network science.

    PubMed

    Vitevitch, Michael S; Chan, Kit Ying; Goldstein, Rutherford

    2014-02-01

    Previous network analyses of the phonological lexicon (Vitevitch, 2008) observed a web-like structure that exhibited assortative mixing by degree: words with dense phonological neighborhoods tend to have as neighbors words that also have dense phonological neighborhoods, and words with sparse phonological neighborhoods tend to have as neighbors words that also have sparse phonological neighborhoods. Given the role that assortative mixing by degree plays in network resilience, we examined instances of real and simulated lexical retrieval failures in computer simulations, analysis of a slips-of-the-ear corpus, and three psycholinguistic experiments for evidence of this network characteristic in human behavior. The results of the various analyses support the hypothesis that the structure of words in the mental lexicon influences lexical processing. The implications of network science for current models of spoken word recognition, language processing, and cognitive psychology more generally are discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
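Assortative mixing by degree, the network property this abstract builds on, is the Pearson correlation of the degrees at the two ends of each edge; a self-contained sketch on a toy graph (not the phonological lexicon data):

```python
from statistics import mean

def degree_assortativity(edges):
    """Pearson correlation of endpoint degrees over all edges of an
    undirected graph, counting each edge in both directions."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    mx = mean(xs)
    cov = sum((x - mx) * (y - mx) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)  # xs and ys are the same multiset
    return cov / var

# A star graph is maximally disassortative: the hub links only to leaves.
print(degree_assortativity([("c", "a"), ("c", "b"), ("c", "d")]))  # -1.0
```

Positive values mean hubs connect to hubs (the web-like lexicon structure described above); negative values mean hubs connect to low-degree nodes.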

  5. Insights into failed lexical retrieval from network science

    PubMed Central

    Vitevitch, Michael S.; Chan, Kit Ying; Goldstein, Rutherford

    2013-01-01

    Previous network analyses of the phonological lexicon (Vitevitch, 2008) observed a web-like structure that exhibited assortative mixing by degree: words with dense phonological neighborhoods tend to have as neighbors words that also have dense phonological neighborhoods, and words with sparse phonological neighborhoods tend to have as neighbors words that also have sparse phonological neighborhoods. Given the role that assortative mixing by degree plays in network resilience, we examined instances of real and simulated lexical retrieval failures in computer simulations, analysis of a slips-of-the-ear corpus, and three psycholinguistic experiments for evidence of this network characteristic in human behavior. The results of the various analyses support the hypothesis that the structure of words in the mental lexicon influences lexical processing. The implications of network science for current models of spoken word recognition, language processing, and cognitive psychology more generally are discussed. PMID:24269488

  6. Thermal stratification potential in rocket engine coolant channels

    NASA Technical Reports Server (NTRS)

    Kacynski, Kenneth J.

    1992-01-01

    The potential for rocket engine coolant channel flow stratification was computationally studied. A conjugate, 3-D, conduction/advection analysis code (SINDA/FLUINT) was used. Core fluid temperatures were predicted to vary by over 360 K across the coolant channel, at the throat section, indicating that the conventional assumption of a fully mixed fluid may be extremely inaccurate. Because of the thermal stratification of the fluid, the walls exposed to the rocket engine exhaust gases will be hotter than an assumption of full mixing would imply. In this analysis, wall temperatures were 160 K hotter in the turbulent mixing case than in the full mixing case. The discrepancy between the full mixing and turbulent mixing analyses increased with increasing heat transfer. Both analysis methods predicted identical channel resistances at the coolant inlet, but in the stratified analysis the thermal resistance was negligible. The implications are significant. Neglect of thermal stratification could lead to underpredictions in nozzle wall temperatures. Even worse, testing at subscale conditions may be inadequate for modeling conditions that would exist in a full scale engine.

  7. Three Approaches to Modeling Gene-Environment Interactions in Longitudinal Family Data: Gene-Smoking Interactions in Blood Pressure.

    PubMed

    Basson, Jacob; Sung, Yun Ju; de Las Fuentes, Lisa; Schwander, Karen L; Vazquez, Ana; Rao, Dabeeru C

    2016-01-01

    Blood pressure (BP) has been shown to be substantially heritable, yet identified genetic variants explain only a small fraction of the heritability. Gene-smoking interactions have detected novel BP loci in cross-sectional family data. Longitudinal family data are available and have additional promise to identify BP loci. However, this type of data presents unique analysis challenges. Although several methods for analyzing longitudinal family data are available, which method is the most appropriate and under what conditions has not been fully studied. Using data from three clinic visits from the Framingham Heart Study, we performed association analysis accounting for gene-smoking interactions in BP at 31,203 markers on chromosome 22. We evaluated three different modeling frameworks: generalized estimating equations (GEE), hierarchical linear modeling, and pedigree-based mixed modeling. The three models performed somewhat comparably, with multiple overlaps in the most strongly associated loci from each model. Loci with the greatest significance were more strongly supported in the longitudinal analyses than in any of the component single-visit analyses. The pedigree-based mixed model was more conservative, with less inflation in the variant main effect and greater deflation in the gene-smoking interactions. The GEE, but not the other two models, resulted in substantial inflation in the tail of the distribution when variants with minor allele frequency <1% were included in the analysis. The choice of analysis method should depend on the model and the structure and complexity of the familial and longitudinal data. © 2015 WILEY PERIODICALS, INC.
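As an illustrative aside (a toy sketch, not any of the three frameworks evaluated in the paper), the gene-smoking interaction in a saturated model for a binary variant g and smoking status s equals the difference-in-differences of the four group means:

```python
from statistics import mean

def interaction_effect(bp_by_group):
    """Gene-smoking interaction as a difference-in-differences of group
    means; bp_by_group maps (g, s) -> list of blood pressure values."""
    m = {k: mean(v) for k, v in bp_by_group.items()}
    return (m[(1, 1)] - m[(1, 0)]) - (m[(0, 1)] - m[(0, 0)])

# Hypothetical systolic BP readings (mmHg):
groups = {
    (0, 0): [118, 122],  # non-carrier, non-smoker
    (0, 1): [124, 126],  # non-carrier, smoker
    (1, 0): [119, 121],  # carrier, non-smoker
    (1, 1): [133, 137],  # carrier, smoker
}
print(interaction_effect(groups))  # 10
```

The GEE, hierarchical, and pedigree-based mixed models in the paper estimate this same interaction while additionally accounting for familial correlation and repeated visits.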

  8. Pyroxene-melt equilibria. [for lunar maria basalts

    NASA Technical Reports Server (NTRS)

    Nielsen, R. L.; Drake, M. J.

    1979-01-01

A thermodynamic analysis of pyroxene-melt equilibria is performed through use of a literature survey of analyses of high-Ca pyroxene and coexisting silicate melt pairs and analyses of low-Ca pyroxene silicate melt pairs. Reference is made to a modified version of a model developed by Bottinga and Weill (1972) which more successfully accounts for variations in melt composition than does a model which considers the melt to be composed of simple oxides which mix ideally. By using a variety of pyroxene melt relations, several pyroxene-melt and low-Ca pyroxene-high-Ca pyroxene geothermometers are developed which have internally consistent precisions of approximately ±20 °C. Finally, it is noted that these equations may have application in modeling the evolution of mineral compositions during differentiation of basaltic magmas.

  9. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    PubMed Central

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
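The case-deletion idea behind such influence diagnostics can be shown in a deliberately simple setting (a normal mean with known unit variance, not the censored mixed-effects models of the paper): the likelihood displacement LD_i = 2[l(theta_hat) - l(theta_hat_(i))] flags observations whose removal shifts the fit most:

```python
import math

def log_lik(data, mu):
    # Log-likelihood of a normal sample with mean mu and unit variance.
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (y - mu) ** 2 for y in data)

def likelihood_displacement(data):
    """Case-deletion likelihood displacement LD_i for each observation."""
    mu_hat = sum(data) / len(data)
    full = log_lik(data, mu_hat)
    lds = []
    for i in range(len(data)):
        reduced = data[:i] + data[i + 1:]
        mu_i = sum(reduced) / len(reduced)  # estimate without observation i
        lds.append(2 * (full - log_lik(data, mu_i)))
    return lds

# The outlying last observation dominates the diagnostic:
data = [1.0, 1.2, 0.9, 1.1, 6.0]
lds = likelihood_displacement(data)
print(lds.index(max(lds)))  # 4
```

The paper's contribution is computing the analogous quantity for censored mixed-effects models from the conditional expectation of the complete-data log-likelihood, avoiding explicit refits.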

  10. Toward Better Modeling of Supercritical Turbulent Mixing

    NASA Technical Reports Server (NTRS)

    Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth

    2008-01-01

This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamically supercritical (here, high-pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of high-density-gradient-magnitude regions found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation; further investigation is needed to determine whether they are too computationally intensive for LES.
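The filtering decomposition described above can be sketched in one dimension: applying a spatial filter splits a field into a resolved part and a subgrid-scale (SGS) residual that the SGS models must represent (toy data, not the DNS fields of the study):

```python
def box_filter(field, width):
    """Simple 1-D box (top-hat) filter with edge-shortened windows."""
    n = len(field)
    half = width // 2
    out = []
    for i in range(n):
        window = field[max(0, i - half): min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

field = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
filtered = box_filter(field, 3)             # resolved (filtered) part
residual = [f - g for f, g in zip(field, filtered)]  # SGS part

# By construction, filtered + residual recovers the original field:
print(all(abs(f - (g + r)) < 1e-12
          for f, g, r in zip(field, filtered, residual)))  # True
```

In LES only the filtered part is simulated; the statistics of the residual are what the new SGS momentum and energy terms must capture.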

  11. Simulation of the as-cast structure of Al-4.0wt.%Cu ingots with a 5-phase mixed columnar-equiaxed solidification model

    NASA Astrophysics Data System (ADS)

    Wu, M.; Ahmadein, M.; Kharicha, A.; Ludwig, A.; Li, J. H.; Schumacher, P.

    2012-07-01

Empirical knowledge about the formation of the as-cast structure, mostly obtained before the 1980s, has revealed two critical issues: one is the origin of the equiaxed crystals; the other is the competing growth of the columnar and equiaxed structures and the columnar-to-equiaxed transition (CET). Unfortunately, the application of this empirical knowledge to predict and control the as-cast structure was very limited, as flow and crystal transport were not considered. Therefore, a 5-phase mixed columnar-equiaxed solidification model was recently proposed by the current authors based on modeling the multiphase transport phenomena. The motivation of the recent work is to determine and evaluate the necessary modeling parameters, and to validate the mixed columnar-equiaxed solidification model by comparison with laboratory castings. In this regard an experimental method was recommended for in-situ determination of the nucleation parameters. Additionally, some classical experiments on Al-Cu ingots were conducted and the as-cast structural information, including distinct columnar and equiaxed zones, macrosegregation, and grain size distribution, was analysed. The final simulation results exhibited good agreement with experiments in the case of high pouring temperature, but disagreement in the case of low pouring temperature. The reasons for the disagreement are discussed.

  12. Sedentary Activity and Body Composition of Middle School Girls: The Trial of Activity for Adolescent Girls

    ERIC Educational Resources Information Center

    Pratt, Charlotte; Webber, Larry S.; Baggett, Chris D.; Ward, Dianne; Pate, Russell R.; Murray, David; Lohman, Timothy; Lytle, Leslie; Elder, John P.

    2008-01-01

    This study describes the relationships between sedentary activity and body composition in 1,458 sixth-grade girls from 36 middle schools across the United States. Multivariate associations between sedentary activity and body composition were examined with regression analyses using general linear mixed models. Mean age, body mass index, and…

  13. Associations between Responsible Beverage Service Laws and Binge Drinking and Alcohol-Impaired Driving

    ERIC Educational Resources Information Center

    Linde, Ann C.; Toomey, Traci L.; Wolfson, Julian; Lenk, Kathleen M.; Jones-Webb, Rhonda; Erickson, Darin J.

    2016-01-01

    We explored potential associations between the strength of state Responsible Beverage Service (RBS) laws and self-reported binge drinking and alcohol-impaired driving in the U.S. A multi-level logistic mixed-effects model was used, adjusting for potential confounders. Analyses were conducted on the overall BRFSS sample and drinkers only. Seven…

  14. Some Dilemmas Regarding Teacher Training: On the Teacher's (Not) Being a Role Model

    ERIC Educational Resources Information Center

    Seker, Hasan; Deniz, Sabahattin

    2016-01-01

    In this research, primary and secondary school teachers' teaching styles, beliefs and practices related to the teaching approach have been analyzed. A mixed-methods design has been used, in which both descriptive and qualitative analyses have been carried out. The research has been performed with…

  15. An Easy A or a Question of Belief: Pupil Attitudes to Catholic Religious Education in Croatia

    ERIC Educational Resources Information Center

    Jokic, Boris; Hargreaves, Linda

    2015-01-01

    This paper describes the results of a mixed-model study that, as the first of its kind, aimed to determine the nature of, and underlying factors influencing, Croatian elementary pupils' attitudes towards confessional Catholic religious education (RE). Analyses of the questionnaire responses of the eighth-grade pupils from the stratified sample…

  16. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). 
Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944
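
    The mean logarithmic score used above for model ranking can be sketched in a few lines. This is an illustrative computation with hypothetical Poisson predictive distributions, not the authors' INLA-based leave-one-out implementation; the rates 2.4 and 6.0 are made-up examples.

```python
import math

def mean_log_score(observations, predictive_pmfs):
    """Mean logarithmic score: average negative log predictive
    probability of each held-out observation (lower is better)."""
    scores = [-math.log(pmf(y)) for y, pmf in zip(observations, predictive_pmfs)]
    return sum(scores) / len(scores)

def poisson_pmf(lam):
    # Returns the pmf k -> P(K = k) of a Poisson(lam) distribution
    return lambda k: math.exp(-lam) * lam ** k / math.factorial(k)

# Toy comparison: two candidate rate parameters for the same count data
counts = [2, 3, 1, 4, 2]
score_a = mean_log_score(counts, [poisson_pmf(2.4)] * len(counts))
score_b = mean_log_score(counts, [poisson_pmf(6.0)] * len(counts))
assert score_a < score_b  # the model closer to the data scores lower
```

    In a real analysis the predictive pmfs would be the model's leave-one-out predictive distributions rather than fixed Poisson densities.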

  17. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). 
The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.

  18. Determination of community structure through deconvolution of PLFA-FAME signature of mixed population.

    PubMed

    Dey, Dipesh K; Guha, Saumyen

    2007-02-15

    Phospholipid fatty acids (PLFAs) as biomarkers are well established in the literature. A general method based on least squares approximation (LSA) was developed for estimating community structure from the PLFA signature of a mixed population where the biomarker PLFA signatures of the component species were known. Fatty acid methyl ester (FAME) standards were used as species analogs, and mixtures of the standards as representative of the mixed population. The PLFA/FAME signatures were analyzed by gas chromatographic separation, followed by detection in a flame ionization detector (GC-FID). The PLFAs in the signature were quantified as relative weight percent of the total PLFA. The PLFA signatures were analyzed by the models to predict the community structure of the mixture. The LSA model results were compared with the existing "functional group" approach. Both successfully predicted the community structure of mixed populations containing completely unrelated species with no shared PLFAs. For even the slightest intersection in the PLFA signatures of component species, the LSA model produced better results. This was mainly due to the inability of the "functional group" approach to distinguish the relative amounts of a common PLFA coming from more than one species. The performance of the LSA model was influenced by errors in the chromatographic analyses. Suppression (or enhancement) of a component's PLFA signature in chromatographic analysis of the mixture led to underestimation (or overestimation) of the component's proportion in the mixture by the model. In mixtures of closely related species with common PLFAs, the errors in the common components were adjusted across the species by the model.
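
    The least-squares deconvolution idea can be sketched as follows. The two-species signature matrix here is hypothetical, and the clip-and-renormalise step is an assumption; the paper's exact LSA formulation and GC-FID data are not reproduced.

```python
import numpy as np

# Hypothetical biomarker PLFA signatures (relative weight % of total PLFA):
# one column per known species, one row per fatty acid.
S = np.array([
    [60.0,  5.0],
    [30.0, 25.0],
    [10.0, 70.0],
])

def community_structure(signatures, mixture):
    """Least-squares estimate of species proportions from a mixed
    PLFA signature; negative solutions are clipped and the result
    renormalised to sum to 1."""
    x, *_ = np.linalg.lstsq(signatures, mixture, rcond=None)
    x = np.clip(x, 0.0, None)
    return x / x.sum()

# A 70/30 mixture of the two species recovers the input proportions
mixture = S @ np.array([0.7, 0.3])
props = community_structure(S, mixture)
assert np.allclose(props, [0.7, 0.3], atol=1e-6)
```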

  19. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
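
    The closed-form beta-binomial likelihood mentioned above can be written directly with log-gamma functions. This is a generic sketch of the marginal pmf only, not the paper's full composite-likelihood machinery; the parameter values are arbitrary.

```python
import math

def betabinom_logpmf(k, n, a, b):
    """Log pmf of the beta-binomial distribution,
    C(n, k) * B(k + a, n - k + b) / B(a, b), via log-gamma:
    closed form, no link function or probability transformation."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

# Sanity check: the pmf sums to 1 over k = 0..n
n, a, b = 10, 2.0, 3.0
total = sum(math.exp(betabinom_logpmf(k, n, a, b)) for k in range(n + 1))
assert abs(total - 1.0) < 1e-9
```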

  20. Optimal mix of renewable power generation in the MENA region as a basis for an efficient electricity supply to europe

    NASA Astrophysics Data System (ADS)

    Alhamwi, Alaa; Kleinhans, David; Weitemeyer, Stefan; Vogt, Thomas

    2014-12-01

    Renewable energy sources are gaining importance in the Middle East and North Africa (MENA) region. The purpose of this study is to quantify the optimal mix of renewable power generation in the MENA region, taking Morocco as a case study. Based on hourly meteorological data and load data, a 100% solar-plus-wind scenario for Morocco is investigated. For the optimal mix analyses, a mismatch energy modelling approach is adopted with the objective of minimising the required storage capacities. For a hypothetical Moroccan energy supply system based entirely on renewable energy sources, our results show that the minimum storage capacity is achieved at a share of 63% solar and 37% wind power generation.
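
    The mismatch-energy idea, minimising required storage over the solar/wind share, can be sketched as below. The synthetic profiles and the storage measure (peak-to-trough cumulative mismatch) are illustrative assumptions, not the study's Moroccan data or its exact formulation.

```python
import numpy as np

def required_storage(generation, load):
    """Minimum storage (in energy units) needed to bridge every deficit,
    read off the cumulative generation-minus-load time series."""
    mismatch = np.cumsum(generation - load)
    return mismatch.max() - mismatch.min()

# Synthetic hourly profiles over one week (illustrative only)
t = np.arange(24 * 7)
load  = 1.0 + 0.2 * np.sin(2 * np.pi * t / 24)
solar = np.clip(np.sin(2 * np.pi * (t - 6) / 24), 0, None)
wind  = 1.0 + 0.3 * np.sin(2 * np.pi * t / 72)

best_share, best_storage = None, np.inf
for share in np.linspace(0, 1, 101):
    gen = share * solar + (1 - share) * wind
    gen *= load.mean() / gen.mean()   # scale to 100% renewable supply on average
    storage = required_storage(gen, load)
    if storage < best_storage:
        best_share, best_storage = share, storage
```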

  1. Compositional characteristics of some Apollo 14 clastic materials.

    NASA Technical Reports Server (NTRS)

    Lindstrom, M. M.; Duncan, A. R.; Fruchter, J. S.; Mckay, S. M.; Stoeser, J. W.; Goles, G. G.; Lindstrom, D. J.

    1972-01-01

    Eighty-two subsamples of Apollo 14 materials have been analyzed by instrumental neutron activation analysis techniques for as many as 25 elements. In many cases, it was necessary to develop new procedures to allow analyses of small specimens. Compositional relationships among Apollo 14 materials indicate that there are small but systematic differences between regolith from the valley terrain and that from Cone Crater ejecta. Fragments from 1-2 mm size fractions of regolith samples may be divided into compositional classes, and the 'soil breccias' among them are very similar to valley soils. Multicomponent linear mixing models have been used as interpretive tools in dealing with data on regolith fractions and subsamples from breccia 14321. These mixing models show systematic compositional variations with inferred age for Apollo 14 clastic materials.

  2. Source partitioning of anthropogenic groundwater nitrogen in a mixed-use landscape, Tutuila, American Samoa

    NASA Astrophysics Data System (ADS)

    Shuler, Christopher K.; El-Kadi, Aly I.; Dulai, Henrietta; Glenn, Craig R.; Fackrell, Joseph

    2017-12-01

    This study presents a modeling framework for quantifying human impacts and for partitioning the sources of contamination related to water quality in the mixed-use landscape of a small tropical volcanic island. On Tutuila, the main island of American Samoa, production wells in the most populated region (the Tafuna-Leone Plain) produce most of the island's drinking water. However, much of this water has been deemed unsafe to drink since 2009. Tutuila has three predominant anthropogenic non-point-groundwater-pollution sources of concern: on-site disposal systems (OSDS), agricultural chemicals, and pig manure. These sources are broadly distributed throughout the landscape and are located near many drinking-water wells. Water quality analyses show a link between elevated levels of total dissolved groundwater nitrogen (TN) and areas with high non-point-source pollution density, suggesting that TN can be used as a tracer of groundwater contamination from these sources. The modeling framework used in this study integrates land-use information, hydrological data, and water quality analyses with nitrogen loading and transport models. The approach utilizes a numerical groundwater flow model, a nitrogen-loading model, and a multi-species contaminant transport model. Nitrogen from each source is modeled as an independent component in order to trace the impact from individual land-use activities. Model results are calibrated and validated with dissolved groundwater TN concentrations and inorganic δ15N values, respectively. Results indicate that OSDS contribute significantly more TN to Tutuila's aquifers than other sources, and thus should be prioritized in future water-quality management efforts.

  3. An investigation of the predictors of photoprotection and UVR dose to the face in patients with XP: a protocol using observational mixed methods

    PubMed Central

    Walburn, Jessica; Sarkany, Robert; Norton, Sam; Foster, Lesley; Morgan, Myfanwy; Sainsbury, Kirby; Araújo-Soares, Vera; Anderson, Rebecca; Garrood, Isabel; Heydenreich, Jakob; Sniehotta, Falko F; Vieira, Rute; Wulf, Hans Christian; Weinman, John

    2017-01-01

    Introduction Xeroderma pigmentosum (XP) is a rare genetic condition caused by defective nucleotide excision repair and characterised by skin cancer, ocular and neurological involvement. Stringent ultraviolet protection is the only way to prevent skin cancer. Despite the risks, some patients’ photoprotection is poor, with a potentially devastating impact on their prognosis. The aim of this research is to identify disease-specific and psychosocial predictors of photoprotection behaviour and ultraviolet radiation (UVR) dose to the face. Methods and analysis Mixed methods research based on 45 UK patients will involve qualitative interviews to identify individuals’ experience of XP and the influences on their photoprotection behaviours and a cross-sectional quantitative survey to assess biopsychosocial correlates of these behaviours at baseline. This will be followed by objective measurement of UVR exposure for 21 days by wrist-worn dosimeter and daily recording of photoprotection behaviours and psychological variables for up to 50 days in the summer months. This novel methodology will enable UVR dose reaching the face to be calculated and analysed as a clinically relevant endpoint. A range of qualitative and quantitative analytical approaches will be used, reflecting the mixed methods (eg, cross-sectional qualitative interviews, n-of-1 studies). Framework analysis will be used to analyse the qualitative interviews; mixed-effects longitudinal models will be used to examine the association of clinical and psychosocial factors with the average daily UVR dose; dynamic logistic regression models will be used to investigate participant-specific psychosocial factors associated with photoprotection behaviours. Ethics and dissemination This research has been approved by Camden and King’s Cross Research Ethics Committee 15/LO/1395. The findings will be published in peer-reviewed journals and presented at national and international scientific conferences. PMID:28827277

  4. Analyses and simulations of the upper ocean's response to Hurricane Felix at the Bermuda Testbed Mooring site: 13-23 August 1995

    NASA Astrophysics Data System (ADS)

    Zedler, S. E.; Dickey, T. D.; Doney, S. C.; Price, J. F.; Yu, X.; Mellor, G. L.

    2002-12-01

    The center of Hurricane Felix passed 85 km to the southwest of the Bermuda Testbed Mooring (BTM; 31°44'N, 64°10'W) site on 15 August 1995. Data collected in the upper ocean from the BTM during this encounter provide a rare opportunity to investigate the physical processes that occur in a hurricane's wake. Data analyses indicate that the storm caused a large increase in kinetic energy at near-inertial frequencies, internal gravity waves in the thermocline, and inertial pumping, mixed layer deepening, and significant vertical redistribution of heat, with cooling of the upper 30 m and warming at depths of 30-70 m. The temperature evolution was simulated using four one-dimensional mixed layer models: Price-Weller-Pinkel (PWP), K Profile Parameterization (KPP), Mellor-Yamada 2.5 (MY), and a modified version of MY2.5 (MY2). The primary differences in the model results were in their simulations of temperature evolution. In particular, when forced using a drag coefficient that had a linear dependence on wind speed, the KPP model predicted sea surface cooling, mixed layer currents, and the maximum depth of cooling closer to the observations than any of the other models. This was shown to be partly because of a special parameterization for gradient Richardson number (RgKPP) shear instability mixing in response to resolved shear in the interior. The MY2 model predicted more sea surface cooling and greater depth penetration of kinetic energy than the MY model. In the MY2 model the dissipation rate of turbulent kinetic energy is parameterized as a function of a locally defined Richardson number (RgMY2) allowing for a reduction in dissipation rate for stable Richardson numbers (RgMY2) when internal gravity waves are likely to be present. 
Sensitivity simulations with the PWP model, which has specifically defined mixing procedures, show that most of the heat lost from the upper layer was due to entrainment (parameterized as a function of bulk Richardson number RbPWP), with the remainder due to local Richardson number (RgPWP) instabilities. With the exception of the MY model the models predicted reasonable estimates of the north and east current components during and after the hurricane passage at 25 and 45 m. Although the results emphasize differences between the modeled responses to a given wind stress, current controversy over the formulation of wind stress from wind speed measurements (including possible sea state and wave age and sheltering effects) cautions against using our results for assessing model skill. In particular, sensitivity studies show that MY2 simulations of the temperature evolution are excellent when the wind stress is increased, albeit with currents that are larger than observed. Sensitivity experiments also indicate that preexisting inertial motion modulated the amplitude of poststorm currents, but that there was probably not a significant resonant response because of clockwise wind rotation for our study site.

  5. Model for toroidal velocity in H-mode plasmas in the presence of internal transport barriers

    NASA Astrophysics Data System (ADS)

    Chatthong, B.; Onjun, T.; Singhsomroje, W.

    2010-06-01

    A model for predicting toroidal velocity in H-mode plasmas in the presence of internal transport barriers (ITBs) is developed using an empirical approach. In this model, it is assumed that the toroidal velocity is directly proportional to the local ion temperature. This model is implemented in the BALDUR integrated predictive modelling code so that simulations of ITB plasmas can be carried out self-consistently. In these simulations, a combination of a semi-empirical mixed Bohm/gyro-Bohm (mixed B/gB) core transport model that includes ITB effects and NCLASS neoclassical transport is used to compute the core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model based on a combination of magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The combination of the mixed B/gB core transport model with ITB effects, together with the pedestal and toroidal velocity models, is used to simulate the time evolution of plasma current, temperature and density profiles of 10 JET optimized shear discharges. It is found that the simulations can reproduce ITB formation in these discharges. Statistical measures, namely the root mean square error (RMSE) and the offset, are used to quantify the agreement: the averaged RMSE and offset among these discharges are about 24.59% and -0.14%, respectively.
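
    The RMSE and offset statistics used for the profile comparison can be sketched as follows. Normalising both by the mean experimental value is an assumption here, since the abstract does not spell out the exact definitions; the profile values are made up.

```python
import math

def rmse_percent(sim, exp):
    """Root mean square error of simulated vs. experimental values,
    as a percentage of the mean experimental value."""
    n = len(sim)
    rms = math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / n)
    return 100.0 * rms / (sum(exp) / n)

def offset_percent(sim, exp):
    """Mean signed deviation (bias) as a percentage of the mean
    experimental value; near zero means no systematic over- or
    under-prediction."""
    n = len(sim)
    bias = sum(s - e for s, e in zip(sim, exp)) / n
    return 100.0 * bias / (sum(exp) / n)

sim = [1.1, 0.9, 1.05, 0.95]
exp = [1.0, 1.0, 1.0, 1.0]
# Symmetric errors cancel in the offset but not in the RMSE
assert abs(offset_percent(sim, exp)) < 1e-9
assert rmse_percent(sim, exp) > 0.0
```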

  6. Discovering human germ cell mutagens with whole genome sequencing: Insights from power calculations reveal the importance of controlling for between-family variability.

    PubMed

    Webster, R J; Williams, A; Marchetti, F; Yauk, C L

    2018-07-01

    Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has yet been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and mixed effect models sampling between two and four siblings per family). Assumptions were made based on parameters from the existing literature, such as the mutation-by-paternal age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4 to 28 four-sibling families per treatment group for increases in mutation frequency of 40% down to 10%, respectively. Modeling family variability using mixed effect models provided a reduction in sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
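
    A simulation-based power calculation of this kind can be sketched as below. The base rate, effect size, family-level variability and the simple one-sided z-test are illustrative stand-ins, not the authors' mixed-effect model design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_power(n_families, effect=1.4, base_rate=60.0, family_sd=0.15,
                   n_sims=500):
    """Monte-Carlo power of a two-group comparison of mean de novo
    mutation counts, with a log-normal family-level random effect.
    effect = 1.4 corresponds to a 40% increase in the exposed group."""
    hits = 0
    for _ in range(n_sims):
        # Family-specific expected counts, then Poisson counts per family
        fam_ctl = base_rate * rng.lognormal(0.0, family_sd, n_families)
        fam_exp = effect * base_rate * rng.lognormal(0.0, family_sd, n_families)
        ctl = rng.poisson(fam_ctl)
        exposed = rng.poisson(fam_exp)
        # Welch-type z statistic on the group means
        diff = exposed.mean() - ctl.mean()
        se = np.sqrt(exposed.var(ddof=1) / n_families
                     + ctl.var(ddof=1) / n_families)
        if diff / se > 1.645:        # one-sided test at alpha = 0.05
            hits += 1
    return hits / n_sims

power_40pct = simulate_power(10)     # power for a 40% increase, 10 families
```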

  7. CO concentration in the upper stratosphere and mesosphere of Titan from VIMS dayside limb observations at 4.7 μm

    NASA Astrophysics Data System (ADS)

    Fabiano, F.; López Puertas, M.; Adriani, A.; Moriconi, M. L.; D'Aversa, E.; Funke, B.; López-Valverde, M. A.; Ridolfi, M.; Dinelli, B. M.

    2017-09-01

    During the last 30 years, many works have focused on determining the CO abundance in Titan's atmosphere, but no measurements above 300 km have yet been made, owing to the faint CO signal. Nevertheless, such measurements are particularly awaited as a confirmation of the photochemical model prediction that CO is uniformly mixed throughout the atmosphere. Moreover, since CO is the main atmospheric reservoir of oxygen, its actual abundance has implications for the origins of Titan's atmosphere. In this work, we analyse a set of Cassini VIMS daytime limb observations of Titan at 4.7 μm, which is dominated by solar-pumped non-LTE (non-local thermodynamic equilibrium) emission of CO ro-vibrational bands. In order to retrieve the CO abundance from these observations, we developed a non-LTE model for the CO vibrational levels. The retrieval of the CO concentration is performed following a Bayesian approach and using the calculated non-LTE populations. The data set analysed consists of 47 limb scanning sequences (about 1500 spectra) acquired by VIMS in 2006 and 2007. CO relative abundance profiles from 200 to 500 km are obtained for each sequence analysed. The mean result shows no significant variation with altitude and is consistent with the prediction of a well-mixed vertical profile. However, compared with Earth-based mm measurements, a small vertical gradient is plausible.

  8. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reactive solute transport reliably (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation, the BDM1 approach proves superior to the RT0 one, though the advantage is less significant for the accompanying solute transport.

  9. Effect of river excavation on a bank filtration site - assessing transient surface water - groundwater interaction by 3D heat and solute transport modelling

    NASA Astrophysics Data System (ADS)

    Wang, W.; Oswald, S. E.; Munz, M.; Strasser, D.

    2017-12-01

    Bank filtration is widely used either as a main or pre-treatment process for water supply. The colmation (clogging) of the river bottom, the interface to groundwater, plays a key role in the hydraulic control of flow paths and in the location of several beneficial attenuation processes, such as pathogen filtration, mixing, biodegradation and sorption. Along the flow path, mixing occurs between the `young' infiltrated water and ambient `old' groundwater. To clarify the mechanisms and their interaction, modelling is often used for analysing the spatial and temporal distribution of travel times, quantifying mixing ratios and estimating biochemical reaction rates. As the most comprehensive tool, 2-D or 3-D spatially explicit modelling is used in several studies, and for areas with geological heterogeneity, the use of different natural tracers can constrain the model with respect to non-uniqueness and improve the interpretation of the flow field. In our study, we have evaluated the influence of a river excavation and bank reconstruction project on the groundwater-surface water exchange at a bank filtration site. With data from years of field site monitoring, we could include, besides heads and temperature, the analysis of stable isotope data and ions to differentiate between infiltrated water and groundwater. Thus, we have set up a 3-D transient heat and mass transport groundwater model, taking the strong local geological heterogeneity into consideration, especially between the river and the waterworks wells. By transferring the effect of the river excavation into a changing hydraulic conductivity of the riverbed, the model could be calibrated against both the water head and temperature time series observed. Finally, electrical conductivity, dominated by river input, was included as a quasi-conservative tracer. 
The `triple'-calibrated, transient model was then used to i) understand the flow field and quantify the long-term changes in infiltration rate and distribution brought about by the excavation, ii) compare the calculated temperature, electrical conductivity and stable isotope values and interpret the performance and deviations, and iii) assess, on this modelling basis, the implications of the excavation-induced changes for further water quality data and travel time distributions, also with seasonal aspects.

  10. Study of Variable Turbulent Prandtl Number Model for Heat Transfer to Supercritical Fluids in Vertical Tubes

    NASA Astrophysics Data System (ADS)

    Tian, Ran; Dai, Xiaoye; Wang, Dabiao; Shi, Lin

    2018-06-01

    In order to improve the prediction performance of the numerical simulations for heat transfer of supercritical pressure fluids, a variable turbulent Prandtl number (Prt) model for vertical upward flow at supercritical pressures was developed in this study. The effects of Prt on the numerical simulation were analyzed, especially for the heat transfer deterioration conditions. Based on the analyses, the turbulent Prandtl number was modeled as a function of the turbulent viscosity ratio and molecular Prandtl number. The model was evaluated using experimental heat transfer data of CO2, water and Freon. The wall temperatures, including the heat transfer deterioration cases, were more accurately predicted by this model than by traditional numerical calculations with a constant Prt. By analyzing the predicted results with and without the variable Prt model, it was found that the velocity distribution and turbulent mixing characteristics predicted with the variable Prt model are quite different from those predicted with a constant Prt. When heat transfer deterioration occurs, the radial velocity profile deviates from the log-law profile and the restrained turbulent mixing then leads to the deteriorated heat transfer.

  11. Uncertainty quantification and experimental design based on unsupervised machine learning identification of contaminant sources and groundwater types using hydrogeochemical data

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.

    2017-12-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models, which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observed mixtures based on the Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
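The core decomposition step behind NMFk can be illustrated with plain NMF from scikit-learn. NMFk itself adds repeated randomized factorizations and semi-supervised clustering to select the number of sources; this sketch, on synthetic data, shows only the basic unmixing idea:

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in: 20 monitoring wells x 6 geochemical constituents,
# generated as unknown mixtures of 3 hypothetical source signatures.
rng = np.random.default_rng(0)
true_sources = rng.uniform(0.1, 1.0, size=(3, 6))   # source concentrations
mix = rng.dirichlet(np.ones(3), size=20)            # unknown mixing ratios
X = mix @ true_sources                              # observed mixtures

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(X)    # estimated mixing ratios (up to scaling)
H = model.components_         # estimated source signatures
print(model.reconstruction_err_)  # small for exactly rank-3 data
```

In practice the number of components is unknown, which is exactly the selection problem NMFk addresses by comparing the stability of solutions across candidate ranks.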

  12. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    PubMed Central

    Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs

    2018-01-01

    Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly together with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE) that only uses peak locations, fixed effects, and random effects meta-analysis that take into account both peak location and height] and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344
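The random-effects pooling referred to above can be sketched with the standard DerSimonian-Laird moment estimator. This is a generic textbook version, not necessarily the exact estimator used in the paper:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird
    moment estimator of the between-study variance tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)              # fixed-effect estimate
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # DL between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    est = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return est, se, tau2

est, se, tau2 = dersimonian_laird([0.3, 0.5, 0.7], [0.04, 0.04, 0.04])
```

For the hypothetical effects [0.3, 0.5, 0.7] with common variance 0.04, the pooled estimate is 0.5 and the estimated between-study heterogeneity is zero, since the observed spread is fully explained by within-study variance.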

  13. BeiDou Inter-Satellite-Type Bias Evaluation and Calibration for Mixed Receiver Attitude Determination

    PubMed Central

    Nadarajah, Nandakumaran; Teunissen, Peter J. G.; Raziq, Noor

    2013-01-01

    The Chinese BeiDou system (BDS), having different types of satellites, is an important addition to the ever growing system of Global Navigation Satellite Systems (GNSS). It consists of Geostationary Earth Orbit (GEO) satellites, Inclined Geosynchronous Satellite Orbit (IGSO) satellites and Medium Earth Orbit (MEO) satellites. This paper investigates the receiver-dependent bias between these satellite types, for which we coined the name “inter-satellite-type bias” (ISTB), and its impact on mixed receiver attitude determination. Assuming different receiver types may have different delays/biases for different satellite types, we model the differential ISTBs among three BeiDou satellite types and investigate their existence and their impact on mixed receiver attitude determination. Our analyses using the real data sets from Curtin's GNSS array consisting of different types of BeiDou enabled receivers and series of zero-baseline experiments with BeiDou-enabled receivers reveal the existence of non-zero ISTBs between different BeiDou satellite types. We then analyse the impact of these biases on BeiDou-only attitude determination using the constrained (C-)LAMBDA method, which exploits the knowledge of baseline length. Results demonstrate that these biases could seriously affect the integer ambiguity resolution for attitude determination using mixed receiver types and that a priori correction of these biases will dramatically improve the success rate. PMID:23881141

  14. Investigating the source, transport, and isotope composition of water vapor in the planetary boundary layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffis, Timothy J.; Wood, Jeffrey D.; Baker, John M.

    Increasing atmospheric humidity and convective precipitation over land provide evidence of intensification of the hydrologic cycle – an expected response to surface warming. The extent to which terrestrial ecosystems modulate these hydrologic factors is important to understand feedbacks in the climate system. We measured the oxygen and hydrogen isotope composition of water vapor at a very tall tower (185 m) in the upper Midwest, United States, to diagnose the sources, transport, and fractionation of water vapor in the planetary boundary layer (PBL) over a 3-year period (2010 to 2012). These measurements represent the first set of annual water vapor isotope observations for this region. Several simple isotope models and cross-wavelet analyses were used to assess the importance of the Rayleigh distillation process, evaporation, and PBL entrainment processes on the isotope composition of water vapor. The vapor isotope composition at this tall tower site showed a large seasonal amplitude (mean monthly δ18Ov ranged from –40.2 to –15.9 ‰ and δ2Hv ranged from –278.7 to –113.0 ‰) and followed the familiar Rayleigh distillation relation with water vapor mixing ratio when considering the entire hourly data set. However, this relation was strongly modulated by evaporation and PBL entrainment processes at timescales ranging from hours to several days. The wavelet coherence spectra indicate that the oxygen isotope ratio and the deuterium excess (dv) of water vapor are sensitive to synoptic and PBL processes. According to the phase of the coherence analyses, we show that evaporation often leads changes in dv, confirming that it is a potential tracer of regional evaporation. Isotope mixing models indicate that on average about 31 % of the growing season PBL water vapor is derived from regional evaporation.
However, isoforcing calculations and mixing model analyses for high PBL water vapor mixing ratio events (>25 mmol mol⁻¹) indicate that regional evaporation can account for 40 to 60 % of the PBL water vapor. These estimates are in relatively good agreement with those derived from numerical weather model simulations. This relatively large fraction of evaporation-derived water vapor implies that evaporation has an important impact on the precipitation recycling ratio within the region. In conclusion, based on multiple constraints, we estimate that the summer season recycling fraction is about 30 %, indicating a potentially important link with convective precipitation.
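The deuterium excess and the two-end-member vapor mixing fraction mentioned above are both simple arithmetic on the delta values. A sketch with hypothetical per-mil values (not observations from the tower):

```python
def d_excess(delta2h, delta18o):
    """Deuterium excess (per mil): d = delta2H - 8 * delta18O."""
    return delta2h - 8.0 * delta18o

def evap_fraction(delta_obs, delta_bg, delta_evap):
    """Two-end-member fraction of PBL vapor derived from regional
    evaporation, assuming conservative mixing of the isotope signal
    between a background (advected) and an evaporation end member."""
    return (delta_obs - delta_bg) / (delta_evap - delta_bg)

# Hypothetical per-mil values for a growing-season day
print(d_excess(-120.0, -16.0))                        # 8.0
print(round(evap_fraction(-16.0, -20.0, -8.0), 2))    # 0.33
```

The hard part in practice is fixing the two end-member signatures, which is where the flux measurements and isoforcing calculations in the study come in.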

  15. Investigating the source, transport, and isotope composition of water vapor in the planetary boundary layer

    DOE PAGES

    Griffis, Timothy J.; Wood, Jeffrey D.; Baker, John M.; ...

    2016-04-25

    Increasing atmospheric humidity and convective precipitation over land provide evidence of intensification of the hydrologic cycle – an expected response to surface warming. The extent to which terrestrial ecosystems modulate these hydrologic factors is important to understand feedbacks in the climate system. We measured the oxygen and hydrogen isotope composition of water vapor at a very tall tower (185 m) in the upper Midwest, United States, to diagnose the sources, transport, and fractionation of water vapor in the planetary boundary layer (PBL) over a 3-year period (2010 to 2012). These measurements represent the first set of annual water vapor isotope observations for this region. Several simple isotope models and cross-wavelet analyses were used to assess the importance of the Rayleigh distillation process, evaporation, and PBL entrainment processes on the isotope composition of water vapor. The vapor isotope composition at this tall tower site showed a large seasonal amplitude (mean monthly δ18Ov ranged from –40.2 to –15.9 ‰ and δ2Hv ranged from –278.7 to –113.0 ‰) and followed the familiar Rayleigh distillation relation with water vapor mixing ratio when considering the entire hourly data set. However, this relation was strongly modulated by evaporation and PBL entrainment processes at timescales ranging from hours to several days. The wavelet coherence spectra indicate that the oxygen isotope ratio and the deuterium excess (dv) of water vapor are sensitive to synoptic and PBL processes. According to the phase of the coherence analyses, we show that evaporation often leads changes in dv, confirming that it is a potential tracer of regional evaporation. Isotope mixing models indicate that on average about 31 % of the growing season PBL water vapor is derived from regional evaporation.
However, isoforcing calculations and mixing model analyses for high PBL water vapor mixing ratio events (>25 mmol mol⁻¹) indicate that regional evaporation can account for 40 to 60 % of the PBL water vapor. These estimates are in relatively good agreement with those derived from numerical weather model simulations. This relatively large fraction of evaporation-derived water vapor implies that evaporation has an important impact on the precipitation recycling ratio within the region. In conclusion, based on multiple constraints, we estimate that the summer season recycling fraction is about 30 %, indicating a potentially important link with convective precipitation.

  16. Cloud and boundary layer interactions over the Arctic sea-ice in late summer

    NASA Astrophysics Data System (ADS)

    Shupe, M. D.; Persson, P. O. G.; Brooks, I. M.; Tjernström, M.; Sedlar, J.; Mauritsen, T.; Sjogren, S.; Leck, C.

    2013-05-01

    Observations from the Arctic Summer Cloud Ocean Study (ASCOS), in the central Arctic sea-ice pack in late summer 2008, provide a detailed view of cloud-atmosphere-surface interactions and vertical mixing processes over the sea-ice environment. Measurements from a suite of ground-based remote sensors, near surface meteorological and aerosol instruments, and profiles from radiosondes and a helicopter are combined to characterize a week-long period dominated by low-level, mixed-phase, stratocumulus clouds. Detailed case studies and statistical analyses are used to develop a conceptual model for the cloud and atmosphere structure and their interactions in this environment. Clouds were persistent during the period of study, having qualities that suggest they were sustained through a combination of advective influences and in-cloud processes, with little contribution from the surface. Radiative cooling near cloud top produced buoyancy-driven, turbulent eddies that contributed to cloud formation and created a cloud-driven mixed layer. The depth of this mixed layer was related to the amount of turbulence and condensed cloud water. Coupling of this cloud-driven mixed layer to the surface boundary layer was primarily determined by proximity. For 75% of the period of study, the primary stratocumulus cloud-driven mixed layer was decoupled from the surface and typically at a warmer potential temperature. Since the near-surface temperature was constrained by the ocean-ice mixture, warm temperatures aloft suggest that these air masses had not significantly interacted with the sea-ice surface. Instead, back trajectory analyses suggest that these warm airmasses advected into the central Arctic Basin from lower latitudes. Moisture and aerosol particles likely accompanied these airmasses, providing necessary support for cloud formation. 
On the occasions when cloud-surface coupling did occur, back trajectories indicated that these air masses advected at low levels, while mixing processes kept the mixed layer in equilibrium with the near-surface environment. Rather than contributing buoyancy forcing for the mixed-layer dynamics, the surface instead simply appeared to respond to the mixed-layer processes aloft. Clouds in these cases often contained slightly higher condensed water amounts, potentially due to additional moisture sources from below.

  17. Cloud and boundary layer interactions over the Arctic sea ice in late summer

    NASA Astrophysics Data System (ADS)

    Shupe, M. D.; Persson, P. O. G.; Brooks, I. M.; Tjernström, M.; Sedlar, J.; Mauritsen, T.; Sjogren, S.; Leck, C.

    2013-09-01

    Observations from the Arctic Summer Cloud Ocean Study (ASCOS), in the central Arctic sea-ice pack in late summer 2008, provide a detailed view of cloud-atmosphere-surface interactions and vertical mixing processes over the sea-ice environment. Measurements from a suite of ground-based remote sensors, near-surface meteorological and aerosol instruments, and profiles from radiosondes and a helicopter are combined to characterize a week-long period dominated by low-level, mixed-phase, stratocumulus clouds. Detailed case studies and statistical analyses are used to develop a conceptual model for the cloud and atmosphere structure and their interactions in this environment. Clouds were persistent during the period of study, having qualities that suggest they were sustained through a combination of advective influences and in-cloud processes, with little contribution from the surface. Radiative cooling near cloud top produced buoyancy-driven, turbulent eddies that contributed to cloud formation and created a cloud-driven mixed layer. The depth of this mixed layer was related to the amount of turbulence and condensed cloud water. Coupling of this cloud-driven mixed layer to the surface boundary layer was primarily determined by proximity. For 75% of the period of study, the primary stratocumulus cloud-driven mixed layer was decoupled from the surface and typically at a warmer potential temperature. Since the near-surface temperature was constrained by the ocean-ice mixture, warm temperatures aloft suggest that these air masses had not significantly interacted with the sea-ice surface. Instead, back-trajectory analyses suggest that these warm air masses advected into the central Arctic Basin from lower latitudes. Moisture and aerosol particles likely accompanied these air masses, providing necessary support for cloud formation. 
On the occasions when cloud-surface coupling did occur, back trajectories indicated that these air masses advected at low levels, while mixing processes kept the mixed layer in equilibrium with the near-surface environment. Rather than contributing buoyancy forcing for the mixed-layer dynamics, the surface instead simply appeared to respond to the mixed-layer processes aloft. Clouds in these cases often contained slightly higher condensed water amounts, potentially due to additional moisture sources from below.

  18. Genetic parameters and signatures of selection in two divergent laying hen lines selected for feather pecking behaviour.

    PubMed

    Grams, Vanessa; Wellmann, Robin; Preuß, Siegfried; Grashorn, Michael A; Kjaer, Jörgen B; Bessei, Werner; Bennewitz, Jörn

    2015-09-30

    Feather pecking (FP) in laying hens is a well-known and multi-factorial behaviour with a genetic background. In a selection experiment, two lines were developed for 11 generations for high (HFP) and low (LFP) feather pecking, respectively. Starting with the second generation of selection, there was a constant difference in mean number of FP bouts between both lines. We used the data from this experiment to perform a quantitative genetic analysis and to map selection signatures. Pedigree and phenotypic data were available for the last six generations of both lines. Univariate quantitative genetic analyses were conducted using mixed linear and generalized mixed linear models assuming a Poisson distribution. Selection signatures were mapped using 33,228 single nucleotide polymorphisms (SNPs) genotyped on 41 HFP and 34 LFP individuals of generation 11. For each SNP, we estimated Wright's fixation index (FST). We tested the null hypothesis that FST is driven purely by genetic drift against the alternative hypothesis that it is driven by genetic drift and selection. The mixed linear model failed to analyze the LFP data because of the large number of 0s in the observation vector. The Poisson model fitted the data well and revealed a small but continuous genetic trend in both lines. Most of the 17 genome-wide significant SNPs were located on chromosomes 3 and 4. Thirteen clusters with at least two significant SNPs within an interval of 3 Mb maximum were identified. Two clusters were mapped on chromosomes 3, 4, 8 and 19. Of the 17 genome-wide significant SNPs, 12 were located within the identified clusters. This indicates a non-random distribution of significant SNPs and points to the presence of selection sweeps. Data on FP should be analysed using generalised linear mixed models assuming a Poisson distribution, especially if the number of FP bouts is small and the distribution is heavily peaked at 0. 
The FST-based approach was suitable to map selection signatures that need to be confirmed by linkage or association mapping.
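Wright's FST per SNP can be computed directly from the allele frequencies in the two lines. This is the simple heterozygosity-based moment version; the drift null-hypothesis test described in the paper is a separate step on top of it:

```python
def fst_two_pops(p1, p2):
    """Wright's FST for one biallelic SNP between two populations of
    equal weight: FST = (HT - HS) / HT, with H = 2 p (1 - p)."""
    p_bar = 0.5 * (p1 + p2)                        # pooled allele frequency
    ht = 2.0 * p_bar * (1.0 - p_bar)               # total expected heterozygosity
    hs = 0.5 * (2*p1*(1-p1) + 2*p2*(1-p2))         # mean within-pop heterozygosity
    return 0.0 if ht == 0.0 else (ht - hs) / ht

# Hypothetical HFP vs LFP allele frequencies at two SNPs
print(round(fst_two_pops(0.9, 0.1), 2))   # 0.64: strongly differentiated
print(fst_two_pops(0.5, 0.5))             # 0.0: no differentiation
```

SNPs whose FST exceeds what 11 generations of drift alone can plausibly produce are the candidate selection signatures.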

  19. pKa values of hyodeoxycholic and cholic acids in the binary mixed micelles sodium-hyodeoxycholate-Tween 40 and sodium-cholate-Tween 40: Thermodynamic stability of the micelle and the cooperative hydrogen bond formation with the steroid skeleton.

    PubMed

    Poša, Mihalj; Pilipović, Ana; Bećarević, Mirjana; Farkaš, Zita

    2017-01-01

    Because of the relatively small size of bile acid salts, their mixed micelles with nonionic surfactants are analysed. Of special interest are real binary mixed micelles, which are thermodynamically more stable than ideal mixed micelles. Thermodynamic stability is expressed via the excess Gibbs energy (GE) or the interaction parameter (βij). In this paper, sodium salts of cholic (C) and hyodeoxycholic acid (HD) in their mixed micelles with Tween 40 (T40) are analysed by potentiometric titration and their pKa values are determined. The examined bile acids have higher pKa values in mixed micelles with T40 than as free bile acids. The increase in the acid constant (ΔpKa) of micelle-bound C and HD correlates with the absolute value of the interaction parameter. According to the interaction parameter and the excess Gibbs energy, the mixed micelles HD-T40 are thermodynamically more stable than the mixed micelles C-T40. ΔpKa values are higher for mixed micelles with Tween 40 whose second building unit is HD rather than C. In both micellar systems, ΔpKa increases with the molar fraction of Tween 40 in the binary mixtures of surfactants with sodium salts of bile acids. This suggests that ΔpKa, like the interaction parameter, can serve as a measure of the thermodynamic stabilization of the analysed binary mixed micelles. The ΔpKa values are confirmed by determining the distribution coefficients of HD and C between a water phase containing Tween 40 at micellar concentration and 1-octanol, while varying the pH of the water phase. Conformational analyses suggest that the synergistic interactions between the building units of the analysed binary micelles originate from hydrogen bonds between steroid OH groups and the polyoxyethylene groups of T40. The relative similarity and spatial orientation of the C3 and C6 OH groups allow cooperative formation of hydrogen bonds between T40 and HD - an excess entropy contribution in the formation of the mixed micelle.
If the aqueous solution of the analysed binary surfactant mixtures contains urea at a concentration of 4 M, the interaction parameter decreases significantly in absolute value, which confirms the importance of hydrogen bonds in the synergistic interactions (urea competes for hydrogen bonds). Copyright © 2016 Elsevier Inc. All rights reserved.
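In regular solution theory, the excess Gibbs energy of a binary mixed micelle follows directly from the interaction parameter. A minimal sketch (the β value and composition below are hypothetical, not fitted to the C/HD-T40 systems):

```python
R = 8.314  # gas constant, J / (mol K)

def excess_gibbs(beta, x1, t_kelvin):
    """Excess Gibbs energy of a binary mixed micelle in regular
    solution theory: G_E = beta * x1 * (1 - x1) * R * T.
    A negative beta (attractive interaction between the two building
    units) gives G_E < 0, i.e. stabilization relative to the ideal
    mixed micelle."""
    return beta * x1 * (1.0 - x1) * R * t_kelvin

# Hypothetical: beta = -2.0 (in RT units), equimolar micelle, 25 degrees C
print(round(excess_gibbs(-2.0, 0.5, 298.15), 1))  # about -1239.4 J/mol
```

A more negative β (as reported for HD-T40 relative to C-T40) thus maps directly to a more negative GE, i.e. a more stable mixed micelle.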

  20. Spatial and temporal variability of total organic carbon along 140°W in the equatorial Pacific Ocean in 1992

    NASA Astrophysics Data System (ADS)

    Peltzer, Edward T.; Hayward, Nancy A.

    Total organic carbon (TOC) was analyzed on four transects along 140°W in 1992 using a high temperature combustion/discrete injection (HTC/DI) analyzer. For two of the transects, the analyses were conducted on board ship. Mixed-layer concentrations of organic carbon varied from about 80 μM C at either end of the transect (12°N and 12°S) to about 60 μM C at the equator. Total organic carbon concentrations decreased rapidly below the mixed layer to about 38-40 μM C at 1000 m across the transect. Little variation was observed below this depth; deep water concentrations below 2000 m were virtually uniform at about 36 μM C. Repeat measurements made on subsequent cruises consistently found the same concentrations at 1000 m or deeper, but substantial variations were observed in the mixed layer and the upper water column above 400 m depth. Linear mixing models of total organic carbon versus σθ exhibited zones of organic carbon formation and consumption. TOC was found to be inversely correlated with apparent oxygen utilization (AOU) in the region between the mixed layer and the oxygen minimum. In the mixed layer, TOC concentrations varied seasonally. Part of the variation in TOC at the equator was driven by changes in the upwelling rate in response to variations in physical forcing related to an El Niño and to the passage of tropical instability waves. TOC export fluxes, calculated from simple box models, averaged 8±4 mmol C m⁻² day⁻¹ at the equator and also varied seasonally. These export fluxes account for 50-75% of the total carbon deficit and are consistent with other estimates and model predictions.
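The box-model export flux quoted above is essentially upwelling-dilution bookkeeping. A toy version with hypothetical numbers (the study's actual box model and upwelling rates are not reproduced here):

```python
def toc_export_flux(w_upwell_m_per_day, toc_ml_uM, toc_deep_uM):
    """Toy one-box TOC export estimate: upwelled deep water dilutes the
    mixed layer, so the export flux balancing a steady state is
    w * (TOC_ml - TOC_deep). Since 1 uM C equals 1 mmol C per m^3,
    the result is in mmol C m^-2 day^-1."""
    return w_upwell_m_per_day * (toc_ml_uM - toc_deep_uM)

# Hypothetical equatorial values: 0.35 m/day upwelling, 60 vs 38 uM C
print(round(toc_export_flux(0.35, 60.0, 38.0), 1))  # 7.7, within 8 +/- 4
```

Seasonal changes in the upwelling rate w then translate directly into the seasonal variation of the export flux described in the abstract.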

  1. Worldwide impact of economic cycles on suicide trends over 3 decades: differences according to level of development. A mixed effect model study

    PubMed Central

    Perez-Rodriguez, M Mercedes; Garcia-Nieto, Rebeca; Fernandez-Navarro, Pablo; Galfalvy, Hanga; de Leon, Jose; Baca-Garcia, Enrique

    2012-01-01

    Objectives: To investigate the trends and correlations of gross domestic product (GDP) adjusted for purchasing power parity (PPP) per capita on suicide rates in 10 WHO regions during the past 30 years. Design: Analyses of databases of PPP-adjusted GDP per capita and suicide rates. Countries were grouped according to the Global Burden of Disease regional classification system. Data sources: World Bank's official website and WHO's mortality database. Statistical analyses: After graphically displaying PPP-adjusted GDP per capita and suicide rates, mixed effect models were used for representing and analysing clustered data. Results: Three different groups of countries, based on the correlation between the PPP-adjusted GDP per capita and suicide rates, are reported: (1) positive correlation: developing (lower middle and upper middle income) Latin-American and Caribbean countries, developing countries in the South East Asian Region including India, some countries in the Western Pacific Region (such as China and South Korea) and high-income Asian countries, including Japan; (2) negative correlation: high-income and developing European countries, Canada, Australia and New Zealand and (3) no correlation was found in an African country. Conclusions: PPP-adjusted GDP per capita may offer a simple measure for designing the type of preventive interventions aimed at lowering suicide rates that can be used across countries. Public health interventions might be more suitable for developing countries. In high-income countries, however, preventive measures based on the medical model might prove more useful. PMID:22586285
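A mixed effect model of the kind described (country-level random intercepts, region-specific GDP slopes) can be sketched with statsmodels on synthetic data. Variable names and effect sizes below are illustrative, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the WHO/World Bank data: suicide rate versus
# log GDP per capita, with a random intercept per country and a
# region-specific fixed slope (negative in "Europe", positive elsewhere).
rng = np.random.default_rng(1)
rows = []
for region, slope in [("Europe", -2.0), ("LatinAmerica", 1.5)]:
    for c in range(8):                       # 8 hypothetical countries
        u = rng.normal(0.0, 1.0)             # country random intercept
        for year in range(30):               # 30 years of observations
            log_gdp = 8.0 + 0.05 * year + rng.normal(0.0, 0.1)
            rate = 12.0 + u + slope * (log_gdp - 8.0) + rng.normal(0.0, 0.5)
            rows.append(dict(country=f"{region}{c}", region=region,
                             log_gdp=log_gdp, rate=rate))
df = pd.DataFrame(rows)

# Random intercept per country; GDP effect allowed to differ by region
m = smf.mixedlm("rate ~ log_gdp * region", df, groups=df["country"]).fit()
print(m.params["log_gdp"])  # close to the simulated slope of -2 in "Europe"
```

The clustered structure (repeated yearly observations nested within countries) is what makes a mixed model, rather than pooled OLS, the appropriate choice here.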

  2. Field management of asphalt concrete mixes.

    DOT National Transportation Integrated Search

    1988-01-01

    Marshall properties were determined on mixes from three different contractors each producing a 1/2-in and a 3/4-in top-sized aggregate mix. From these data, statistical analyses were made to determine differences among contractors and between mix typ...

  3. Characterization and Analyses of Valves, Feed Lines and Tanks used in Propellant Delivery Systems at NASA SSC

    NASA Technical Reports Server (NTRS)

    Ryan, Harry M.; Coote, David J.; Ahuja, Vineet; Hosangadi, Ashvin

    2006-01-01

    Accurate modeling of liquid rocket engine test processes involves assessing critical fluid mechanic and heat and mass transfer mechanisms within a cryogenic environment, and accurately modeling fluid properties such as vapor pressure and liquid and gas densities as a function of pressure and temperature. The Engineering and Science Directorate at the NASA John C. Stennis Space Center has developed and implemented such analytic models and analysis processes that have been used over a broad range of thermodynamic systems and resulted in substantial improvements in rocket propulsion testing services. In this paper, we offer an overview of the analyses techniques used to simulate pressurization and propellant fluid systems associated with the test stands at the NASA John C. Stennis Space Center. More specifically, examples of the global performance (one-dimensional) of a propellant system are provided as predicted using the Rocket Propulsion Test Analysis (RPTA) model. Computational fluid dynamic (CFD) analyses utilizing multi-element, unstructured, moving grid capability of complex cryogenic feed ducts, transient valve operation, and pressurization and mixing in propellant tanks are provided as well.

  4. Composition and structure of Pinus koraiensis mixed forest respond to spatial climatic changes.

    PubMed

    Zhang, Jingli; Zhou, Yong; Zhou, Guangsheng; Xiao, Chunwang

    2014-01-01

    Although some studies have indicated that climatic changes can affect Pinus koraiensis mixed forest, the responses of composition and structure of Pinus koraiensis mixed forests to climatic changes are unknown and the key climatic factors controlling the composition and structure of Pinus koraiensis mixed forest are uncertain. A field survey was conducted in natural Pinus koraiensis mixed forests along a latitudinal gradient and an elevational gradient in Northeast China. To build mathematical models simulating the relationships of compositional and structural attributes of the Pinus koraiensis mixed forest with climatic and non-climatic factors, stepwise linear regression analyses were performed, incorporating 14 dependent variables and the linear and quadratic components of 9 factors. All the selected new models were computed under the +2°C and +10% precipitation and +4°C and +10% precipitation scenarios. The Max Temperature of Warmest Month, Mean Temperature of Warmest Quarter and Precipitation of Wettest Month were observed to be key climatic factors controlling the stand densities and total basal areas of Pinus koraiensis mixed forest. Increased summer temperatures and precipitation strongly enhanced the stand densities and total basal areas of broadleaf trees but had little effect on Pinus koraiensis under both the +2°C and +10% precipitation scenario and the +4°C and +10% precipitation scenario. These results show that the Max Temperature of Warmest Month, Mean Temperature of Warmest Quarter and Precipitation of Wettest Month are key climatic factors shaping the composition and structure of Pinus koraiensis mixed forest. Although Pinus koraiensis would persist, the current forests dominated by Pinus koraiensis in the region would all shift and become broadleaf-dominated forests due to the dramatic increase of broadleaf trees under future global warming and increased precipitation.

  5. Therapist self-disclosure and the therapeutic alliance in the treatment of eating problems.

    PubMed

    Simonds, Laura M; Spokes, Naomi

    2017-01-01

    Evidence is mixed regarding the potential utility of therapist self-disclosure. The current study modelled relationships between perceived helpfulness of therapist self-disclosures, therapeutic alliance, patient non-disclosure, and shame in participants (n = 120; 95% women) with a history of eating problems. Serial multiple mediator analyses provided support for a putative model connecting the perceived helpfulness of therapist self-disclosures with current eating disorder symptom severity through therapeutic alliance, patient self-disclosure, and shame. The analyses presented provide support for the contention that therapist self-disclosure, if perceived as helpful, might strengthen the therapeutic alliance. A strong therapeutic alliance, in turn, has the potential to promote patient disclosure and reduce shame and eating problems.

  6. Performance of nonlinear mixed effects models in the presence of informative dropout.

    PubMed

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
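    The mechanism the simulations probe can be illustrated without any pharmacometric software: when the chance of dropping out depends on the current value of the efficacy variable, a naive completers-only summary is biased. The trajectory model, dropout hazard and parameter values below are illustrative assumptions, not the NONMEM/PsN setup used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Subjects improve linearly over visits with subject-specific slopes;
# dropout at each visit becomes more likely when the last observed
# efficacy value is poor (all values illustrative).
n_subj, n_visits = 2000, 6
slopes = rng.normal(1.0, 0.5, n_subj)
visits = np.arange(n_visits)
y = slopes[:, None] * visits[None, :] + rng.normal(0.0, 0.5, (n_subj, n_visits))

observed = np.ones((n_subj, n_visits), dtype=bool)
for t in range(1, n_visits):
    p_drop = 1.0 / (1.0 + np.exp(y[:, t - 1] - 1.0))  # logistic dropout hazard
    dropped = observed[:, t - 1] & (rng.random(n_subj) < p_drop)
    observed[dropped, t:] = False

true_mean = y[:, -1].mean()                 # mean response if fully observed
naive_mean = y[observed[:, -1], -1].mean()  # completers-only mean
print(f"true {true_mean:.2f}  completers-only {naive_mean:.2f}")
```

    Because subjects with poor efficacy are more likely to leave, the completers-only mean overstates the true mean response, which is the bias a joint dropout model is meant to remove.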

  7. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, intended as a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject-level number that indexes changes for each subject and (3) a general linear model approach with a fixed subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed random- and fixed-effects regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measures analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
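    A minimal numerical illustration of the recommended mixed random- and fixed-effects approach, using only NumPy: a shared (fixed) time slope is estimated from within-subject variation, variance components are recovered by method of moments, and subject intercepts are shrunk toward the grand mean. This is a sketch of the idea under a balanced design, not the estimation machinery of a mixed-model package.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy longitudinal data: common slope over time plus a random intercept
# per subject (all names and parameter values are illustrative).
n_subj, n_obs = 50, 6
time = np.tile(np.arange(n_obs, dtype=float), n_subj)
subj = np.repeat(np.arange(n_subj), n_obs)
u = rng.normal(0.0, 2.0, n_subj)                    # true random intercepts
y = 10.0 + u[subj] + 0.5 * time + rng.normal(0.0, 1.0, n_subj * n_obs)

# Fixed effect (common slope) from the within-subject estimator:
subj_mean_y = np.bincount(subj, weights=y) / n_obs
y_within = y - subj_mean_y[subj]
t_within = time - time.mean()                       # balanced design
slope = (t_within * y_within).sum() / (t_within**2).sum()

# Variance components by method of moments:
resid = y_within - slope * t_within
sigma2_e = (resid**2).sum() / (len(y) - n_subj - 1)              # within
sigma2_u = max(subj_mean_y.var(ddof=1) - sigma2_e / n_obs, 0.0)  # between

# BLUP-style shrinkage of subject intercepts toward the grand mean:
lam = sigma2_u / (sigma2_u + sigma2_e / n_obs)
intercepts = ((1 - lam) * subj_mean_y.mean() + lam * subj_mean_y
              - slope * time.mean())
print(f"slope {slope:.3f}  sigma2_u {sigma2_u:.2f}  sigma2_e {sigma2_e:.2f}")
```

    The recovered slope and variance components sit close to the simulated values (0.5, 4.0 and 1.0), and the shrunken intercepts track the true subject effects.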

  8. A conceptual model for analysing informal learning in online social networks for health professionals.

    PubMed

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  9. N-of-1-pathways MixEnrich: advancing precision medicine via single-subject analysis in discovering dynamic changes of transcriptomes.

    PubMed

    Li, Qike; Schissler, A Grant; Gardeux, Vincent; Achour, Ikbel; Kenost, Colleen; Berghout, Joanne; Li, Haiquan; Zhang, Hao Helen; Lussier, Yves A

    2017-05-24

    Transcriptome analytic tools are commonly used across patient cohorts to develop drugs and predict clinical outcomes. However, as precision medicine pursues more accurate and individualized treatment decisions, these methods are not designed for single-patient transcriptome analyses. We previously developed and validated the N-of-1-pathways framework using two methods, Wilcoxon and Mahalanobis Distance (MD), for personal transcriptome analysis derived from a pair of samples of a single patient. Although both methods uncover concordantly dysregulated pathways, they are not designed to detect dysregulated pathways containing both up- and down-regulated genes (bidirectional dysregulation), which are ubiquitous in biological systems. We developed N-of-1-pathways MixEnrich, a mixture model followed by a gene set enrichment test, to uncover bidirectionally and concordantly dysregulated pathways one patient at a time. We assess its accuracy in a comprehensive simulation study and in an RNA-Seq data analysis of head and neck squamous cell carcinomas (HNSCCs). In the presence of bidirectionally dysregulated genes in the pathway or of high background noise, MixEnrich substantially outperforms previous single-subject transcriptome analysis methods, both in the simulation study and in the HNSCC data analysis (ROC curves; higher true positive rates; lower false positive rates). Bidirectional and concordant dysregulated pathways uncovered by MixEnrich in each patient largely overlapped with the quasi-gold standard compared with other single-subject and cohort-based transcriptome analyses. The greater performance of MixEnrich presents an advantage over previous methods in meeting the promise of accurate personal transcriptome analysis to support precision medicine at the point of care.
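    The core idea, flagging genes by the magnitude of their paired expression change so that up- and down-regulation count equally and then testing a pathway for enrichment of flagged genes, can be sketched with a fixed threshold standing in for the mixture model. The actual MixEnrich fits a mixture distribution; the data, threshold and pathway below are invented for illustration.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(3)

# One patient's paired log fold-changes: mostly noise, plus one pathway
# whose genes are half up- and half down-regulated (bidirectional).
n_genes = 5000
logfc = rng.normal(0.0, 0.4, n_genes)
pathway = np.zeros(n_genes, dtype=bool)
pathway[:100] = True
logfc[:50] += 2.0          # up-regulated half
logfc[50:100] -= 2.0       # down-regulated half

# Direction-agnostic call (a threshold stands in for the mixture fit):
dysregulated = np.abs(logfc) > 1.0

# A signed average would miss this pathway entirely:
print(f"pathway mean logFC: {logfc[pathway].mean():.2f}")

# Enrichment of dysregulated genes in the pathway (one-sided Fisher test):
table = [
    [int((dysregulated & pathway).sum()), int((dysregulated & ~pathway).sum())],
    [int((~dysregulated & pathway).sum()), int((~dysregulated & ~pathway).sum())],
]
odds, p = fisher_exact(table, alternative="greater")
print(f"enrichment p = {p:.3g}")
```

    The pathway's signed mean change is near zero, which is why concordance-based tests miss it, while the magnitude-based enrichment test flags it decisively.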

  10. The cataract national data set electronic multi-centre audit of 55,567 operations: case-mix adjusted surgeon's outcomes for posterior capsule rupture.

    PubMed

    Sparrow, J M; Taylor, H; Qureshi, K; Smith, R; Johnston, R L

    2011-08-01

    To develop a methodology for case-mix adjustment of surgical outcomes for individual cataract surgeons using electronically collected multi-centre data conforming to the cataract national data set (CND). Routinely collected anonymised data were remotely extracted from electronic patient record (EPR) systems in 12 participating NHS Trusts undertaking cataract surgery. Following data checks and cleaning, analyses were carried out to risk adjust outcomes for posterior capsule rupture rates for individual surgeons, with stratification by surgical grade. A total of 406 surgeons from 12 NHS Trusts submitted data on 55,567 cataract operations between November 2001 and July 2006 (86% from January 2004). In all, 283 surgeons contributed data on >25 cases, providing 54,319 operations suitable for detailed analysis. Case-mix adjusted results of individual surgeons are presented as funnel plots for all surgeons together, and separately for three different grades of surgeon. Plots include 95 and 99.8% confidence limits around the case-mix adjusted outcomes for detection of surgical outliers. Routinely collected electronic data conforming to the CND provides sufficient detail for case-mix adjustment of cataract surgical outcomes. The validation of these risk indicators should be carried out using fresh data to confirm the validity of the risk model. Once validated this model should provide an equitable approach for peer-to-peer comparisons in the context of revalidation.
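    The funnel-plot limits around an overall rate can be computed with the usual normal approximation to the binomial. The overall posterior capsule rupture rate below is a made-up value for illustration; the z multipliers correspond to the 95% and 99.8% limits quoted in the abstract.

```python
import numpy as np

p0 = 0.02                          # overall event rate (illustrative)
n = np.arange(25, 1001)            # surgeon caseloads
se = np.sqrt(p0 * (1 - p0) / n)

# Two-sided 95% and 99.8% control limits (z = 1.96 and 3.09); lower
# limits are clipped at zero for small caseloads.
lower95 = np.clip(p0 - 1.96 * se, 0.0, None)
upper95 = p0 + 1.96 * se
lower998 = np.clip(p0 - 3.09 * se, 0.0, None)
upper998 = p0 + 3.09 * se

# A surgeon is flagged as an outlier if the observed rate falls outside
# the limits at their caseload, e.g. 4 ruptures in 30 cases:
rate, cases = 4 / 30, 30
flagged = rate > upper998[cases - 25]
print(f"upper 99.8% limit at n=30: {upper998[cases - 25]:.3f}, flagged: {flagged}")
```

    The limits narrow as caseload grows, which is the characteristic funnel shape that lets low-volume surgeons be compared fairly against high-volume ones.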

  11. Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses

    PubMed Central

    Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah

    2015-01-01

    Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481
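    The notion of combining panels rather than picking a single 'best guess' can be illustrated in a few lines: approximate a target population's local correlation (LD) matrix as a convex combination of two reference-panel matrices, choosing the weight that minimises Frobenius error. Adapt-Mix's actual objective and estimator differ; the matrices and mixing weight below are invented.

```python
import numpy as np

# Two stylised reference-panel LD matrices with different decay rates,
# and an "admixed" target built from a 60/40 mixture plus noise.
idx = np.arange(12)
panel_a = 0.8 ** np.abs(idx[:, None] - idx[None, :])   # strong local LD
panel_b = 0.3 ** np.abs(idx[:, None] - idx[None, :])   # weak local LD
rng = np.random.default_rng(5)
target = 0.6 * panel_a + 0.4 * panel_b + rng.normal(0, 0.01, (12, 12))

# Grid search for the convex-combination weight with least Frobenius error.
weights = np.linspace(0, 1, 101)
errors = [np.linalg.norm(w * panel_a + (1 - w) * panel_b - target)
          for w in weights]
w_hat = weights[int(np.argmin(errors))]
print(w_hat)
```

    The recovered weight sits at the simulated mixing proportion, which is the sense in which a weighted combination can out-perform either panel alone for an admixed sample.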

  12. Fuel/oxidizer-rich high-pressure preburners. [staged-combustion rocket engine

    NASA Technical Reports Server (NTRS)

    Schoenman, L.

    1981-01-01

    The analyses, designs, fabrication, and cold-flow acceptance testing of LOX/RP-1 preburner components required for a high-pressure staged-combustion rocket engine are discussed. Separate designs of injectors, combustion chambers, turbine simulators, and hot-gas mixing devices are provided for fuel-rich and oxidizer-rich operation. The fuel-rich design addresses the problem of non-equilibrium LOX/RP-1 combustion. The development and use of a pseudo-kinetic combustion model for predicting operating efficiency, physical properties of the combustion products, and the potential for generating solid carbon is presented. The oxygen-rich design addresses the design criteria for the prevention of metal ignition. This is accomplished by the selection of materials and the generation of well-mixed gases. The combining of unique propellant injector element designs with secondary mixing devices is predicted to be the best approach.

  13. Estimation and interpretation of genetic effects with epistasis using the NOIA model.

    PubMed

    Alvarez-Castro, José M; Carlborg, Orjan; Rönnegård, Lars

    2012-01-01

    We introduce this communication with a brief outline of the historical landmarks in genetic modeling, especially concerning epistasis. Then, we present methods for the use of genetic modeling in QTL analyses. In particular, we summarize the essential expressions of the natural and orthogonal interactions (NOIA) model of genetic effects. Our motivation for reviewing that theory here is twofold. First, this review presents a digest of the expressions for the application of the NOIA model, which are often mixed with intermediate and additional formulae in the original articles. Second, we make the required theory handy for the reader to relate the genetic concepts to the particular mathematical expressions underlying them. We illustrate those relations by providing graphical interpretations and a diagram summarizing the key features for applying genetic modeling with epistasis in comprehensive QTL analyses. Finally, we briefly review some examples of the application of NOIA to real data and the way it improves the interpretability of the results.

  14. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise because the within-study correlation and between-study heterogeneity must be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, requiring no transformation of probabilities or link function, having a closed-form likelihood, and placing no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference with current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model in simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model whether or not the true model is Sarmanov beta-binomial, and that it is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are presented for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
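    One attraction noted above is that the beta-binomial likelihood is available in closed form, so each margin can be fitted directly. The sketch below simulates one margin (study-specific sensitivities) and refits it by maximum likelihood with SciPy; the parameter values and study sizes are invented, and this covers only one marginal piece of the full composite-likelihood model.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Study-specific sensitivities drawn from a Beta distribution induce
# beta-binomial true-positive counts (Beta(18, 2): mean sensitivity 0.90).
a_true, b_true = 18.0, 2.0
n_per_study = rng.integers(40, 200, 30)
events = betabinom.rvs(n_per_study, a_true, b_true, random_state=rng)

def nll(log_ab):
    # Log-parameterisation keeps the shape parameters positive; the
    # closed-form pmf makes the likelihood cheap to evaluate.
    a, b = np.exp(log_ab)
    return -betabinom.logpmf(events, n_per_study, a, b).sum()

fit = minimize(nll, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"pooled sensitivity estimate: {a_hat / (a_hat + b_hat):.3f}")
```

    The pooled estimate a/(a+b) recovers the simulated mean sensitivity; in the full model the same construction is applied to each outcome's margin without specifying their joint distribution.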

  15. Real medical benefit assessed by indirect comparison.

    PubMed

    Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François

    2009-01-01

    Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for reimbursement approval to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models and Bayesian network analyses pool the results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), combines an MTC with an effect model based on expert opinion. It is intended to translate the efficacy observed in trials into the effectiveness expected in day-to-day clinical practice in France.

  16. TANK48 CFD MODELING ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.

    2011-05-17

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank to ensure uniformity of the discharge stream. Mixing is accomplished with one to four dual-nozzle slurry pumps located within the tank liquid. For this work, a Tank 48 simulation model with a maximum of four slurry pumps in operation has been developed to estimate flow patterns for efficient solid mixing. The modeling calculations were performed using two approaches. One is a single-phase Computational Fluid Dynamics (CFD) model to evaluate the flow patterns and qualitative mixing behaviors for a range of modeling conditions, since the model was previously benchmarked against test results. The other is a two-phase CFD model to estimate solid concentrations quantitatively by solving the Eulerian governing equations for the continuous fluid and discrete solid phases over the entire fluid domain of Tank 48. The two-phase results should be considered preliminary scoping calculations, since that model has not yet been validated against test results. A series of sensitivity calculations for different numbers of pumps and operating conditions has been performed to provide operational guidance for solids suspension and mixing in the tank. In the analysis, the pump was assumed to be stationary. Major solid obstructions, including the pump housing, the pump columns, and the 82 inch central support column, were included. Steady-state, three-dimensional analyses with a two-equation turbulence model were performed with FLUENT™ for the single-phase approach and CFX for the two-phase approach. Recommended operational guidance was developed assuming that local fluid velocity can be used as a measure of sludge suspension and spatial mixing under the single-phase tank model. For quantitative analysis, a two-phase fluid-solid model was developed for the same modeling conditions as the single-phase model. The modeling results show that the flow patterns driven by four-pump operation satisfy the solid suspension requirement, and the average solid concentration at the plane of the transfer pump inlet is about 12% higher than the tank average concentration for the 70 inch tank level and about the same as the tank average value for the 29 inch liquid level. When one of the four pumps is not operated, the flow patterns still satisfy the minimum suspension velocity criterion. However, the solid concentration near the tank bottom increases by about 30%, although the average solid concentrations near the transfer pump inlet remain about the same as the four-pump baseline results. The flow pattern results show that although the two-pump case satisfies the minimum velocity requirement to suspend the sludge particles, it provides only marginal mixing for the heavier or larger insoluble materials such as MST and KTPB particles. The results demonstrated that when more than one jet is aimed at the same position of the mixing tank domain, inefficient flow patterns result from highly localized momentum dissipation, creating an inactive suspension zone. Thus, after completion of the indexed solids suspension, pump rotations are recommended to avoid producing nonuniform flow patterns. It is noted that when the tank liquid level is reduced from the highest level of 70 inches to the minimum level of 29 inches for a given number of operating pumps, the solid mixing efficiency improves, since the ratio of pump power to mixing volume becomes larger. These results are consistent with the literature.

  17. Influences of organic carbon speciation on hyporheic corridor biogeochemistry and microbial ecology.

    PubMed

    Stegen, James C; Johnson, Tim; Fredrickson, James K; Wilkins, Michael J; Konopka, Allan E; Nelson, William C; Arntzen, Evan V; Chrisler, William B; Chu, Rosalie K; Fansler, Sarah J; Graham, Emily B; Kennedy, David W; Resch, Charles T; Tfaily, Malak; Zachara, John

    2018-02-08

    The hyporheic corridor (HC) encompasses the river-groundwater continuum, where the mixing of groundwater (GW) with river water (RW) in the HC can stimulate biogeochemical activity. Here we propose a novel thermodynamic mechanism underlying this phenomenon and reveal broader impacts on dissolved organic carbon (DOC) and microbial ecology. We show that thermodynamically favorable DOC accumulates in GW despite lower DOC concentration, and that RW contains thermodynamically less-favorable DOC, but at higher concentrations. This indicates that GW DOC is protected from microbial oxidation by low total energy within the DOC pool, whereas RW DOC is protected by lower thermodynamic favorability of carbon species. We propose that GW-RW mixing overcomes these protections and stimulates respiration. Mixing models coupled with geophysical and molecular analyses further reveal tipping points in spatiotemporal dynamics of DOC and indicate important hydrology-biochemistry-microbial feedbacks. Previously unrecognized thermodynamic mechanisms regulated by GW-RW mixing may therefore strongly influence biogeochemical and microbial dynamics in riverine ecosystems.

  18. AST Critical Propulsion and Noise Reduction Technologies for Future Commercial Subsonic Engines: Separate-Flow Exhaust System Noise Reduction Concept Evaluation

    NASA Technical Reports Server (NTRS)

    Janardan, B. A.; Hoff, G. E.; Barter, J. W.; Martens, S.; Gliebe, P. R.; Mengle, V.; Dalton, W. N.; Saiyed, Naseem (Technical Monitor)

    2000-01-01

    This report describes the work performed by General Electric Aircraft Engines (GEAE) and Allison Engine Company (AEC) on NASA Contract NAS3-27720 AoI 14.3. The objective of this contract was to generate quality jet noise acoustic data for separate-flow nozzle models and to design and verify new jet-noise-reduction concepts over a range of simulated engine cycles and flight conditions. Five baseline axisymmetric separate-flow nozzle models having bypass ratios of five and eight with internal and external plugs and 11 different mixing-enhancer model nozzles (including chevrons, vortex-generator doublets, and a tongue mixer) were designed and tested in model scale. Using available core and fan nozzle hardware in various combinations, 28 GEAE/AEC separate-flow nozzle/mixing-enhancer configurations were acoustically evaluated in the NASA Glenn Research Center Aeroacoustic and Propulsion Laboratory. This report describes model nozzle features, facility and data acquisition/reduction procedures, the test matrix, and measured acoustic data analyses. A number of tested core and fan mixing enhancer devices and combinations of devices gave significant jet noise reduction relative to separate-flow baseline nozzles. Inward-flip and alternating-flip core chevrons combined with a straight-chevron fan nozzle exceeded the NASA stretch goal of 3 EPNdB jet noise reduction at typical sideline certification conditions.

  19. The Causes and Evolutionary Consequences of Mixed Singing in Two Hybridizing Songbird Species (Luscinia spp.)

    PubMed Central

    Vokurková, Jana; Petrusková, Tereza; Reifová, Radka; Kozman, Alexandra; Mořkovský, Libor; Kipper, Silke; Weiss, Michael; Reif, Jiří; Dolata, Paweł T.; Petrusek, Adam

    2013-01-01

    Bird song plays an important role in the establishment and maintenance of prezygotic reproductive barriers. When two closely related species come into secondary contact, song convergence caused by acquisition of heterospecific songs into the birds’ repertoires is often observed. The proximate mechanisms responsible for such mixed singing, and its effect on the speciation process, are poorly understood. We used a combination of genetic and bioacoustic analyses to test whether mixed singing observed in the secondary contact zone of two passerine birds, the Thrush Nightingale (Luscinia luscinia) and the Common Nightingale (L. megarhynchos), is caused by introgressive hybridization. We analysed song recordings of both species from allopatric and sympatric populations together with genotype data from one mitochondrial and seven nuclear loci. Semi-automated comparisons of our recordings with an extensive catalogue of Common Nightingale song types confirmed that most of the analysed sympatric Thrush Nightingale males were ‘mixed singers’ that use heterospecific song types in their repertoires. None of these ‘mixed singers’ possessed any alleles introgressed from the Common Nightingale, suggesting that they were not backcross hybrids. We also analysed songs of five individuals with intermediate phenotype, which were identified as F1 hybrids between a Thrush Nightingale female and a Common Nightingale male by genetic analysis. Songs of three of these hybrids corresponded to the paternal species (Common Nightingale) but the remaining two sang a mixed song. Our results suggest that although hybridization might increase the tendency for learning songs from both parental species, interspecific cultural transmission is the major proximate mechanism explaining the occurrence of mixed singers among the sympatric Thrush Nightingales. We also provide evidence that mixed singing does not substantially increase the rate of interspecific hybridization and discuss the possible adaptive value of this phenomenon in nightingales. PMID:23577089

  20. Steady state RANS simulations of temperature fluctuations in single phase turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kickhofel, J.; Fokken, J.; Kapulla, R.

    2012-07-01

    Single phase turbulent mixing in nuclear power plant circuits where a strong temperature gradient is present is known to precipitate pipe failure due to thermal fatigue. Experiments in a square mixing channel offer the opportunity to study the phenomenon under simple and easily reproducible boundary conditions. Measurements of this kind have been performed extensively at the Paul Scherrer Institute in Switzerland with a high density of instrumentation in the Generic Mixing Experiment (GEMIX). As a fundamental mixing phenomena study closely related to the thermal fatigue problem, the experimental results from GEMIX are valuable for the validation of CFD codes striving to accurately simulate both the temperature and velocity fields in single phase turbulent mixing. In the experiments two iso-kinetic streams meet at a shallow angle of 3 degrees and mix in a straight channel of square cross-section under various degrees of density, temperature, and viscosity stratification, for Reynolds numbers from 5×10³ to 1×10⁵. Conductivity measurements, using wire-mesh and wall sensors, as well as optical measurements, using particle image velocimetry, were conducted with high temporal and spatial resolutions (up to 2.5 kHz and 1 mm in the case of the wire-mesh sensor) in the mixing zone, downstream of a splitter plate. The present paper communicates the results of RANS modeling of selected GEMIX tests. Steady-state CFD calculations using a RANS turbulence model represent an inexpensive method for analyzing large and complex components in commercial nuclear reactors, such as the downcomer and reactor pressure vessel heads. Crucial to real-world applicability, however, is the ability to model turbulent heat fluctuations in the flow; the Turbulent Heat Flux Transport model developed by ANSYS CFX is capable, by implementation of a transport equation for turbulent heat fluxes, of readily modeling these values. Furthermore, the closure of the turbulent heat flux transport equation entails a transport equation for the variance of the enthalpy. It is therefore possible to compare the modeled fluctuations of the liquid temperature directly with the scalar fluctuations recorded experimentally with the wire-mesh. Combined with a working Turbulent Heat Flux Transport model, complex mixing problems in large geometries could be better understood. We aim for the validation of Reynolds-stress-based RANS simulations extended by the Turbulent Heat Flux Transport model by modeling the GEMIX experiments in detail. Numerical modeling has been performed using both BSL and SSG Reynolds Stress Models in a test matrix comprising experimental trials at the GEMIX facility. We expand on the turbulent mixing RANS CFD results of Manera (2009) in a few ways. In the GEMIX facility we introduce density stratification in the flow while removing the characteristic large-scale vorticity encountered in T-junctions, and therefore find better conditions to check the diffusive conditions in the model. Furthermore, we study the performance of the model in a very different, simpler scalar fluctuation spectrum. The paper discusses the performance of the model regarding the dissipation of the turbulent kinetic energy and the dissipation of the enthalpy variance. A novel element is the analysis of cases with density stratification. (authors)

  1. Recognition of facial expressions of mixed emotions in school-age children exposed to terrorism.

    PubMed

    Scrimin, Sara; Moscardino, Ughetta; Capello, Fabia; Altoè, Gianmarco; Axia, Giovanna

    2009-09-01

    This exploratory study aims at investigating the effects of terrorism on children's ability to recognize emotions. A sample of 101 exposed and 102 nonexposed children (mean age = 11 years), balanced for age and gender, were assessed 20 months after a terrorist attack in Beslan, Russia. Two trials controlled for children's ability to match a facial emotional stimulus with an emotional label and their ability to match an emotional label with an emotional context. The experimental trial evaluated the relation between exposure to terrorism and children's free labeling of mixed emotion facial stimuli created by morphing between 2 prototypical emotions. Repeated measures analyses of covariance revealed that exposed children correctly recognized pure emotions. Four log-linear models were performed to explore the association between exposure group and category of answer given in response to different mixed emotion facial stimuli. Model parameters indicated that, compared with nonexposed children, exposed children (a) labeled facial expressions containing anger and sadness significantly more often than expected as anger, and (b) produced fewer correct answers in response to stimuli containing sadness as a target emotion.

  2. The extent of mixing in stellar interiors: the open clusters Collinder 261 and Melotte 66

    NASA Astrophysics Data System (ADS)

    Drazdauskas, Arnas; Tautvaišienė, Gražina; Randich, Sofia; Bragaglia, Angela; Mikolaitis, Šarūnas; Janulis, Rimvydas

    2016-05-01

    Context. Determining carbon and nitrogen abundances in red giants provides useful diagnostics to test mixing processes in stellar atmospheres. Aims: Our main aim is to determine carbon-to-nitrogen and carbon isotope ratios for evolved giants in the open clusters Collinder 261 and Melotte 66 and to compare the results with predictions of theoretical models. Methods: High-resolution spectra were analysed using a differential model atmosphere method. Abundances of carbon were derived using the C2 Swan (0, 1) band head at 5635.5 Å. The wavelength interval 7940-8130 Å, which contains CN features, was analysed to determine nitrogen abundances and carbon isotope ratios. The oxygen abundances were determined from the [O I] line at 6300 Å. Results: The mean values of the elemental abundances in Collinder 261, as determined from seven stars, are: [C/Fe] = -0.23 ± 0.02 (s.d.), [N/Fe] = 0.18 ± 0.09, [O/Fe] = -0.03 ± 0.07. The mean 12C/13C ratio is 11 ± 2 for the four red clump stars and 18 for one star above the clump. The mean C/N ratios are 1.60 ± 0.30 and 1.74, respectively. For the five stars in Melotte 66 we obtained: [C/Fe] = -0.21 ± 0.07 (s.d.), [N/Fe] = 0.17 ± 0.07, [O/Fe] = 0.16 ± 0.04. The 12C/13C and C/N ratios are 8 ± 2 and 1.67 ± 0.21, respectively. Conclusions: The 12C/13C and C/N ratios of stars in the investigated open clusters were compared with the ratios predicted by stellar evolution models. The mean values of the 12C/13C ratios in Collinder 261 and Melotte 66 agree well with models of thermohaline-induced extra-mixing for the corresponding stellar turn-off masses of about 1.1-1.2 M⊙. The mean C/N ratios are not decreased as much as predicted by the model in which the thermohaline- and rotation-induced extra-mixing act together. Based on observations collected at ESO telescopes under Guaranteed Time Observation programmes 071.D-0065, 072.D-0019, and 076.D-0220.

  3. Nonlinear mixed effects modelling approach in investigating phenobarbital pharmacokinetic interactions in epileptic patients.

    PubMed

    Vučićević, Katarina; Jovanović, Marija; Golubović, Bojana; Kovačević, Sandra Vezmar; Miljković, Branislava; Martinović, Žarko; Prostran, Milica

    2015-02-01

    The present study aimed to establish a population pharmacokinetic model for phenobarbital (PB), to examine and quantify the magnitude of PB interactions with other antiepileptic drugs used concomitantly, and to demonstrate its use for individualization of the PB dosing regimen in adult epileptic patients. In total, 205 PB concentrations were obtained during routine clinical monitoring of 136 adult epilepsy patients. PB steady-state concentrations were measured by homogeneous enzyme immunoassay. Nonlinear mixed effects modelling (NONMEM) was applied for data analysis and evaluation of the final model. According to the final population model, a significant determinant of apparent PB clearance (CL/F) was the daily dose of concomitantly given valproic acid (VPA). The typical value of PB CL/F for the final model was estimated at 0.314 l/h. Based on the final model, co-therapy with a usual VPA dose of 1000 mg/day resulted in an average PB CL/F decrease of about 25 %, while 2000 mg/day led to an average 50 % decrease in PB CL/F. The developed population PB model may be used to estimate individual CL/F for adult epileptic patients and could be applied to individualize the dosing regimen, taking into account the dose-dependent effect of concomitantly given VPA.
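    The reported dose-dependent interaction lends itself to a quick numerical check. A minimal sketch, assuming the fractional drop in CL/F scales linearly with VPA daily dose, consistent with the reported ~25 % decrease at 1000 mg/day and ~50 % at 2000 mg/day (the published covariate model may use a different functional form):

```python
def pb_clearance(vpa_dose_mg: float, base_cl: float = 0.314,
                 slope_per_mg: float = 0.00025) -> float:
    """Apparent phenobarbital clearance CL/F (l/h) under VPA co-therapy.

    Illustrative linear-in-dose form only, valid within the observed
    dose range; not the paper's published NONMEM covariate model.
    """
    return base_cl * (1.0 - slope_per_mg * vpa_dose_mg)

# No VPA gives the typical CL/F; 1000 mg/day cuts it by ~25 %,
# 2000 mg/day by ~50 %, matching the abstract's figures.
cl_mono = pb_clearance(0.0)
cl_1000 = pb_clearance(1000.0)
cl_2000 = pb_clearance(2000.0)
```

Such a helper makes it easy to individualize a maintenance dose: a patient's dose rate divided by the VPA-adjusted CL/F predicts the steady-state concentration.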

  4. Breaking from binaries - using a sequential mixed methods design.

    PubMed

    Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan

    2014-03-01

    To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.

  5. Subdomains of gender-related occupational interests: do they form a cohesive bipolar M-F dimension?

    PubMed

    Lippa, Richard A

    2005-06-01

    In four studies, with a total of 1780 male and 2969 female participants, subdomains of masculine and feminine occupations were identified from sets of occupational preference items. Identified masculine subdomains included "blue-collar realistic" (e.g., carpenter), "educated realistic" (electrical engineer), and "flashy, risk-taking" (jet pilot). Feminine subdomains included "fashion-related" (fashion model), "artistic" (author), "helping" (social worker), and "children-related" (manager of childcare center). In all studies, principal components analyses of subdomain preference scales showed that masculine subdomains were bipolar opposites of feminine subdomains. This bipolar structure emerged in analyses conducted on combined-sex groups, high-school boys, high-school girls, men, women, heterosexual men, gay men, heterosexual women, and lesbian women. The results suggest that, although there are distinct masculine and feminine occupational subdomains, gender-related occupational preferences, nonetheless, form a replicable, cohesive, bipolar individual difference dimension, which is not an artifact of studying mixed-sex or mixed-sexual-orientation groups.
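    The bipolarity finding rests on principal components of subdomain preference scales. A toy illustration with hypothetical, deliberately anticorrelated masculine and feminine scores, showing how opposite-signed loadings on the leading component signal a bipolar dimension:

```python
import math

def leading_pc_2d(x, y):
    """Leading principal component of two variables, via closed-form
    eigendecomposition of the 2x2 sample covariance matrix."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) ** 2 for xi in x) / (n - 1)          # var(x)
    c = sum((yi - my) ** 2 for yi in y) / (n - 1)          # var(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    lam = 0.5 * ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b))
    v = (b, lam - a) if b != 0 else (1.0, 0.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)

# Hypothetical subdomain scores: feminine preference mirrors masculine
masc = [2.0, -1.0, 0.0, 3.0, -2.0]
fem = [-2.0, 1.0, 0.0, -3.0, 2.0]
pc = leading_pc_2d(masc, fem)
# Opposite-signed loadings on the first component: a bipolar M-F axis
```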

  6. Runtime and Pressurization Analyses of Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Field, Robert E.; Ryan, Harry M.; Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chung P.

    2007-01-01

    Multi-element unstructured CFD has been utilized at NASA SSC to carry out analyses of propellant tank systems in different modes of operation. The three regimes of interest at SSC include (a) tank chill down, (b) tank pressurization, and (c) runtime propellant draw-down and purge. While tank chill down is an important event that is best addressed with long time-scale heat transfer calculations, CFD can play a critical role in the tank pressurization and runtime modes of operation. In these situations, problems with contamination of the propellant by inclusion of the pressurant gas from the ullage cause a deterioration of the quality of the propellant delivered to the test article. CFD can be used to help quantify the mixing and propellant degradation. During tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant, including heat transfer and phase change effects, and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. It should be noted that traditional CFD modeling is inadequate for such simulations because the fluids in the tank are in a range of different sub-critical and supercritical states, and elaborate phase change and mixing rules have to be developed to accurately model the interaction between the ullage gas and the propellant. We show a typical run-time simulation of a spherical propellant tank, containing RP-1 in this case, being pressurized with room-temperature nitrogen at 540 R. 
Nitrogen, shown in blue on the right-hand side of the figures, enters the tank from the diffuser at the top of the figures and impinges on the RP-1, shown in red, while the propellant is being continuously drained at the rate of 1050 lbs/sec through a pipe at the bottom of the tank. The sequence of frames in Figure 1 shows the resultant velocity fields and mixing between nitrogen and RP-1 in a cross-section of the tank at different times. A vortex is seen to form in the incoming nitrogen stream that tends to entrain propellant, mixing it with the pressurant gas. The RP-1 mass fraction contours in Figure 1 are also indicative of the level of mixing and contamination of the propellant. The simulation is used to track the propagation of the pure propellant front as it is drawn toward the exit with the evolution of the mixing processes in the tank. The CFD simulation modeled a total of 10 seconds of run time. As is seen from Figure 1d, after 5.65 seconds the propellant front is nearing the drain pipe, especially near the center of the tank. Behind this pure propellant front is a mixed fluid of compromised quality that would require the test to end when it reaches the exit pipe. Such unsteady simulations provide an estimate of the time that a high-quality propellant supply to the test article can be guaranteed at the modeled mass flow rate. In the final paper, we will discuss simulations of the LOX and propellant tanks at NASA SSC being pressurized by an inert ullage. Detailed comparisons will be made between the CFD simulations and lower order models as well as with test data. Conditions leading to cryo collapse in the tank will also be identified.
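    The pressure-collapse mechanism described above can be bounded with a simple ideal-gas estimate: at fixed ullage volume and gas mass, pressure falls in proportion to absolute temperature as the warm pressurant cools against the propellant. The numbers below are hypothetical, and this estimate ignores the phase change, mixing, and continued pressurant inflow that the CFD resolves:

```python
def ullage_pressure_after_cooling(p1_psia: float, t1_r: float, t2_r: float) -> float:
    """Ideal-gas estimate of ullage pressure after the pressurant cools
    from t1_r to t2_r (degrees Rankine) at fixed ullage volume and gas
    mass: P2 = P1 * (T2 / T1)."""
    return p1_psia * (t2_r / t1_r)

# Hypothetical case: nitrogen entering at 540 R cools to 360 R against
# a cryogenic propellant, losing a third of its pressure contribution.
p2 = ullage_pressure_after_cooling(100.0, 540.0, 360.0)
```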

  7. Carbon emission allowance allocation with a mixed mechanism in air passenger transport.

    PubMed

    Qiu, Rui; Xu, Jiuping; Zeng, Ziqiang

    2017-09-15

    Air passenger transport carbon emissions have become a great challenge for both governments and airlines because of rapid developments in the aviation industry in recent decades. In this paper, a mixed mechanism composed of a cap-and-trade mechanism and a carbon tax mechanism is developed to assist governments in allocating carbon emission allowances to airlines operating on the routes. Combining this mixed mechanism with an equilibrium strategy, a bi-level multi-objective model is proposed for the air passenger transport carbon emission allowance allocation problem, in which the government is considered as the leader and the airlines as the followers. An interactive solution approach integrating a genetic algorithm and an interactive evolutionary mechanism is designed to search for satisfactory solutions of the proposed model. A case study is then presented to show its practicality and efficiency in mitigating carbon emissions. Sensitivity analyses under different tradable and taxable levels are also conducted, which can give the government insights into the tradeoffs between lowering carbon intensity and improving airlines' operations. The computational results demonstrate that the mixed mechanism can assist greatly in carbon emission mitigation for air passenger transport and, therefore, should be established as part of air passenger transport carbon emission policies. Copyright © 2017 Elsevier Ltd. All rights reserved.
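    As a hedged sketch of how a mixed mechanism splits an airline's compliance cost between permit trading and a carbon tax; the function and all parameters are hypothetical illustrations, not the paper's bi-level formulation:

```python
def carbon_cost(emissions_t: float, allowance_t: float,
                permit_price: float, tax_rate: float,
                taxed_share: float) -> float:
    """Illustrative airline carbon cost under a mixed mechanism:
    emissions beyond the free allowance are bought on the permit
    market, while a taxed_share of total emissions is also subject
    to a carbon tax. Hypothetical formulation for intuition only."""
    traded = max(emissions_t - allowance_t, 0.0)  # shortfall bought on market
    return traded * permit_price + taxed_share * emissions_t * tax_rate

# 120,000 t emitted against a 100,000 t allowance: 20,000 t traded,
# half of all emissions taxed (hypothetical prices).
cost = carbon_cost(emissions_t=120_000.0, allowance_t=100_000.0,
                   permit_price=8.0, tax_rate=3.0, taxed_share=0.5)
```

Varying `permit_price` and `taxed_share` reproduces, in miniature, the kind of tradable/taxable sensitivity analysis the paper reports.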

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streets, W.E.

    As the need for rapid and more accurate determinations of gamma-emitting radionuclides in environmental and mixed waste samples grows, there is continued interest in the development of theoretical tools to eliminate the need for some laboratory analyses and to enhance the quality of information from necessary analyses. In gamma spectrometry the use of theoretical self-absorption coefficients (SACs) can eliminate the need to determine the SAC empirically by counting a known source through each sample. This empirical approach requires extra counting time and introduces another source of counting error, which must be included in the calculation of results. The empirical determination of SACs is routinely used when the nuclides of interest are specified; theoretical determination of the SAC can enhance the information for the analysis of true unknowns, where there may be no prior knowledge about radionuclides present in a sample. Determination of an exact SAC does require knowledge about the total composition of a sample. In support of the Department of Energy's (DOE) Environmental Survey Program, the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory developed theoretical self-absorption models to estimate SACs for the determination of non-specified radionuclides in samples of unknown, widely-varying compositions. Subsequently, another SAC model, in a different counting geometry and for specified nuclides, was developed for another application. These two models are now used routinely for the determination of gamma-emitting radionuclides in a wide variety of environmental and mixed waste samples.
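    One standard theoretical self-absorption correction has a closed form for a uniform slab source counted face-on: the attenuation average (1 - exp(-μt)) / (μt). The sketch below illustrates the kind of calculation involved; it is not the ACL's actual model, and the input values are hypothetical:

```python
import math

def self_absorption_factor(mu_rho_cm2_g: float, density_g_cm3: float,
                           thickness_cm: float) -> float:
    """Theoretical self-absorption correction for a uniform slab sample:
    (1 - exp(-mu*t)) / (mu*t), where mu*t combines the mass attenuation
    coefficient (cm^2/g), density (g/cm^3), and thickness (cm).
    Illustrative only; requires knowing the sample composition to pick
    the attenuation coefficient at each gamma energy."""
    mu_t = mu_rho_cm2_g * density_g_cm3 * thickness_cm
    if mu_t == 0.0:
        return 1.0  # no attenuation at all
    return (1.0 - math.exp(-mu_t)) / mu_t

# A thin, light sample needs almost no correction; a thick, dense one
# loses more than half its counts to self-absorption.
f_thin = self_absorption_factor(0.08, 1.0, 0.1)
f_thick = self_absorption_factor(0.08, 5.0, 5.0)
```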

  9. Closing the Seasonal Ocean Surface Temperature Balance in the Eastern Tropical Oceans from Remote Sensing and Model Reanalyses

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Clayson, Carol A.

    2012-01-01

    The Eastern tropical ocean basins are regions of significant atmosphere-ocean interaction and are important to variability across subseasonal to decadal time scales. The numerous physical processes at play in these areas strain the abilities of coupled general circulation models to accurately reproduce observed upper ocean variability. Furthermore, limitations in the observing system of important terms in the surface temperature balance (e.g., turbulent and radiative heat fluxes, advection) introduce uncertainty into the analyses of processes controlling sea surface temperature variability. This study presents recent efforts to close the surface temperature balance through estimation of the terms in the mixed layer temperature budget using state-of-the-art remotely sensed and model-reanalysis derived products. A set of twelve net heat flux estimates constructed using combinations of radiative and turbulent heat flux products - including GEWEX-SRB, ISCCP-SRF, OAFlux, SeaFlux, among several others - are used with estimates of oceanic advection, entrainment, and mixed layer depth variability to investigate the seasonal variability of ocean surface temperatures. Particular emphasis is placed on how well the upper ocean temperature balance is, or is not, closed on these scales using the current generation of observational and model reanalysis products. That is, the magnitudes and spatial variability of residual imbalances are addressed. These residuals are placed into context within the current uncertainties of the surface net heat fluxes and the role of the mixed layer depth variability in scaling the impact of those uncertainties, particularly in the shallow mixed layers of the Eastern tropical ocean basins.
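    The scaling role of mixed layer depth noted above follows directly from the surface-flux term of the temperature budget, dT/dt = Q_net / (ρ c_p h). A minimal sketch with typical seawater constants (the flux and depth values below are hypothetical):

```python
def mixed_layer_heating_rate(q_net_wm2: float, mld_m: float,
                             rho: float = 1025.0, cp: float = 3985.0) -> float:
    """Surface-flux term of the mixed layer temperature budget,
    dT/dt = Q_net / (rho * cp * h), returned in kelvin per day.
    Advection and entrainment terms are omitted in this sketch."""
    seconds_per_day = 86400.0
    return q_net_wm2 / (rho * cp * mld_m) * seconds_per_day

# The same 20 W/m^2 net-flux uncertainty matters ten times more over a
# 10 m mixed layer than over a 100 m one, which is why residual
# imbalances loom large in the shallow Eastern tropical basins.
shallow = mixed_layer_heating_rate(20.0, 10.0)
deep = mixed_layer_heating_rate(20.0, 100.0)
```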

  10. Analyses of turbulent flow fields and aerosol dynamics of diesel engine exhaust inside two dilution sampling tunnels using the CTAG model.

    PubMed

    Wang, Yan Jason; Yang, Bo; Lipsky, Eric M; Robinson, Allen L; Zhang, K Max

    2013-01-15

    Experimental results from laboratory emission testing have indicated that particulate emission measurements are sensitive to the dilution process of exhaust using fabricated dilution systems. In this paper, we first categorize the dilution parameters into two groups: (1) aerodynamics (e.g., mixing types, mixing enhancers, dilution ratios, residence time); and (2) mixture properties (e.g., temperature, relative humidity, particle size distributions of both raw exhaust and dilution gas). Then we employ the Comprehensive Turbulent Aerosol Dynamics and Gas Chemistry (CTAG) model to investigate the effects of those parameters on a set of particulate emission measurements comparing two dilution tunnels, i.e., a T-mixing lab dilution tunnel and a portable field dilution tunnel with a type of coaxial mixing. The turbulent flow fields and aerosol dynamics of particles are simulated inside two dilution tunnels. Particle size distributions under various dilution conditions predicted by CTAG are evaluated against the experimental data. It is found that in the area adjacent to the injection of exhaust, turbulence plays a crucial role in mixing the exhaust with the dilution air, and the strength of nucleation dominates the level of particle number concentrations. Further downstream, nucleation terminates and the growth of particles by condensation and coagulation continues. Sensitivity studies reveal that a potential unifying parameter for aerodynamics, i.e., the dilution rate of exhaust, plays an important role in new particle formation. The T-mixing lab tunnel tends to favor the nucleation due to a larger dilution rate of the exhaust than the coaxial mixing field tunnel. Our study indicates that numerical simulation tools can be potentially utilized to develop strategies to reduce the uncertainties associated with dilution samplings of emission sources.
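    The bulk effect of a dilution ratio can be sketched with a zero-dimensional mass balance; the study's point is that the local dilution *rate* and turbulence, not just this bulk ratio, control nucleation, so the function below is only a first-order check (values hypothetical):

```python
def diluted_concentration(c_exhaust: float, c_dilution_air: float,
                          dilution_ratio: float) -> float:
    """Bulk concentration after mixing exhaust with dilution air, where
    dilution_ratio = total flow / exhaust flow. A zero-dimensional
    mass balance that ignores nucleation, condensation, and coagulation
    inside the tunnel."""
    if dilution_ratio < 1.0:
        raise ValueError("dilution ratio is total/exhaust flow, so >= 1")
    f = 1.0 / dilution_ratio  # exhaust fraction in the mixture
    return f * c_exhaust + (1.0 - f) * c_dilution_air

# A dilution ratio of 20 over clean air cuts a 1e8 #/cm^3 exhaust
# number concentration twentyfold.
c_mixed = diluted_concentration(1e8, 0.0, 20.0)
```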

  11. Three multimedia models used at hazardous and radioactive waste sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.

    1996-02-01

    Multimedia models are used commonly in the initial phases of the remediation process, where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the models; these descriptions were provided by the model developers.

  12. Growth of mixed cultures on mixtures of substitutable substrates: the operating diagram for a structured model.

    PubMed

    Reeves, Gregory T; Narang, Atul; Pilyugin, Sergei S

    2004-01-21

    The growth of mixed microbial cultures on mixtures of substrates is a problem of fundamental biological interest. In the last two decades, several unstructured models of mixed-substrate growth have been studied. It is well known, however, that the growth patterns in mixed-substrate environments are dictated by the enzymes that catalyse the transport of substrates into the cell. We have shown previously that a model taking due account of transport enzymes captures and explains all the observed patterns of growth of a single species on two substitutable substrates (J. Theor. Biol. 190 (1998) 241). Here, we extend the model to study the steady states of growth of two species on two substitutable substrates. The model is analysed to determine the conditions for existence and stability of the various steady states. Simulations are performed to determine the flow rates and feed concentrations at which both species coexist. We show that if the interaction between the two species is purely competitive, then at any given flow rate, coexistence is possible only if the ratio of the two feed concentrations lies within a certain interval; excessive supply of either one of the two substrates leads to annihilation of one of the species. This result simplifies the construction of the operating diagram for purely competing species. This is because the two-dimensional surface that bounds the flow rates and feed concentrations at which both species coexist has a particularly simple geometry: It is completely determined by only two coordinates, the flow rate and the ratio of the two feed concentrations. We also study commensalistic interactions between the two species by assuming that one of the species excretes a product that can support the growth of the other species. We show that such interactions enhance the coexistence region.
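    A deliberately unstructured, single-substrate toy chemostat conveys the flavour of these steady-state analyses; it is far simpler than the paper's enzyme-structured two-substrate model, and all parameter values are hypothetical:

```python
def chemostat_step(s, x1, x2, dt, D, s_in, p1, p2):
    """One Euler step of a two-species, single-substrate chemostat with
    Monod growth and unit yield: ds/dt = D*(s_in - s) - mu1*x1 - mu2*x2,
    dxi/dt = (mui - D)*xi. A toy stand-in for the structured model."""
    mu1 = p1["mu_max"] * s / (p1["K"] + s)
    mu2 = p2["mu_max"] * s / (p2["K"] + s)
    ds = D * (s_in - s) - mu1 * x1 - mu2 * x2
    return (s + dt * ds, x1 + dt * (mu1 - D) * x1, x2 + dt * (mu2 - D) * x2)

p1 = {"mu_max": 1.0, "K": 0.2}  # better competitor at low substrate
p2 = {"mu_max": 1.0, "K": 1.0}
s, x1, x2 = 1.0, 0.1, 0.1
for _ in range(200_000):  # integrate to t = 2000 at dt = 0.01
    s, x1, x2 = chemostat_step(s, x1, x2, 0.01, D=0.5, s_in=2.0, p1=p1, p2=p2)
# Purely competitive interaction on one substrate: the species with the
# lower half-saturation constant persists; the other washes out.
```

Sweeping `D` and `s_in` over a grid of such runs is exactly how an operating diagram is assembled; with two substitutable substrates, the feed-concentration *ratio* becomes the second coordinate, as the paper shows.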

  13. Representation of memories in the cortical-hippocampal system: Results from the application of population similarity analyses

    PubMed Central

    McKenzie, Sam; Keene, Chris; Farovik, Anja; Blandon, John; Place, Ryan; Komorowski, Robert; Eichenbaum, Howard

    2016-01-01

    Here we consider the value of neural population analysis as an approach to understanding how information is represented in the hippocampus and cortical areas and how these areas might interact as a brain system to support memory. We argue that models based on sparse coding of different individual features by single neurons in these areas (e.g., place cells, grid cells) are inadequate to capture the complexity of experience represented within this system. By contrast, population analyses of neurons with denser coding and mixed selectivity reveal new and important insights into the organization of memories. Furthermore, comparisons of the organization of information in interconnected areas suggest a model of hippocampal-cortical interactions that mediates the fundamental features of memory. PMID:26748022
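    Population similarity analyses of the kind described typically start from a similarity measure between population rate vectors, one entry per simultaneously recorded neuron. A minimal sketch using cosine similarity, with hypothetical firing rates:

```python
import math

def population_similarity(rates_a, rates_b):
    """Cosine similarity between two population firing-rate vectors
    (one entry per neuron): the basic ingredient of population
    similarity analyses comparing how events are represented."""
    dot = sum(a * b for a, b in zip(rates_a, rates_b))
    na = math.sqrt(sum(a * a for a in rates_a))
    nb = math.sqrt(sum(b * b for b in rates_b))
    return dot / (na * nb)

# Hypothetical 4-neuron rates: events 1 and 2 recruit overlapping
# ensembles and so look similar; event 3 recruits different neurons.
event1 = [5.0, 3.0, 0.5, 0.0]
event2 = [4.0, 3.5, 0.0, 0.5]
event3 = [0.0, 0.5, 4.0, 5.0]
```

Computing this measure for every pair of events yields the similarity matrix whose structure (e.g., grouping by place, context, or reward) reveals how memories are organized across the population.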

  14. Analysing Buyers' and Sellers' Strategic Interactions in Marketplaces: An Evolutionary Game Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Vytelingum, Perukrishnen; Cliff, Dave; Jennings, Nicholas R.

    We develop a new model to analyse the strategic behaviour of buyers and sellers in market mechanisms. In particular, we wish to understand how the different strategies they adopt affect their economic efficiency in the market and to understand the impact of these choices on the overall efficiency of the marketplace. To this end, we adopt a two-population evolutionary game theoretic approach, where we consider how the behaviours of both buyers and sellers evolve in marketplaces. In so doing, we address the shortcomings of the previous state-of-the-art analytical model that assumes that buyers and sellers have to adopt the same mixed strategy in the market. Finally, we apply our model in one of the most common market mechanisms, the Continuous Double Auction, and demonstrate how it allows us to provide new insights into the strategic interactions of such trading agents.
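    Single-population replicator dynamics, the building block that the two-population buyer/seller analysis extends, can be sketched as follows; the payoffs are hypothetical placeholders, and in the full two-population setting each side carries its own state vector with payoffs computed against the other side:

```python
def replicator_step(x, payoff, dt):
    """One Euler step of replicator dynamics: strategies with
    above-average payoff grow in population share,
    dx_i/dt = x_i * (payoff_i - average_payoff)."""
    avg = sum(xi * pi for xi, pi in zip(x, payoff))
    return [xi + dt * xi * (pi - avg) for xi, pi in zip(x, payoff)]

# Buyers split between two bidding strategies with (hypothetical) fixed
# payoffs 1.0 and 1.5: the share drifts toward the better strategy.
x = [0.5, 0.5]
for _ in range(2000):  # integrate to t = 20 at dt = 0.01
    x = replicator_step(x, [1.0, 1.5], 0.01)
```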

  15. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
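    Traditional method (2) above, collapsing each subject's repeated measures to a single summary number, can be sketched with per-subject least-squares slopes; the visit data are hypothetical:

```python
def subject_slopes(data):
    """For each subject, reduce the (time, outcome) series to one number:
    the least-squares slope of outcome on time. These summaries can then
    enter a simple group analysis; the reduction discards within-subject
    detail, which is why mixed-effects models are preferred for the main
    longitudinal analysis."""
    slopes = {}
    for subject, series in data.items():
        times = [t for t, _ in series]
        ys = [y for _, y in series]
        n = len(times)
        tbar, ybar = sum(times) / n, sum(ys) / n
        num = sum((t - tbar) * (y - ybar) for t, y in series)
        den = sum((t - tbar) ** 2 for t in times)
        slopes[subject] = num / den
    return slopes

# Hypothetical scores at months 0, 6, 12: s1 declines, s2 is stable.
visits = {"s1": [(0, 30.0), (6, 28.0), (12, 26.0)],
          "s2": [(0, 29.0), (6, 29.0), (12, 29.0)]}
```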

  16. Cost-effectiveness of rivaroxaban for stroke prevention in atrial fibrillation in the Portuguese setting.

    PubMed

    Morais, João; Aguiar, Carlos; McLeod, Euan; Chatzitheofilou, Ismini; Fonseca Santos, Isabel; Pereira, Sónia

    2014-09-01

    To project the long-term cost-effectiveness of treating non-valvular atrial fibrillation (AF) patients for stroke prevention with rivaroxaban compared to warfarin in Portugal. A Markov model was used that included health and treatment states describing the management and consequences of AF and its treatment. The model's time horizon was set at a patient's lifetime and each cycle at three months. The analysis was conducted from a societal perspective and a 5% discount rate was applied to both costs and outcomes. Treatment effect data were obtained from the pivotal phase III ROCKET AF trial. The model was also populated with utility values obtained from the literature and with cost data derived from official Portuguese sources. The outcomes of the model included life-years, quality-adjusted life-years (QALYs), incremental costs, and associated incremental cost-effectiveness ratios (ICERs). Extensive sensitivity analyses were undertaken to further assess the findings of the model. As there is evidence indicating underuse and underprescription of warfarin in Portugal, an additional analysis was performed using a mixed comparator composed of no treatment, aspirin, and warfarin, which better reflects real-world prescribing in Portugal. This cost-effectiveness analysis produced an ICER of €3895/QALY for the base-case analysis (vs. warfarin) and of €6697/QALY for the real-world prescribing analysis (vs. mixed comparator). The findings were robust when tested in sensitivity analyses. The results showed that rivaroxaban may be a cost-effective alternative compared with warfarin or real-world prescribing in Portugal. Copyright © 2014 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
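    The model's headline quantity, the ICER, together with quarterly discounting at the stated 5% annual rate, can be sketched as follows; the input cost and QALY totals are hypothetical illustrations, not the paper's results:

```python
def discounted_total(values_per_cycle, annual_rate=0.05, cycles_per_year=4):
    """Present value of a stream of per-cycle costs or QALYs, using
    3-month Markov cycles and an annual discount rate converted to an
    equivalent per-cycle rate."""
    per_cycle_rate = (1.0 + annual_rate) ** (1.0 / cycles_per_year) - 1.0
    return sum(v / (1.0 + per_cycle_rate) ** i
               for i, v in enumerate(values_per_cycle))

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical discounted lifetime totals (euros, QALYs) per patient:
# 2000 euros more for 0.3 extra QALYs gives roughly 6667 euros/QALY.
example = icer(12000.0, 6.2, 10000.0, 5.9)
```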

  17. Mixed-method study of a conceptual model of evidence-based intervention sustainment across multiple public-sector service settings.

    PubMed

    Aarons, Gregory A; Green, Amy E; Willging, Cathleen E; Ehrhart, Mark G; Roesch, Scott C; Hecht, Debra B; Chaffin, Mark J

    2014-12-10

    This study examines sustainment of an EBI implemented in 11 United States service systems across two states, and delivered in 87 counties. The aims are to 1) determine the impact of state and county policies and contracting on EBI provision and sustainment; 2) investigate the role of public, private, and academic relationships and collaboration in long-term EBI sustainment; 3) assess organizational and provider factors that affect EBI reach/penetration, fidelity, and organizational sustainment climate; and 4) integrate findings through a collaborative process involving the investigative team, consultants, and system and community-based organization (CBO) stakeholders in order to further develop and refine a conceptual model of sustainment to guide future research and provide a resource for service systems to prepare for sustainment as the ultimate goal of the implementation process. A mixed-method prospective and retrospective design will be used. Semi-structured individual and group interviews will be used to collect information regarding influences on EBI sustainment including policies, attitudes, and practices; organizational factors and external policies affecting model implementation; involvement of or collaboration with other stakeholders; and outer- and inner-contextual supports that facilitate ongoing EBI sustainment. Document review (e.g., legislation, executive orders, regulations, monitoring data, annual reports, agendas and meeting minutes) will be used to examine the roles of state, county, and local policies in EBI sustainment. Quantitative measures will be collected via administrative data and web surveys to assess EBI reach/penetration, staff turnover, EBI model fidelity, organizational culture and climate, work attitudes, implementation leadership, sustainment climate, attitudes toward EBIs, program sustainment, and level of institutionalization. Hierarchical linear modeling will be used for quantitative analyses. 
Qualitative analyses will be tailored to each of the qualitative methods (e.g., document review, interviews). Qualitative and quantitative approaches will be integrated through an inclusive process that values stakeholder perspectives. The study of sustainment is critical to capitalizing on and benefiting from the time and fiscal investments in EBI implementation. Sustainment is also critical to realizing broad public health impact of EBI implementation. The present study takes a comprehensive mixed-method approach to understanding sustainment and refining a conceptual model of sustainment.

  18. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    PubMed

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
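    The flavour of a proportional hazard model with change points can be sketched with a piecewise-constant baseline, a simplification of the piecewise Weibull baseline used in the paper; all rates, change points, and covariate effects below are hypothetical:

```python
import math

def hazard(t, change_points, rates, beta_x=0.0):
    """Proportional-hazards rate at time t with a piecewise-constant
    baseline: rates[k] applies between change_points[k-1] and
    change_points[k]. Covariate effects enter multiplicatively as
    exp(beta_x). A simplified stand-in for the PWPH parametrization."""
    k = sum(1 for cp in change_points if t >= cp)
    return rates[k] * math.exp(beta_x)

# Two change points let the lambing-interval risk jump after 150 d and
# drop again after 250 d (illustrative numbers only).
cps = [150.0, 250.0]
rates = [0.002, 0.008, 0.004]
```

Allowing the number and position of change points to differ by flock is exactly the heterogeneity the DIC comparison detected.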

  19. Sesquiterpene lactone mix as a diagnostic tool for Asteraceae allergic contact dermatitis: chemical explanation for its poor performance and Sesquiterpene lactone mix II as a proposed improvement.

    PubMed

    Jacob, Mathias; Brinkmann, Jürgen; Schmidt, Thomas J

    2012-05-01

    Two preparations are currently in use for the diagnosis of allergic contact dermatitis caused by Asteraceae: (i) Sesquiterpene lactone (SL) mix [three pure sesquiterpene lactones (STLs)], whose use has been questioned owing to an insufficient rate of true-positive results; and (ii) Compositae mix, consisting of five Asteraceae extracts, which is problematic because of a lack of standardization and questionable reproducibility. To analyse the reasons for the narrow sensitivity of SL mix from a chemoinformatic point of view, and to propose a solution by rational selection of alternative constituents for a new SL mix II covering a broader cohort of allergic patients. Structural and biological information on allergenic STLs was retrieved from databases and the literature, and molecular modelling and chemoinformatic computations were performed. An explanation for the insufficient hit rate of SL mix is that the three constituents possess extremely similar molecular structures/properties and do not represent well the structural diversity of allergenic STLs. STLs that are known as constituents of Compositae mix plants show a much wider diversity, which explains the higher positive rate. On the basis of their positions in chemical property space, a new collection of STLs that more evenly covers the overall structural diversity spectrum is proposed. SL mix II is likely to detect a larger number of patients sensitized to Asteraceae. © 2012 John Wiley & Sons A/S.
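    Selecting constituents that evenly cover a chemical property space is, in spirit, a diversity-picking problem. A greedy MaxMin sketch; the study's actual descriptors and selection procedure may differ, and the 2-D points below are hypothetical:

```python
def maxmin_pick(points, k):
    """Greedy MaxMin diversity selection: start from the first compound,
    then repeatedly add the candidate whose nearest already-picked
    neighbour is farthest away in property space. Illustrates choosing
    STLs that spread over the structural diversity spectrum."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    picked = [0]
    while len(picked) < k:
        best = max((i for i in range(len(points)) if i not in picked),
                   key=lambda i: min(dist(points[i], points[j]) for j in picked))
        picked.append(best)
    return picked

# Four hypothetical compounds in a 2-D property space: two near-twins
# and two outliers. Picking three skips the redundant near-twin.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (0.0, 5.0)]
```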

  20. Climate Change and Future U.S. Electricity Infrastructure: the Nexus between Water Availability, Land Suitability, and Low-Carbon Technologies

    NASA Astrophysics Data System (ADS)

    Rice, J.; Halter, T.; Hejazi, M. I.; Jensen, E.; Liu, L.; Olson, J.; Patel, P.; Vernon, C. R.; Voisin, N.; Zuljevic, N.

    2014-12-01

    Integrated assessment models project the future electricity generation mix under different policy, technology, and socioeconomic scenarios, but they do not directly address site-specific factors such as interconnection costs, population density, land use restrictions, air quality, NIMBY concerns, or water availability that might affect the feasibility of achieving the technology mix. Moreover, since these factors can change over time due to climate, policy, socioeconomics, and so on, it is important to examine the dynamic feasibility of integrated assessment scenarios "on the ground." This paper explores insights from coupling an integrated assessment model (GCAM-USA) with a geospatial power plant siting model (the Capacity Expansion Regional Feasibility model, CERF) within a larger multi-model framework that includes regional climate, hydrologic, and water management modeling. GCAM-USA is a dynamic-recursive market equilibrium model simulating the impact of carbon policies on global and national markets for energy commodities and other goods; one of its outputs is the electricity generation mix and expansion at the state level. It also simulates water demands from all sectors, which are downscaled as input to the water management modeling. CERF simulates siting decisions by dynamically representing suitable areas for different generation technologies with geospatial analyses (informed by technology-specific siting criteria, such as required mean streamflow per the Clean Water Act), and then choosing siting locations to minimize interconnection costs (to electric transmission and gas pipelines). 
CERF results are compared across three scenarios simulated by GCAM-USA: 1) a non-mitigation scenario (RCP8.5) in which conventional fossil-fueled technologies prevail, 2) a mitigation scenario (RCP4.5) in which the carbon price causes a shift toward nuclear, carbon capture and sequestration (CCS), and renewables, and 3) a repeat of scenario (2) in which CCS technologies are made unavailable—resulting in a large increase in the nuclear fraction of the mix.

  1. Insights into hydrologic and hydrochemical processes based on concentration-discharge and end-member mixing analyses in the mid-Merced River Basin, Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Liu, Fengjing; Conklin, Martha H.; Shaw, Glenn D.

    2017-01-01

    Both concentration-discharge relations and end-member mixing analysis were explored to elucidate the connectivity of hydrologic and hydrochemical processes using chemical data collected during 2006-2008 at Happy Isles (468 km2), Pohono Bridge (833 km2), and Briceburg (1873 km2) in the snowmelt-fed mid-Merced River basin, augmented by chemical data collected by the USGS during 1990-2014 at Happy Isles. Concentration-discharge (C-Q) relations in streamflow were dominated by a well-defined power law, with the magnitude of the exponents (0.02-0.6) and R2 values (p < 0.001) lower on rising than on falling limbs. Concentrations of conservative solutes in streamflow resulted from mixing of two end-members at Happy Isles and Pohono Bridge and three at Briceburg, with relatively constant solute concentrations in the end-members. The fractional contribution of groundwater was higher on rising than on falling limbs at all basin scales. The relationship between streamflow and the fractional contributions of subsurface flow and groundwater (F-Q) followed the same form as C-Q as a result of end-member mixing. The F-Q relation was used as a simple model to simulate subsurface flow and groundwater discharges to Happy Isles from 1990 to 2014 and was successfully validated against solute concentrations measured by the USGS. It was also demonstrated that the consistency of the F-Q and C-Q relations is applicable to other catchments where end-members and the C-Q relationships are well defined, suggesting that hydrologic and hydrochemical processes are strongly coupled and mutually predictable. Combining concentration-discharge and end-member mixing analyses could thus serve as a diagnostic tool to understand streamflow generation and hydrochemical controls in catchment hydrologic studies.
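    The C-Q power law described above, C = aQ^b, is linear in log-log space, so the exponent can be recovered with an ordinary least-squares fit. A minimal sketch with synthetic (hypothetical) discharge-concentration data, not the study's measurements:

```python
import numpy as np

def fit_power_law(q, c):
    """Fit C = a * Q**b by least squares on log-transformed data:
    log C = log a + b * log Q is a straight line in log-log space."""
    b, log_a = np.polyfit(np.log(q), np.log(c), 1)
    return np.exp(log_a), b

# Synthetic dilution-type C-Q data (illustrative values only)
rng = np.random.default_rng(0)
q = np.exp(rng.uniform(0.0, 4.0, 200))                    # discharge
c = 5.0 * q ** -0.3 * np.exp(rng.normal(0.0, 0.05, 200))  # concentration

a, b = fit_power_law(q, c)  # recovers a close to 5 and b close to -0.3
```

A negative exponent of this kind corresponds to dilution; exponents like the abstract's (0.02-0.6 in magnitude) would be fit the same way, limb by limb.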

  2. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark-quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser-based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier-Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  3. Morphology and mixing state of individual freshly emitted wildfire carbonaceous particles.

    PubMed

    China, Swarup; Mazzoleni, Claudio; Gorkowski, Kyle; Aiken, Allison C; Dubey, Manvendra K

    2013-01-01

    Biomass burning is one of the largest sources of carbonaceous aerosols in the atmosphere, significantly affecting Earth's radiation budget and climate. Tar balls, abundant in biomass burning smoke, absorb sunlight and have highly variable optical properties, typically not accounted for in climate models. Here we analyse single biomass burning particles from the Las Conchas fire (New Mexico, 2011) using electron microscopy. We show that the relative abundance of tar balls (80%) is 10 times greater than that of soot particles (8%). We also report two distinct types of tar balls, one less oxidized than the other. Furthermore, the mixing of soot particles with other material affects their optical, chemical and physical properties. We quantify the morphology of soot particles and classify them into four categories: ~50% are embedded (heavily coated), ~34% are partly coated, ~12% have inclusions and ~4% are bare. Inclusion of these observations should improve climate model performance.

  4. Risk of cervical injuries in mixed martial arts.

    PubMed

    Kochhar, T; Back, D L; Mann, B; Skinner, J

    2005-07-01

    Mixed martial arts have rapidly succeeded boxing as the world's most popular full contact sport, and the incidence of injury is recognised to be high. To assess qualitatively and quantitatively the potential risk for participants to sustain cervical spine and associated soft tissue injuries. Four commonly performed manoeuvres with possible risks to the cervical spine were analysed with respect to their kinematics, and biomechanical models were constructed. Motion analysis of two manoeuvres revealed strong correlations with rear end motor vehicle impact injuries, and kinematics of the remaining two suggested a strong risk of injury. Mathematical models of the biomechanics showed that the forces involved are of the same order as those involved in whiplash injuries and of the same magnitude as compression injuries of the cervical spine. This study shows that there is a significant risk of whiplash injuries in this sport, and there are no safety regulations to address these concerns.

  5. Risk of cervical injuries in mixed martial arts

    PubMed Central

    Kochhar, T; Back, D; Mann, B; Skinner, J

    2005-01-01

    Background: Mixed martial arts have rapidly succeeded boxing as the world's most popular full contact sport, and the incidence of injury is recognised to be high. Objective: To assess qualitatively and quantitatively the potential risk for participants to sustain cervical spine and associated soft tissue injuries. Methods: Four commonly performed manoeuvres with possible risks to the cervical spine were analysed with respect to their kinematics, and biomechanical models were constructed. Results: Motion analysis of two manoeuvres revealed strong correlations with rear end motor vehicle impact injuries, and kinematics of the remaining two suggested a strong risk of injury. Mathematical models of the biomechanics showed that the forces involved are of the same order as those involved in whiplash injuries and of the same magnitude as compression injuries of the cervical spine. Conclusions: This study shows that there is a significant risk of whiplash injuries in this sport, and there are no safety regulations to address these concerns. PMID:15976168

  6. Turbulent vertical diffusivity in the sub-tropical stratosphere

    NASA Astrophysics Data System (ADS)

    Pisso, I.; Legras, B.

    2008-02-01

    Vertical (cross-isentropic) mixing is produced by small-scale turbulent processes which are still poorly understood and parameterized in numerical models. In this work we provide estimates of local equivalent diffusion in the lower stratosphere by comparing balloon-borne high-resolution measurements of chemical tracers with mixing ratios reconstructed from large ensembles of random Lagrangian backward trajectories, using European Centre for Medium-range Weather Forecasts analysed winds and a chemistry-transport model (REPROBUS). We focus on a case study in subtropical latitudes using data from the HIBISCUS campaign. An upper bound on the vertical diffusivity in this case study is found to be of the order of 0.5 m2 s-1 in the subtropical region, which is larger than estimates at higher latitudes. The relation between diffusion and dispersion is studied by estimating Lyapunov exponents and examining their variation according to the presence of active dynamical structures.
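    As a rough scale check on what a diffusivity of this order implies (a sketch under the textbook 1-D diffusion assumption, not part of the study), the RMS cross-isentropic spread of a tracer grows as sigma = sqrt(2Kt):

```python
import math

def diffusive_spread(k, t):
    """RMS displacement after time t under 1-D diffusion with diffusivity k."""
    return math.sqrt(2.0 * k * t)

# Upper-bound diffusivity from the abstract (0.5 m^2 s^-1) over one day (86400 s)
sigma = diffusive_spread(0.5, 86400.0)  # on the order of a few hundred metres
```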

  7. Health economic comparison of SLIT allergen and SCIT allergoid immunotherapy in patients with seasonal grass-allergic rhinoconjunctivitis in Germany.

    PubMed

    Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias

    2015-01-01

    Allergoids are chemically modified allergen extracts administered to reduce allergenicity while maintaining immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, a meta-analysis, and a cost-effectiveness analysis, the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy, expressed as standardized mean differences, was estimated using an indirect comparison of symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs were derived from published sources, as were estimates for resource use, immunotherapy persistence, and occurrence of asthma. The analysis was executed from the German payer's perspective, which includes payments by the Statutory Health Insurance (SHI) and additional payments by the insured. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty concerning the incremental model outcomes. The applied model predicted a cost-utility ratio of the 5-grass tablet versus a market mix of injectable allergoid products of € 12,593 per QALY in the base case analysis. Predicted incremental costs and QALYs were € 458 (95% confidence interval, CI: € 220; € 739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared to the allergoid mix, the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of € 20,000. 
The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests the sublingual native 5-grass tablet to be cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements has been confirmed in extensive sensitivity and scenario analyses.
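    The cost-utility figure quoted above is simply incremental cost divided by incremental QALYs. A sketch of the arithmetic (note that the published € 12,593/QALY was computed from unrounded inputs, so plugging in the rounded increments reported in the abstract gives a slightly different ratio):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# Rounded incremental values from the abstract: EUR 458 and 0.036 QALYs
ratio = icer(458.0, 0.036)  # roughly EUR 12,700 per QALY
```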

  8. Quantitative assessment of Pb sources in isotopic mixtures using a Bayesian mixing model.

    PubMed

    Longman, Jack; Veres, Daniel; Ersek, Vasile; Phillips, Donald L; Chauvel, Catherine; Tamas, Calin G

    2018-04-18

    Lead (Pb) isotopes provide valuable insights into the origin of Pb within a sample, typically allowing reliable fingerprinting of its source. This is useful for a variety of applications, from tracing sources of pollution-related Pb to establishing the origins of Pb in archaeological artefacts. However, current approaches investigate source proportions via graphical means or simple mixing models. An approach that quantitatively assesses source proportions and fingerprints the signature of analysed Pb, especially for larger numbers of sources, would therefore be valuable. Here we use an advanced Bayesian isotope mixing model for three such applications: tracing dust sources in pre-anthropogenic environmental samples, tracking changing ore exploitation during the Roman period, and identifying the source of Pb in a Roman-age mining artefact. These examples indicate that this approach can resolve changing Pb sources deposited both during pre-anthropogenic times, when natural cycling of Pb dominated, and during the Roman period, one marked by significant anthropogenic pollution. Our archaeometric investigation indicates clear input of Pb from Romanian ores previously speculated, but not proven, to have been the Pb source. Our approach can be applied across a range of disciplines, providing a new method for robustly tracing sources of Pb observed in a variety of environments.
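    For contrast with the Bayesian model, the "simple mixing models" the abstract mentions reduce, for two sources, to a one-line isotope-ratio mass balance. A sketch with hypothetical 206Pb/207Pb values (a real application would also weight by Pb concentration):

```python
def mixing_fraction(r_mix, r_src1, r_src2):
    """Fraction of source 1 when a measured ratio is a linear mix of two sources:
    r_mix = f * r_src1 + (1 - f) * r_src2, solved for f."""
    return (r_mix - r_src2) / (r_src1 - r_src2)

# Hypothetical 206Pb/207Pb ratios: sample 1.195, ore 1.20, natural background 1.18
f_ore = mixing_fraction(1.195, 1.20, 1.18)  # 0.75, i.e. 75% ore-derived Pb
```

A Bayesian mixing model generalizes this to more sources than measured ratios and propagates end-member uncertainty, which is the advantage the abstract argues for.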

  9. Progressive Damage Analyses of Skin/Stringer Debonding

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; de Moura, Marcelo F.

    2004-01-01

    The debonding of skin/stringer constructions is analyzed using a step-by-step simulation of material degradation based on strain softening decohesion elements and a ply degradation procedure. Decohesion elements with mixed-mode capability are placed at the interface between the skin and the flange to simulate the initiation and propagation of the delamination. In addition, the initiation and accumulation of fiber failure and matrix damage is modeled using Hashin-type failure criteria and their corresponding material degradation schedules. The debonding predictions using simplified three-dimensional models correlate well with test results.

  10. Buoyancy Driven Coolant Mixing Studies of Natural Circulation Flows at the ROCOM Test Facility Using ANSYS CFX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Höhne, Thomas; Kliem, Sören; Rohde, Ulrich

    2006-07-01

    Coolant mixing in the cold leg, downcomer and lower plenum of pressurized water reactors is an important phenomenon mitigating reactivity insertion into the core. Therefore, mixing of de-borated slugs with the ambient coolant in the reactor pressure vessel was investigated at the four-loop 1:5 scaled ROCOM mixing test facility. Thermal-hydraulic analyses showed that weakly borated condensate can accumulate, in particular, in the pump loop seal of those loops which do not receive safety injection. After refilling of the primary circuit, natural circulation in the stagnant loops can re-establish simultaneously, and the de-borated slugs are shifted towards the reactor pressure vessel (RPV). In the ROCOM experiments, the length of the flow ramp and the initial density difference between the slugs and the ambient coolant were varied. From the test matrix, experiments with 0% and 2% density difference between the de-borated slugs and the ambient coolant were used to validate the CFD software ANSYS CFX. To model the effects of turbulence on the mean flow, a higher-order Reynolds stress turbulence model was employed and a mesh consisting of 6.4 million hybrid elements was utilized. Only the experiments and CFD calculations with modeled density differences show a stratification in the downcomer. Depending on the degree of density difference, the less dense slugs flow around the core barrel at the top of the downcomer. At the opposite side, the lower borated coolant is entrained by the colder safety injection water and transported to the core. The validation proves that ANSYS CFX is able to appropriately simulate the flow field and mixing effects of coolant with different densities. (authors)

  11. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications

    PubMed Central

    Austin, Peter C.

    2017-01-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954

  12. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.

    PubMed

    Austin, Peter C

    2017-08-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata).
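    The equivalence noted above between the piecewise exponential model and Poisson regression can be seen in miniature: within each interval, the maximum-likelihood constant hazard is events divided by person-time, which is exactly the rate a Poisson GLM with a log person-time offset estimates. A sketch with hypothetical interval data:

```python
# Hypothetical follow-up partitioned into mutually exclusive intervals
intervals = {
    "0-30 days":  {"events": 12, "person_time": 3000.0},
    "30-90 days": {"events": 6,  "person_time": 5400.0},
}

# MLE of the constant hazard per interval = events / person-time; a Poisson
# GLM on event counts with offset log(person_time) yields the same rates.
hazards = {name: d["events"] / d["person_time"] for name, d in intervals.items()}
```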

  13. Numerical Investigation Into Effect of Fuel Injection Timing on CAI/HCCI Combustion in a Four-Stroke GDI Engine

    NASA Astrophysics Data System (ADS)

    Cao, Li; Zhao, Hua; Jiang, Xi; Kalian, Navin

    2006-02-01

    Controlled Auto-Ignition (CAI) combustion, also known as Homogeneous Charge Compression Ignition (HCCI), was achieved by trapping residuals with early exhaust valve closure in conjunction with direct injection. Multi-cycle 3D engine simulations have been carried out for a parametric study of four different injection timings in order to better understand the effects of injection timing on in-cylinder mixing and CAI combustion. The full engine cycle simulation, including the complete gas exchange and combustion processes, was carried out over several cycles in order to obtain a stable cycle for analysis. The combustion models used in the present study are the Shell auto-ignition model and the characteristic-time combustion model, which were modified to take the high level of EGR into consideration. A liquid sheet breakup spray model was used for the droplet breakup processes. The analyses show that injection timing plays an important role in in-cylinder air/fuel mixing and mixture temperature, which in turn affect CAI combustion and engine performance.

  14. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2018-05-01

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.

  15. Mixing and CP violation in the B_s^0 meson system at CDF; Mélange et violation de CP dans le système des mésons B_s^0 à CDF (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Giovanni, Gian Piero

    2008-01-01

    The two analyses presented in the thesis, the B_s^0 mixing analysis and the B_s^0 → J/ψφ angular analysis, share most of their technical implementations and features. My choice was therefore to pursue the common aspects of the analyses in parallel, avoiding repetition whenever possible. Each chapter is split into two parts, the first dedicated to the B_s^0 mixing analysis and the second describing the angular analysis of the B_s^0 → J/ψφ decay mode. They are organized as follows. Chapter 1 presents the theoretical framework of the B_s^0 neutral meson system. After a general introduction to the Standard Model, we focus on the quantities relevant to the Δms measurement and to CP violation phenomena, highlighting the details of the study of pseudoscalar to vector-vector decays, P → VV, which allow an angular analysis to be carried out. A discussion of the implications of these measurements for the search for physics beyond the Standard Model is presented. The accelerator facilities and the CDF-II detector are described in Chapter 2; in describing the detector, more emphasis is given to the components fundamental to B physics analyses at CDF. Chapter 3 focuses on the reconstruction and selection of the data samples. It starts with a description of the online trigger requirements for each B_s^0 sample considered, followed by the offline selection criteria implemented to reconstruct B_s^0 semileptonic and hadronic decays, fully and partially reconstructed, for the B_s^0 mixing analysis, as well as the B_s^0 → J/ψφ decay mode for the angular analysis. Chapter 4 is dedicated to a review of the technical ingredients needed in the final analyses, beginning with the B_s^0 mixing elements. 
The methodology historically used in oscillation searches, the "amplitude scan", is introduced there together with the calibration of the proper-decay-time resolution and the flavor tagging algorithms; in particular, a closer examination of the same-side tagger performance is given. A description of the B_s^0 → J/ψφ angular analysis elements then follows, focusing on performance and on any differences with respect to the B_s^0 oscillation search. The final results of the analyses are obtained with an unbinned likelihood fitting framework: Chapter 5 presents the general principles behind this methodology and describes both maximum likelihood fitters employed. Chapter 6 contains the conclusive results of the B_s^0 analyses, presented in historical fashion: the measurement of the B_s^0 oscillation frequency is followed by the first flavor-tagged ΔΓs and βs measurements. The impact on, and constraints placed on, the parameters of the flavor model are part of the discussion in that chapter. As a cross-check of the B_s^0 angular analysis, the B_s^0 → J/ψK*0 decay mode has additionally been studied. Its angular analysis shows a sensitivity competitive with the B factories in measuring the parameters that define the decay. Not only does this reinforce the reliability of the entire framework, it constitutes an excellent result in itself. We therefore devote the entire Chapter 7 to the angular analysis of the B_s^0 → J/ψK*0 decay mode.

  16. Integrative Mixed Methods Data Analytic Strategies in Research on School Success in Challenging Circumstances

    ERIC Educational Resources Information Center

    Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia

    2008-01-01

    There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…

  17. Current developments in forensic interpretation of mixed DNA samples (Review).

    PubMed

    Hu, Na; Cong, Bin; Li, Shujin; Ma, Chunling; Fu, Lihong; Zhang, Xiaojing

    2014-05-01

    A number of recent improvements have provided contemporary forensic investigations with a variety of tools to improve the analysis of mixed DNA samples in criminal investigations, producing notable improvements in the analysis of complex trace samples in cases of sexual assault and homicide. Mixed DNA contains DNA from two or more contributors, compounding DNA analysis by combining DNA from one or more major contributors with small amounts of DNA from potentially numerous minor contributors. These samples are characterized by a high probability of drop-out or drop-in combined with elevated stutter, significantly increasing analysis complexity. At some loci, minor contributor alleles may be completely obscured due to amplification bias or over-amplification, creating the illusion of additional contributors. Thus, estimating the number of contributors and separating contributor genotypes at a given locus is significantly more difficult in mixed DNA samples, requiring the application of specialized protocols that have only recently been widely commercialized and standardized. Over the last decade, the accuracy and repeatability of mixed DNA analyses available to conventional forensic laboratories has greatly advanced in terms of laboratory technology, mathematical models and biostatistical software, generating more accurate, rapid and readily available data for legal proceedings and criminal cases.

  18. Current developments in forensic interpretation of mixed DNA samples (Review)

    PubMed Central

    Hu, Na; Cong, Bin; Li, Shujin; Ma, Chunling; Fu, Lihong; Zhang, Xiaojing

    2014-01-01

    A number of recent improvements have provided contemporary forensic investigations with a variety of tools to improve the analysis of mixed DNA samples in criminal investigations, producing notable improvements in the analysis of complex trace samples in cases of sexual assault and homicide. Mixed DNA contains DNA from two or more contributors, compounding DNA analysis by combining DNA from one or more major contributors with small amounts of DNA from potentially numerous minor contributors. These samples are characterized by a high probability of drop-out or drop-in combined with elevated stutter, significantly increasing analysis complexity. At some loci, minor contributor alleles may be completely obscured due to amplification bias or over-amplification, creating the illusion of additional contributors. Thus, estimating the number of contributors and separating contributor genotypes at a given locus is significantly more difficult in mixed DNA samples, requiring the application of specialized protocols that have only recently been widely commercialized and standardized. Over the last decade, the accuracy and repeatability of mixed DNA analyses available to conventional forensic laboratories has greatly advanced in terms of laboratory technology, mathematical models and biostatistical software, generating more accurate, rapid and readily available data for legal proceedings and criminal cases. PMID:24748965

  19. H2-broadening, shifting and mixing coefficients of the doublets in the ν2 and ν4 bands of PH3 at room temperature

    NASA Astrophysics Data System (ADS)

    Salem, Jamel; Blanquet, Ghislain; Lepère, Muriel; Younes, Rached ben

    2018-05-01

    The broadening, shifting and mixing coefficients of the doublet spectral lines in the ν2 and ν4 bands of PH3 perturbed by H2 have been determined at room temperature. The collisional spectroscopic parameters (intensities, line widths, line shifts and line mixing parameters) are all grouped together in the collisional relaxation matrix. To analyse the collisional process and the physical effects on spectra of phosphine (PH3), we used measurements carried out with a tunable diode-laser spectrometer in the ν2 and ν4 bands of PH3 perturbed by hydrogen (H2) at room temperature. The recorded spectra are fitted with the Voigt profile and the speed-dependent uncorrelated hard-collision model of Rautian and Sobelman. These profiles were developed for studies of isolated lines and are modified here to account for line mixing effects in the overlapping lines. The line widths, line shifts and line mixing parameters are given for six A1 and A2 doublet lines with quantum numbers K = 3n (n = 1, 2, …) that overlap through collisional broadening at pressures below 50 mbar.

  20. An investigation of the predictors of photoprotection and UVR dose to the face in patients with XP: a protocol using observational mixed methods.

    PubMed

    Walburn, Jessica; Sarkany, Robert; Norton, Sam; Foster, Lesley; Morgan, Myfanwy; Sainsbury, Kirby; Araújo-Soares, Vera; Anderson, Rebecca; Garrood, Isabel; Heydenreich, Jakob; Sniehotta, Falko F; Vieira, Rute; Wulf, Hans Christian; Weinman, John

    2017-08-21

    Xeroderma pigmentosum (XP) is a rare genetic condition caused by defective nucleotide excision repair and characterised by skin cancer, ocular and neurological involvement. Stringent ultraviolet protection is the only way to prevent skin cancer. Despite the risks, some patients' photoprotection is poor, with a potentially devastating impact on their prognosis. The aim of this research is to identify disease-specific and psychosocial predictors of photoprotection behaviour and ultraviolet radiation (UVR) dose to the face. Mixed methods research based on 45 UK patients will involve qualitative interviews to identify individuals' experience of XP and the influences on their photoprotection behaviours and a cross-sectional quantitative survey to assess biopsychosocial correlates of these behaviours at baseline. This will be followed by objective measurement of UVR exposure for 21 days by wrist-worn dosimeter and daily recording of photoprotection behaviours and psychological variables for up to 50 days in the summer months. This novel methodology will enable UVR dose reaching the face to be calculated and analysed as a clinically relevant endpoint. A range of qualitative and quantitative analytical approaches will be used, reflecting the mixed methods (eg, cross-sectional qualitative interviews, n-of-1 studies). Framework analysis will be used to analyse the qualitative interviews; mixed-effects longitudinal models will be used to examine the association of clinical and psychosocial factors with the average daily UVR dose; dynamic logistic regression models will be used to investigate participant-specific psychosocial factors associated with photoprotection behaviours. This research has been approved by Camden and King's Cross Research Ethics Committee 15/LO/1395. The findings will be published in peer-reviewed journals and presented at national and international scientific conferences. 
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Genetic factors controlling wool shedding in a composite Easycare sheep flock.

    PubMed

    Matika, O; Bishop, S C; Pong-Wong, R; Riggio, V; Headon, D J

    2013-12-01

    Historically, sheep have been selectively bred for desirable traits including wool characteristics. However, recent moves towards extensive farming and reduced farm labour have seen a renewed interest in Easycare breeds. The aim of this study was to quantify the underlying genetic architecture of wool shedding in an Easycare flock. Wool shedding scores were collected from 565 pedigreed commercial Easycare sheep from 2002 to 2010. The wool scoring system was based on a 10-point (0-9) scale, with score 0 for animals retaining full fleece and 9 for those completely shedding. DNA was sampled from 200 animals, of which 48 with extreme phenotypes were genotyped using a 50-k SNP chip. Three genetic analyses were performed: heritability analysis, complex segregation analysis to test for a major gene hypothesis and a genome-wide association study to map regions in the genome affecting the trait. Phenotypes were treated as a continuous variable, a binary variable or as categories. High estimates of heritability (0.80 when treated as continuous, 0.65-0.75 as binary and 0.75 as categorical) for shedding were obtained from linear mixed model analyses. Complex segregation analysis gave similar estimates (0.80 ± 0.06) to those above, with additional evidence for a major gene with dominance effects. Mixed model association analyses identified four significant (P < 0.05) SNPs. Further analyses of these four SNPs in all 200 animals revealed that one of the SNPs displayed dominance effects similar to those obtained from the complex segregation analyses. In summary, we found strong genetic control for wool shedding, demonstrated the possibility of a single putative dominant gene controlling this trait and identified four SNPs that may be in partial linkage disequilibrium with gene(s) controlling shedding. © 2013 University of Edinburgh, Animal Genetics © 2013 Stichting International Foundation for Animal Genetics.
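
    The heritability estimates quoted above are ratios of variance components from the linear mixed model. A minimal sketch of that final step (illustrative only; the variance components themselves come from REML fitting of the mixed model, which is not shown here):

```python
def narrow_sense_heritability(var_additive, var_residual):
    """h^2 = additive genetic variance / total phenotypic variance."""
    total = var_additive + var_residual
    if total <= 0:
        raise ValueError("variance components must be positive")
    return var_additive / total
```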

  2. Trophic structure of mesopelagic fishes in the Gulf of Mexico revealed by gut content and stable isotope analyses

    USGS Publications Warehouse

    McClain-Counts, Jennifer P.; Demopoulos, Amanda W.J.; Ross, Steve W.

    2017-01-01

    Mesopelagic fishes represent an important component of the marine food web due to their global distributions, high abundances and ability to transport organic material throughout a large part of the water column. This study combined stable isotope analyses (SIAs) and gut content analyses (GCAs) to characterize the trophic structure of mesopelagic fishes in the North-Central Gulf of Mexico. Additionally, this study examined whether mesopelagic fishes utilized chemosynthetic energy from cold seeps. Specimens were collected (9–25 August 2007) over three deep (>1,000 m) cold seeps at discrete depths (surface to 1,503 m) over the diurnal cycle. GCA classified 31 species (five families) of mesopelagic fishes into five feeding guilds: piscivores, large crustacean consumers, copepod consumers, generalists and mixed zooplanktivores. However, these guilds were less clearly defined based on stable isotope mixing model (MixSIAR) results, suggesting diets may be more mixed over longer time periods (weeks–months) and across co-occurring species. Copepods were likely important for the majority of mesopelagic fishes, consistent with GCA (this study) and previous literature. MixSIAR results also identified non-crustacean prey items, including salps and pteropods, as potentially important prey items for mesopelagic fishes, including those fishes not analysed in GCA (Sternoptyx spp. and Melamphaidae). Salps and other soft-bodied species are often missed in GCAs. Mesopelagic fishes had δ13C results consistent with particulate organic matter serving as the baseline organic carbon source, fueling up to three trophic levels. Fishes that undergo diel vertical migration were depleted in 15N relative to weak migrators, consistent with depth-specific isotope trends in sources and consumers, and assimilation of 15N-depleted organic matter in surface waters. Linear correlations between fish size and δ15N values suggested ontogenetic changes in fish diets for several species. 
While there was no direct measure of mesopelagic fishes assimilating chemosynthetic material, detection of infrequent consumption of this food resource may be hindered by the assimilation of isotopically enriched photosynthetic organic matter. By utilizing multiple dietary metrics (e.g. GCA, δ13C, δ15N, MixSIAR), this study better defined the trophic structure of mesopelagic fishes and allowed for insights on feeding, ultimately providing useful baseline information from which to track mesopelagic trophodynamics over time and space.
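
    The MixSIAR analysis estimates dietary source contributions from isotope data. A stripped-down illustration of the underlying idea, using a deterministic two-source, one-tracer mixing equation with hypothetical δ13C values and ignoring trophic fractionation (the Bayesian MixSIAR model handles many sources, multiple tracers and error structures):

```python
def two_source_fraction(delta_mix, delta_a, delta_b):
    """Fraction of source A in a consumer's diet from one tracer (e.g. d13C),
    assuming conservative linear mixing of two isotopically distinct sources."""
    if delta_a == delta_b:
        raise ValueError("sources must be isotopically distinct")
    return (delta_mix - delta_b) / (delta_a - delta_b)
```

For example, a consumer at -20 per mil between sources at -18 and -22 per mil is an even mixture of the two.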

  3. Large eddy simulation and direct numerical simulation of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Adumitroaie, V.; Frankel, S. H.; Madnia, C. K.; Givi, P.

    1993-01-01

    The objective of this research is to make use of Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first phase of this research conducted within the past three years have been directed at several issues pertaining to the intricate physics of turbulent reacting flows. In our previous 5 semi-annual reports submitted to NASA LaRC, as well as several technical papers in archival journals, the results of our investigations have been fully described. In this progress report which is different in format as compared to our previous documents, we focus only on the issue of LES. The reason for doing so is that LES is the primary issue of interest to our Technical Monitor and that our other findings were needed to support the activities conducted under this prime issue. The outcomes of our related investigations, nevertheless, are included in the appendices accompanying this report. The relevance of the materials in these appendices is, therefore, discussed only briefly within the body of the report. Here, results are presented of a priori and a posteriori analyses for validity assessments of assumed Probability Density Function (PDF) methods as potential subgrid scale (SGS) closures for LES of turbulent reacting flows. Simple non-premixed reacting systems involving an isothermal reaction of the type A + B yields Products under both chemical equilibrium and non-equilibrium conditions are considered. A priori analyses are conducted of a homogeneous box flow and a spatially developing planar mixing layer to investigate the performance of the Pearson Family of PDF's as SGS models. A posteriori analyses are conducted of the mixing layer using a hybrid one-equation Smagorinsky/PDF SGS closure. 
The Smagorinsky closure augmented by the solution of the subgrid turbulent kinetic energy (TKE) equation is employed to account for hydrodynamic fluctuations, and the PDF is employed for modeling the effects of scalar fluctuations. The implementation of the model requires the knowledge of the local values of the first two SGS moments. These are provided by additional modeled transport equations. In both a priori and a posteriori analyses, the predicted results are appraised by comparison with subgrid averaged results generated by DNS. Based on these results, the paths to be followed in future investigations are identified.
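
    The Smagorinsky part of the hybrid closure models the SGS stresses through an eddy viscosity proportional to the square of the filter width times the resolved strain rate. A minimal one-dimensional sketch (illustrative only; the constant Cs = 0.17 and the shear-only strain measure are simplifying assumptions, not the report's implementation):

```python
def smagorinsky_nu_t(u, dy, cs=0.17):
    """Smagorinsky eddy viscosity for a 1-D shear profile u(y):
    nu_t = (Cs * Delta)^2 * |du/dy|, with Delta = dy taken as the filter width.
    Central differences in the interior, one-sided at the ends."""
    n = len(u)
    dudy = []
    for i in range(n):
        if i == 0:
            d = (u[1] - u[0]) / dy
        elif i == n - 1:
            d = (u[-1] - u[-2]) / dy
        else:
            d = (u[i + 1] - u[i - 1]) / (2.0 * dy)
        dudy.append(d)
    return [(cs * dy) ** 2 * abs(d) for d in dudy]
```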

  4. The "Nursing Home Compare" measure of urinary/fecal incontinence: cross-sectional variation, stability over time, and the impact of case mix.

    PubMed

    Li, Yue; Schnelle, John; Spector, William D; Glance, Laurent G; Mukamel, Dana B

    2010-02-01

    To assess the impact of facility case mix on cross-sectional variations and short-term stability of the "Nursing Home Compare" incontinence quality measure (QM) and to determine whether multivariate risk adjustment can minimize such impacts. Retrospective analyses of the 2005 national minimum data set (MDS) that included approximately 600,000 long-term care residents in over 10,000 facilities in each quarterly sample. Mixed logistic regression was used to construct the risk-adjusted QM (nonshrinkage estimator). Facility-level ordinary least-squares models and adjusted R² were used to estimate the impact of case mix on cross-sectional and short-term longitudinal variations of currently published and risk-adjusted QMs. At least 50 percent of the cross-sectional variation and 25 percent of the short-term longitudinal variation of the published QM are explained by facility case mix. In contrast, the cross-sectional and short-term longitudinal variations of the risk-adjusted QM are much less susceptible to case-mix variations (adjusted R² < 0.10), even for facilities with more extreme or more unstable outcome. Current "Nursing Home Compare" incontinence QM reflects considerable case-mix variations across facilities and over time, and therefore it may be biased. This issue can be largely addressed by multivariate risk adjustment using risk factors available in the MDS.
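
    Risk adjustment of a quality measure typically compares observed outcomes with the number expected from the facility's case mix. A schematic of indirect standardization (illustrative only; the paper's QM is built from a mixed logistic regression with a nonshrinkage estimator, not this simple ratio):

```python
def risk_adjusted_rate(observed_events, expected_events, national_rate):
    """Indirectly standardized rate: (observed / expected) * national rate.
    expected_events would come from a patient-level risk model."""
    if expected_events <= 0:
        raise ValueError("expected events must be positive")
    return observed_events / expected_events * national_rate
```

A facility with fewer events than its case mix predicts ends up with a rate below the national average, and vice versa.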

  5. Meteorological models for estimating phenology of corn

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Cochran, J. C.; Hollinger, S. E.

    1984-01-01

    Knowledge of when critical crop stages occur and how the environment affects them should provide useful information for crop management decisions and crop production models. Two sources of data were evaluated for predicting dates of silking and physiological maturity of corn (Zea mays L.). Initial evaluations were conducted using data of an adapted corn hybrid grown on a Typic Agriaquoll at the Purdue University Agronomy Farm. The second phase extended the analyses to large areas using data acquired by the Statistical Reporting Service of USDA for crop reporting districts (CRD) in Indiana and Iowa. Several thermal models were compared to calendar days for predicting dates of silking and physiological maturity. Mixed models which used a combination of thermal units to predict silking and days after silking to predict physiological maturity were also evaluated. At the Agronomy Farm the models were calibrated and tested on the same data. The thermal models were significantly less biased and more accurate than calendar days for predicting dates of silking. Differences among the thermal models were small. Significant improvements in both bias and accuracy were observed when the mixed models were used to predict dates of physiological maturity. The results indicate that statistical data for CRD can be used to evaluate models developed at agricultural experiment stations.
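
    Thermal models of crop phenology accumulate heat units such as growing degree days from daily temperature extremes. A minimal sketch of a capped GDD accumulator (the base and cap temperatures of 10 °C and 30 °C are common illustrative choices for corn, not necessarily the thermal-unit definitions compared in the study):

```python
def growing_degree_days(tmin_tmax_pairs, base=10.0, cap=30.0):
    """Accumulate thermal units: clip daily min/max to [base, cap],
    take the daily mean minus base (floored at zero), and sum over days."""
    total = 0.0
    for tmin, tmax in tmin_tmax_pairs:
        tmin_c = min(max(tmin, base), cap)
        tmax_c = min(max(tmax, base), cap)
        total += max((tmin_c + tmax_c) / 2.0 - base, 0.0)
    return total
```

Silking would then be predicted on the day the accumulated total crosses a hybrid-specific threshold.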

  6. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    NASA Technical Reports Server (NTRS)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
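
    The contrast between exhaustive exploration and random sampling of nondeterministic choices can be sketched in a few lines (illustrative only; the paper's framework generates model checkers and random testers from PROMELA harnesses for C programs, not from Python):

```python
import itertools
import random

def exhaustive(choices_per_step, oracle):
    """Model-checking style: enumerate every sequence of nondeterministic
    choices; return the first sequence that violates the oracle, else None."""
    for seq in itertools.product(*choices_per_step):
        if not oracle(seq):
            return seq
    return None

def random_test(choices_per_step, oracle, trials, seed=0):
    """Random-testing style: sample choice sequences uniformly at random."""
    rng = random.Random(seed)
    for _ in range(trials):
        seq = tuple(rng.choice(c) for c in choices_per_step)
        if not oracle(seq):
            return seq
    return None
```

Exhaustive search guarantees coverage of the (finite) choice space; random testing trades that guarantee for scalability to effectively infinite state spaces.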

  7. Modelling of subgrid-scale phenomena in supercritical transitional mixing layers: an a priori study

    NASA Astrophysics Data System (ADS)

    Selle, Laurent C.; Okong'o, Nora A.; Bellan, Josette; Harstad, Kenneth G.

    A database of transitional direct numerical simulation (DNS) realizations of a supercritical mixing layer is analysed for understanding small-scale behaviour and examining subgrid-scale (SGS) models duplicating that behaviour. Initially, the mixing layer contains a single chemical species in each of the two streams, and a perturbation promotes roll-up and a double pairing of the four spanwise vortices initially present. The database encompasses three combinations of chemical species, several perturbation wavelengths and amplitudes, and several initial Reynolds numbers specifically chosen for the sole purpose of achieving transition. The DNS equations are the Navier-Stokes, total energy and species equations coupled to a real-gas equation of state; the fluxes of species and heat include the Soret and Dufour effects. The large-eddy simulation (LES) equations are derived from the DNS ones through filtering. Compared to the DNS equations, two types of additional terms are identified in the LES equations: SGS fluxes and other terms for which either assumptions or models are necessary. The magnitude of all terms in the LES conservation equations is analysed on the DNS database, with special attention to terms that could possibly be neglected. It is shown that in contrast to atmospheric-pressure gaseous flows, there are two new terms that must be modelled: one in each of the momentum and the energy equations. These new terms can be thought to result from the filtering of the nonlinear equation of state, and are associated with regions of high density-gradient magnitude both found in DNS and observed experimentally in fully turbulent high-pressure flows. A model is derived for the momentum-equation additional term that performs well at small filter size but deteriorates as the filter size increases, highlighting the necessity of ensuring appropriate grid resolution in LES. 
Modelling approaches for the energy-equation additional term are proposed, all of which may be too computationally intensive in LES. Several SGS flux models are tested on an a priori basis. The Smagorinsky (SM) model has a poor correlation with the data, while the gradient (GR) and scale-similarity (SS) models have high correlations. Calibrated model coefficients for the GR and SS models yield good agreement with the SGS fluxes, although statistically, the coefficients are not valid over all realizations. The GR model is also tested for the variances entering the calculation of the new terms in the momentum and energy equations; high correlations are obtained, although the calibrated coefficients are not statistically significant over the entire database at fixed filter size. As a manifestation of the small-scale supercritical mixing peculiarities, both scalar-dissipation visualizations and the scalar-dissipation probability density functions (PDF) are examined. The PDF is shown to exhibit minor peaks, with particular significance for those at larger scalar dissipation values than the mean, thus significantly departing from the Gaussian behaviour.

  8. Using 3D geological modelling and geochemical mixing models to characterise alluvial aquifer recharge sources in the upper Condamine River catchment, Queensland, Australia.

    PubMed

    Martinez, Jorge L; Raiber, Matthias; Cendón, Dioni I

    2017-01-01

    The influence of mountain front recharge on the water balance of alluvial valley aquifers located in upland catchments of the Condamine River basin in Queensland, Australia, is investigated through the development of an integrated hydrogeological framework. A combination of three-dimensional (3D) geological modelling, hydraulic gradient maps, multivariate statistical analyses and hydrochemical mixing calculations is proposed for the identification of hydrochemical end-members and quantification of the relative contributions of each end-member to alluvial aquifer recharge. The recognised end-members correspond to diffuse recharge and lateral groundwater inflows from three hydrostratigraphic units directly connected to the alluvial aquifer. This approach allows mapping zones of potential inter-aquifer connectivity and areas of groundwater mixing between underlying units and the alluvium. Mixing calculations using samples collected under baseflow conditions reveal that lateral contribution from a regional volcanic aquifer system represents the majority (41%) of inflows to the alluvial aquifer. Diffuse recharge contribution (35%) and inflow from two sedimentary bedrock hydrostratigraphic units (collectively 24%) comprise the remainder of major recharge sources. A detailed geochemical assessment of alluvial groundwater evolution along a selected flowpath of a representative subcatchment of the Condamine River basin confirms mixing as a key process responsible for observed spatial variations in hydrochemistry. Dissolution of basalt-related minerals and dolomite, CO2 uptake, ion-exchange, precipitation of clay minerals, and evapotranspiration further contribute to the hydrochemical evolution of groundwater in the upland alluvial aquifer. This study highlights the benefits of undertaking an integrated approach that combines multiple independent lines of evidence. 
The proposed methods can be applied to investigate processes associated with inter-aquifer mixing, including groundwater contamination resulting from depressurisation of underlying geological units hydraulically connected to the shallower water reservoirs. Copyright © 2016 Elsevier B.V. All rights reserved.
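
    Hydrochemical mixing calculations of this kind amount to solving a small linear system: one equation per conservative tracer plus the constraint that end-member fractions sum to one. A self-contained sketch for three end-members and two tracers, with hypothetical tracer concentrations (not the study's data):

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def end_member_fractions(mix, members):
    """Fractions of three end-members reproducing a mixed-water sample,
    given two conservative tracer concentrations per water plus the
    mass-balance constraint that the fractions sum to 1."""
    a = [
        [members[0][0], members[1][0], members[2][0]],
        [members[0][1], members[1][1], members[2][1]],
        [1.0, 1.0, 1.0],
    ]
    b = [mix[0], mix[1], 1.0]
    return solve3(a, b)
```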

  9. How Do We Know What Is Happening Online?: A Mixed Methods Approach to Analysing Online Activity

    ERIC Educational Resources Information Center

    Charalampidi, Marina; Hammond, Michael

    2016-01-01

    Purpose: The purpose of this paper is to discuss the process of analysing online discussion and argue for the merits of mixed methods. Much research of online participation and e-learning has been either message-focused or person-focused. The former covers methodologies such as content and discourse analysis, the latter interviewing and surveys.…

  10. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

    Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.

  11. Investigating Sources of Ozone over California Using AJAX Airborne Measurements and Models: Assessing the Contribution from Long Range Transport

    NASA Technical Reports Server (NTRS)

    Ryoo, Ju-Mee; Johnson, Matthew S.; Iraci, Laura T.; Yates, Emma L.; Gore, Warren

    2017-01-01

    High ozone (O3) concentrations at low altitudes (1.5–4 km) were detected from airborne Alpha Jet Atmospheric eXperiment (AJAX) measurements on 30 May 2012 off the coast of California (CA). We investigate the causes of those elevated O3 concentrations using airborne measurements and various models. A GEOS-Chem simulation shows that the contribution from local sources is likely small. A back trajectory model was used to determine the air mass origins and how much they contributed to the O3 over CA. Low-level potential vorticity (PV) from Modern Era Retrospective analysis for Research and Applications 2 (MERRA-2) reanalysis data appears to be a result of diabatic heating and mixing of air in the lower altitudes, rather than a result of direct transport from stratospheric intrusion. The Q diagnostic, which is a measure of the mixing of the air masses, indicates that there is sufficient mixing along the trajectory to indicate that O3 from the different origins is mixed and transported to the western U.S. The back-trajectory model simulation demonstrates the air masses of interest came mostly from the mid troposphere (MT, 76%), but the contribution of the lower troposphere (LT, 19%) is also significant compared to that from the upper troposphere/lower stratosphere (UTLS, 5%). Air coming from the LT appears to be mostly originating over Asia. The possible surface impact of the high O3 transported aloft on the surface O3 concentration through vertical and horizontal transport within a few days is substantiated by the influence maps determined from the Weather Research and Forecasting Stochastic Time Inverted Lagrangian Transport (WRF-STILT) model and the observed increases in surface ozone mixing ratios. Contrasting this complex case with a stratospheric-dominant event emphasizes the contribution of each source to the high O3 concentration in the lower altitudes over CA. 
    Integrated analyses using models, reanalysis and diagnostic tools allow high ozone values detected by in-situ measurements to be attributed to multiple source processes.

  12. The Southern Ocean in the Coupled Model Intercomparison Project phase 5

    PubMed Central

    Meijers, A. J. S.

    2014-01-01

    The Southern Ocean is an important part of the global climate system, but its complex coupled nature makes both its present state and its response to projected future climate forcing difficult to model. Clear trends in wind, sea-ice extent and ocean properties emerged from multi-model intercomparison in the Coupled Model Intercomparison Project phase 3 (CMIP3). Here, we review recent analyses of the historical and projected wind, sea ice, circulation and bulk properties of the Southern Ocean in the updated Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble. Improvements to the models include higher resolutions, more complex and better-tuned parametrizations of ocean mixing, and improved biogeochemical cycles and atmospheric chemistry. CMIP5 largely reproduces the findings of CMIP3, but with smaller inter-model spreads and biases. By the end of the twenty-first century, mid-latitude wind stresses increase and shift polewards. All water masses warm, and intermediate waters freshen, while bottom waters increase in salinity. Surface mixed layers shallow, warm and freshen, whereas sea ice decreases. The upper overturning circulation intensifies, whereas bottom water formation is reduced. Significant disagreement exists between models for the response of the Antarctic Circumpolar Current strength, for reasons that are as yet unclear. PMID:24891395

  13. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. 
In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.

  14. Strontium isotope systematics of mixing groundwater and oil-field brine at Goose Lake in northeastern Montana, USA

    USGS Publications Warehouse

    Peterman, Zell E.; Thamke, Joanna N.; Futa, Kiyoto; Preston, Todd

    2012-01-01

    Groundwater, surface water, and soil in the Goose Lake oil field in northeastern Montana have been affected by Cl−-rich oil-field brines during long-term petroleum production. Ongoing multidisciplinary geochemical and geophysical studies have identified the degree and local extent of interaction between brine and groundwater. Fourteen samples representing groundwater, surface water, and brine were collected for Sr isotope analyses to evaluate the usefulness of 87Sr/86Sr in detecting small amounts of brine. Differences in Sr concentrations and 87Sr/86Sr are optimal at this site for the experiment. Strontium concentrations range from 0.13 to 36.9 mg/L, and corresponding 87Sr/86Sr values range from 0.71097 to 0.70828. The local brine has 168 mg/L Sr and a 87Sr/86Sr value of 0.70802. Mixing relationships are evident in the data set and illustrate the sensitivity of Sr in detecting small amounts of brine in groundwater. The location of data points on a Sr isotope-concentration plot is readily explained by an evaporation-mixing model. The model is supported by the variation in concentrations of most of the other solutes.
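
    The evaporation-mixing behaviour on a Sr isotope-concentration plot follows from concentration-weighted mixing of the end-member ratios. A minimal sketch using the brine values reported above (168 mg/L Sr, 0.70802) and a hypothetical dilute groundwater end-member:

```python
def sr_mixing_ratio(f_brine, c_brine, r_brine, c_gw, r_gw):
    """87Sr/86Sr of a brine-groundwater mixture with brine mass fraction
    f_brine, weighting each end-member ratio by its Sr concentration."""
    c_mix = f_brine * c_brine + (1.0 - f_brine) * c_gw
    return (f_brine * c_brine * r_brine + (1.0 - f_brine) * c_gw * r_gw) / c_mix
```

Because the brine carries far more Sr than the groundwater, even a percent-level brine fraction pulls the mixture's ratio sharply toward the brine value, which is why Sr is so sensitive a tracer here.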

  15. Expansion Under Climate Change: The Genetic Consequences.

    PubMed

    Garnier, Jimmy; Lewis, Mark A

    2016-11-01

    Range expansion and range shifts are crucial population responses to climate change. Genetic consequences are not well understood but are clearly coupled to ecological dynamics that, in turn, are driven by shifting climate conditions. We model a population with a deterministic reaction-diffusion model coupled to a heterogeneous environment that develops in time due to climate change. We decompose the resulting travelling wave solution into neutral genetic components to analyse the spatio-temporal dynamics of its genetic structure. Our analysis shows that range expansions and range shifts under slow climate change preserve genetic diversity. This is because slow climate change creates range boundaries that promote spatial mixing of genetic components. Mathematically, the mixing leads to so-called pushed travelling wave solutions. This mixing phenomenon is not seen in spatially homogeneous environments, where range expansion reduces genetic diversity through gene surfing arising from pulled travelling wave solutions. However, the preservation of diversity is diminished when climate change occurs too quickly. Using diversity indices, we show that fast expansions and range shifts erode genetic diversity more than slow range expansions and range shifts. Our study provides analytical insight into the dynamics of travelling wave solutions in heterogeneous environments.
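
    The pulled/pushed distinction is anchored by the linear-spreading speed of the homogeneous Fisher-KPP equation u_t = D u_xx + r u(1 - u), which pulled travelling waves attain. A one-line sketch of that speed (the heterogeneous, climate-forced model in the paper is more involved):

```python
import math

def pulled_front_speed(r, d):
    """Linear-spreading ('pulled') front speed of Fisher-KPP: c* = 2*sqrt(r*D)."""
    return 2.0 * math.sqrt(r * d)
```

Pushed fronts, such as those created by slowly moving range boundaries, travel faster than this linear speed and draw on the bulk of the population, which is why they preserve genetic diversity.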

  16. Development of a reactive-dispersive plume model

    NASA Astrophysics Data System (ADS)

    Kim, Hyun S.; Kim, Yong H.; Song, Chul H.

    2017-04-01

    A reactive-dispersive plume model (RDPM) was developed in this study. The RDPM considers two main components of large-scale point-source plumes: i) turbulent dispersion and ii) photochemical reactions. In order to evaluate the simulation performance of the newly developed RDPM, comparisons between the model-predicted and observed mixing ratios were made using the TexAQS II 2006 (Texas Air Quality Study II 2006) power-plant experiment data. Statistical analyses show good correlation (0.61 ≤ R ≤ 0.92) and good agreement in terms of the Index of Agreement (0.70 ≤ IOA ≤ 0.95). The chemical NOx lifetimes for two power-plant plumes (Monticello and Welsh power plants) were also estimated.
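
    The two evaluation statistics reported, Pearson correlation and Willmott's Index of Agreement, can be computed as follows; a minimal stdlib sketch, not the authors' evaluation code:

```python
import math

def index_of_agreement(obs, mod):
    """Willmott's index of agreement d in [0, 1]; 1 means perfect agreement."""
    ob = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for o, m in zip(obs, mod))
    den = sum((abs(m - ob) + abs(o - ob)) ** 2 for o, m in zip(obs, mod))
    return 1.0 - num / den if den else 1.0

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```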

  17. Decoherence effect in neutrinos produced in microquasar jets

    NASA Astrophysics Data System (ADS)

    Mosquera, M. E.; Civitarese, O.

    2018-04-01

    We study the effect of decoherence upon the neutrino spectra produced in microquasar jets. In order to analyse the precession of the polarization vector of neutrinos, we calculated its time evolution by solving the corresponding equations of motion under two different scenarios: (i) mixing between two active neutrinos, and (ii) mixing between one active and one sterile neutrino. The results of the calculations for these scenarios show that the onset of decoherence does not depend on the activation of neutrino-neutrino interactions when realistic values of the coupling are used in the calculations. We also discuss the case of neutrinos produced in windy microquasars and compare the results with those obtained with more conventional models of microquasars.

  18. Effects of morphological Family Size for young readers.

    PubMed

    Perdijk, Kors; Schreuder, Robert; Baayen, R Harald; Verhoeven, Ludo

    2012-09-01

    Dutch children from the second and fourth grades of primary school were each given a visual lexical decision test on 210 Dutch monomorphemic words. After removing words not recognized by a majority of the younger group, (lexical) decisions were analysed with mixed-model regression methods to determine whether morphological Family Size influenced decision times over and above several other covariates. The effect of morphological Family Size on decision time was mixed: larger families led to significantly faster decision times for the second graders but not for the fourth graders. Since facilitative effects on decision times had been found for adults, we offer a developmental account to explain the absence of an effect of Family Size on decision times for fourth graders. © 2011 The British Psychological Society.

  19. Interfacial mixing in as-deposited Si/Ni/Si layers analyzed by x-ray and polarized neutron reflectometry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debarati; Basu, Saibal; Singh, Surendra; Roy, Sumalay; Dev, Bhupendra Nath

    2012-12-01

    Interdiffusion occurring across the interfaces in a Si/Ni/Si layered system during deposition at room temperature was probed using x-ray reflectivity (XRR) and polarized neutron reflectivity (PNR). Exploiting the complementarity of these techniques, both structural and magnetic characterization with nanometer depth resolution could be achieved. Suitable model fitting of the reflectivity profiles identified the formation of Ni-Si mixed alloy layers at the Si/Ni and Ni/Si interfaces. The physical parameters of the layered structure, including quantitative assessment of the stoichiometry of interfacial alloys, were obtained from the analyses of XRR and PNR patterns. In addition, PNR provided magnetic moment density profile as a function of depth in the stratified medium.

  20. Preliminary Report on U-Th-Pb Isotope Systematics of the Olivine-Phyric Shergottite Tissint

    NASA Technical Reports Server (NTRS)

    Moriwaki, R.; Usui, T.; Yokoyama, T.; Simon, J. I.; Jones, J. H.

    2014-01-01

    Geochemical studies of shergottites suggest that their parental magmas reflect mixtures between at least two distinct geochemical source reservoirs, producing correlations between radiogenic isotope compositions and trace element abundances. These correlations have been interpreted as indicating the presence of a reduced, incompatible-element-depleted reservoir and an oxidized, incompatible-element-rich reservoir. The former is clearly a depleted mantle source, but there has been a long debate regarding the origin of the enriched reservoir. Two contrasting models have been proposed regarding the location and mixing process of the two geochemical source reservoirs: (1) assimilation of oxidized crust by mantle-derived, reduced magmas, or (2) mixing of two distinct mantle reservoirs during melting. The former clearly requires the ancient martian crust to be the enriched source (crustal assimilation), whereas the latter requires a long-lived enriched mantle domain that probably originated from residual melts formed during solidification of a magma ocean (heterogeneous mantle model). This study conducts Pb isotope and U-Th-Pb concentration analyses of the olivine-phyric shergottite Tissint because U-Th-Pb isotope systematics have been intensively used as a powerful radiogenic tracer to characterize old crust/sediment components in mantle-derived, terrestrial oceanic island basalts. The U-Th-Pb analyses are applied to sequential acid leaching fractions obtained from Tissint whole-rock powder in order to search for Pb isotopic source components in Tissint magma. Here we report preliminary results of the U-Th-Pb analyses of acid leachates and a residue, and propose the possibility that Tissint may have experienced minor assimilation of old martian crust.

  1. Estimation of deepwater temperature and hydrogeochemistry of springs in the Takab geothermal field, West Azerbaijan, Iran.

    PubMed

    Sharifi, Reza; Moore, Farid; Mohammadi, Zargham; Keshavarzi, Behnam

    2016-01-01

    Chemical analyses of water samples from 19 hot and cold springs are used to characterize the Takab geothermal field, west of Iran. The springs are divided into two main groups based on temperature, host rock, total dissolved solids (TDS), and major and minor elements. TDS, electrical conductivity (EC), Cl-, and SO42- concentrations of hot springs are all higher than in cold springs. Higher TDS in hot springs probably reflects longer circulation and residence times. The high Si, B, and Sr contents in thermal waters are probably the result of extended water-rock interaction and reflect flow paths and residence time. Binary, ternary, and Giggenbach diagrams were used to understand the deeper mixing conditions and the locations of springs in the model system. It is believed that the springs are heated either by mixing of deep geothermal fluid with cold groundwater or by low conductive heat flow. Mixing ratios are evaluated using Cl, Na, and B concentrations and a mass-balance approach. Calculated quartz and chalcedony geothermometers give lower reservoir temperatures than cation geothermometers. The silica-enthalpy mixing model predicts a subsurface reservoir temperature between 62 and 90 °C. The δ18O and δD (δ2H) values are used to trace and determine the origin and movement of the water. Both hot and cold waters plot close to the local meteoric line, indicating a local meteoric origin.
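    The mass-balance evaluation of mixing ratios reduces to a two-end-member balance on a conservative tracer such as Cl. A minimal sketch (the concentrations below are illustrative, not the Takab field values):

```python
def mixing_fraction(c_sample, c_cold, c_thermal):
    """Fraction f of the deep thermal end-member in a spring sample,
    from mass balance on a conservative tracer (e.g. Cl):
        c_sample = f * c_thermal + (1 - f) * c_cold
    """
    return (c_sample - c_cold) / (c_thermal - c_cold)

# Illustrative Cl concentrations in mg/L.
f = mixing_fraction(c_sample=120.0, c_cold=20.0, c_thermal=420.0)
```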

  2. Bus accident analysis of routes with/without bus priority.

    PubMed

    Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David

    2014-04-01

    This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests the effect of bus priority in addressing manoeuvrability issues for buses. A mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents considering wider influences on accident rates at a route section level also revealed significant safety benefits when bus priority is provided. Sensitivity analyses done on the BPNN model showed general agreement in the predicted accident frequency between both models. The slightly better performance recorded by the MENB model results suggests merits in adopting a mixed effects modelling approach for accident count prediction in practice given its capability to account for unobserved location and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Quantification of polyhydroxyalkanoates in mixed and pure cultures biomass by Fourier transform infrared spectroscopy: comparison of different approaches.

    PubMed

    Isak, I; Patel, M; Riddell, M; West, M; Bowers, T; Wijeyekoon, S; Lloyd, J

    2016-08-01

    Fourier transform infrared (FTIR) spectroscopy was used in this study for the rapid quantification of polyhydroxyalkanoates (PHA) in mixed and pure culture bacterial biomass. Three different statistical analysis methods (regression, partial least squares (PLS) and nonlinear) were applied to the FTIR data and the results were plotted against the PHA values measured with the reference gas chromatography technique. All methods predicted PHA content in mixed culture biomass with comparable efficiency, indicated by similar residual values. The PHA in these cultures ranged from low to medium concentration (0-44 wt% of dried biomass content). However, for the analysis of the combined mixed and pure culture biomass with PHA concentration ranging from low to high (0-93% of dried biomass content), the PLS method was most efficient. This paper reports, for the first time, the use of a single calibration model constructed with a combination of mixed and pure cultures covering a wide PHA range, for predicting PHA content in biomass. Currently, no universal method exists for processing FTIR data for polyhydroxyalkanoate (PHA) quantification. This study compares three different methods of analysing FTIR data for quantification of PHAs in biomass. A new data-processing approach was proposed and the results were compared against existing literature methods. Most publications report PHA quantification of medium range in pure culture. However, in our study we encompassed both mixed and pure culture biomass containing a broader range of PHA in the calibration curve. The resulting prediction model is useful for rapid quantification of a wider range of PHA content in biomass. © 2016 The Society for Applied Microbiology.
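    The simplest of the three approaches compared above is an ordinary least-squares calibration of a band intensity against the reference GC values; PLS needs considerably more machinery. A minimal sketch with made-up, perfectly linear data (the band, intensities, and PHA values are illustrative only):

```python
def linear_calibration(x, y):
    """Ordinary least-squares fit y = a + b*x for a single-band calibration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Illustrative data: FTIR carbonyl band intensity vs PHA wt% from GC.
intensity = [0.05, 0.10, 0.20, 0.40]
pha_wt = [5.0, 10.0, 20.0, 40.0]
a, b = linear_calibration(intensity, pha_wt)
predicted = a + b * 0.30   # predict PHA wt% for a new spectrum
```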

  4. Dolomitization in a mixing zone of near-seawater composition, Late Pleistocene, northeastern Yucatan Peninsula

    USGS Publications Warehouse

    Ward, W. C.; Halley, Robert B.

    1985-01-01

    δ18O compositions of Yucatecan dolomite and of modern ground water suggest dolomite precipitation from mixed water ranging from about 75% seawater, 25% freshwater to nearly all seawater. (Isotope analyses are for the most stable calcian dolomites; more soluble, calcium-rich dolomite presumably is analyzed with calcite and thought to be isotopically lighter than the less soluble dolomite.) In the cement sequence, the most stable dolomite is followed by more soluble dolomite as ground water becomes less saline. Isotope analyses, together with position of dolomite in the cement sequence, suggest the most stable calcian dolomite (including limpid dolomite) precipitated from mixed water with large proportions of seawater, and the less stable, more calcian dolomite precipitated from fresher mixed water.

  5. A Methodological Review of US Budget-Impact Models for New Drugs.

    PubMed

    Mauskopf, Josephine; Earnshaw, Stephanie

    2016-11-01

    A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments were not included in several analyses for chronic conditions. In addition, all drug-related costs were not captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
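    Two of the elements flagged above, growth in the treated population and year-by-year uptake, combine in a very simple calculation. A minimal sketch (all inputs are invented for illustration, not from any reviewed study):

```python
def budget_impact(pop, growth, uptake, cost_new, cost_old, years):
    """Yearly incremental budget impact: the treated population grows each
    year, a share switches to the new drug, and each switcher adds the
    incremental cost (cost_new - cost_old)."""
    impact = []
    for t in range(years):
        treated = pop * (1 + growth) ** t          # population growth element
        impact.append(treated * uptake[t] * (cost_new - cost_old))
    return impact

# Illustrative inputs: 10,000 patients growing 5%/year, rising uptake.
yearly = budget_impact(pop=10_000, growth=0.05, uptake=[0.10, 0.20, 0.30],
                       cost_new=12_000.0, cost_old=9_000.0, years=3)
```

    A fuller model would also apply the disease-related cost offsets and sensitivity ranges the review calls for; holding those inputs fixed is exactly the shortcut the review criticises.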

  6. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    USGS Publications Warehouse

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
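    The negative binomial's suitability for such counts comes from its gamma-Poisson construction: a Poisson count whose rate is itself gamma-distributed is overdispersed (variance exceeds the mean), unlike the Gaussian or plain Poisson. A stdlib-only sketch with invented parameters (not the Walleye data):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative Poisson sampler (adequate for modest rates)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def neg_binomial_count(mu, k, rng):
    """Negative binomial via its gamma-Poisson mixture: the Poisson rate is
    gamma-distributed with mean mu and shape k, giving Var = mu + mu**2 / k."""
    lam = rng.gammavariate(k, mu / k)
    return poisson(lam, rng)

rng = random.Random(42)
counts = [neg_binomial_count(mu=3.0, k=0.8, rng=rng) for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# Overdispersion: var lands far above mean, unlike a Poisson sample.
```

    In the mixed-model framework of the paper, site, year, and site × year random effects enter through the mean on the log scale; this sketch only shows why the negative binomial, rather than a Gaussian, is the natural observation distribution.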

  7. Dynamically heterogenous partitions and phylogenetic inference: an evaluation of analytical strategies with cytochrome b and ND6 gene sequences in cranes.

    PubMed

    Krajewski, C; Fain, M G; Buckley, L; King, D G

    1999-11-01

    Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogenous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogenous data sets. Copyright 1999 Academic Press.

  8. Residual estuarine circulation in the Mandovi, a monsoonal estuary: A three-dimensional model study

    NASA Astrophysics Data System (ADS)

    Vijith, V.; Shetye, S. R.; Baetens, K.; Luyten, P.; Michael, G. S.

    2016-05-01

    Observations in the Mandovi estuary, located on the central west coast of India, have shown that the salinity field in this estuary is remarkably time-dependent and passes through all possible states of stratification (riverine, highly-stratified, partially-mixed and well-mixed) during a year as the runoff into the estuary varies from high values (∼1000 m3 s-1) in the wet season to negligible values (∼1 m3 s-1) at the end of the dry season. The time-dependence is forced by the Indian Summer Monsoon (ISM) and hence the estuary is referred to as a monsoonal estuary. In this paper, we use a three-dimensional, open source, hydrodynamic, numerical model to reproduce the observed annual salinity field in the Mandovi. We then analyse the model results to define characteristics of residual estuarine circulation in the Mandovi. Our motivation to study this aspect of the Mandovi's dynamics is derived from the following three considerations. First, residual circulation is important to the long-term evolution of an estuary; second, we need to understand how this circulation responds to the strongly time-dependent runoff forcing experienced by a monsoonal estuary; and third, the Mandovi is among the best-studied estuaries that come under the influence of the ISM, and has observations that can be used to validate the model. Our analysis shows that the residual estuarine circulation in the Mandovi passes through four distinct phases during a year: a river-like flow that is oriented downstream throughout the estuary; a salt-wedge type circulation, with flow into the estuary near the bottom and out of the estuary near the surface, restricted close to the mouth of the estuary; circulation associated with a partially-mixed estuary; and the circulation associated with a well-mixed estuary. Dimensional analysis of the field of residual circulation helped us to establish the link between the strength of residual circulation at a location and the magnitude of river runoff and rate of mixing at that location. We then derive an analytical expression that approximates the exchange velocity (bottom velocity minus near-surface velocity at a location) as a function of freshwater velocity and rate of mixing.

  9. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
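    In practice the mixed-model equations are sparse and solved with iteration on data, but the algorithm itself is easy to state. A dense stdlib-only sketch of conjugate gradient with the diagonal (Jacobi) preconditioner discussed above, on an illustrative 3×3 symmetric positive-definite system (not animal-breeding data):

```python
def pcg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner.
    A is a dense SPD matrix as a list of lists; b is a list."""
    n = len(b)
    minv = [1.0 / A[i][i] for i in range(n)]     # inverse of the diagonal
    x = [0.0] * n
    r = b[:]                                     # residual b - A@x for x = 0
    z = [minv[i] * r[i] for i in range(n)]       # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

    The convergence criterion here uses the residual norm; the paper's criterion (relative differences between left- and right-hand sides) is equivalent up to scaling.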

  10. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE PAGES

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...

    2017-02-06

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.

  12. Ice Cloud Formation and Dehydration in the Tropical Tropopause Layer

    NASA Technical Reports Server (NTRS)

    Jensen, Eric; Gore, Warren J. (Technical Monitor)

    2002-01-01

    Stratospheric water vapor is important not only for its greenhouse forcing, but also because it plays a significant role in stratospheric chemistry. Several recent studies have focused on the potential for dehydration due to ice cloud formation in air rising slowly through the tropical tropopause layer (TTL). Holton and Gettelman showed that temperature variations associated with horizontal transport of air in the TTL can drive ice cloud formation and dehydration, and Gettelman et al. recently examined the cloud formation and dehydration along kinematic trajectories using simple assumptions about the cloud properties. In this study, a Lagrangian, one-dimensional cloud model has been used to further investigate cloud formation and dehydration as air is transported horizontally and vertically through the TTL. Time-height curtains of temperature are extracted from meteorological analyses. The model tracks the growth, advection, and sedimentation of individual cloud particles. The regional distribution of clouds simulated in the model is comparable to the subvisible cirrus distribution indicated by SAGE II. The simulated cloud properties and cloud frequencies depend strongly on the assumed supersaturation threshold for ice nucleation. The clouds typically do not dehydrate the air along trajectories down to the temperature minimum saturation mixing ratio. Rather the water vapor mixing ratio crossing the tropopause along trajectories is 10-50% larger than the saturation mixing ratio. I will also discuss the impacts of Kelvin waves and gravity waves on cloud properties and dehydration efficiency. These simulations can be used to determine whether observed lower stratospheric water vapor mixing ratios can be explained by dehydration associated with in situ TTL cloud formation alone.
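    The saturation mixing ratios central to the dehydration argument follow from the saturation vapor pressure over ice. A minimal sketch using a Magnus-type fit (the Alduchov-Eskridge coefficients for ice; this is a standard approximation, not the formula used in the cited model):

```python
import math

def sat_vapor_pressure_ice(t_c):
    """Saturation vapor pressure over ice in hPa (Magnus-type fit,
    Alduchov & Eskridge coefficients); t_c in degrees Celsius."""
    return 6.1115 * math.exp(22.452 * t_c / (272.55 + t_c))

def sat_mixing_ratio(t_c, p_hpa):
    """Saturation water-vapor mass mixing ratio (kg/kg) at pressure p_hpa."""
    e = sat_vapor_pressure_ice(t_c)
    return 0.622 * e / (p_hpa - e)

# Near the tropical tropopause (roughly -80 C at ~100 hPa) the saturation
# mixing ratio is only a few parts per million, hence the strong dehydration.
w = sat_mixing_ratio(-80.0, 100.0)
```

    A water vapor mixing ratio 10-50% above this saturation value, as reported in the abstract, corresponds to air leaving the tropopause slightly supersaturated relative to the temperature-minimum value.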

  13. Analysis of categorical moderators in mixed-effects meta-analysis: Consequences of using pooled versus separate estimates of the residual between-studies variances.

    PubMed

    Rubio-Aparicio, María; Sánchez-Meca, Julio; López-López, José Antonio; Botella, Juan; Marín-Martínez, Fulgencio

    2017-11-01

    Subgroup analyses allow us to examine the influence of a categorical moderator on the effect size in meta-analysis. We conducted a simulation study using a dichotomous moderator, and compared the impact of pooled versus separate estimates of the residual between-studies variance on the statistical performance of the QB(P) and QB(S) tests for subgroup analyses assuming a mixed-effects model. Our results suggested that similar performance can be expected as long as there are at least 20 studies and these are approximately balanced across categories. Conversely, when subgroups were unbalanced, the practical consequences of having heterogeneous residual between-studies variances were more evident, with both tests leading to the wrong statistical conclusion more often than in the conditions with balanced subgroups. A pooled estimate should be preferred for most scenarios, unless the residual between-studies variances are clearly different and there are enough studies in each category to obtain precise separate estimates. © 2017 The British Psychological Society.
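    The pooled-versus-separate distinction can be made concrete with the DerSimonian-Laird estimator: separate estimation applies it within each subgroup, while pooling combines the subgroup Q statistics, degrees of freedom, and scaling constants before solving for a single tau². A sketch with illustrative effect sizes and variances (one common way to pool; the paper's exact estimator may differ):

```python
def _q_c(effects, variances):
    """Cochran's Q and the DerSimonian-Laird scaling constant C."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    return q, c

def separate_tau2(effects, variances):
    """DerSimonian-Laird tau^2 estimated within a single subgroup."""
    q, c = _q_c(effects, variances)
    return max(0.0, (q - (len(effects) - 1)) / c)

def pooled_tau2(groups):
    """One residual tau^2 pooled across subgroups (sums of Q, df, and C)."""
    q = df = c = 0.0
    for effects, variances in groups:
        qj, cj = _q_c(effects, variances)
        q += qj
        df += len(effects) - 1
        c += cj
    return max(0.0, (q - df) / c)

groups = [([0.20, 0.50, 0.80, 0.30], [0.04, 0.05, 0.04, 0.06]),
          ([0.10, 0.15, 0.05, 0.12], [0.04, 0.05, 0.04, 0.06])]
tau2_sep = [separate_tau2(e, v) for e, v in groups]
tau2_pool = pooled_tau2(groups)
```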

  14. Minimum number of clusters and comparison of analysis methods for cross sectional stepped wedge cluster randomised trials with binary outcomes: A simulation study.

    PubMed

    Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick

    2017-03-09

    Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
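    The design recommendation above (three steps, at least two clusters randomised at each step) is easiest to see as an exposure matrix. A minimal sketch:

```python
def stepped_wedge(steps, clusters_per_step):
    """Exposure matrix for a cross-sectional stepped wedge design:
    rows = clusters, columns = periods (baseline plus one per step);
    entry 1 means the cluster has crossed over to the intervention."""
    periods = steps + 1
    design = []
    for s in range(steps):
        row = [1 if t > s else 0 for t in range(periods)]
        design.extend([row[:] for _ in range(clusters_per_step)])
    return design

# Three steps, two clusters randomised at each step (the paper's minimum).
d = stepped_wedge(3, 2)
# d[0] crosses over first, d[5] last; every cluster starts unexposed.
```

    Because every cluster eventually receives the intervention, the time trend and the intervention effect are partially aliased, which is why the analysis must adjust for period, as the simulation study does.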

  15. Synthesis and deposition of basement membrane proteins by primary brain capillary endothelial cells in a murine model of the blood-brain barrier.

    PubMed

    Thomsen, Maj Schneider; Birkelund, Svend; Burkhart, Annette; Stensballe, Allan; Moos, Torben

    2017-03-01

    The brain vascular basement membrane is important for blood-brain barrier (BBB) development, stability, and integrity, and the contribution hereto from brain capillary endothelial cells (BCECs), pericytes, and astrocytes of the BBB is probably significant. The aim of this study was to analyse four different in vitro models of the murine BBB for expression and possible secretion of major basement membrane proteins from murine BCECs (mBCECs). mBCECs, pericytes and glial cells (mainly astrocytes and microglia) were prepared from brains of C57BL/6 mice. The mBCECs were grown as monoculture, in co-culture with pericytes or mixed glial cells, or as a triple-culture with both pericytes and mixed glial cells. The integrity of the BBB models was validated by measures of transendothelial electrical resistance (TEER) and passive permeability to mannitol. The expression of basement membrane proteins was analysed using RT-qPCR, mass spectrometry and immunocytochemistry. Co-culturing mBCECs with pericytes, mixed glial cells, or both significantly increased the TEER compared to the monoculture, and a low passive permeability was correlated with high TEER. The mBCECs expressed all major basement membrane proteins such as laminin-411, laminin-511, collagen [α1(IV)]2α2(IV), agrin, perlecan, and nidogen 1 and 2 in vitro. Increased expression of the laminin α5 subunit correlated with the addition of BBB-inducing factors (hydrocortisone, Ro 20-1724, and pCPT-cAMP), whereas increased expression of collagen IV α1 primarily correlated with increased levels of cAMP. In conclusion, BCECs cultured in vitro coherently form a BBB and express basement membrane proteins as a feature of maturation. © 2016 International Society for Neurochemistry.

  16. Use of chemical and isotopic tracers to characterize the interactions between ground water and surface water in mantled karst

    USGS Publications Warehouse

    Katz, B.G.; Coplen, T.B.; Bullen, T.D.; Davis, J. Hal

    1997-01-01

    In the mantled karst terrane of northern Florida, the water quality of the Upper Floridan aquifer is influenced by the degree of connectivity between the aquifer and the surface. Chemical and isotopic analyses [18O/16O (δ18O), 2H/1H (δD), 13C/12C (δ13C), tritium (3H), and strontium-87/strontium-86 (87Sr/86Sr)] along with geochemical mass-balance modeling were used to identify the dominant hydrochemical processes that control the composition of ground water as it evolves downgradient in two systems. In one system, surface water enters the Upper Floridan aquifer through a sinkhole located in the Northern Highlands physiographic unit. In the other system, surface water enters the aquifer through a sinkhole lake (Lake Bradford) in the Woodville Karst Plain. Differences in the composition of water isotopes (δ18O and δD) in rainfall, ground water, and surface water were used to develop mixing models of surface water (leakage of water to the Upper Floridan aquifer from a sinkhole lake and a sinkhole) and ground water. Using mass-balance calculations, based on differences in δ18O and δD, the proportion of lake water that mixed with meteoric water ranged from 7 to 86% in water from wells located in close proximity to Lake Bradford. In deeper parts of the Upper Floridan aquifer, water enriched in 18O and D from five of 12 sampled municipal wells indicated that recharge from a sinkhole (1 to 24%) and surface water with an evaporated isotopic signature (2 to 32%) was mixing with ground water. The solute isotopes, δ13C and 87Sr/86Sr, were used to test the sensitivity of binary and ternary mixing models, and to estimate the amount of mass transfer of carbon and other dissolved species in geochemical reactions. In ground water downgradient from Lake Bradford, the dominant processes controlling carbon cycling in ground water were dissolution of carbonate minerals, aerobic degradation of organic matter, and hydrolysis of silicate minerals.
In the deeper parts of the Upper Floridan aquifer, the major processes controlling the concentrations of major dissolved species included dissolution of calcite and dolomite, and degradation of organic matter under oxic conditions. The Upper Floridan aquifer is highly susceptible to contamination from activities at the land surface in the Tallahassee area. The presence of post-1950s concentrations of 3H in ground water from depths greater than 100 m below land surface indicates that water throughout much of the Upper Floridan aquifer has been recharged during the last 40 years. Even though mixing is likely between ground water and surface water in many parts of the study area, the Upper Floridan aquifer produces good quality water, which due to dilution effects shows little if any impact from trace elements or nutrients that are present in surface waters.
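    The proportions quoted above come from a standard two-end-member mass balance on a conservative isotope tracer. A minimal sketch, with hypothetical δ18O values chosen only for illustration (the study's actual end-member compositions are not reproduced here):

```python
def mixing_fraction(delta_sample, delta_source, delta_background):
    """Two-end-member isotope mass balance: fraction f of `source` water
    in a sample, from delta_sample = f*delta_source + (1 - f)*delta_background.
    Valid only when the two end-members are isotopically distinct."""
    return (delta_sample - delta_background) / (delta_source - delta_background)

# Hypothetical d18O values (per mil): evaporated lake water is enriched
# relative to meteoric recharge; a well sample plots between the two.
f_lake = mixing_fraction(delta_sample=-2.0, delta_source=2.0, delta_background=-4.0)
print(round(f_lake, 3))  # fraction of lake water in the well sample
```

    With two tracers (e.g. δ18O and δD) the same balance written for each tracer gives an overdetermined system, which is how binary versus ternary mixing models can be tested for sensitivity.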

  17. The Influence of Thermodynamic Phase on the Retrieval of Mixed-Phase Cloud Microphysical and Optical Properties in the Visible and Near Infrared Region

    NASA Technical Reports Server (NTRS)

    Lee, Joonsuk; Yang, Ping; Dessler, Andrew E.; Baum, Bryan A.; Platnick, Steven

    2005-01-01

    Cloud microphysical and optical properties are inferred from the bidirectional reflectances simulated for a single-layered cloud consisting of an external mixture of ice particles and liquid droplets. The reflectances are calculated with a rigorous discrete ordinates radiative transfer model and are functions of the cloud effective particle size, the cloud optical thickness, and the values of the ice fraction in the cloud (i.e., the ratio of ice water content to total water content). In the present light scattering and radiative transfer simulations, the ice fraction is assumed to be vertically homogeneous; the habit (shape) percentage as a function of ice particle size is consistent with that used for the Moderate Resolution Imaging Spectroradiometer (MODIS) operational (Collection 4 and earlier) cloud products; and the surface is assumed to be Lambertian with an albedo of 0.03. Furthermore, error analyses pertaining to the inference of the effective particle sizes and optical thicknesses of mixed-phase clouds are performed. Errors are calculated with respect to the assumption of a cloud containing solely liquid or ice phase particles. The analyses suggest that the effective particle size inferred for a mixed-phase cloud can be underestimated (or overestimated) if pure liquid phase (or pure ice phase) is assumed for the cloud, whereas the corresponding cloud optical thickness can be overestimated (or underestimated).

  18. Intercomparison of aerosol-cloud-precipitation interactions in stratiform orographic mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Muhlbauer, A.; Hashino, T.; Xue, L.; Teller, A.; Lohmann, U.; Rasmussen, R. M.; Geresdi, I.; Pan, Z.

    2010-09-01

    Anthropogenic aerosols serve as a source of both cloud condensation nuclei (CCN) and ice nuclei (IN) and affect microphysical properties of clouds. Increasing aerosol number concentrations are hypothesized to retard cloud droplet coalescence and riming in mixed-phase clouds, thereby decreasing orographic precipitation. This study presents results from a model intercomparison of 2-D simulations of aerosol-cloud-precipitation interactions in stratiform orographic mixed-phase clouds. The sensitivity of orographic precipitation to changes in the aerosol number concentrations is analysed and compared for various dynamical and thermodynamical situations. Furthermore, the sensitivities of microphysical processes such as coalescence, aggregation, riming and diffusional growth to changes in the aerosol number concentrations are evaluated and compared. The participating numerical models are the model from the Consortium for Small-Scale Modeling (COSMO) with bulk microphysics, the Weather Research and Forecasting (WRF) model with bin microphysics and the University of Wisconsin modeling system (UWNMS) with a spectral ice habit prediction microphysics scheme. All models are operated on a cloud-resolving scale with 2 km horizontal grid spacing. The results of the model intercomparison suggest that the sensitivity of orographic precipitation to aerosol modifications varies greatly from case to case and from model to model. Neither a precipitation decrease nor a precipitation increase is found robustly in all simulations. Qualitatively robust results can only be found for a subset of the simulations but even then quantitative agreement is scarce. Estimates of the aerosol effect on orographic precipitation are found to range from -19% to 0% depending on the simulated case and the model. Similarly, riming is shown to decrease in some cases and models whereas it increases in others, which implies that a decrease in riming with increasing aerosol load is not a robust result.
Furthermore, it is found that neither a decrease in cloud droplet coalescence nor a decrease in riming necessarily implies a decrease in precipitation due to compensation effects by other microphysical pathways. The simulations suggest that mixed-phase conditions play an important role in buffering the effect of aerosol perturbations on cloud microphysics and reducing the overall susceptibility of clouds and precipitation to changes in the aerosol number concentrations. As a consequence the aerosol effect on precipitation is suggested to be less pronounced or even inverted in regions with high terrain (e.g., the Alps or Rocky Mountains) or in regions where mixed-phase microphysics is important for the climatology of orographic precipitation.

  19. Shergottite Lead Isotope Signature in Chassigny and the Nakhlites

    NASA Technical Reports Server (NTRS)

    Jones, J. H.; Simon, J. I.

    2017-01-01

    The nakhlites/chassignites and the shergottites represent two differing suites of basaltic martian meteorites. The shergottites have ages less than or equal to 0.6 Ga and a large range of initial Sr-87/Sr-86 and epsilon (Nd-143) ratios. Conversely, the nakhlites and chassignites cluster at 1.3-1.4 Ga and have a limited range of initial Sr-87/Sr-86 and epsilon (Nd-143). More importantly, the shergottites have epsilon (W-182) less than 1, whereas the nakhlites and chassignites have epsilon (W-182) approximately 3. This latter observation precludes the extraction of both meteorite groups from a single source region. However, recent Pb isotopic analyses indicate that there may have been interaction between shergottite and nakhlite/chassignite Pb reservoirs. Pb Analyses of Chassigny: Two different studies have investigated 207Pb/204Pb vs. 206Pb/204Pb in Chassigny: (i) TIMS bulk-rock analyses of successive leaches and their residue [3]; and (ii) SIMS analysis of individual minerals [4]. The bulk-rock analyses fall along a regression of SIMS plagioclase analyses that define an errorchron that is older than the Solar System (4.61±0.1 Ga); i.e., these define a mixing line between Chassigny's principal Pb isotopic components (Fig. 1). Augites and olivines in Chassigny (not shown) also fall along or near the plagioclase regression [4]. This agreement indicates that the whole-rock leachates likely measure indigenous, martian Pb, not terrestrial contamination [5]. SIMS analyses of K-spars and sulfides define a separate, sub-parallel trend having higher 207Pb/206Pb values ([4]; Fig. 1). The good agreement between the bulk-rock analyses and the SIMS analyses of plagioclases also indicates that the Pb in the K-spars and sulfides cannot be a major component of Chassigny. The depleted reservoir sampled by Chassigny plagioclase is not the same as the solar system initial (PAT) and requires a multi-stage origin. Here we show a two-stage model (Fig. 1) with a 238U/204Pb (µ) of 0.5 for 4.5-2.4 Ga and a µ of 7 for 2.4-1.4 Ga. This is not a unique model but does produce a Pb composition that falls on the plagioclase regression at 1.4 Ga, the approximate igneous age of Chassigny [1]. It should be noted that low-µ single-stage models are not capable of producing sufficiently radiogenic 206Pb/204Pb at 1.4 Ga. Relation to Shergottites: The Chassigny K-spars and sulfides fall along a second mixing line defined by leaches and residues of depleted and intermediate shergottites [6]. This mixing line falls above the plagioclase regression. Therefore, we also interpret the radiogenic component of this mixing line to represent indigenous martian Pb. It is possible that the depleted and intermediate shergottites and the Chassigny plagioclases sample radiogenic Pb from the same source, i.e., the mixing lines may intersect at high 206Pb/204Pb. Both K-spar and sulfide are late-stage phases. At the time of their crystallization, the Chassigny system appears to have remained open to a depleted shergottite Pb reservoir. The depleted component of the shergottite mixing line can be generated by a single-stage evolution from PAT (4.5 to 1.4 Ga) in a reservoir having a µ of 2. A similar model for the most depleted shergottites is also possible: µ = 1.5 for 4.5 to 0.3 Ga. Nakhlites: Nakhlite analyses plot between the shergottite and Chassigny plagioclase regressions [3]. So again, members of the nakhlite/chassignite suite show affinities to shergottite Pb.
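    The two-stage µ model above can be made concrete with the standard 238U decay constant (0.155125 per Ga) and, as an assumed starting point, the Canyon Diablo troilite primordial 206Pb/204Pb of 9.307. A sketch of the radiogenic ingrowth calculation (the abstract does not state its initial ratio, so the absolute value here is illustrative):

```python
import math

LAMBDA_238 = 0.155125        # 238U decay constant, per Ga (standard value)
PRIMORDIAL_206_204 = 9.307   # Canyon Diablo troilite initial ratio (assumed)

def radiogenic_ingrowth(mu, t_start, t_end):
    """Radiogenic 206Pb/204Pb added between t_start and t_end (Ga before
    present) in a reservoir with 238U/204Pb = mu (present-day normalized)."""
    return mu * (math.exp(LAMBDA_238 * t_start) - math.exp(LAMBDA_238 * t_end))

# Two-stage model from the abstract: mu = 0.5 from 4.5 to 2.4 Ga,
# then mu = 7 from 2.4 Ga to the ~1.4 Ga igneous age of Chassigny.
ratio_at_1_4_ga = (PRIMORDIAL_206_204
                   + radiogenic_ingrowth(0.5, 4.5, 2.4)
                   + radiogenic_ingrowth(7.0, 2.4, 1.4))
print(round(ratio_at_1_4_ga, 2))
```

    The same function with a single stage (e.g. µ = 2 from 4.5 to 1.4 Ga) shows why low-µ single-stage models produce less radiogenic 206Pb/204Pb than the two-stage model at 1.4 Ga.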

  20. Sudbury project (University of Muenster-Ontario Geological Survey): Isotope systematics support the impact origin

    NASA Technical Reports Server (NTRS)

    Deutsch, A.; Buhl, D.; Brockmeyer, P.; Lakomy, R.; Flucks, M.

    1992-01-01

    Within the framework of the Sudbury project a considerable number of Sr-Nd isotope analyses were carried out on petrographically well-defined samples of different breccia units. Together with isotope data from the literature these data are reviewed under the aspect of a self-consistent impact model. The crucial point of this model is that the Sudbury Igneous Complex (SIC) is interpreted as a differentiated impact melt sheet without any need for an endogenic 'magmatic' component such as 'impact-triggered' magmatism or 'partial' impact melting of the crust and mixing with a mantle-derived magma.

  1. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
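    TRaMM itself uses the ratio of two tracers to make the fastflow end-member time-variable; the classic single-tracer, two-component mass balance it builds on can be sketched as follows (end-member concentrations below are hypothetical, not from the study):

```python
def fastflow_discharge(q_total, c_stream, c_slow, c_fast):
    """Classic two-component tracer mass balance for hydrograph separation:
        q_total * c_stream = q_fast * c_fast + (q_total - q_fast) * c_slow
    solved for the fastflow component q_fast. Requires distinct end-members."""
    return q_total * (c_stream - c_slow) / (c_fast - c_slow)

# Hypothetical specific-conductance end-members (uS/cm): slowflow (groundwater)
# is concentrated, fastflow (event water) is dilute, the stream is in between.
q_fast = fastflow_discharge(q_total=10.0, c_stream=50.0, c_slow=80.0, c_fast=20.0)
print(q_fast)  # discharge attributed to fastflow, same units as q_total
```

    The subjectivity the abstract mentions enters through c_slow and c_fast; TRaMM's contribution is replacing the fixed c_fast with a high-frequency estimate derived from the ratio of two tracers.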

  2. Evaluation of the Community Multiscale Air Quality Model for Simulating Winter Ozone Formation in the Uinta Basin

    NASA Astrophysics Data System (ADS)

    Matichuk, Rebecca; Tonnesen, Gail; Luecken, Deborah; Gilliam, Rob; Napelenok, Sergey L.; Baker, Kirk R.; Schwede, Donna; Murphy, Ben; Helmig, Detlev; Lyman, Seth N.; Roselle, Shawn

    2017-12-01

    The Weather Research and Forecasting (WRF) and Community Multiscale Air Quality (CMAQ) models were used to simulate a 10 day high-ozone episode observed during the 2013 Uinta Basin Winter Ozone Study (UBWOS). The baseline model had a large negative bias when compared to ozone (O3) and volatile organic compound (VOC) measurements across the basin. Contrary to other wintertime Uinta Basin studies, predicted nitrogen oxides (NOx) were typically low compared to measurements. Increases to oil and gas VOC emissions resulted in O3 predictions closer to observations, and nighttime O3 improved when reducing the deposition velocity for all chemical species. Vertical structures of these pollutants were similar to observations on multiple days. However, the predicted surface layer VOC mixing ratios were generally found to be underestimated during the day and overestimated at night. While temperature profiles compared well to observations, WRF was found to have a warm temperature bias and to underestimate nighttime mixing heights. Using a more realistic snow heat capacity in WRF to address the warm bias and vertical mixing improved the temperature profiles, although the improved temperature profiles seldom resulted in improved O3 profiles. While additional work is needed to investigate meteorological impacts, results suggest that the uncertainty in the oil and gas emissions contributes more to the underestimation of O3. Further, model adjustments based on a single site may not be suitable across all sites within the basin.

  3. Transcriptional responses of zebrafish to complex metal mixtures in laboratory studies overestimates the responses observed with environmental water.

    PubMed

    Pradhan, Ajay; Ivarsson, Per; Ragnvaldsson, Daniel; Berg, Håkan; Jass, Jana; Olsson, Per-Erik

    2017-04-15

    Metals released into the environment continue to be of concern for human health. However, risk assessment of metal exposure is often based on total metal levels and usually does not take bioavailability data, metal speciation or matrix effects into consideration. The continued development of biological endpoint analyses is therefore of high importance for improved eco-toxicological risk analyses. While there is an on-going debate concerning synergistic or additive effects of low-level mixed exposures there is little environmental data confirming the observations obtained from laboratory experiments. In the present study we utilized qRT-PCR analysis to identify key metal response genes to develop a method for biomonitoring and risk-assessment of metal pollution. The gene expression patterns were determined for juvenile zebrafish exposed to waters from sites down-stream of a closed mining operation. Genes representing different physiological processes including stress response, inflammation, apoptosis, drug metabolism, ion channels and receptors, and genotoxicity were analyzed. The gene expression patterns of zebrafish exposed to laboratory prepared metal mixes were compared to the patterns obtained with fish exposed to the environmental samples with the same metal composition and concentrations. Exposure to environmental samples resulted in fewer alterations in gene expression compared to laboratory mixes. A biotic ligand model (BLM) was used to approximate the bioavailability of the metals in the environmental setting. However, the BLM results were not in agreement with the experimental data, suggesting that the BLM may be overestimating the risk in the environment. The present study therefore supports the inclusion of site-specific biological analyses to complement the present chemical based assays used for environmental risk-assessment. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. The Supermarket Model with Bounded Queue Lengths in Equilibrium

    NASA Astrophysics Data System (ADS)

    Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.

    2018-04-01

    In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0, 1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^(−α) and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^(−α + (k−1)β), and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
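    The dynamics are easy to simulate directly. A minimal discrete-event sketch of the power-of-d-choices mechanism (fixed small n, λ and d chosen for illustration rather than the paper's n-dependent regime):

```python
import random

def supermarket_model(n, lam, d, steps, seed=0):
    """Jump-chain simulation of the supermarket model: arrivals occur at total
    rate lam*n and join the shortest of d uniformly sampled queues; each
    nonempty queue completes a service at rate 1."""
    rng = random.Random(seed)
    queues = [0] * n
    for _ in range(steps):
        busy = sum(1 for q in queues if q > 0)
        arrival_rate = lam * n
        total_rate = arrival_rate + busy
        if rng.random() < arrival_rate / total_rate:
            # Arrival: sample d distinct queues, join a least-loaded one.
            chosen = rng.sample(range(n), d)
            target = min(chosen, key=lambda i: queues[i])
            queues[target] += 1
        else:
            # Departure from a uniformly chosen busy queue.
            i = rng.choice([j for j in range(n) if queues[j] > 0])
            queues[i] -= 1
    return queues

queues = supermarket_model(n=100, lam=0.7, d=5, steps=20000)
print(max(queues))  # with d choices, long queues are rare
```

    Even at moderate load, sampling d queues instead of one keeps the maximum queue length small, which is the qualitative effect the paper quantifies in its n^β regime.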

  5. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal-polynomial kernel function with a Gaussian radial basis kernel function, so it possesses both the global characteristics of the polynomial kernel and the local characteristics of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
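    The mixed-kernel idea itself is simple to illustrate. The sketch below uses kernel ridge regression instead of the authors' SVR (an assumption made to keep the example self-contained), with a convex combination of a polynomial kernel (global trend) and a Gaussian RBF kernel (local detail); the toy response and all hyperparameters are hypothetical:

```python
import numpy as np

def mixed_kernel(X, Y, weight=0.5, degree=3, gamma=1.0):
    """Convex combination of a polynomial kernel (global behaviour) and a
    Gaussian RBF kernel (local behaviour), both positive semi-definite."""
    poly = (1.0 + X @ Y.T) ** degree
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq_dists)
    return weight * poly + (1.0 - weight) * rbf

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2     # toy response surface

# Kernel ridge fit: (K + alpha*I) coef = y
alpha = 1e-4
K = mixed_kernel(X, X)
coef = np.linalg.solve(K + alpha * np.eye(len(X)), y)

X_test = rng.uniform(-1, 1, size=(200, 2))
y_pred = mixed_kernel(X_test, X) @ coef
y_true = np.sin(np.pi * X_test[:, 0]) + X_test[:, 1] ** 2
rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
print(rmse)  # accuracy of the mixed-kernel surrogate on held-out points
```

    The paper's further step, recovering Sobol indices from the surrogate's coefficients, depends on the orthogonal-polynomial structure of its kernel and is not reproduced here.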

  6. Anomalies of the upper water column in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Rivetti, Irene; Boero, Ferdinando; Fraschetti, Simonetta; Zambianchi, Enrico; Lionello, Piero

    2017-04-01

    The evolution of the upper water column in the Mediterranean Sea over more than 60 years is reconstructed in terms of a few parameters describing the mixed layer and the seasonal thermocline. The analysis covers the period 1945-2011 using data from three public sources: MEDAR-MEDATLAS, the World Ocean Database, and the MFS-VOS program. Five procedures for estimating the mixed layer depth are described, discussed and compared using the 20-year long time series of temperature profiles of the DYFAMED station in the Ligurian Sea. On this basis, the so-called three-segment profile model (which approximates the upper water column with three segments representing mixed layer, thermocline and deep layer) has been selected for a systematic analysis at Mediterranean scale. A widespread increase of the thickness and temperature of the mixed layer, increase of the depth and decrease of the temperature of the thermocline base have been observed in summer and autumn during the recent decades. It is shown that positive temperature extremes of the mixed layer and of its thickness are potential drivers of the mass mortalities of benthic invertebrates documented since 1983. Hotspots of mixed layer anomalies have also been identified. These results refine previous analyses, showing that ongoing and future warming of the upper Mediterranean is likely to increase mass mortalities by producing environmental conditions beyond the limit of tolerance of some benthic species.
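    The three-segment profile model can be sketched as a brute-force least-squares fit: a homogeneous mixed layer, a linear thermocline, and a homogeneous deep layer, with the two breakpoints chosen to minimize the misfit. The synthetic profile below is hypothetical, not DYFAMED data:

```python
import numpy as np

def three_segment_fit(depth, temp):
    """Fit the three-segment profile model by exhaustive search over the two
    breakpoints: mixed layer base (index i) and thermocline base (index j)."""
    best = None
    n = len(depth)
    for i in range(1, n - 2):          # candidate mixed layer base
        for j in range(i + 1, n - 1):  # candidate thermocline base
            t_mix = temp[:i + 1].mean()
            t_deep = temp[j:].mean()
            # Linear thermocline joining the two homogeneous segments.
            frac = (depth[i + 1:j] - depth[i]) / (depth[j] - depth[i])
            model = np.concatenate([np.full(i + 1, t_mix),
                                    t_mix + frac * (t_deep - t_mix),
                                    np.full(n - j, t_deep)])
            sse = float(((temp - model) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, depth[i], depth[j], t_mix, t_deep)
    return best  # (sse, mixed layer depth, thermocline base depth, T_mix, T_deep)

# Synthetic profile: 18 C mixed layer to 30 m, linear thermocline to 80 m, 13 C below.
depth = np.arange(0.0, 201.0, 5.0)
temp = np.piecewise(depth,
                    [depth <= 30, (depth > 30) & (depth <= 80), depth > 80],
                    [18.0, lambda z: 18.0 - 5.0 * (z - 30.0) / 50.0, 13.0])
sse, mld, base, t_mix, t_deep = three_segment_fit(depth, temp)
print(mld, base)  # recovered mixed layer depth and thermocline base
```

    For real casts, the mixed layer thickness and temperature returned by such a fit are exactly the parameters whose long-term anomalies the study tracks.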

  7. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the statistical design typically adopted in randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. A process of rumour scotching on finite populations.

    PubMed

    de Arruda, Guilherme Ferraz; Lebensztayn, Elcio; Rodrigues, Francisco A; Rodríguez, Pablo Martín

    2015-09-01

    Rumour spreading is a ubiquitous phenomenon in social and technological networks. Traditional models consider that the rumour is propagated by pairwise interactions between spreaders and ignorants. Only spreaders are active and may become stiflers after contacting spreaders or stiflers. Here we propose a competition-like model in which spreaders try to transmit information, while stiflers are also active and try to scotch it. We study the influence of transmission/scotching rates and initial conditions on the qualitative behaviour of the process. An analytical treatment based on the theory of convergence of density-dependent Markov chains is developed to analyse how the final proportion of ignorants behaves asymptotically in a finite homogeneously mixing population. We perform Monte Carlo simulations in random graphs and scale-free networks and verify that the results obtained for homogeneously mixing populations can be approximated for random graphs, but are not suitable for scale-free networks. Furthermore, regarding the process on a heterogeneous mixing population, we obtain a set of differential equations that describes the time evolution of the probability that an individual is in each state. Our model can also be applied for studying systems in which informed agents try to stop the rumour propagation, or for describing related susceptible-infected-recovered systems. In addition, our results can be considered to develop optimal information dissemination strategies and approaches to control rumour propagation.
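    The homogeneously mixing limit of such a model can be sketched as a small ODE system. This is a generic competition-style variant (spreaders convert ignorants; active stiflers scotch spreaders) with illustrative rates, not necessarily the exact transition scheme of the paper:

```python
def rumour_scotching(ignorant0, spreader0, stifler0, lam=1.0, alpha=1.0,
                     dt=1e-3, t_max=30.0):
    """Euler integration of a mean-field rumour model with active stiflers:
    spreaders convert ignorants at rate lam; stiflers scotch spreaders at
    rate alpha. Proportions i + s + r are conserved by construction."""
    i, s, r = ignorant0, spreader0, stifler0
    t = 0.0
    while t < t_max and s > 1e-9:
        di = -lam * i * s                   # ignorants hear the rumour
        ds = lam * i * s - alpha * s * r    # new spreaders minus scotched ones
        dr = alpha * s * r                  # scotched spreaders become stiflers
        i, s, r = i + dt * di, s + dt * ds, r + dt * dr
        t += dt
    return i, s, r

i_final, s_final, r_final = rumour_scotching(0.98, 0.01, 0.01)
print(round(i_final, 3))  # final proportion of ignorants
```

    The quantity of interest in the paper, the final proportion of ignorants, is i_final here; varying lam/alpha and the initial conditions reproduces the kind of qualitative behaviour the authors analyse.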

  9. A process of rumour scotching on finite populations

    PubMed Central

    de Arruda, Guilherme Ferraz; Lebensztayn, Elcio; Rodrigues, Francisco A.; Rodríguez, Pablo Martín

    2015-01-01

    Rumour spreading is a ubiquitous phenomenon in social and technological networks. Traditional models consider that the rumour is propagated by pairwise interactions between spreaders and ignorants. Only spreaders are active and may become stiflers after contacting spreaders or stiflers. Here we propose a competition-like model in which spreaders try to transmit information, while stiflers are also active and try to scotch it. We study the influence of transmission/scotching rates and initial conditions on the qualitative behaviour of the process. An analytical treatment based on the theory of convergence of density-dependent Markov chains is developed to analyse how the final proportion of ignorants behaves asymptotically in a finite homogeneously mixing population. We perform Monte Carlo simulations in random graphs and scale-free networks and verify that the results obtained for homogeneously mixing populations can be approximated for random graphs, but are not suitable for scale-free networks. Furthermore, regarding the process on a heterogeneous mixing population, we obtain a set of differential equations that describes the time evolution of the probability that an individual is in each state. Our model can also be applied for studying systems in which informed agents try to stop the rumour propagation, or for describing related susceptible–infected–recovered systems. In addition, our results can be considered to develop optimal information dissemination strategies and approaches to control rumour propagation. PMID:26473048

  10. Incidence and effects of endemic populations of forest pests in young mixed-conifer forests of the Sierra Nevada

    Treesearch

    Carroll B. Williams; David L. Azuma; George T. Ferrell

    1992-01-01

    Approximately 3,200 trees in young mixed-conifer stands were examined for pest activity and human-caused or mechanical injuries, and approximately 25 percent of these trees were randomly selected for stem analyses. The examination of trees felled for stem analyses showed that 409 (47 percent) were free of pests and 466 (53 percent) had one or more pest categories....

  11. The Impact of Satellite-Derived Land Surface Temperatures on Numerical Weather Prediction Analyses and Forecasts

    NASA Astrophysics Data System (ADS)

    Candy, B.; Saunders, R. W.; Ghent, D.; Bulgin, C. E.

    2017-09-01

    Land surface temperature (LST) observations from a variety of satellite instruments operating in the infrared have been compared to estimates of surface temperature from the Met Office operational numerical weather prediction (NWP) model. The comparisons show that during the day the NWP model can underpredict the surface temperature by up to 10 K in certain regions such as the Sahel and southern Africa. By contrast at night the differences are generally smaller. Matchups have also been performed between satellite LSTs and observations from an in situ radiometer located in Southern England within a region of mixed land use. These matchups demonstrate good agreement at night and suggest that the satellite uncertainties in LST are less than 2 K. The Met Office surface analysis scheme has been adapted to utilize nighttime LST observations. Experiments using these analyses in an NWP model have shown a benefit to the resulting forecasts of near-surface air temperature, particularly over Africa.

  12. Religion and Spirituality's Influences on HIV Syndemics Among MSM: A Systematic Review and Conceptual Model.

    PubMed

    Lassiter, Jonathan M; Parsons, Jeffrey T

    2016-02-01

    This paper presents a systematic review of the quantitative HIV research that assessed the relationships between religion, spirituality, HIV syndemics, and individual HIV syndemics-related health conditions (e.g. depression, substance abuse, HIV risk) among men who have sex with men (MSM) in the United States. No quantitative studies were found that assessed the relationships between HIV syndemics, religion, and spirituality. Nine studies, with 13 statistical analyses, were found that examined the relationships between individual HIV syndemics-related health conditions, religion, and spirituality. Among the 13 analyses, religion and spirituality were found to have mixed relationships with HIV syndemics-related health conditions (6 nonsignificant associations; 5 negative associations; 2 positive associations). Given the overall lack of inclusion of religion and spirituality in HIV syndemics research, a conceptual model that hypothesizes the potential interactions of religion and spirituality with HIV syndemics-related health conditions is presented. The implications of the model for MSM's health are outlined.

  13. Religion and Spirituality’s Influences on HIV Syndemics Among MSM: A Systematic Review and Conceptual Model

    PubMed Central

    Parsons, Jeffrey T.

    2015-01-01

    This paper presents a systematic review of the quantitative HIV research that assessed the relationships between religion, spirituality, HIV syndemics, and individual HIV syndemics-related health conditions (e.g. depression, substance abuse, HIV risk) among men who have sex with men (MSM) in the United States. No quantitative studies were found that assessed the relationships between HIV syndemics, religion, and spirituality. Nine studies, with 13 statistical analyses, were found that examined the relationships between individual HIV syndemics-related health conditions, religion, and spirituality. Among the 13 analyses, religion and spirituality were found to have mixed relationships with HIV syndemics-related health conditions (6 nonsignificant associations; 5 negative associations; 2 positive associations). Given the overall lack of inclusion of religion and spirituality in HIV syndemics research, a conceptual model that hypothesizes the potential interactions of religion and spirituality with HIV syndemics-related health conditions is presented. The implications of the model for MSM’s health are outlined. PMID:26319130

  14. Associations of Family and Peer Experiences with Masculinity Attitude Trajectories at the Individual and Group Level in Adolescent and Young Adult Males

    PubMed Central

    Marcell, Arik V.; Eftim, Sorina E.; Sonenstein, Freya L.; Pleck, Joseph H.

    2013-01-01

    Data were drawn from 845 males in the National Survey of Adolescent Males who were initially aged 15–17, and followed-up 2.5 and 4.5 years later, to their early twenties. Mixed-effects regression models (MRM) and semiparametric trajectory analyses (STA) modeled patterns of change in masculinity attitudes at the individual and group levels, guided by gender intensification theory and cognitive-developmental theory. Overall, men’s masculinity attitudes became significantly less traditional between middle adolescence and early adulthood. In MRM analyses using time-varying covariates, maintaining paternal coresidence and continuing to have first sex in uncommitted heterosexual relationships were significantly associated with masculinity attitudes remaining relatively traditional. The STA modeling identified three distinct patterns of change in masculinity attitudes. A traditional-liberalizing trajectory of masculinity attitudes was most prevalent, followed by traditional-stable and nontraditional-stable trajectories. Implications for gender intensification and cognitive-developmental approaches to masculinity attitudes are discussed. PMID:24187483

  15. What Do You Think You Are Measuring? A Mixed-Methods Procedure for Assessing the Content Validity of Test Items and Theory-Based Scaling

    PubMed Central

    Koller, Ingrid; Levenson, Michael R.; Glück, Judith

    2017-01-01

    The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777

  16. Upper limits to trace constituents in Jupiter's atmosphere from an analysis of its 5 micrometer spectrum

    NASA Technical Reports Server (NTRS)

    Treffers, R. R.; Larson, H. P.; Fink, U.; Gautier, T. N.

    1978-01-01

    A high-resolution spectrum of Jupiter at 5 micrometers recorded at the Kuiper Airborne Observatory is used to determine upper limits to the column density of 19 molecules. The upper limits to the mixing ratios of SiH4, H2S, HCN, and simple hydrocarbons are discussed with respect to current models of Jupiter's atmosphere. These upper limits are compared to expectations based upon the solar abundance of the elements. This analysis permits upper-limit measurements (SiH4), or actual detections (GeH4), of molecules with mixing ratios relative to hydrogen as low as 10^-9. In future observations at 5 micrometers the sensitivity of remote spectroscopic analyses should permit the study of constituents with mixing ratios as low as 10^-10, which would include the hydrides of such elements as Sn and As as well as numerous organic molecules.

  17. Mixing stops at the LHC

    DOE PAGES

    Agrawal, Prateek; Frugiuele, Claudia

    2014-01-01

    We study the phenomenology of a light stop NLSP in the presence of large mixing with either the first or the second generation. R-symmetric models provide a prime setting for this scenario, but our discussion also applies to the MSSM when a significant amount of mixing can be accommodated. In our framework the dominant stop decay is through the flavor violating mode into a light jet and the LSP in an extended region of parameter space. There are currently no limits from ATLAS and CMS in this region. We emulate shape-based hadronic SUSY searches for this topology, and find that they have potential sensitivity. If the extension of these analyses to this region is robust, we find that these searches can set strong exclusion limits on light stops. If not, then the flavor violating decay mode is challenging and may represent a blind spot in stop searches even at 13 TeV. Thus, an experimental investigation of this scenario is well motivated.

  18. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems.

    PubMed

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-21

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows expressing various forms by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.

  19. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems

    NASA Astrophysics Data System (ADS)

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-01

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows expressing various forms by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.
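    The well-mixed Markov jump process that the CME describes can be simulated exactly with Gillespie's stochastic simulation algorithm. Below is a minimal, self-contained sketch for a toy negatively self-regulated gene (protein binding switches the gene off); the reaction set and rate constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gillespie_negative_feedback(k_on=0.05, k_off=0.2, k_prod=10.0,
                                k_deg=1.0, t_end=50.0, seed=1):
    """Simulate a gene that is switched OFF by its own protein (negative
    self-regulation), with protein production only in the ON state.
    State: (gene_on, protein_count). Returns sampled times and counts."""
    rng = np.random.default_rng(seed)
    t, gene_on, protein = 0.0, 1, 0
    times, counts = [0.0], [0]
    while t < t_end:
        # Propensities: gene OFF-switching scales with protein (feedback),
        # gene ON-switching, production (only if ON), degradation.
        a = np.array([k_off * protein * gene_on,
                      k_on * (1 - gene_on),
                      k_prod * gene_on,
                      k_deg * protein])
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)   # exponential waiting time
        r = rng.choice(4, p=a / a0)      # pick the firing reaction
        if r == 0:
            gene_on = 0
        elif r == 1:
            gene_on = 1
        elif r == 2:
            protein += 1
        else:
            protein -= 1
        times.append(t)
        counts.append(protein)
    return np.array(times), np.array(counts)

times, counts = gillespie_negative_feedback()
print(counts.min(), counts.max())
```

    At small molecule counts the switching propensity is strongly fluctuation-driven, which is exactly the regime where deterministic or hybrid reductions start to differ from the full CME.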

  20. Inequalities in the Education System and the Reproduction of Socioeconomic Disparities in Voting in England, Denmark and Germany: The Influence of Country Context, Tracking and Self-Efficacy on Voting Intentions of Students Age 16-18

    ERIC Educational Resources Information Center

    Hoskins, Bryony; Janmaat, Jan Germen; Han, Christine; Muijs, Daniel

    2016-01-01

    This article performs exploratory research using a mixed-methods approach (structural equation modelling and a thematic analysis of interview data) to analyse the ways in which socioeconomic disparities in voting patterns are reproduced through inequalities in education in different national contexts, and the role of self-efficacy in this process.…

  1. Genesis of highland basalt breccias - A view from 66095

    NASA Technical Reports Server (NTRS)

    Garrison, J. R., Jr.; Taylor, L. A.

    1980-01-01

    Electron microprobe and defocused beam analyses of the lunar highland breccia sample 66095 show it consists of a fine-grained subophitic matrix containing a variety of mineral and lithic clasts, such as intergranular and cataclastic ANT, shocked and unshocked plagioclase, and basalts. Consideration of the chemistries of both matrix and clasts provides a basis for a qualitative three-component mixing model consisting of an ANT plutonic complex, a Fra Mauro basalt, and minor meteoric material.

  2. Data and Model Uncertainties associated with Biogeochemical Groundwater Remediation and their impact on Decision Analysis

    NASA Astrophysics Data System (ADS)

    Pandey, S.; Vesselinov, V. V.; O'Malley, D.; Karra, S.; Hansen, S. K.

    2016-12-01

    Models and data are used to characterize the extent of contamination and remediation, both of which are dependent upon the complex interplay of processes ranging from geochemical reactions, microbial metabolism, and pore-scale mixing to heterogeneous flow and external forcings. Characterization is fraught with important uncertainties related to the model itself (e.g. conceptualization, model implementation, parameter values) and the data used for model calibration (e.g. sparsity, measurement errors). This research consists of two primary components: (1) Developing numerical models that incorporate the complex hydrogeology and biogeochemistry that drive groundwater contamination and remediation; (2) Utilizing novel techniques for data/model-based analyses (such as parameter calibration and uncertainty quantification) to aid in decision support for optimal uncertainty reduction related to characterization and remediation of contaminated sites. The reactive transport models are developed using PFLOTRAN and are capable of simulating a wide range of biogeochemical and hydrologic conditions that affect the migration and remediation of groundwater contaminants under diverse field conditions. Data/model-based analyses are achieved using MADS, which utilizes Bayesian methods and Information Gap theory to address the data/model uncertainties discussed above. We also use these tools to evaluate different models, which vary in complexity, in order to weigh and rank models based on model accuracy (in representation of existing observations), model parsimony (everything else being equal, models with smaller numbers of model parameters are preferred), and model robustness (related to model predictions of unknown future states). These analyses are carried out on synthetic problems, but are directly related to real-world problems; for example, the modeled processes and data inputs are consistent with the conditions at the Los Alamos National Laboratory contamination sites (RDX and Chromium).

  3. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
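    The defining requirement stated above, that a kernel applied to all pairs of subjects must yield a positive semidefinite matrix, can be illustrated with a linear kernel on a hypothetical genotype matrix (simulated data, not from the review):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 20 subjects x 100 SNPs coded 0/1/2 (minor-allele counts).
G = rng.integers(0, 3, size=(20, 100)).astype(float)

# Center each SNP, then form a linear kernel: K[i, j] is a similarity
# score between subjects i and j (larger values = more similar).
Gc = G - G.mean(axis=0)
K = Gc @ Gc.T / G.shape[1]

# Any Gram matrix X X^T is positive semidefinite by construction, so all
# eigenvalues are >= 0 (up to floating-point noise).
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)  # True
```

    A kernel of this form is the basis of the mixed-model approaches the review lists: the kernel matrix plays the role of the covariance of a subject-level random effect.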

  4. Collective motion patterns of swarms with delay coupling: Theory and experiment.

    PubMed

    Szwaykowska, Klementyna; Schwartz, Ira B; Mier-Y-Teran Romero, Luis; Heckman, Christoffer R; Mox, Dan; Hsieh, M Ani

    2016-03-01

    The formation of coherent patterns in swarms of interacting self-propelled autonomous agents is a subject of great interest in a wide range of application areas, ranging from engineering and physics to biology. In this paper, we model and experimentally realize a mixed-reality large-scale swarm of delay-coupled agents. The coupling term is modeled as a delayed communication relay of position. Our analyses, assuming agents communicating over an Erdős-Rényi network, demonstrate the existence of stable coherent patterns that can be achieved only with delay coupling and that are robust to decreasing network connectivity and heterogeneity in agent dynamics. We also show how the bifurcation structure for emergence of different patterns changes with heterogeneity in agent acceleration capabilities and limited connectivity in the network as a function of coupling strength and delay. Our results are verified through simulation as well as preliminary experimental results of delay-induced pattern formation in a mixed-reality swarm.

  5. Does Marriage Moderate Genetic Effects on Delinquency and Violence?

    PubMed Central

    Li, Yi; Liu, Hexuan; Guo, Guang

    2015-01-01

    Using data from the National Longitudinal Study of Adolescent to Adult Health (N = 1,254), the authors investigated whether marriage can foster desistance from delinquency and violence by moderating genetic effects. In contrast to existing gene–environment research that typically focuses on one or a few genetic polymorphisms, they extended a recently developed mixed linear model to consider the collective influence of 580 single nucleotide polymorphisms in 64 genes related to aggression and risky behavior. The mixed linear model estimates the proportion of variance in the phenotype that is explained by the single nucleotide polymorphisms. The authors found that the proportion of variance in delinquency/violence explained was smaller among married individuals than unmarried individuals. Because selection, confounding, and heterogeneity may bias the estimate of the Gene × Marriage interaction, they conducted a series of analyses to address these issues. The findings suggest that the Gene × Marriage interaction results were not seriously affected by these issues. PMID:26549892

  6. CFD analyses of combustor and nozzle flowfields

    NASA Astrophysics Data System (ADS)

    Tsuei, Hsin-Hua; Merkle, Charles L.

    1993-11-01

    The objectives of the research are to improve design capabilities for low thrust rocket engines through understanding of the detailed mixing and combustion processes. A Computational Fluid Dynamic (CFD) technique is employed to model the flowfields within the combustor, nozzle, and near plume field. The computational modeling of the rocket engine flowfields requires the application of the complete Navier-Stokes equations, coupled with species diffusion equations. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as a coordinated part of an ongoing experimental program at NASA LeRC. The numerical procedure is performed with both time-marching and time-accurate algorithms, using an LU approximate factorization in time and flux-split upwind differencing in space. The analysis addresses the integrity of fuel film cooling along the wall; its effectiveness in mixing with the core flow, including unsteady large-scale effects; the resultant impact on performance; and the expansion of the near-plume flow to the finite pressure of an altitude chamber.

  7. Collective motion patterns of swarms with delay coupling: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Szwaykowska, Klementyna; Schwartz, Ira B.; Mier-y-Teran Romero, Luis; Heckman, Christoffer R.; Mox, Dan; Hsieh, M. Ani

    2016-03-01

    The formation of coherent patterns in swarms of interacting self-propelled autonomous agents is a subject of great interest in a wide range of application areas, ranging from engineering and physics to biology. In this paper, we model and experimentally realize a mixed-reality large-scale swarm of delay-coupled agents. The coupling term is modeled as a delayed communication relay of position. Our analyses, assuming agents communicating over an Erdős-Rényi network, demonstrate the existence of stable coherent patterns that can be achieved only with delay coupling and that are robust to decreasing network connectivity and heterogeneity in agent dynamics. We also show how the bifurcation structure for emergence of different patterns changes with heterogeneity in agent acceleration capabilities and limited connectivity in the network as a function of coupling strength and delay. Our results are verified through simulation as well as preliminary experimental results of delay-induced pattern formation in a mixed-reality swarm.
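    As a rough illustration of delay-coupled swarm dynamics (not the authors' exact equations or parameters), the following sketch Euler-integrates self-propelled agents whose acceleration combines a speed-regulating term with attraction toward the delayed positions of the group:

```python
import numpy as np

def simulate_swarm(n=20, a=1.0, tau=1.0, dt=0.01, t_end=20.0, seed=0):
    """Euler-integrate n self-propelled 2-D agents. Each agent's
    acceleration has a term driving its speed toward 1 plus an
    attraction toward the swarm's position tau time units ago."""
    rng = np.random.default_rng(seed)
    steps = int(t_end / dt)
    lag = int(tau / dt)
    pos = rng.normal(size=(n, 2))
    vel = rng.normal(scale=0.1, size=(n, 2))
    hist = [pos.copy()] * (lag + 1)   # position history for the delay term
    for _ in range(steps):
        delayed = hist[0]             # positions tau time units ago
        speed2 = (vel ** 2).sum(axis=1, keepdims=True)
        # Self-propulsion regulates |v| toward 1; the coupling term pulls
        # each agent toward the delayed swarm centroid.
        acc = (1.0 - speed2) * vel + a * (delayed.mean(axis=0) - pos)
        vel = vel + dt * acc
        pos = pos + dt * vel
        hist.append(pos.copy())
        hist.pop(0)
    return pos, vel

pos, vel = simulate_swarm()
print(np.isfinite(pos).all(), np.isfinite(vel).all())
```

    Sweeping the assumed coupling strength `a` and delay `tau` in a sketch like this is how the bifurcations between pattern types mentioned in the abstract would be explored numerically.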

  8. An exploratory sequential design to validate measures of moral emotions.

    PubMed

    Márquez, Margarita G; Delgado, Ana R

    2017-05-01

    This paper presents an exploratory and sequential mixed methods approach in validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.

  9. CAN STELLAR MIXING EXPLAIN THE LACK OF TYPE Ib SUPERNOVAE IN LONG-DURATION GAMMA-RAY BURSTS?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frey, Lucille H.; Fryer, Chris L.; Young, Patrick A.

    2013-08-10

    The discovery of supernovae associated with long-duration gamma-ray burst observations is primary evidence that the progenitors of these outbursts are massive stars. One of the principal mysteries in understanding these progenitors has been the fact that all of these gamma-ray-burst-associated supernovae are Type Ic supernovae with no evidence of helium in the stellar atmosphere. Many studies have focused on whether or not this helium is simply hidden from spectral analyses. In this Letter, we show results from recent stellar models using new convection algorithms based on our current understanding of stellar mixing. We demonstrate that enhanced convection may lead to severe depletion of stellar helium layers, suggesting that the helium is not observed simply because it is not in the star. We also present light curves and spectra of these compact helium-depleted stars compared to models with more conventional helium layers.

  10. 2009–2010 Seasonal Influenza Vaccination Coverage Among College Students From 8 Universities in North Carolina

    PubMed Central

    Poehling, Katherine A.; Blocker, Jill; Ip, Edward H.; Peters, Timothy R.; Wolfson, Mark

    2012-01-01

    Objective We sought to describe the 2009–2010 seasonal influenza vaccine coverage of college students. Participants 4090 college students from eight North Carolina universities participated in a confidential, web-based survey in October-November 2009. Methods Associations between self-reported 2009–2010 seasonal influenza vaccination and demographic characteristics, campus activities, parental education, and email usage were assessed by bivariate analyses and by a mixed-effects model adjusting for clustering by university. Results Overall, 20% of students (range 14%–30% by university) reported receiving 2009–2010 seasonal influenza vaccine. Being a freshman, attending a private university, having a college-educated parent, and participating in academic clubs/honor societies predicted receipt of influenza vaccine in the mixed-effects model. Conclusions The self-reported 2009–2010 influenza vaccine coverage was one-quarter of the 2020 Healthy People goal (80%) for healthy persons 18–64 years of age. College campuses have the opportunity to enhance influenza vaccine coverage among its diverse student populations. PMID:23157195

  11. 2009-2010 seasonal influenza vaccination coverage among college students from 8 universities in North Carolina.

    PubMed

    Poehling, Katherine A; Blocker, Jill; Ip, Edward H; Peters, Timothy R; Wolfson, Mark

    2012-01-01

    The authors sought to describe the 2009-2010 seasonal influenza vaccine coverage of college students. A total of 4,090 college students from 8 North Carolina universities participated in a confidential, Web-based survey in October-November 2009. Associations between self-reported 2009-2010 seasonal influenza vaccination and demographic characteristics, campus activities, parental education, and e-mail usage were assessed by bivariate analyses and by a mixed-effects model adjusting for clustering by university. Overall, 20% of students (range 14%-30% by university) reported receiving 2009-2010 seasonal influenza vaccine. Being a freshman, attending a private university, having a college-educated parent, and participating in academic clubs/honor societies predicted receipt of influenza vaccine in the mixed-effects model. The self-reported 2009-2010 influenza vaccine coverage was one-quarter of the 2020 Healthy People goal (80%) for healthy persons 18 to 64 years of age. College campuses have the opportunity to enhance influenza vaccine coverage among its diverse student populations.
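    The clustering adjustment matters because students within one university share campus-level influences. A toy simulation (the baseline rate and random-intercept variance below are invented for illustration, not the study's estimates) shows how a university-level random intercept on the log-odds scale produces between-campus spread in vaccination coverage of the kind reported above:

```python
import numpy as np

rng = np.random.default_rng(42)
n_univ, n_per = 8, 500
# University-level random intercepts on the log-odds scale (the standard
# deviation 0.4 is an illustrative assumption), around a ~20% baseline.
u = rng.normal(0.0, 0.4, size=n_univ)
logit_base = np.log(0.2 / 0.8)

rates = []
for j in range(n_univ):
    p = 1.0 / (1.0 + np.exp(-(logit_base + u[j])))
    y = rng.random(n_per) < p            # individual vaccination outcomes
    rates.append(y.mean())

rates = np.array(rates)
# Observed per-university coverage varies around the overall mean;
# pooling everyone while ignoring this clustering would understate
# the uncertainty of the overall estimate.
print(round(rates.mean(), 2), round(rates.min(), 2), round(rates.max(), 2))
```

    A mixed-effects logistic model, as used in the study, estimates the fixed effects while treating these university intercepts as draws from a common distribution.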

  12. Monocarbonyl Curcumin Analogs: Heterocyclic Pleiotropic Kinase Inhibitors that Mediate Anti-Cancer Properties

    PubMed Central

    Brown, Andrew; Shi, Qi; Moore, Terry W.; Yoon, Younghyoun; Prussia, Andrew; Maddox, Clinton; Liotta, Dennis C.; Shim, Hyunsuk; Snyder, James P.

    2014-01-01

    Curcumin is a biologically active component of curry powder. A structurally-related class of mimetics possesses similar anti-inflammatory and anticancer properties. Mechanism has been examined by exploring kinase inhibition trends. In a screen of 50 kinases relevant to many forms of cancer, one member of the series (4, EF31) showed ≥85% inhibition for ten of the enzymes at 5 μM, while twenty-two of the proteins were blocked at ≥40%. IC50’s for an expanded set of curcumin analogs established a rank order of potencies, and analyses of IKKβ and AKT2 enzyme kinetics for 4 revealed a mixed inhibition model, ATP competition dominating. Our curcumin mimetics are generally selective for Ser/Thr kinases. Both selectivity and potency trends are compatible with protein sequence comparisons, while modeled kinase binding site geometries deliver a reasonable correlation with mixed inhibition. Overall, these analogs are shown to be pleiotropic inhibitors that operate at multiple points along cell signaling pathways. PMID:23550937

  13. Monocarbonyl curcumin analogues: heterocyclic pleiotropic kinase inhibitors that mediate anticancer properties.

    PubMed

    Brown, Andrew; Shi, Qi; Moore, Terry W; Yoon, Younghyoun; Prussia, Andrew; Maddox, Clinton; Liotta, Dennis C; Shim, Hyunsuk; Snyder, James P

    2013-05-09

    Curcumin is a biologically active component of curry powder. A structurally related class of mimetics possesses similar anti-inflammatory and anticancer properties. Mechanism has been examined by exploring kinase inhibition trends. In a screen of 50 kinases relevant to many forms of cancer, one member of the series (4, EF31) showed ≥85% inhibition for 10 of the enzymes at 5 μM, while 22 of the proteins were blocked at ≥40%. IC50 values for an expanded set of curcumin analogues established a rank order of potencies, and analyses of IKKβ and AKT2 enzyme kinetics for 4 revealed a mixed inhibition model, ATP competition dominating. Our curcumin mimetics are generally selective for Ser/Thr kinases. Both selectivity and potency trends are compatible with protein sequence comparisons, while modeled kinase binding site geometries deliver a reasonable correlation with mixed inhibition. Overall, these analogues are shown to be pleiotropic inhibitors that operate at multiple points along cell signaling pathways.
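    The "mixed inhibition model, ATP competition dominating" corresponds to the standard mixed-inhibition rate law, in which an inhibitor both raises the apparent Km (competitive component, constant Ki) and lowers the apparent Vmax (uncompetitive component, constant Ki'). A small sketch with illustrative constants (Ki < Ki', so the competitive effect dominates; these are not values from the paper):

```python
def mixed_inhibition_rate(s, i, vmax=100.0, km=10.0, ki=5.0, ki_prime=50.0):
    """Michaelis-Menten velocity under mixed inhibition:
    v = vmax*s / (km*(1 + i/ki) + s*(1 + i/ki_prime)).
    ki governs the competitive (Km-raising) component, ki_prime the
    uncompetitive (Vmax-lowering) one; ki < ki_prime means competition
    with the substrate (here, ATP) dominates."""
    return vmax * s / (km * (1.0 + i / ki) + s * (1.0 + i / ki_prime))

# Uninhibited vs inhibited velocity at the same substrate concentration:
v0 = mixed_inhibition_rate(s=10.0, i=0.0)   # 100*10/(10+10) = 50.0
v1 = mixed_inhibition_rate(s=10.0, i=5.0)
print(v0 > v1)  # True
```

    Fitting this rate law to velocity data at several inhibitor concentrations is how enzyme-kinetics analyses distinguish mixed inhibition from purely competitive or uncompetitive behavior.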

  14. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    PubMed

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Quantitative genetic properties of four measures of deformity in yellowtail kingfish Seriola lalandi Valenciennes, 1833.

    PubMed

    Nguyen, N H; Whatmore, P; Miller, A; Knibb, W

    2016-02-01

    The main aim of this study was to estimate the heritability for four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating simple Pearson correlations of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale, 0.01-0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). Estimated genetic correlations of body and carcass traits with the other deformity measures were not significant due to their relatively high standard errors. Our results showed that there are prospects for genetic selection to reduce deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index of practical selective breeding programmes, given the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
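    The liability-to-observed-scale conversion referred to above is commonly done with the Dempster-Lerner transformation, h2_obs = h2_liab * z^2 / (p(1 - p)), where p is the trait prevalence and z is the standard normal density at the liability threshold. A quick sketch with an assumed prevalence (not a value from the study):

```python
from scipy.stats import norm

def liability_to_observed(h2_liab, prevalence):
    """Dempster-Lerner: convert a heritability estimated on the liability
    scale to the observed 0/1 scale, given the trait prevalence."""
    z = norm.pdf(norm.ppf(1.0 - prevalence))  # normal density at threshold
    return h2_liab * z ** 2 / (prevalence * (1.0 - prevalence))

# Illustrative: liability-scale h2 = 0.4 for a deformity with 10% incidence.
print(round(liability_to_observed(0.4, 0.10), 3))
```

    Because z^2 / (p(1 - p)) < 1 for rare traits, observed-scale heritabilities are smaller than their liability-scale counterparts, consistent with the two ranges reported in the abstract.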

  16. Breast cancer risk factor associations differ for pure versus invasive carcinoma with an in situ component in case-control and case-case analyses

    PubMed Central

    Ruszczyk, Melanie; Zirpoli, Gary; Kumar, Shicha; Bandera, Elisa V.; Bovbjerg, Dana H.; Jandorf, Lina; Khoury, Thaer; Hwang, Helena; Ciupak, Gregory; Pawlish, Karen; Schedin, Pepper; Masso-Welch, Patricia; Ambrosone, Christine B.; Hong, Chi-Chen

    2015-01-01

    Purpose Invasive ductal carcinoma (IDC) is diagnosed with or without a ductal carcinoma in situ (DCIS) component. Previous analyses have found significant differences in tumor characteristics between pure IDC lacking DCIS and mixed IDC with DCIS. We tested the hypothesis that pure IDC represents a form of breast cancer with etiology and risk factors distinct from mixed IDC/DCIS. Methods We compared reproductive risk factors for breast cancer, as well as family and smoking history, between 831 women with mixed IDC/DCIS (n=650) or pure IDC (n=181), and 1,620 controls, in the context of the Women's Circle of Health Study (WCHS), a case-control study of breast cancer in African-American and European-American women. Data on reproductive and lifestyle factors were collected during interviews, and tumor characteristics were abstracted from pathology reports. Case-control and case-case analyses were conducted using unconditional logistic regression. Results Most risk factors were similarly associated with pure IDC and mixed IDC/DCIS. However, among postmenopausal women, risk for pure IDC was lower in women with body mass index (BMI) 25 to <30 kg/m2 (odds ratio (OR)=0.66; 95% confidence interval (CI), 0.35-1.23) and BMI≥30 kg/m2 (OR=0.33; 95% CI, 0.18-0.67) than in women with BMI<25 kg/m2, with no corresponding associations for mixed IDC/DCIS. In case-case analyses, women who breastfed for up to 12 months (OR=0.55; 95% CI, 0.32-0.94) or longer (OR=0.47; 95% CI, 0.26-0.87) had lower odds of pure IDC relative to mixed IDC/DCIS than those who did not breastfeed. Conclusions Associations with some breast cancer risk factors differed between mixed IDC/DCIS and pure IDC, potentially suggesting differential developmental pathways. 
These findings, if confirmed in a larger study, will provide a better understanding of the developmental patterns of breast cancer and the influence of modifiable risk factors, which in turn could lead to better preventive measures for pure IDC, which has a worse prognosis than mixed IDC/DCIS. PMID:26621543
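
    Odds ratios of the kind reported above come from 2x2 exposure tables (the study itself used unconditional logistic regression to adjust for covariates). A minimal, unadjusted sketch with a Woolf (log-based) confidence interval; the counts below are hypothetical, not the WCHS data:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Woolf (log-based) 95% CI from a 2x2 table:
        a = exposed cases, b = unexposed cases,
        c = exposed controls, d = unexposed controls."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical counts: cases/controls cross-classified by an exposure
    or_, lo, hi = odds_ratio_ci(30, 151, 300, 1320)
    print(or_, lo, hi)
    ```

    An interval that spans 1.0, as here, would be read as no significant association at the 5% level.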

  17. Multi-Scale Analysis for Characterizing Near-Field Constituent Concentrations in the Context of a Macro-Scale Semi-Lagrangian Numerical Model

    NASA Astrophysics Data System (ADS)

    Yearsley, J. R.

    2017-12-01

    The semi-Lagrangian numerical scheme employed by RBM, a model for simulating time-dependent, one-dimensional water quality constituents in advection-dominated rivers, is highly scalable both in time and space. Although the model has been used at length scales of 150 meters and time scales of three hours, the majority of applications have been at length scales of 1/16th degree latitude/longitude (about 5 km) or greater and time scales of one day. Applications of the method at these scales have proven successful for characterizing the impacts of climate change on water temperatures in global rivers and on the vulnerability of thermoelectric power plants to changes in cooling water temperatures in large river systems. However, local effects can be very important in terms of ecosystem impacts, particularly in the case of developing mixing zones for wastewater discharges with pollutant loadings limited by regulations imposed by the Federal Water Pollution Control Act (FWPCA). Mixing zone analyses have usually been decoupled from large-scale watershed influences by developing scenarios that represent critical conditions for the external processes associated with streamflow and weather. By taking advantage of the particle-tracking characteristics of the numerical scheme, RBM can provide results at any point in time within the model domain. We develop a proof of concept for locations in the river network where local impacts such as mixing zones may be important. Simulated results from the semi-Lagrangian numerical scheme are treated as input to a finite difference model of the two-dimensional diffusion equation for water quality constituents such as water temperature or toxic substances. Simulations will provide time-dependent, two-dimensional constituent concentrations in the near field in response to long-term basin-wide processes. These results could provide decision support to water quality managers for evaluating mixing zone characteristics.
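
    The near-field step couples the semi-Lagrangian output to a finite difference solution of the two-dimensional diffusion equation. A minimal explicit (FTCS) sketch of one such diffusion update; the grid, coefficients and fixed-value boundaries are chosen for illustration and are not taken from RBM:

    ```python
    def diffuse_step(c, D, dx, dt):
        """One explicit (FTCS) update of the 2D diffusion equation
        dc/dt = D * (d2c/dx2 + d2c/dy2) on a square grid with fixed
        (Dirichlet) boundary values. Stable if D*dt/dx**2 <= 0.25."""
        n = len(c)
        new = [row[:] for row in c]
        r = D * dt / dx**2
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = c[i][j] + r * (c[i+1][j] + c[i-1][j]
                                           + c[i][j+1] + c[i][j-1] - 4 * c[i][j])
        return new

    # A point release of a tracer (or heat) spreading from the grid centre
    grid = [[0.0] * 5 for _ in range(5)]
    grid[2][2] = 1.0
    grid = diffuse_step(grid, D=1.0, dx=1.0, dt=0.2)
    print(grid[2][2], grid[2][1])
    ```

    In a coupled setting, the semi-Lagrangian reach-scale concentration would supply the boundary and initial values for this near-field grid at each time step.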

  18. Balancing acts: A mixed methods study of the figured world of African American 7th graders in urban science classrooms

    NASA Astrophysics Data System (ADS)

    Cleveland-Solomon, Tanya E.

    What beliefs and cultural models do youth who are underrepresented in science have about the domain of science and about themselves as science learners? What do they imagine is possible for them in relation to science both now and in the future? In other words, what constitutes their figured world of science? This dissertation study, using a mixed methods design, offers new perspectives on the ways that underrepresented youth's unexamined assumptions or cultural models and resources may shape their identities and motivation to learn science. Through analyses of survey and interview data, I found that urban African American youths' social context, gender, racial identity, and perceptions of the science they had in school influenced their motivation to learn science. Analyses of short-term classroom observations and interviews suggested that students had competing cultural models that they used in their constructions of identities as science learners, which they espoused and adopted in relation to how well they leveraged the science-related cultural resources available to them. Results from this study suggested that these 7th graders would benefit from access to more expansive cultural models through access to individuals with scientific capital as a way to allow them to create fruitful identities as science learners. If we want to ensure that students from groups that are underrepresented in science not only have better outcomes, but aspire to and enter the science career pipeline, we must also begin to support them in their negotiations of competing cultural models that limit their ability to adopt science-learner identities in their classrooms. This study endeavored to understand the particular cultural models and motivational beliefs that drive students to act, and what types of individuals they imagine scientists and science workers to be. 
This study also examined how cultural models and resources influence identity negotiation, specifically the roles youths envision for themselves as science students.

  19. Population Pharmacokinetic Analyses of Lithium: A Systematic Review.

    PubMed

    Methaneethorn, Janthima

    2018-02-01

    Even though lithium has been used for the treatment of bipolar disorder for several decades, its toxicities are still being reported. The major limitation in the use of lithium is its narrow therapeutic window. Several methods have been proposed to predict the lithium doses essential to attain therapeutic levels. One of the methods used to guide lithium therapy is the population pharmacokinetic approach, which accounts for inter- and intra-individual variability in predicting lithium doses. Several population pharmacokinetic studies of lithium have been conducted. The objective of this review is to provide information on the population pharmacokinetics of lithium, focusing on the nonlinear mixed effect modeling approach, and to summarize significant factors affecting lithium pharmacokinetics. A literature search of the PubMed database was conducted from inception to December 2016. Studies conducted in humans, using lithium as a study drug, providing population pharmacokinetic analyses of lithium by means of nonlinear mixed effect modeling, were included in this review. Twenty-four articles were identified from the database. Seventeen articles were excluded based on the inclusion and exclusion criteria. A total of seven articles were included in this review. Of these, only one study reported a combined population pharmacokinetic-pharmacodynamic model of lithium. Lithium pharmacokinetics were explained using both one- and two-compartment models. The significant predictors of lithium clearance identified in most studies were renal function and body size. One study reported a significant effect of age on lithium clearance. The typical values of lithium clearance ranged from 0.41 to 9.39 L/h. The magnitude of inter-individual variability in lithium clearance ranged from 12.7 to 25.1%. Only two studies evaluated the models using external data sets. Model methodologies in each study are summarized and discussed in this review. 
Going forward, a combined population pharmacokinetic-pharmacodynamic study of lithium is recommended. Moreover, external validation of previously published models should be performed.
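
    Most of the reviewed analyses describe lithium with one- or two-compartment models. As a hedged illustration (not a model from any of the reviewed studies), a one-compartment model with first-order absorption after a single oral dose; the parameters are hypothetical, with the clearance chosen inside the 0.41-9.39 L/h range reported above:

    ```python
    import math

    def conc_1cmt_oral(dose, ka, CL, V, t):
        """Plasma concentration for a one-compartment model with
        first-order absorption (single oral dose, no lag time):
        C(t) = dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
        where ke = CL/V is the elimination rate constant."""
        ke = CL / V
        return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

    # Hypothetical lithium-like parameters: dose in mmol, CL in L/h,
    # V in L; concentration comes out in mmol/L.
    c12 = conc_1cmt_oral(dose=24.3, ka=1.0, CL=1.5, V=45.0, t=12.0)
    print(round(c12, 3))
    ```

    A population analysis would then place log-normal random effects on CL and V (the inter-individual variability quoted above) and estimate covariate effects such as renal function and body size on CL.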

  20. Application of Ensemble Detection and Analysis to Modeling Uncertainty in Non Stationary Process

    NASA Technical Reports Server (NTRS)

    Racette, Paul

    2010-01-01

    Characterization of nonstationary and nonlinear processes is a challenge in many engineering and scientific disciplines. Climate change modeling and projection, retrieving information from Doppler measurements of hydrometeors, and modeling calibration architectures and algorithms in microwave radiometers are example applications that can benefit from improvements in the modeling and analysis of nonstationary processes. Analyses of measured signals have traditionally been limited to a single measurement series. Ensemble Detection is a technique whereby calibrated noise is mixed into the measurement to produce an ensemble measurement set. The collection of ensemble data sets enables new methods for analyzing random signals and offers powerful new approaches to studying and analyzing nonstationary processes. Derived information contained in the dynamic stochastic moments of a process will enable many novel applications.
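
    The appeal of an ensemble measurement set is that moments can be computed across realizations at each instant, rather than by averaging along a single series (which presumes stationarity). The toy process below, with a drifting mean and growing noise amplitude, is an illustrative assumption and not Racette's method:

    ```python
    import random

    def ensemble_moments(realizations):
        """Mean and variance at each time index across an ensemble of
        realizations -- the time-resolved (dynamic) moments that a
        single-series average cannot recover for a nonstationary process."""
        n = len(realizations)
        length = len(realizations[0])
        means, variances = [], []
        for t in range(length):
            vals = [r[t] for r in realizations]
            m = sum(vals) / n
            means.append(m)
            variances.append(sum((v - m) ** 2 for v in vals) / n)
        return means, variances

    random.seed(0)
    # Nonstationary toy process: drifting mean, noise amplitude growing in time
    ens = [[0.01 * t + (1 + 0.01 * t) * random.gauss(0, 1) for t in range(500)]
           for _ in range(200)]
    means, variances = ensemble_moments(ens)
    print(means[0], variances[-1])
    ```

    The growth of the time-resolved variance is exactly the kind of dynamic stochastic moment the abstract refers to.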

  1. Assimilation of ZDR Columns for Improving the Spin-Up and Forecasts of Convective Storms

    NASA Astrophysics Data System (ADS)

    Carlin, J.; Gao, J.; Snyder, J.; Ryzhkov, A.

    2017-12-01

    A primary motivation for assimilating radar reflectivity data is the reduction of spin-up time for modeled convection. To accomplish this, cloud analysis techniques seek to induce and sustain convective updrafts in storm-scale models by inserting temperature and moisture increments and hydrometeor mixing ratios into the model analysis from simple relations with reflectivity. Polarimetric radar data provide additional insight into the microphysical and dynamic structure of convection. In particular, the radar meteorology community has known for decades that convective updrafts cause, and are typically co-located with, differential reflectivity (ZDR) columns - vertical protrusions of enhanced ZDR above the environmental 0˚C level. Despite these benefits, limited work has been done thus far to assimilate dual-polarization radar data into numerical weather prediction models. In this study, we explore the utility of assimilating ZDR columns to improve storm-scale model analyses and forecasts of convection. We modify the existing Advanced Regional Prediction System's (ARPS) cloud analysis routine to adjust model temperature and moisture state variables using detected ZDR columns as proxies for convective updrafts, and compare the resultant cycled analyses and forecasts with those from the original reflectivity-based cloud analysis formulation. Results indicate qualitative and quantitative improvements from assimilating ZDR columns, including more coherent analyzed updrafts, forecast updraft helicity swaths that better match radar-derived rotation tracks, more realistic forecast reflectivity fields, and larger equitable threat scores. These findings support the use of dual-polarization radar signatures to improve storm-scale model analyses and forecasts.

  2. SPH numerical investigation of the characteristics of an oscillating hydraulic jump at an abrupt drop

    NASA Astrophysics Data System (ADS)

    De Padova, Diana; Mossa, Michele; Sibilla, Stefano

    2018-02-01

    This paper shows the results of the smoothed particle hydrodynamics (SPH) modelling of the hydraulic jump at an abrupt drop, where the transition from supercritical to subcritical flow is characterised by several flow patterns depending upon the inflow and tailwater conditions. SPH simulations are obtained by a pseudo-compressible XSPH scheme with pressure smoothing; turbulent stresses are represented either by an algebraic mixing-length model, or by a two-equation k-ε model. The numerical model is applied to analyse the occurrence of oscillatory flow conditions between two different jump types characterised by quasi-periodic oscillation, and the results are compared with experiments performed at the hydraulics laboratory of Bari Technical University. The purpose of this paper is to obtain a deeper understanding of the physical features of a flow which is in general difficult to reproduce numerically, owing to its unstable character: in particular, the vorticity and turbulent kinetic energy fields, the velocity, water depth and pressure spectra downstream of the jump, and the velocity and pressure cross-correlations can be computed and analysed.
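
    Of the two closures mentioned, the algebraic mixing-length model is the simpler: the eddy viscosity is taken as nu_t = l_m**2 * |du/dy|. A one-line sketch with the classical near-wall choice l_m = kappa*y; the wall distance and shear value below are illustrative, not taken from the simulations:

    ```python
    KAPPA = 0.41  # von Karman constant

    def eddy_viscosity(y, dudy):
        """Algebraic mixing-length closure near a wall:
        nu_t = l_m**2 * |du/dy| with mixing length l_m = kappa * y."""
        l_m = KAPPA * y
        return l_m ** 2 * abs(dudy)

    # e.g. 5 cm from the wall with a velocity gradient of 100 1/s
    nu_t = eddy_viscosity(0.05, 100.0)
    print(nu_t)
    ```

    The k-ε alternative replaces this algebraic prescription with two transported quantities, nu_t = C_mu * k**2 / eps, at the cost of solving two extra equations.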

  3. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  4. A meta-analysis of Th2 pathway genetic variants and risk for allergic rhinitis.

    PubMed

    Bunyavanich, Supinda; Shargorodsky, Josef; Celedón, Juan C

    2011-06-01

    There is a significant genetic contribution to allergic rhinitis (AR). Genetic association studies for AR have been performed, but varying results make it challenging to decipher the overall potential effect of specific variants. The Th2 pathway plays an important role in the immunological development of AR. We performed meta-analyses of genetic association studies of variants in Th2 pathway genes and AR. PubMed and Phenopedia were searched by double extraction for original studies on Th2 pathway-related genetic polymorphisms and their associations with AR. A meta-analysis was conducted on each genetic polymorphism with data meeting our predetermined selection criteria. Analyses were performed using both fixed and random effects models, with stratification by age group, ethnicity, and AR definition where appropriate. Heterogeneity and publication bias were assessed. Six independent studies analyzing three candidate polymorphisms and involving a total of 1596 cases and 2892 controls met our inclusion criteria. Overall, the A allele of IL13 single nucleotide polymorphism (SNP) rs20541 was associated with increased odds of AR (estimated OR=1.2; 95% CI 1.1-1.3, p=0.004 in the fixed effects model; 95% CI 1.0-1.5, p=0.056 in the random effects model). The A allele of rs20541 was associated with increased odds of AR in mixed age groups using both fixed effects and random effects modeling. IL13 SNP rs1800925 and IL4R SNP rs1801275 did not demonstrate overall associations with AR. We conclude that there is evidence for an overall association between IL13 SNP rs20541 and increased risk of AR, especially in mixed-age populations. © 2011 John Wiley & Sons A/S.
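
    The fixed and random effects pooling used in meta-analyses of this kind can be sketched as inverse-variance weighting of per-study log odds ratios, with a DerSimonian-Laird estimate of the between-study variance for the random effects re-weighting. The study-level values below are hypothetical, not the IL13 data:

    ```python
    import math

    def pool_log_or(log_ors, variances):
        """Inverse-variance fixed-effect pooling of log odds ratios,
        plus a DerSimonian-Laird random-effects re-weighting."""
        w = [1 / v for v in variances]
        s1 = sum(w)
        fixed = sum(wi * y for wi, y in zip(w, log_ors)) / s1
        # DerSimonian-Laird between-study variance tau^2 from Cochran's Q
        q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
        df = len(log_ors) - 1
        c = s1 - sum(wi ** 2 for wi in w) / s1
        tau2 = max(0.0, (q - df) / c)
        w_re = [1 / (v + tau2) for v in variances]
        rand = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
        return math.exp(fixed), math.exp(rand)

    # Hypothetical per-study odds ratios and variances of log(OR)
    log_ors = [math.log(x) for x in (0.9, 1.6, 1.2, 0.95, 1.7, 1.3)]
    variances = [0.005, 0.040, 0.010, 0.008, 0.050, 0.015]
    fixed_or, random_or = pool_log_or(log_ors, variances)
    print(fixed_or, random_or)
    ```

    With heterogeneous studies, tau^2 > 0 flattens the weights, so the random effects estimate sits closer to the unweighted mean than the fixed effects one, which is exactly why the two models above yield different confidence intervals.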

  5. MixSIAR: advanced stable isotope mixing models in R

    EPA Science Inventory

    Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...

  6. Clustering of longitudinal data by using an extended baseline: A new method for treatment efficacy clustering in longitudinal data.

    PubMed

    Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine

    2018-01-01

    Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurements are the main issues with the current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline. The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the clustering of longitudinal data by using an extended baseline method with the latent-class mixed model. The clustering of longitudinal data by using an extended baseline method with the two model-based algorithms was the most robust option. The clustering of longitudinal data by using an extended baseline method with all the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patients slope variability was high. Two real data sets, on a neurodegenerative disease and on obesity, illustrate the clustering of longitudinal data by using an extended baseline method and show how clustering may help to identify the marker(s) of the treatment response. Applying the clustering of longitudinal data by using an extended baseline method in exploratory analyses, as a first stage before setting up stratified designs, can provide a better estimation of treatment effect in future clinical trials.
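
    The two-step logic (summarize each patient's post-baseline trajectory, then cluster the summaries) can be sketched as follows. Note this is a simplification: per-subject least-squares slopes and a scalar k-means stand in for the authors' piecewise linear mixed model predictions and their clustering algorithms:

    ```python
    import random

    def slope(ts, ys):
        """Ordinary least-squares slope of y against t for one subject."""
        n = len(ts)
        tm, ym = sum(ts) / n, sum(ys) / n
        num = sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
        den = sum((t - tm) ** 2 for t in ts)
        return num / den

    def two_means_1d(xs, iters=50):
        """Tiny k-means (k=2) on scalars: cluster subject-level slopes."""
        c0, c1 = min(xs), max(xs)
        for _ in range(iters):
            g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
            g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
            c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
        return c0, c1

    random.seed(1)
    ts = [0, 1, 2, 3, 4, 5]
    # 30 responders (declining outcome after treatment) and 30 non-responders
    subjects = ([[-1.0 * t + random.gauss(0, 0.5) for t in ts] for _ in range(30)]
                + [[random.gauss(0, 0.5) for t in ts] for _ in range(30)])
    slopes = [slope(ts, ys) for ys in subjects]
    c0, c1 = two_means_1d(slopes)
    print(sorted([round(c0, 2), round(c1, 2)]))
    ```

    The mixed-model version of step one shrinks noisy individual slopes toward the population mean before clustering, which is what makes the approach usable with sparse, irregularly timed measurements.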

  7. Empirical-statistical downscaling of reanalysis data to high-resolution air temperature and specific humidity above a glacier surface (Cordillera Blanca, Peru)

    NASA Astrophysics Data System (ADS)

    Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-06-01

    Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast for 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.

  8. Assessing the feasibility of community health insurance in Uganda: A mixed-methods exploratory analysis.

    PubMed

    Biggeri, M; Nannini, M; Putoto, G

    2018-03-01

    Community health insurance (CHI) aims to provide financial protection and facilitate health care access among poor rural populations. Given common operational challenges that hamper the full development of the scheme, there is a need to undertake systematic feasibility studies. These are scarce in the literature and usually do not provide a comprehensive analysis of the local context. The present research adopts a mixed-methods approach to assess ex ante the feasibility of CHI. In particular, eight preconditions are proposed to inform the viability of introducing the microinsurance. A case study located in rural northern Uganda is presented to test the effectiveness of the mixed-methods procedure for the feasibility purpose. A household survey covering 180 households, 8 structured focus group discussions, and 40 key informant interviews were performed between October and December 2016 in order to provide a complete and integrated analysis of the feasibility preconditions. Through the data collected at the household level, the population's health seeking behaviours and the potential insurance design were examined; econometric analyses were carried out to investigate the perception of health as a priority need and the willingness to pay for the scheme. The latter component, in particular, was analysed through a contingent valuation method. The results validated the relevant feasibility preconditions. Econometric estimates demonstrated that awareness of catastrophic health expenditures and the distance to the hospital have a critical influence on household priorities and willingness to pay. Willingness is also significantly affected by socio-economic status and basic knowledge of insurance principles. Overall, the mixed-methods investigation showed that a comprehensive feasibility analysis can shape a viable CHI model to be implemented in the local context. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Spatially-Resolved Analyses of Aerodynamic Fallout from a Uranium-Fueled Nuclear Test

    DOE PAGES

    Lewis, L. A.; Knight, K. B.; Matzel, J. E.; ...

    2015-07-28

    Five silicate fallout glass spherules produced in a uranium-fueled, near-surface nuclear test were characterized by secondary ion mass spectrometry, electron probe microanalysis, autoradiography, scanning electron microscopy, and energy-dispersive x-ray spectroscopy. Several samples display compositional heterogeneity suggestive of incomplete mixing between major elements and natural U (235U/238U = 0.00725) and enriched U. Samples exhibit extreme spatial heterogeneity in U isotopic composition, with 0.02 < 235U/238U < 11.84 among all five spherules and 0.02 < 235U/238U < 7.41 within a single spherule. Moreover, in two spherules, the 235U/238U ratio is correlated with changes in major element composition, suggesting the agglomeration of chemically and isotopically distinct molten precursors. Two samples are nearly homogeneous with respect to major element and uranium isotopic composition, suggesting extensive mixing, possibly due to experiencing higher temperatures or residing longer in the fireball. Linear correlations between the 234U/238U, 235U/238U, and 236U/238U ratios are consistent with a two-component mixing model, which is used to illustrate the extent of mixing between the natural and enriched U end members.
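
    The two-component mixing model can be written as a linear combination of end-member 235U/238U ratios, where f is the fraction of 238U atoms contributed by the enriched component. A minimal sketch, taking the natural ratio and the maximum observed ratio from the abstract as end-member values (an assumption for illustration):

    ```python
    def enriched_fraction(r_mix, r_nat=0.00725, r_enr=11.84):
        """Fraction of 238U contributed by the enriched end member,
        from two-component linear mixing of 235U/238U ratios:
        r_mix = f * r_enr + (1 - f) * r_nat."""
        return (r_mix - r_nat) / (r_enr - r_nat)

    # A spot with 235U/238U = 7.41, the within-spherule maximum quoted above
    f = enriched_fraction(7.41)
    print(round(f, 3))
    ```

    Mapping f point by point across a spherule is one way to visualize the agglomeration of isotopically distinct precursors described above.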

  10. Residual glasses and melt inclusions in basalts from DSDP Legs 45 and 46 - Evidence for magma mixing. [Deep Sea Drilling Project

    NASA Technical Reports Server (NTRS)

    Dungan, M. A.; Rhodes, J. M.

    1978-01-01

    Microprobe analyses of natural glasses in basalts recovered by Legs 45 and 46 of the Deep Sea Drilling Project are reported and interpreted in the context of other geochemical, petrographic and experimental data on the same rocks (Rhodes et al., 1978). Residual glass compositions in the moderately evolved aphyric and abundantly phyric basalts within each site indicate that none of the units is related to any other or to a common parent by simple fractional crystallization. The compositional trends, extensive disequilibrium textures in the plagioclase phenocrysts and the presence in evolved lavas of refractory plagioclase and olivine phenocrysts bearing primitive melt inclusions provide evidence that magma mixing had a major role in the genesis of the Leg 45 and 46 basalts. The magma parental to these basalts was most likely characterized by high Mg/(Mg + Fe2+), CaO/Al2O3 and CaO/Na2O ratios and low lithophile element concentrations. A mixing model involving incremental enrichment of magmaphile elements by repeated episodes of mixing of relatively primitive and moderately evolved magmas, followed by a small amount of fractionation, is consistent with the characteristics of the basalts studied.

  11. Stable isotope signatures and trophic-step fractionation factors of fish tissues collected as non-lethal surrogates of dorsal muscle.

    PubMed

    Busst, Georgina M A; Bašić, Tea; Britton, J Robert

    2015-08-30

    Dorsal white muscle is the standard tissue analysed in fish trophic studies using stable isotope analyses. As muscle is usually collected destructively, fin tissues and scales are often used as non-lethal surrogates; we examined the utility of scales and fin tissue as muscle surrogates. The muscle, fin and scale δ15N and δ13C values from 10 cyprinid fish species, determined with an elemental analyser coupled with an isotope ratio mass spectrometer, were compared. The fish comprised (1) samples from the wild, and (2) samples from tank aquaria, using six species held for 120 days and fed a single food resource. Relationships between muscle, fin and scale isotope ratios were examined for each species and for the entire dataset, with the efficacy of four methods of predicting muscle isotope ratios from fin and scale values being tested. The fractionation factors between the three tissues of the laboratory fishes and their food resource were then calculated and applied to Bayesian mixing models to assess their effect on fish diet predictions. The isotopic data of the three tissues per species were distinct, but were significantly related, enabling estimations of muscle values from the two surrogates. Species-specific equations provided the least erroneous corrections of scale and fin isotope ratios (errors < 0.6‰). The fractionation factors for δ15N values were in the range obtained for other species, but were often higher for δ13C values. Their application to data from two fish populations in the mixing models resulted in significant alterations in diet predictions. Scales and fin tissue are strong surrogates of dorsal muscle in food web studies as they can provide estimates of muscle values within an acceptable level of error when species-specific methods are used. Their derived fractionation factors can also be applied to models predicting fish diet composition from δ15N and δ13C values. Copyright © 2015 John Wiley & Sons, Ltd.
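
    The correction-and-mixing chain can be sketched for a single isotope and two diet sources. The linear coefficients and the trophic enrichment factor below are hypothetical stand-ins for the species-specific values the study derives, and a real application would use a Bayesian mixing model rather than this deterministic two-source case:

    ```python
    def muscle_from_fin(fin_d13c, a=-0.7, b=1.02):
        """Predict muscle d13C from fin d13C with a species-specific
        linear correction: muscle = a + b * fin (a, b hypothetical)."""
        return a + b * fin_d13c

    def diet_fraction_two_sources(consumer, s1, s2, tef):
        """One-isotope, two-source mixing: fraction of source 1 in the diet,
        after subtracting the trophic enrichment factor (TEF)."""
        corrected = consumer - tef
        return (corrected - s2) / (s1 - s2)

    muscle = muscle_from_fin(-24.0)  # estimated muscle value from a fin clip
    f1 = diet_fraction_two_sources(muscle, s1=-30.0, s2=-20.0, tef=1.0)
    print(round(muscle, 2), round(f1, 3))
    ```

    The study's point that mis-specified fractionation factors significantly alter diet predictions is visible here: shifting the TEF by 1‰ moves f1 by 0.1 in this two-source geometry.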

  12. Round Robin Analyses of the Steel Containment Vessel Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costello, J.F.; Hashimote, T.; Klamerus, E.W.

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. Several organizations from the US, Europe, and Asia were invited to participate in a Round Robin analysis to perform independent pretest predictions and posttest evaluations of the behavior of the SCV model during the high pressure test. Both pretest and posttest analysis results from all Round Robin participants were compared to the high pressure test data. This paper summarizes the Round Robin analysis activities and discusses the lessons learned from the collective effort.

  13. Identification and validation of mixed anxiety-depression.

    PubMed

    Hettema, J M; Aggen, S H; Kubarych, T S; Neale, M C; Kendler, K S

    2015-10-01

    Mixed anxiety-depression (MAD) has been under scrutiny to determine its potential place in psychiatric nosology. The current study sought to investigate its prevalence, clinical characteristics, course and potential validators. Restricted latent-class analyses were fit to 12-month self-reports of depression and anxiety symptom criteria in a large population-based sample of twins. Classes were examined across an array of relevant indicators (demographics, co-morbidity, adverse life events, clinical significance and twin concordance). Longitudinal analyses investigated the stability of, and transitions between, these classes for two time periods approximately 1.5 years apart. In all analyses, a class exhibiting levels of MAD symptomatology distinctly above the unaffected subjects yet having low prevalence of either major depression (MD) or generalized anxiety disorder (GAD) was identified. A restricted four-class model, constraining two classes to have no prior disorder history to distinguish residual or recurrent symptoms from new onsets in the last year, provided an interpretable classification: two groups with no prior history that were unaffected or had MAD and two with prior history having relatively low or high symptom levels. Prevalence of MAD was substantial (9-11%), and subjects with MAD differed quantitatively but not qualitatively from those with lifetime MD or GAD across the clinical validators examined. Our findings suggest that MAD is a commonly occurring, identifiable syndromal subtype that warrants further study and consideration for inclusion in future nosologic systems.

  14. K →π matrix elements of the chromomagnetic operator on the lattice

    NASA Astrophysics Data System (ADS)

    Constantinou, M.; Costa, M.; Frezzotti, R.; Lubicz, V.; Martinelli, G.; Meloni, D.; Panagopoulos, H.; Simula, S.; ETM Collaboration

    2018-04-01

    We present the results of the first lattice QCD calculation of the K→π matrix elements of the chromomagnetic operator O_CM = g s̄ σ_{μν} G^{μν} d, which appears in the effective Hamiltonian describing ΔS = 1 transitions in and beyond the standard model. Having dimension five, the chromomagnetic operator is characterized by a rich pattern of mixing with operators of equal and lower dimensionality. The multiplicative renormalization factor as well as the mixing coefficients with the operators of equal dimension have been computed at one loop in perturbation theory. The power divergent coefficients controlling the mixing with operators of lower dimension have been determined nonperturbatively, by imposing suitable subtraction conditions. The numerical simulations have been carried out using the gauge field configurations produced by the European Twisted Mass Collaboration with N_f = 2+1+1 dynamical quarks at three values of the lattice spacing. Our result for the B parameter of the chromomagnetic operator at the physical pion and kaon point is B_CMO^(Kπ) = 0.273(69), while in the SU(3) chiral limit we obtain B_CMO = 0.076(23). Our findings are significantly smaller than the model-dependent estimate B_CMO ~ 1-4 currently used in phenomenological analyses, and improve the uncertainty on this important phenomenological quantity.

  15. Health workforce planning and service expansion during an economic crisis: A case study of the national breast screening programme in Ireland.

    PubMed

    McHugh, S M; Tyrrell, E; Johnson, B; Healy, O; Perry, I J; Normand, C

    2015-12-01

    This article aims to estimate the workforce and resource implications of the proposed age extension of the national breast screening programme, under the economic constraints of reduced health budgets and staffing levels in the Irish health system. Using a mixed method design, a purposive sample of 20 participants was interviewed and the data were analysed thematically (June-September 2012). Quantitative data (programme-level activity data, screening activity, staffing levels and screening plans) were used to model potential workload and resource requirements. The analysis indicates that over 90% operational efficiency was achieved throughout the first six months of 2012. Accounting for maternity leave (10%) and sick leave (3.5%), 16.1 additional radiographers (whole-time equivalents) would be required for the workload created by the age extension of the screening programme, at 90% operational efficiency. The results suggest that service expansion is possible with relatively minimal additional radiography resources if the efficiency of the skill mix and the use of equipment are improved. Investing in the appropriate skill mix should not be limited to clinical groups but should also include administrative staff to manage and support the service. Workload modelling may contribute to improved health workforce planning and service efficiency. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
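
    The staffing arithmetic behind such estimates divides the added workload by the effective annual throughput per radiographer, discounted for operational efficiency and expected leave. The workload and per-radiographer throughput figures below are assumptions chosen for illustration, not values from the study:

    ```python
    def required_wte(annual_screens, screens_per_wte, efficiency,
                     maternity=0.10, sick=0.035):
        """Whole-time-equivalent staff needed for a given annual workload,
        after discounting throughput for efficiency and leave."""
        effective = screens_per_wte * efficiency * (1 - maternity - sick)
        return annual_screens / effective

    # Hypothetical workload added by the age extension and hypothetical
    # per-radiographer annual throughput, at 90% operational efficiency
    wte = required_wte(52000, 4150, 0.90)
    print(round(wte, 1))
    ```

    With these illustrative inputs the formula yields a figure of the same order as the 16.1 WTE reported above; the study's own estimate, of course, rests on its measured activity data.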

  16. Major role of nutrient supply in the control of picophytoplankton community structure

    NASA Astrophysics Data System (ADS)

    Mouriño, B.; Agusti, S.; Bode, A.; Cermeno, P.; Chouciño, P.; da Silva, J. C. B.; Fernández-Castro, B.; Gasol, J.; Gil Coto, M.; Graña, R.; Latasa, M.; Lubián, L.; Marañón, E.; Moran, X. A.; Moreno, E.; Moreira-Coello, V.; Otero-Ferrer, J. L.; Ruiz Villarreal, M.; Scharek, R.; Vallina, S. M.; Varela, M.; Villamaña, M.

    2016-02-01

    Margalef's mandala (1978) is a simplified bottom-up control model that explains how mixing and nutrient concentration determine the composition of marine phytoplankton communities. Due to the difficulties of measuring turbulence in the field, previous attempts to verify this model have applied different proxies for nutrient supply, and have very often used the terms mixing and stratification interchangeably. Moreover, because the mandala was conceived before the discovery of the smaller phytoplankton groups (picoplankton, <2 μm), it describes only the succession of vegetative phases of microplankton. In order to test the applicability of the classical mandala to picoplankton groups, we used a multidisciplinary approach including specifically designed field observations supported by remote sensing, database analyses, and modeling and laboratory chemostat experiments. Simultaneous estimates of nitrate diffusive fluxes, derived from microturbulence observations, and picoplankton abundance collected at more than 200 stations, spanning widely different hydrographic regimes, showed that the contribution of eukaryotes to picoautotrophic biomass increases with nutrient supply, whereas that of picocyanobacteria shows the opposite trend. These findings were supported by laboratory and modeling chemostat experiments that reproduced the competitive dynamics between picoeukaryotes and picocyanobacteria as a function of changing nutrient supply. Our results indicate that nutrient supply controls the distribution of picoplankton functional groups in the ocean, further supporting the model proposed by Margalef.
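
The competitive dynamics described above can be sketched with a textbook two-competitor chemostat model. This is not the study's own model: the Monod parameters below are invented for illustration, casting picocyanobacteria as nutrient-affinity specialists (low half-saturation constant K) and picoeukaryotes as fast growers (high maximum growth rate), so that the dominant group flips with the nutrient supply rate.

```python
def monod(mu_max, K, S):
    """Monod growth rate at nutrient concentration S."""
    return mu_max * S / (K + S)

def chemostat(D, S_in=5.0, T=400.0, dt=0.01):
    """Two competitors sharing one nutrient in a chemostat, integrated
    with a simple Euler scheme. D is the dilution (supply) rate.
    All parameter values are illustrative assumptions, not fitted to the
    experiments reported in the abstract."""
    S, cyano, euk = S_in, 0.1, 0.1
    for _ in range(int(T / dt)):
        mu_c = monod(0.6, 0.1, S)   # picocyanobacteria: affinity specialist
        mu_e = monod(1.2, 1.0, S)   # picoeukaryotes: fast grower
        dS = D * (S_in - S) - mu_c * cyano - mu_e * euk  # yield set to 1
        S += dS * dt
        cyano += (mu_c - D) * cyano * dt
        euk += (mu_e - D) * euk * dt
    return cyano, euk

# Low dilution rate (low nutrient supply): the affinity specialist wins.
# High dilution rate (high nutrient supply): the fast grower wins.
```

The winner is the competitor with the lower break-even nutrient concentration S* = D·K/(mu_max - D), so with crossing growth curves the outcome depends on the supply rate, mirroring the eukaryote/cyanobacteria shift in the observations.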

  17. Comparison of mixed effects models of antimicrobial resistance metrics of livestock and poultry Salmonella isolates from a national monitoring system.

    PubMed

    Bjork, K E; Kopral, C A; Wagner, B A; Dargatz, D A

    2015-12-01

    Antimicrobial use in agriculture is considered a pathway for the selection and dissemination of resistance determinants among animal and human populations. From 1997 through 2003 the U.S. National Antimicrobial Resistance Monitoring System (NARMS) tested clinical Salmonella isolates from multiple animal and environmental sources throughout the United States for resistance to panels of 16-19 antimicrobials. In this study we applied two mixed effects models, the generalized linear mixed model (GLMM) and the accelerated failure time frailty (AFT-frailty) model, to susceptible/resistant and interval-censored minimum inhibitory concentration (MIC) metrics, respectively, from Salmonella enterica subspecies enterica serovar Typhimurium isolates from livestock and poultry. Objectives were to compare characteristics of the two models and to examine the effects of time, species, and multidrug resistance (MDR) on the resistance of isolates to individual antimicrobials, as revealed by the models. Fixed effects were year of sample collection, isolate source species and MDR indicators; laboratory study site was included as a random effect. MDR indicators were significant for every antimicrobial and were dominant effects in multivariable models. Temporal trends and source species influences varied by antimicrobial. In GLMMs, the intra-class correlation coefficient ranged up to 0.8, indicating that the proportion of variance accounted for by laboratory study site could be high. AFT models tended to be more sensitive, detecting more curvilinear temporal trends and species differences; however, high levels of left- or right-censoring made some models unstable and their results uninterpretable. Results from GLMMs may be biased by the cutoff criteria used to collapse MIC data into binary categories, and may fail to signal important trends or shifts if the series of antimicrobial dilutions tested does not span a resistance threshold.
Our findings demonstrate the challenges of measuring the AMR ecosystem and the complexity of interacting factors, and have implications for future monitoring. We include suggestions for future data collection and analyses, including alternative modeling approaches. Published by Elsevier B.V.
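
The dichotomization bias mentioned above, where collapsing an MIC dilution series into binary susceptible/resistant categories can hide a real shift, is easy to see in a toy example. The breakpoint and the two hypothetical cohorts below are invented for illustration.

```python
# MICs are measured on a doubling-dilution series and dichotomized at a
# clinical breakpoint. Breakpoint and cohort values are hypothetical.
BREAKPOINT = 32.0   # mg/L

def dichotomize(mic):
    """1 = resistant, 0 = susceptible, relative to the breakpoint."""
    return 1 if mic >= BREAKPOINT else 0

year1 = [0.5, 1, 1, 2, 2, 4]   # early cohort (mg/L)
year2 = [2, 4, 4, 8, 8, 16]    # later cohort: MICs drifted upward

shift = sum(year2) / len(year2) - sum(year1) / len(year1)
binary_change = sum(map(dichotomize, year2)) - sum(map(dichotomize, year1))

# The MIC distribution has drifted by two dilution steps, yet the binary
# resistant/susceptible summary registers no change at all, because the
# drift never crosses the breakpoint.
```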

  18. Pre-diagnostic blood immune markers, incidence and progression of B-cell lymphoma and multiple myeloma: Univariate and functionally informed multivariate analyses.

    PubMed

    Vermeulen, Roel; Saberi Hosnijeh, Fatemeh; Bodinier, Barbara; Portengen, Lützen; Liquet, Benoît; Garrido-Manriquez, Javiera; Lokhorst, Henk; Bergdahl, Ingvar A; Kyrtopoulos, Soterios A; Johansson, Ann-Sofie; Georgiadis, Panagiotis; Melin, Beatrice; Palli, Domenico; Krogh, Vittorio; Panico, Salvatore; Sacerdote, Carlotta; Tumino, Rosario; Vineis, Paolo; Castagné, Raphaële; Chadeau-Hyam, Marc; Botsivali, Maria; Chatziioannou, Aristotelis; Valavanis, Ioannis; Kleinjans, Jos C S; de Kok, Theo M C M; Keun, Hector C; Athersuch, Toby J; Kelly, Rachel; Lenner, Per; Hallmans, Goran; Stephanou, Euripides G; Myridakis, Antonis; Kogevinas, Manolis; Fazzo, Lucia; De Santis, Marco; Comba, Pietro; Bendinelli, Benedetta; Kiviranta, Hannu; Rantakokko, Panu; Airaksinen, Riikka; Ruokojarvi, Paivi; Gilthorpe, Mark; Fleming, Sarah; Fleming, Thomas; Tu, Yu-Kang; Lundh, Thomas; Chien, Kuo-Liong; Chen, Wei J; Lee, Wen-Chung; Kate Hsiao, Chuhsing; Kuo, Po-Hsiu; Hung, Hung; Liao, Shu-Fen

    2018-04-18

    Recent prospective studies have shown that dysregulation of the immune system may precede the development of B-cell lymphomas (BCL) in immunocompetent individuals. However, to date, these studies were restricted to a few immune markers, which were considered separately. Using a nested case-control study within two European prospective cohorts, we measured plasma levels of 28 immune markers in samples collected a median of 6 years before diagnosis (range 2.01-15.97) in 268 incident cases of BCL (including multiple myeloma [MM]) and matched controls. Linear mixed models and partial least squares analyses were used to analyze the association between immune marker levels and the incidence of BCL and its main histological subtypes, and to investigate potential biomarkers predictive of the time to diagnosis. Linear mixed model analyses identified associations linking lower levels of fibroblast growth factor-2 (FGF-2, p = 7.2 × 10^-4) and transforming growth factor alpha (TGF-α, p = 6.5 × 10^-5) to BCL incidence. Analyses stratified by histological subtypes identified inverse associations for the MM subtype, including FGF-2 (p = 7.8 × 10^-7), TGF-α (p = 4.08 × 10^-5), fractalkine (p = 1.12 × 10^-3), monocyte chemotactic protein-3 (p = 1.36 × 10^-4), macrophage inflammatory protein 1-alpha (p = 4.6 × 10^-4) and vascular endothelial growth factor (p = 4.23 × 10^-5). Our results also provided marginal support for already reported associations between chemokines and diffuse large BCL (DLBCL) and between cytokines and chronic lymphocytic leukemia (CLL). Case-only analyses showed that granulocyte-macrophage colony-stimulating factor levels were consistently higher closer to diagnosis, which provides further evidence of its role in tumor progression. In conclusion, our study suggests a role of growth factors in the incidence of MM and of chemokine and cytokine regulation in DLBCL and CLL.
© 2018 The Authors International Journal of Cancer published by John Wiley & Sons Ltd on behalf of UICC.

  19. An analysis of tree mortality using high resolution remotely-sensed data for mixed-conifer forests in San Diego county

    NASA Astrophysics Data System (ADS)

    Freeman, Mary Pyott

    The montane mixed-conifer forests of San Diego County are currently experiencing extensive tree mortality, which is defined as dieback where whole stands are affected. This mortality is likely the result of the complex interaction of many variables, such as altered fire regimes, climatic conditions such as drought, as well as forest pathogens and past management strategies. Conifer tree mortality and its spatial pattern and change over time were examined in three components. In component 1, two remote sensing approaches were compared for their effectiveness in delineating dead trees, a spatial contextual approach and an OBIA (object based image analysis) approach, utilizing various dates and spatial resolutions of airborne image data. For each approach transforms and masking techniques were explored, which were found to improve classifications, and an object-based assessment approach was tested. In component 2, dead tree maps produced by the most effective techniques derived from component 1 were utilized for point pattern and vector analyses to further understand spatio-temporal changes in tree mortality for the years 1997, 2000, 2002, and 2005 for three study areas: Palomar, Volcan and Laguna mountains. Plot-based fieldwork was conducted to further assess mortality patterns. Results indicate that conifer mortality was significantly clustered, increased substantially between 2002 and 2005, and was non-random with respect to tree species and diameter class sizes. In component 3, multiple environmental variables were used in Generalized Linear Model (GLM-logistic regression) and decision tree classifier model development, revealing the importance of climate and topographic factors such as precipitation and elevation, in being able to predict areas of high risk for tree mortality. 
The results from this study highlight the importance of multi-scale spatial as well as temporal analyses, in order to understand mixed-conifer forest structure, dynamics, and processes of decline, which can lead to more sustainable management of forests with continued natural and anthropogenic disturbance.
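
A GLM-logistic mortality risk score of the form used in component 3 can be sketched as below. The predictors match those highlighted in the abstract (precipitation and elevation), but the coefficients are invented for illustration, not fitted values from the dissertation.

```python
import math

def mortality_risk(precip_mm, elev_m,
                   b0=-2.0, b_precip=-0.004, b_elev=0.002):
    """Logistic regression risk score: linear predictor passed through
    the logistic (sigmoid) link. Coefficients are hypothetical."""
    eta = b0 + b_precip * precip_mm + b_elev * elev_m
    return 1.0 / (1.0 + math.exp(-eta))

# With these assumed signs, drier and higher sites score a greater
# predicted probability of tree mortality.
```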

  20. Exploring business process modelling paradigms and design-time to run-time transitions

    NASA Astrophysics Data System (ADS)

    Caron, Filip; Vanthienen, Jan

    2016-09-01

    The business process management literature describes a multitude of approaches (e.g. imperative, declarative or event-driven) that each result in a different mix of process flexibility, compliance, effectiveness and efficiency. Although the use of a single approach over the process lifecycle is often assumed, transitions between approaches at different phases in the process lifecycle may also be considered. This article explores several business process strategies by analysing the approaches at different phases in the process lifecycle as well as the various transitions.

  1. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1984-01-01

    The objective of this investigation is to develop a state-of-the-art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies. A three-dimensional multivariate O/I analysis scheme has been developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.
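
The core of optimum interpolation is easiest to see in the scalar single-observation case: the analysis corrects the background by the innovation, weighted by the ratio of background to total error variance. This is the textbook special case, not the three-dimensional multivariate GLAS scheme itself.

```python
def oi_update(background, obs, var_b, var_o):
    """Single-observation scalar optimum interpolation update.
    var_b and var_o are the background and observation error variances;
    the gain var_b / (var_b + var_o) minimizes the analysis error
    variance in this scalar case."""
    gain = var_b / (var_b + var_o)
    return background + gain * (obs - background)

# Equal error variances split the innovation evenly between background
# and observation: oi_update(10.0, 14.0, 1.0, 1.0) gives 12.0.
```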

  2. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1985-01-01

    The development of a state of the art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies was investigated. A three dimensional multivariate O/I analysis scheme was developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.

  3. Optimal control of anthracnose using mixed strategies.

    PubMed

    Fotsa Mbogne, David Jaures; Thron, Christopher

    2015-11-01

    In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Neutrino mixing and big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Bell, Nicole

    2003-04-01

    We analyse active-active neutrino mixing in the early universe and show that transformation of neutrino-antineutrino asymmetries between flavours is unavoidable when neutrino mixing angles are large. This process is a standard Mikheyev-Smirnov-Wolfenstein flavour transformation, modified by the synchronisation of momentum states which results from neutrino-neutrino forward scattering. The new constraints placed on neutrino asymmetries eliminate the possibility of degenerate big bang nucleosynthesis. Implications of active-sterile neutrino mixing will also be reviewed.

  5. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    PubMed Central

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and used them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. A dataset with half a million individuals and half a million markers can now be analyzed within three days. PMID:26828793

  6. Second-order closure models for supersonic turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Sarkar, Sutanu

    1991-01-01

    Recent work by the authors on the development of a second-order closure model for high-speed compressible flows is reviewed. This turbulence closure is based on the solution of modeled transport equations for the Favre-averaged Reynolds stress tensor and the solenoidal part of the turbulent dissipation rate. A new model for the compressible dissipation is used along with traditional gradient transport models for the Reynolds heat flux and mass flux terms. Consistent with simple asymptotic analyses, the deviatoric part of the remaining higher-order correlations in the Reynolds stress transport equation are modeled by a variable density extension of the newest incompressible models. The resulting second-order closure model is tested in a variety of compressible turbulent flows which include the decay of isotropic turbulence, homogeneous shear flow, the supersonic mixing layer, and the supersonic flat-plate turbulent boundary layer. Comparisons between the model predictions and the results of physical and numerical experiments are quite encouraging.

  7. Second-order closure models for supersonic turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Sarkar, Sutanu

    1991-01-01

    Recent work on the development of a second-order closure model for high-speed compressible flows is reviewed. This turbulent closure is based on the solution of modeled transport equations for the Favre-averaged Reynolds stress tensor and the solenoidal part of the turbulent dissipation rate. A new model for the compressible dissipation is used along with traditional gradient transport models for the Reynolds heat flux and mass flux terms. Consistent with simple asymptotic analyses, the deviatoric part of the remaining higher-order correlations in the Reynolds stress transport equations are modeled by a variable density extension of the newest incompressible models. The resulting second-order closure model is tested in a variety of compressible turbulent flows which include the decay of isotropic turbulence, homogeneous shear flow, the supersonic mixing layer, and the supersonic flat-plate turbulent boundary layer. Comparisons between the model predictions and the results of physical and numerical experiments are quite encouraging.

  8. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
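
The latent-cluster machinery behind mixture models can be illustrated in its simplest form: a two-component one-dimensional Gaussian mixture fitted by expectation-maximization (EM). The paper's mixture regression models add a regression trajectory per cluster on top of this idea; the initialization scheme and iteration count below are arbitrary choices for a minimal, self-contained sketch.

```python
import math

def em_two_gaussians(xs, iters=200):
    """Fit a two-component 1-D Gaussian mixture to xs by EM.
    Returns (mu1, sd1), (mu2, sd2) and the weight of component 1."""
    mu1, mu2 = min(xs), max(xs)               # spread-apart initial means
    s1 = s2 = (max(xs) - min(xs)) / 4 or 1.0  # common initial spread
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        r = []
        for x in xs:
            p1 = w * math.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
            p2 = (1 - w) * math.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
            r.append(p1 / (p1 + p2))
        # M-step: weighted updates of weight, means and spreads.
        n1 = sum(r)
        n2 = len(xs) - n1
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2) or 1e-6
        w = n1 / len(xs)
    return (mu1, s1), (mu2, s2), w
```

With two well-separated clusters the fitted means recover the cluster centres; in the paper's setting each "cluster" would instead carry its own life-history trajectory.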

  9. Challenges in predicting climate change impacts on pome fruit phenology

    NASA Astrophysics Data System (ADS)

    Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.

    2014-08-01

    Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed for the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate-perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period were common to most results. The relative dominance of the chill or growth component tended to determine whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions indicate that it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations about using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate-perturbed conditions, and that greater model development is needed.
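
The sequential chill-growth idea can be sketched in a few lines: accumulate chill until a chilling requirement is met, then accumulate growing degree days until a heat requirement is met. The unit definitions and thresholds below are simplified placeholders (operational models such as the Utah chill model weight temperatures far more elaborately), chosen only to show why warming can delay chilling completion while speeding the growth phase.

```python
def full_bloom_day(daily_temps, chill_req=60.0, growth_req=150.0,
                   chill_base=7.2, growth_base=4.5):
    """Return the day index (from season start) of predicted full bloom,
    or None if the requirements are never met. One chill unit per day
    below chill_base; growing degree days above growth_base afterwards.
    All thresholds are illustrative assumptions."""
    chill = growth = 0.0
    chilling_done = False
    for day, t in enumerate(daily_temps):
        if not chilling_done:
            if t < chill_base:
                chill += 1.0
            chilling_done = chill >= chill_req
        else:
            growth += max(0.0, t - growth_base)
            if growth >= growth_req:
                return day
    return None
```

Under this structure, warming that removes cold days postpones the switch to the growth phase, which is exactly the mechanism that lets the sequential model delay bloom while the fixed thermal time model can only advance it.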

  10. Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion

    NASA Technical Reports Server (NTRS)

    Kojima, Jun J.; Fischer, David G.

    2012-01-01

    We present our strategic experiment and thermochemical analyses of combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has a promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for the routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.

  11. Noble gas isotopes in mineral springs within the Cascadia Forearc, Washington and Oregon

    USGS Publications Warehouse

    McCrory, Patricia A.; Constantz, James E.; Hunt, Andrew G.

    2014-01-01

    This U.S. Geological Survey report presents laboratory analyses along with field notes for a pilot study to document the relative abundance of noble gases in mineral springs within the Cascadia forearc of Washington and Oregon. Estimates of the depth to the underlying Juan de Fuca oceanic plate beneath the sample sites are derived from the McCrory and others (2012) slab model. Some of these springs have been previously sampled for chemical analyses (Mariner and others, 2006), but none currently have publicly available noble gas data. Helium isotope values as well as the noble gas values and ratios presented below will be used to determine the sources and mixing history of these mineral waters.
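
The source apportionment that helium isotopes allow can be sketched with two-endmember mixing. With ratios expressed in units of the atmospheric 3He/4He ratio (R/Ra), the ratio of a mixture is linear in the fraction of mantle-derived 4He, so that fraction can be inverted directly. The endmember values used here (~8 Ra mantle, ~0.02 Ra radiogenic crust) are common literature choices, not values from this report.

```python
def mantle_helium_fraction(r_sample, r_mantle=8.0, r_crust=0.02):
    """Fraction of 4He from the mantle endmember, from a measured
    3He/4He ratio (all in R/Ra units). Simple binary mixing:
    r_sample = f * r_mantle + (1 - f) * r_crust, solved for f."""
    return (r_sample - r_crust) / (r_mantle - r_crust)

# A spring at ~4 Ra would carry roughly half mantle-derived helium
# under these assumed endmembers.
```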

  12. Monoterpene chemical speciation in a tropical rainforest: variation with season, height, and time of day at the Amazon Tall Tower Observatory (ATTO)

    NASA Astrophysics Data System (ADS)

    Yáñez-Serrano, Ana María; Nölscher, Anke Christine; Bourtsoukidis, Efstratios; Gomes Alves, Eliane; Ganzeveld, Laurens; Bonn, Boris; Wolff, Stefan; Sa, Marta; Yamasoe, Marcia; Williams, Jonathan; Andreae, Meinrat O.; Kesselmeier, Jürgen

    2018-03-01

    Speciated monoterpene measurements in rainforest air are scarce, but they are essential for understanding the contribution of these compounds to the overall reactivity of volatile organic compound (VOC) emissions towards the main atmospheric oxidants, such as hydroxyl radicals (OH), ozone (O3) and nitrate radicals (NO3). In this study, we present the chemical speciation of gas-phase monoterpenes measured in the tropical rainforest at the Amazon Tall Tower Observatory (ATTO, Amazonas, Brazil). Samples of VOCs were collected by two automated sampling systems positioned on a tower at 12 and 24 m height and analysed using gas chromatography-flame ionization detection. The samples were collected in October 2015, representing the dry season, and compared with previous wet and dry season studies at the site. In addition, vertical profile measurements (at 12 and 24 m) of total monoterpene mixing ratios were made using proton-transfer-reaction mass spectrometry. The results showed a distinctly different chemical speciation between day and night. For instance, α-pinene was more abundant during the day, whereas limonene was more abundant at night. Reactivity calculations showed that higher abundance does not generally imply higher reactivity. Furthermore, inter- and intra-annual results demonstrate similar chemodiversity during the dry seasons analysed. Simulations with a canopy exchange modelling system show simulated monoterpene mixing ratios that compare relatively well with the observed mixing ratios but also indicate the necessity of more experiments to enhance our understanding of in-canopy sinks of these compounds.
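
The point that higher abundance does not imply higher reactivity follows directly from how OH reactivity is computed: rate constant times number density. The rate constants below are representative ~298 K literature values (not ATTO measurements), and the mixing ratios in the comment are hypothetical.

```python
# Representative OH rate constants, cm3 molecule-1 s-1 at ~298 K.
K_OH = {"alpha-pinene": 5.3e-11, "limonene": 1.7e-10}

N_AIR = 2.46e19  # air number density, molecules cm-3 at 298 K, 1 atm

def oh_reactivity(mixing_ratio_ppb, k_oh):
    """OH reactivity (s-1) contributed by one compound: its rate
    constant with OH times its number density."""
    number_density = mixing_ratio_ppb * 1e-9 * N_AIR
    return k_oh * number_density

# A hypothetical 0.5 ppb of limonene out-reacts 1.0 ppb of alpha-pinene,
# because limonene's OH rate constant is roughly three times larger.
```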

  13. An Efficient Alternative Mixed Randomized Response Procedure

    ERIC Educational Resources Information Center

    Singh, Housila P.; Tarray, Tanveer A.

    2015-01-01

    In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than the Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…

  14. Atomic and molecular supernovae

    NASA Technical Reports Server (NTRS)

    Liu, Weihong

    1997-01-01

    Atomic and molecular physics of supernovae is discussed with an emphasis on the importance of detailed treatments of the critical atomic and molecular processes with the best available atomic and molecular data. The observations of molecules in SN 1987A are interpreted through a combination of spectral and chemical modelings, leading to strong constraints on the mixing and nucleosynthesis of the supernova. The non-equilibrium chemistry is used to argue that carbon dust can form in the oxygen-rich clumps where the efficient molecular cooling makes the nucleation of dust grains possible. For Type Ia supernovae, the analyses of their nebular spectra lead to strong constraints on the supernova explosion models.

  15. Modeling Magma Mixing: Evidence from U-series age dating and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Philipp, R.; Cooper, K. M.; Bergantz, G. W.

    2007-12-01

    Magma mixing and recharge is a ubiquitous process in the shallow crust, which can trigger eruption and cause magma hybridization. Phenocrysts in mixed magmas are recorders of magma mixing and can be studied by in-situ techniques and analyses of bulk mineral separates. To better understand if micro-textural and compositional information reflects local or reservoir-scale events, a physical model for gathering and dispersal of crystals is necessary. We present the results of a combined geochemical and fluid dynamical study of magma mixing processes at Volcan Quizapu, Chile; two large (1846/47 AD and 1932 AD) dacitic eruptions from the same vent area were triggered by andesitic recharge magma and show various degrees of magma mixing. Employing a multiphase numerical fluid dynamic model, we simulated a simple mixing process of vesiculated mafic magma intruded into a crystal-bearing silicic reservoir. This unstable condition leads to overturn and mixing. In a second step we use the velocity field obtained to calculate the flow path of 5000 crystals randomly distributed over the entire system. Those particles mimic the phenocryst response to the convective motion. There is little local relative motion between silicate liquid and crystals due to the high viscosity of the melts and the rapid overturn rate of the system. Of special interest is the crystal dispersal and gathering, which is quantified by comparing the distance at the beginning and end of the simulation for all particle pairs that are initially closer than a length scale chosen between 1 and 10 m. At the start of the simulation, both the resident and the newly intruded (mafic) magmas have a unique particle population. Depending on the Reynolds number (Re) and the chosen characteristic length scale of different phenocryst-pairs, we statistically describe the heterogeneity of crystal populations on the thin section scale. For large Re (approx. 25) and a short characteristic length scale of particle-pairs, heterogeneity of particle populations is large. After one overturn event, even the "thin section scale" can contain phenocrysts that derive from the entire magmatic system. We combine these results with time scale information from U-series plagioclase age dating. Apparent crystal residence times from the most evolved and therefore least hybridized rocks for the 1846/47 and 1932 eruptions of Volcan Quizapu are about 5000 and about 3000 yrs, respectively. Based on whole rock chemistry as well as textural and crystal-chemical data, both eruptions tapped the same reservoir and therefore should record similar crystal residence times. Instead, the discordance of these two ages can be explained by magma mixing as modeled above, if some young plagioclase derived from the andesitic recharge magma which triggered the 1846/47 AD eruption got mixed into the dacite remaining in the reservoir after eruption, thus lowering the apparent crystal residence time for magma that was evacuated from the reservoir in 1932.
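
The pair-based dispersal statistic described in the abstract (comparing initial and final separations for all particle pairs initially closer than a chosen length scale) can be sketched as below. The study's exact normalisation is not specified here, so the mean separation ratio is an assumed, illustrative choice.

```python
import math

def pair_dispersal(start, end, length_scale):
    """Mean ratio of final to initial separation over all particle pairs
    whose initial separation is below length_scale. `start` and `end`
    are lists of (x, y) positions indexed by particle; a ratio well
    above 1 indicates strong dispersal of initially close crystals."""
    ratios = []
    for i in range(len(start)):
        for j in range(i + 1, len(start)):
            d0 = math.dist(start[i], start[j])
            if 0 < d0 < length_scale:
                ratios.append(math.dist(end[i], end[j]) / d0)
    return sum(ratios) / len(ratios) if ratios else None
```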

  16. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
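The decomposition described in the abstract can be written down directly: transit time along the residual circulation (RCTT) is subtracted from age of air (AoA), and the remainder is attributed to mixing. A minimal sketch with hypothetical values; note the simple ratio used here as a "relative increase by mixing" is an illustrative assumption, not the paper's formal definition of mixing efficiency:

```python
# AoA = RCTT + aging by mixing, as described in the abstract.
def aging_by_mixing(aoa, rctt):
    """Additional aging attributed to two-way mixing (years)."""
    return aoa - rctt

def relative_increase_by_mixing(aoa, rctt):
    """Relative increase of AoA over pure residual-circulation transport."""
    return (aoa - rctt) / rctt

aoa, rctt = 4.5, 2.8   # years; hypothetical mid-stratospheric values
print(aging_by_mixing(aoa, rctt))
print(round(relative_increase_by_mixing(aoa, rctt), 3))
```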

  17. A novel scale for measuring mixed states in bipolar disorder.

    PubMed

    Cavanagh, Jonathan; Schwannauer, Matthias; Power, Mick; Goodwin, Guy M

    2009-01-01

    Conventional descriptions of bipolar disorder tend to treat the mixed state as something of an afterthought, and no existing scale specifically measures the phenomena of the mixed state. This study aimed to test a novel scale for the mixed state in a clinical and community population of bipolar patients. The scale included clinically relevant symptoms of both mania and depression in a bivariate format. Recovered respondents were asked to recall their last manic episode. The scale allowed endorsement of one or more of the manic and depressive symptoms. Internal consistency was assessed using Cronbach's alpha. Factor analysis was carried out using a standard principal components analysis followed by varimax rotation. A confirmatory factor analytic method was used to validate the scale structure in a representative clinical sample. The reliability analysis gave a Cronbach's alpha of 0.950, with corrected item-total correlations ranging from 0.546 (weight change) to 0.830 (mood). The factor analysis revealed a two-factor solution for the manic and depressed items, which accounted for 61.2% of the variance in the data. Factor 1 represented physical activity, verbal activity, thought processes and mood. Factor 2 represented eating habits, weight change, passage of time and pain sensitivity. This novel scale appears to capture the key features of mixed states. The two-factor solution fits well with previous models of bipolar disorder and concurs with the view that mixed states may be more than the sum of their parts.
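The internal consistency statistic used above has a compact closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on synthetic item data (the latent-trait construction and all numbers are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Synthetic example: 8 items driven by one latent severity trait plus noise.
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
items = trait[:, None] + 0.5 * rng.normal(size=(200, 8))
print(round(cronbach_alpha(items), 3))   # high alpha, items share one trait
```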

  18. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using branch analysis data for 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation in Mengjiagang Forest Farm, Heilongjiang Province, Northeast China, and based on linear mixed-effect model theory and methods, models were developed for predicting the branch variables primary branch diameter, length, and angle. To account for tree effects, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Correlation structures, including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive moving average structure [ARMA(1,1)], were then added to the optimal branch size mixed-effect model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe heteroscedasticity when building the mixed-effect models, the CF1 and CF2 functions were added to the branch mixed-effect models: CF1 significantly improved the fit of the branch angle mixed model, whereas CF2 significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect models improve prediction precision, compared with traditional regression models, for branch size prediction in Pinus koraiensis plantations.
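The AR(1) structure mentioned above models within-tree residual correlation as decaying geometrically with the lag between successive branch measurements. A minimal sketch of the correlation matrix it induces (the value of rho is illustrative):

```python
import numpy as np

def ar1_corr(n_obs, rho):
    """AR(1) correlation matrix: corr(e_i, e_j) = rho ** |i - j|."""
    lag = np.abs(np.subtract.outer(np.arange(n_obs), np.arange(n_obs)))
    return rho ** lag

# Within-tree residual correlation among 4 successive branches, rho = 0.5.
print(ar1_corr(4, 0.5))
```

In SAS PROC MIXED this corresponds to `TYPE=AR(1)` on the REPEATED statement; CS instead assigns a single common correlation to all pairs, and ARMA(1,1) adds one extra moving-average parameter.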

  19. The MIPAS2D: 2-D analysis of MIPAS observations of ESA target molecules and minor species

    NASA Astrophysics Data System (ADS)

    Arnone, E.; Brizzi, G.; Carlotti, M.; Dinelli, B. M.; Magnani, L.; Papandrea, E.; Ridolfi, M.

    2008-12-01

    Measurements from the MIPAS instrument onboard the ENVISAT satellite were analyzed with the Geofit Multi-Target Retrieval (GMTR) system to obtain 2-dimensional fields of pressure, temperature and volume mixing ratios of H2O, O3, HNO3, CH4, N2O, and NO2. Secondary target species relevant to stratospheric chemistry were also analysed, and robust mixing ratios of N2O5, ClONO2, F11, F12, F14 and F22 were obtained. Other minor species with high uncertainties were not included in the database and will be the object of further studies. The analysis covers the original nominal observation mode from July 2002 to March 2004 and is currently being extended to the ongoing reduced-resolution mission. The GMTR algorithm was operated on a fixed 5-degree latitudinal grid in order to ease comparison with model calculations and climatological datasets. The generated database of atmospheric fields can be used directly for analyses based on averaging processes, with no need for further interpolation. Samples of the obtained products are presented and discussed. The database of the retrieved quantities is made available to the scientific community.

  20. Non-replication of the association between 5HTTLPR and response to psychological therapy for child anxiety disorders

    PubMed Central

    Lester, Kathryn J.; Roberts, Susanna; Keers, Robert; Coleman, Jonathan R. I.; Breen, Gerome; Wong, Chloe C. Y.; Xu, Xiaohui; Arendt, Kristian; Blatter-Meunier, Judith; Bögels, Susan; Cooper, Peter; Creswell, Cathy; Heiervang, Einar R.; Herren, Chantal; Hogendoorn, Sanne M.; Hudson, Jennifer L.; Krause, Karen; Lyneham, Heidi J.; McKinnon, Anna; Morris, Talia; Nauta, Maaike H.; Rapee, Ronald M.; Rey, Yasmin; Schneider, Silvia; Schneider, Sophie C.; Silverman, Wendy K.; Smith, Patrick; Thastum, Mikael; Thirlwall, Kerstin; Waite, Polly; Wergeland, Gro Janne; Eley, Thalia C.

    2016-01-01

    Background We previously reported an association between 5HTTLPR genotype and outcome following cognitive–behavioural therapy (CBT) in child anxiety (Cohort 1). Children homozygous for the low-expression short-allele showed more positive outcomes. Other similar studies have produced mixed results, with most reporting no association between genotype and CBT outcome. Aims To replicate the association between 5HTTLPR and CBT outcome in child anxiety from the Genes for Treatment study (GxT Cohort 2, n = 829). Method Logistic and linear mixed effects models were used to examine the relationship between 5HTTLPR and CBT outcomes. Mega-analyses using both cohorts were performed. Results There was no significant effect of 5HTTLPR on CBT outcomes in Cohort 2. Mega-analyses identified a significant association between 5HTTLPR and remission from all anxiety disorders at follow-up (odds ratio 0.45, P = 0.014), but not primary anxiety disorder outcomes. Conclusions The association between 5HTTLPR genotype and CBT outcome did not replicate. Short-allele homozygotes showed more positive treatment outcomes, but with small, non-significant effects. Future studies would benefit from utilising whole genome approaches and large, homogenous samples. PMID:26294368

  1. Barium Stars: Theoretical Interpretation

    NASA Astrophysics Data System (ADS)

    Husti, Laura; Gallino, Roberto; Bisterzo, Sara; Straniero, Oscar; Cristallo, Sergio

    2009-09-01

    Barium stars are extrinsic Asymptotic Giant Branch (AGB) stars. They present the s-process enhancement characteristic of AGB and post-AGB stars, but are in an earlier evolutionary stage (main-sequence dwarfs, subgiants, red giants). They are believed to form in binary systems in which a more massive companion evolved faster, produced the s-elements during its AGB phase, polluted the present barium star through stellar winds, and became a white dwarf. The barium star samples of Allen & Barbuy (2006) and of Smiljanic et al. (2007) are analysed here. Spectra of both samples were obtained at high resolution and high S/N. We compare these observations with AGB nucleosynthesis models using different initial masses and a spread of 13C-pocket efficiencies. Once a consistent solution is found for the whole elemental distribution of abundances, a proper dilution factor is applied. This dilution reflects the fact that the s-rich material transferred from the AGB star to the presently observed star is mixed with the envelope of the accretor. We also analyse the mass transfer process and obtain the wind velocity for giants and subgiants with known orbital periods. We find evidence that thermohaline mixing is acting inside main-sequence dwarfs and present a method for estimating its depth.

  2. Fractal Analyses of High-Resolution Cloud Droplet Measurements.

    NASA Astrophysics Data System (ADS)

    Malinowski, Szymon P.; Leclerc, Monique Y.; Baumgardner, Darrel G.

    1994-02-01

    Fractal analyses of individual cloud droplet distributions using aircraft measurements along one-dimensional horizontal cross sections through clouds are performed. Box counting and cluster analyses are used to determine spatial scales of inhomogeneity of cloud droplet spacing. These analyses reveal that droplet spatial distributions do not exhibit a fractal behavior. A high variability in local droplet concentration in cloud volumes undergoing mixing was found. In these regions, thin filaments of cloudy air with droplet concentrations close to those observed in cloud cores were found. Results suggest that these filaments may be anisotropic. Additional box counting analyses performed for various classes of cloud droplet diameters indicate that large and small droplets are similarly distributed, except for the larger characteristic spacing of large droplets. A cloud-clear air interface defined by a certain threshold of total droplet count (TDC) was investigated. There are indications that this interface is a convoluted surface of a fractal nature, at least in actively developing cumuliform clouds. In contrast, TDC in the cloud interior does not have fractal or multifractal properties. Finally, a random Cantor set (RCS) was introduced as a model of a fractal process with an ill-defined internal scale. A uniform measure associated with the RCS after several generations was introduced to simulate the TDC records. Comparison of the model with real TDC records indicates similar properties of both types of data series.

  3. Identifying Glacial Meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, L. C.; Heywood, K. J.; Jenkins, A.; Kaiser, J.

    2016-02-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (Price et al., 1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.

  4. Identifying glacial meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, Louise; Heywood, Karen; Jenkins, Adrian; Kaiser, Jan

    2016-04-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.
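Meltwater fractions of the kind quoted above are commonly obtained by solving linear tracer balances between endmember water masses (here CDW, WW and glacial meltwater) subject to the fractions summing to one. A minimal sketch; the endmember values below, including the large negative effective meltwater temperature that accounts for latent heat, are illustrative placeholders, not the cruise's calibrated values:

```python
import numpy as np

# Rows: tracers (potential temperature degC, salinity, O2 umol/kg).
# Columns: endmembers (CDW, WW, glacial meltwater). Values are illustrative.
endmembers = np.array([
    [  1.0,  -1.8,  -90.8],
    [ 34.7,  34.0,    0.0],
    [180.0, 320.0, 1325.0],
])

def mixing_fractions(sample):
    """Least-squares solve of three tracer balances plus f_CDW + f_WW + f_melt = 1."""
    A = np.vstack([endmembers, np.ones(3)])   # 4 equations, 3 unknowns
    b = np.append(sample, 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

# Synthetic observation: a 1 % meltwater mixture, recovered exactly.
sample = endmembers @ np.array([0.60, 0.39, 0.01])
print(mixing_fractions(sample))
```

The overdetermined least-squares form is what makes the abstract's uncertainty analysis natural: perturbing the sample or endmember values propagates directly into the recovered fractions.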

  5. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    PubMed

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test (LRT) statistic or AIC. Weight and body surface area (calculated using the Gehan and George (1970) equation) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentrations and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as the better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can thus be chosen as a better predictor than the true covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
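The phenomenon described above is easy to reproduce in miniature: simulate data where only x1 is causal, offer a highly correlated x2 as a rival covariate, and count how often x2 wins on AIC. This is a simplified ordinary-least-squares sketch, not the paper's population pharmacokinetic setup; all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def aic_ols(y, x):
    """AIC of a one-covariate OLS fit with intercept (Gaussian likelihood)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1]
    sigma2 = resid @ resid / n
    return n * np.log(sigma2) + 2 * (k + 1)

wins, trials = 0, 500
for _ in range(trials):
    n = 50
    x1 = rng.normal(size=n)
    x2 = 0.98 * x1 + np.sqrt(1 - 0.98**2) * rng.normal(size=n)  # r ~ 0.98 with x1
    y = 1.0 + 0.5 * x1 + rng.normal(size=n)                     # only x1 is causal
    if aic_ols(y, x2) < aic_ols(y, x1):                         # rival covariate wins
        wins += 1
print(f"correlated covariate selected in {wins / trials:.0%} of trials")
```

Because x2 carries nearly all of x1's information, sampling noise alone decides the comparison in a substantial fraction of trials.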

  6. Bi-phasic trends in mercury concentrations in blood of Wisconsin common loons during 1992–2010

    USGS Publications Warehouse

    Meyer, Michael W.; Rasmussen, Paul W.; Watras, Carl J.; Fevold, Brick M.; Kenow, Kevin P.

    2011-01-01

    The Wisconsin Department of Natural Resources (WDNR) assessed the ecological risk of mercury (Hg) in aquatic systems by monitoring common loon (Gavia immer) population dynamics and blood Hg concentrations. We report temporal trends in blood Hg concentrations based on 334 samples collected from adult loons recaptured in subsequent years (resampled 2-9 times) and on 421 blood samples from chicks collected at lakes resampled 2-8 times during 1992-2010. Temporal trends were identified with generalized additive mixed effects models (GAMMs) and mixed effects models to account for the potential lack of independence among observations from the same loon or the same lake. Trend analyses indicated that Hg concentrations in the blood of Wisconsin loons declined during 1992-2000 and increased during 2002-2010, but not to the levels observed in the early 1990s. The best-fitting linear mixed effects model included separate trends for the two time periods. The estimated trend in blood Hg concentration among adult loons was -2.6% per year during 1992-2000 and +1.8% per year during 2002-2010; chick blood Hg concentrations decreased by 6.5% per year during 1992-2000 but increased by 1.8% per year during 2002-2010. This bi-phasic pattern is similar to trends observed for concentrations of methylmercury (MeHg) and SO4 in lake water of a well-studied seepage lake (Little Rock Lake, Vilas County) within our study area. A cause-effect relationship between these independent trends is hypothesized.

  7. Numerical model of frazil ice and suspended sediment concentrations and formation of sediment laden ice in the Kara Sea

    USGS Publications Warehouse

    Sherwood, C.R.

    2000-01-01

    A one-dimensional (vertical) numerical model of currents, mixing, frazil ice concentration, and suspended sediment concentration has been developed and applied in the shallow southeastern Kara Sea. The objective of the calculations is to determine whether conditions suitable for turbid ice formation can occur during times of rapid cooling and wind- and wave-induced sediment resuspension. Although the model uses a simplistic approach to ice particles and neglects ice-sediment interactions, the results for low-stratification, shallow (∼20-m) freeze-up conditions indicate that the concurrent concentrations of frazil ice and suspended sediment in the water column are similar to observed concentrations of sediment in turbid ice. This suggests that wave-induced sediment resuspension is a viable mechanism for turbid ice formation, and enrichment mechanisms proposed to explain the high concentrations of sediment in turbid ice relative to sediment concentrations in underlying water may not be necessary in energetic conditions. However, salinity stratification found near the Ob' and Yenisey Rivers damps mixing between ice-laden surface water and sediment-laden bottom water and probably limits incorporation of resuspended sediment into turbid ice until prolonged or repeated wind events mix away the stratification. Sensitivity analyses indicate that shallow (≤20 m), unstratified waters with fine bottom sediment (settling speeds of ∼1 mm s−1 or less) and long open-water fetches (>25 km) are ideal conditions for resuspension.

  8. Therapy preferences of patients with lung and colon cancer: a discrete choice experiment.

    PubMed

    Schmidt, Katharina; Damm, Kathrin; Vogel, Arndt; Golpon, Heiko; Manns, Michael P; Welte, Tobias; Graf von der Schulenburg, J-Matthias

    2017-01-01

    There is increasing interest in studies that examine patient preferences to measure health-related outcomes. Understanding patients' preferences can improve the treatment process and is particularly relevant for oncology. In this study, we aimed to identify the subgroup-specific treatment preferences of German patients with lung cancer (LC) or colorectal cancer (CRC). Six discrete choice experiment (DCE) attributes were established on the basis of a systematic literature review and qualitative interviews. The DCE analyses comprised a generalized linear mixed-effects model and a latent class mixed logit model. The study cohort comprised 310 patients (194 with LC, 108 with CRC, 8 with both types of cancer) with a median age of 63 (SD = 10.66) years. The generalized linear mixed-effects model showed a significant (P < 0.05) degree of association for all of the tested attributes. "Strongly increased life expectancy" was the attribute given the greatest weight by all patient groups. Using the latent class mixed logit model analysis, we identified three classes of patients. Patients who were better informed tended to prefer a more balanced relationship between length of life and health-related quality of life (HRQoL) than those who were less informed. Class 2 (LC patients with low HRQoL who had undergone surgery) gave a very strong weighting to increased length of life. We deduced from Class 3 patients that those with a relatively good life expectancy (CRC compared with LC) gave greater weight to moderate effects on HRQoL than to a longer life. Overall survival was the most important attribute of therapy for patients with LC or CRC. Differences in treatment preferences between subgroups should be considered in regard to treatment and the development of guidelines. Patients' preferences were not affected by sex or age, but were affected by cancer type, HRQoL, surgery status, and the main source of information on the disease.

  9. Job-mix modeling and system analysis of an aerospace multiprocessor.

    NASA Technical Reports Server (NTRS)

    Mallach, E. G.

    1972-01-01

    An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
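The Markov process analysis mentioned above amounts to solving for the stationary distribution of a state-transition model of the system. A minimal sketch with a hypothetical three-state processor model (idle, computing, waiting on the bus); the transition probabilities are illustrative, not derived from the Apollo job mix:

```python
import numpy as np

# Hypothetical transition matrix: rows = current state, columns = next state.
# States: 0 = idle, 1 = computing, 2 = waiting on the shared data bus.
P = np.array([
    [0.2, 0.7, 0.1],
    [0.3, 0.5, 0.2],
    [0.5, 0.3, 0.2],
])

# Stationary distribution: solve pi @ P = pi subject to sum(pi) = 1,
# via least squares on the stacked linear system.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # long-run fraction of time in each state
```

Long-run quantities such as processor utilization or bus contention then fall directly out of the stationary probabilities, which is why such an analysis can predict simulation results cheaply.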

  10. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
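For contrast with the jump behavior of C/D models, a classic mixing closure that is continuous in time is IEM ("interaction by exchange with the mean"), in which each notional particle's scalar relaxes smoothly toward the ensemble mean rather than jumping at discrete coalescence events. The sketch below illustrates time-continuous mixing only; it is not the authors' modified C/D model, and all parameters are illustrative:

```python
import numpy as np

def iem_step(phi, dt, tau):
    """One explicit IEM step: relax each particle's scalar toward the mean."""
    return phi - (dt / tau) * (phi - phi.mean())

rng = np.random.default_rng(0)
phi = rng.choice([0.0, 1.0], size=10_000)   # initially unmixed (double-delta pdf)
mean0, var0 = phi.mean(), phi.var()

for _ in range(200):                         # march the mixing forward in time
    phi = iem_step(phi, dt=0.01, tau=0.5)

print(mean0, phi.mean())   # mean is conserved by mixing
print(var0, phi.var())     # variance decays continuously, no jumps
```

Because the relaxation is continuous, reaction source terms can be integrated alongside it within the same time step, which is the coupling property the abstract emphasizes.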

  11. Stratospheric water vapour in the vicinity of the Arctic polar vortex

    NASA Astrophysics Data System (ADS)

    Maturilli, M.; Fierli, F.; Yushkov, V.; Lukyanov, A.; Khaykin, S.; Hauchecorne, A.

    2006-07-01

    The stratospheric water vapour mixing ratio inside, outside, and at the edge of the polar vortex has been accurately measured by the FLASH-B Lyman-Alpha hygrometer during the LAUTLOS campaign in Sodankylä, Finland, in January and February 2004. The retrieved H2O profiles provide a detailed view of the Arctic lower stratospheric water vapour distribution, and a valuable dataset for the validation of model and satellite data. Analysing the measurements with the semi-Lagrangian advection model MIMOSA, water vapour profiles typical of the polar vortex interior and exterior have been identified, and laminae in the observed profiles have been correlated to filamentary structures in the potential vorticity field. Applying the validated MIMOSA transport scheme to specific humidity fields from operational ECMWF analyses, large discrepancies from the observed profiles arise. Although MIMOSA is able to reproduce weak water vapour filaments and improves the shape of the profiles compared to operational ECMWF analyses, both models exhibit a dry bias of about 1 ppmv in the lower stratosphere above 400 K, accounting for a relative difference from the measurements on the order of 20%. The large dry bias in the analysis representation of stratospheric water vapour in the Arctic implies the need for future regular measurements of water vapour in the polar stratosphere to allow the validation and improvement of climate models.

  12. General practice performance in referral for suspected cancer: influence of number of cases and case-mix on publicly reported data.

    PubMed

    Murchie, P; Chowdhury, A; Smith, S; Campbell, N C; Lee, A J; Linden, D; Burton, C D

    2015-05-26

    Publicly available data show variation in GPs' use of urgent suspected cancer (USC) referral pathways. We investigated whether this could be due to small numbers of cancer cases and random case-mix, rather than due to true variation in performance. We analysed individual GP practice USC referral detection rates (proportion of the practice's cancer cases that are detected via USC) and conversion rates (proportion of the practice's USC referrals that prove to be cancer) in routinely collected data from GP practices in all of England (over 4 years) and northeast Scotland (over 7 years). We explored the effect of pooling data. We then modelled the effects of adding random case-mix to practice variation. Correlations between practice detection rate and conversion rate became less positive when data were aggregated over several years. Adding random case-mix to between-practice variation indicated that the median proportion of poorly performing practices correctly identified after 25 cancer cases were examined was 20% (IQR 17 to 24) and after 100 cases was 44% (IQR 40 to 47). Much apparent variation in GPs' use of suspected cancer referral pathways can be attributed to random case-mix. The methods currently used to assess the quality of GP-suspected cancer referral performance, and to compare individual practices, are misleading. These should no longer be used, and more appropriate and robust methods should be developed.
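The paper's central point, that small case counts plus random case-mix can masquerade as performance variation, is easy to illustrate: give every practice an identical true detection rate and only 25 cancer cases each, and the observed rates still scatter widely by chance alone. A minimal sketch with hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Every practice has the same true detection rate; each sees only 25 cases.
true_rate, cases, n_practices = 0.5, 25, 1000
observed = rng.binomial(cases, true_rate, size=n_practices) / cases

print(f"observed detection rates span {observed.min():.2f}-{observed.max():.2f} "
      f"(sd {observed.std():.3f}) despite identical true performance")
```

With p = 0.5 and n = 25, the binomial standard deviation of an observed rate is about 0.10, so practices can appear to differ by 30 percentage points or more through sampling noise alone.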

  13. Estimating community health needs against a Triple Aim background: What can we learn from current predictive risk models?

    PubMed

    Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk

    2015-05-01

    To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  15. Seal Joint Analysis and Design for the Ares-I Upper Stage LOX Tank

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Wingate, Robert J.

    2011-01-01

The sealing capability of the Ares-I Upper Stage liquid oxygen tank-to-sump joint is assessed by analyzing the deflections of the joint components. Analyses are performed using three-dimensional symmetric wedge finite element models and the ABAQUS commercial finite element software. For the pressure loads and feedline interface loads, the analyses employ a mixed factor of safety approach to comply with the Constellation Program factor of safety requirements. Naflex pressure-assisted seals are considered first because they have been used successfully in similar seal joints in the Space Shuttle External Tank. For the baseline sump seal joint configuration with a Naflex seal, the predicted joint opening greatly exceeds the seal design specification. Three redesign options of the joint that maintain the use of a Naflex seal are studied. The joint openings for the redesigned seal joints show improvement over the baseline configuration; however, these joint openings still exceed the seal design specification. RACO pressure-assisted seals are considered next because they, too, have been used on the Space Shuttle External Tank, and their joint opening allowable is much larger than the specification for the Naflex seals. The finite element models for the RACO seal analyses are created by modifying the models used for the Naflex seal analyses. The analyses show that the RACO seal may provide sufficient sealing capability for the sump seal joint. The results provide a reasonable basis for recommending the design change and planning a testing program to determine the capability of RACO seals in the Ares-I Upper Stage liquid oxygen tank sump seal joint.

  16. An Introductory Mixed-Methods Intersectionality Analysis of College Access and Equity: An Examination of First-Generation Asian Americans and Pacific Islanders

    ERIC Educational Resources Information Center

    Museus, Samuel D.

    2011-01-01

    In this article, the author discusses how researchers can use mixed-methods approaches and intersectional analyses to understand college access among first-generation Asian American and Pacific Islanders (AAPIs). First, he discusses the utility of mixed-methods approaches and intersectionality research in studying college access. Then, he…

  17. Energy content in dried leaf litter of some oaks and mixed mesophytic species that replace oaks

    Treesearch

    Aaron D. Stottlemeyer; G. Geoff Wang; Patrick H. Brose; Thomas A. Waldrop

    2010-01-01

    Mixed-mesophytic hardwood tree species are replacing upland oaks in vast areas of the Eastern United States deciduous forest. Some researchers have suggested that the leaf litter of mixed-mesophytic, oak replacement species renders forests less flammable where forest managers wish to restore a natural fire regime. We performed chemical analyses on dried leaf litter...

  18. Hyperbolic Discounting: Value and Time Processes of Substance Abusers and Non-Clinical Individuals in Intertemporal Choice

    PubMed Central

    2014-01-01

The single parameter hyperbolic model has been frequently used to describe value discounting as a function of time and to differentiate substance abusers and non-clinical participants with the model's parameter k. However, k says little about the mechanisms underlying the observed differences. The present study evaluates several alternative models with the purpose of identifying whether group differences stem from differences in subjective valuation and/or time perception. Using three two-parameter models, plus secondary data analyses of 14 studies with 471 indifference point curves, results demonstrated that adding a valuation or a time perception function led to better model fits. However, the gain in fit due to the flexibility granted by a second parameter did not always lead to a better understanding of the data patterns and corresponding psychological processes. The k parameter consistently indexed group and context (magnitude) differences; it is thus a mixed measure of person and task level effects. This was similar for a parameter meant to index payoff devaluation. A time perception parameter, on the other hand, fluctuated with contexts in a non-predicted fashion, and the interpretation of its values was inconsistent with prior findings that supported enlarged perceived delays for substance abusers compared to controls. Overall, the results provide mixed support for hyperbolic models of intertemporal choice in terms of the psychological meaning afforded by their parameters. PMID:25390941
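The single-parameter model evaluated above is standardly written V = A/(1 + kD), with subjective value V, amount A, delay D, and discount rate k. A minimal sketch of that functional form; the function name and the example numbers are illustrative, not taken from the study:

```python
def hyperbolic_value(amount, delay, k):
    """Subjective value of a delayed payoff under the single-parameter
    hyperbolic model V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

# A steeper discounter (larger k) values the same delayed payoff less.
impulsive = hyperbolic_value(100.0, delay=30.0, k=0.10)   # 100/(1+3) = 25.0
patient = hyperbolic_value(100.0, delay=30.0, k=0.01)     # 100/(1+0.3), about 76.9
```

The two-parameter variants discussed in the abstract add a separate exponent or scaling on the amount or the delay; the one-parameter form above is the baseline they extend.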

  19. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

In recent years, hepatitis C virus (HCV) infection has become a major public health problem. Evaluating risk factors is one of the approaches that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, indicating that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
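The zero-inflated Poisson model used above mixes a point mass at zero (a "structural" zero with probability pi) with an ordinary Poisson count. A minimal sketch of its probability mass function, leaving out the random-effects layer, which would require integrating over subject-level effects:

```python
import math

def zip_pmf(y, lam, pi):
    """P(Y = y) under a zero-inflated Poisson model.
    lam : Poisson mean of the count component
    pi  : probability of a structural (excess) zero"""
    poisson = math.exp(-lam) * lam**y / math.factorial(y)
    if y == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

# Zero inflation raises P(Y=0) well above the plain Poisson value exp(-2).
p_zero = zip_pmf(0, lam=2.0, pi=0.3)   # 0.3 + 0.7*exp(-2), about 0.395
```

The extra mass at zero is exactly what lets the model absorb the excess zeros in longitudinal viral-load counts that a plain Poisson cannot.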

  20. Enhanced project management tool

    NASA Technical Reports Server (NTRS)

    Hsu, Chen-Jung (Inventor); Patel, Hemil N. (Inventor); Maluf, David A. (Inventor); Moh Hashim, Jairon C. (Inventor); Tran, Khai Peter B. (Inventor)

    2012-01-01

    A system for managing a project that includes multiple tasks and a plurality of workers. Input information includes characterizations based upon a human model, a team model and a product model. Periodic reports, such as one or more of a monthly report, a task plan report, a schedule report, a budget report and a risk management report, are generated and made available for display or further analysis or collection into a customized report template. An extensible database allows searching for information based upon context and upon content. Seven different types of project risks are addressed, including non-availability of required skill mix of workers. The system can be configured to exchange data and results with corresponding portions of similar project analyses, and to provide user-specific access to specified information.

  1. METHAMPHETAMINE USE AMONG RURAL WHITE AND NATIVE AMERICAN ADOLESCENTS: AN APPLICATION OF THE STRESS PROCESS MODEL

    PubMed Central

    Eitle, David J.; McNulty Eitle, Tamela

    2016-01-01

    Methamphetamine use has been identified as having significant adverse health consequences, yet we know little about the correlates of its use. Additionally, research has found that Native Americans are at the highest risk for methamphetamine use. Our exploratory study, informed by the stress process model, examines stress and stress buffering factors associated with methamphetamine use among a cross-sectional sample of rural white and Native American adolescents (n=573). Results of logistic regression analyses revealed mixed support for the stress process model; while stress exposure and family methamphetamine use predicted past year methamphetamine use, the inclusion of these variables failed to attenuate the association between race and past year use. PMID:25445505

  2. The 3D Navier-Stokes analysis of a Mach 2.68 bifurcated rectangular mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Mizukami, M.; Saunders, J. D.

    1995-01-01

The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a three-dimensional (3D) Navier-Stokes flow solver. A two-equation turbulence model, and a porous bleed model based on unchoked bleed hole discharge coefficients were used. Comparisons were made with experimental data, inviscid theory, and two-dimensional Navier-Stokes analyses. The main objective was to gain insight into the inlet fluid dynamics. Examination of the computational results along with the experimental data suggest that the cowl shock-sidewall boundary layer interaction near the leading edge caused a substantial separation in the wind tunnel inlet model. As a result, the inlet performance may have been compromised by increased spillage and higher bleed mass flow requirements. The internal flow contained substantial waves that were not in the original inviscid design. 3D effects were fairly minor for this inlet at on-design conditions. Navier-Stokes analysis appears to be a useful tool for gaining insight into the inlet fluid dynamics. It provides a higher fidelity simulation of the flowfield than the original inviscid design, by taking into account boundary layers, porous bleed, and their interactions with shock waves.

  3. A Study on the Heat Flow Characteristics of IRSS

    NASA Astrophysics Data System (ADS)

    Cho, Yong-Jin; Ko, Dae-Eun

    2017-11-01

The infrared signatures emitted from the hot waste gas generated by the combustion engine and generator of a naval ship and from the metal surface around the funnel are targets for enemy threat weapon systems and thereby reduce the survivability of the ship. Such infrared signatures are reduced by installing an infrared signature suppression system (IRSS) in the naval ship. An IRSS consists of three parts: an eductor that creates a turbulent flow in the waste gas, a mixing tube that mixes the waste gas with the ambient air, and a diffuser that forms an air film using the pressure difference between the waste gas and the outside air. This study analyzed the test model of an IRSS developed by an advanced company and, based on this, conducted heat flow analyses as a basic study to improve the performance of the IRSS. The results were compared and analyzed considering various turbulence models. As a result, the temperatures and velocities of the waste gas at the eductor inlet and the diffuser outlet, as well as the temperature of the diffuser metal surface, were obtained. These results were confirmed to be in good agreement with the measurement results of the model test.

  4. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, given the elaborate nature of PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
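The abstract does not give the functional form of its mixing model, but a standard baseline in PDF methods is the IEM (interaction by exchange with the mean) model, in which each notional concentration relaxes toward the local mean on a mixing time scale tau. A minimal sketch under that assumption (this is the textbook baseline, not the improved model the paper proposes):

```python
import numpy as np

def iem_step(c, c_mean, dt, tau):
    """One explicit Euler step of the IEM mixing model
    dc/dt = -(c - <c>) / tau: each sample relaxes toward the
    ensemble mean, shrinking the concentration variance."""
    return c - dt * (c - c_mean) / tau

rng = np.random.default_rng(1)
c = rng.normal(1.0, 0.5, size=10_000)   # ensemble of notional concentrations
for _ in range(100):
    c = iem_step(c, c.mean(), dt=0.01, tau=0.5)
# The mean is conserved while the variance decays roughly as exp(-2t/tau).
```

The link exploited in the paper is visible here: the same tau that drives the PDF toward its mean also sets the decay rate of the concentration variance, so a mixing model can be calibrated on the variance equation and then carried over to the PDF equation.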

  5. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.

  6. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  7. Patient Expectancy as a Mediator of Placebo Effects in Antidepressant Clinical Trials.

    PubMed

    Rutherford, Bret R; Wall, Melanie M; Brown, Patrick J; Choo, Tse-Hwei; Wager, Tor D; Peterson, Bradley S; Chung, Sarah; Kirsch, Irving; Roose, Steven P

    2017-02-01

    Causes of placebo effects in antidepressant trials have been inferred from observational studies and meta-analyses, but their mechanisms have not been directly established. The goal of this study was to examine in a prospective, randomized controlled trial whether patient expectancy mediates placebo effects in antidepressant studies. Adult outpatients with major depressive disorder were randomly assigned to open or placebo-controlled citalopram treatment. Following measurement of pre- and postrandomization expectancy, participants were treated with citalopram or placebo for 8 weeks. Independent samples t tests determined whether patient expectancy differed between the open and placebo-controlled groups, and mixed-effects models assessed group effects on Hamilton Depression Rating Scale (HAM-D) scores over time while controlling for treatment assignment. Finally, mediation analyses tested whether between-group differences in patient expectancy mediated the group effect on HAM-D scores. Postrandomization expectancy scores were significantly higher in the open group (mean=12.1 [SD=2.1]) compared with the placebo-controlled group (mean=11.0 [SD=2.0]). Mixed-effects modeling revealed a significant week-by-group interaction, indicating that HAM-D scores for citalopram-treated participants declined at a faster rate in the open group compared with the placebo-controlled group. Patient expectations postrandomization partially mediated group effects on week 8 HAM-D. Patient expectancy is a significant mediator of placebo effects in antidepressant trials. Expectancy-related interventions should be investigated as a means of controlling placebo responses in antidepressant clinical trials and improving patient outcome in clinical treatment.
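The mediation logic described, group assignment shifting expectancy (path a) and expectancy shifting the outcome (path b), can be sketched with the classic product-of-coefficients approach on simulated data. Everything below is illustrative and deliberately simplified: the effect sizes are invented, and a full analysis would, like the study's mixed-effects models, adjust the outcome model for treatment assignment and repeated measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n).astype(float)            # 0 = blinded, 1 = open
expectancy = 1.0 * group + rng.normal(0.0, 1.0, n)     # path a: group -> expectancy
outcome = -0.5 * expectancy + rng.normal(0.0, 1.0, n)  # path b: expectancy -> lower score

def ols_slope(x, y):
    """Least-squares slope of y on x, with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = ols_slope(group, expectancy)     # estimated path a (about 1.0 here)
b = ols_slope(expectancy, outcome)   # estimated path b (about -0.5; unadjusted)
indirect = a * b                     # product-of-coefficients indirect effect
```

A negative indirect effect here mirrors the study's finding: higher expectancy in the open group partially transmits the group effect into lower depression scores.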

  8. Shifts in stable-isotope signatures confirm parasitic relationship of freshwater mussel glochidia attached to host fish

    USGS Publications Warehouse

    Fritts, Mark W.; Fritts, Andrea K.; Carleton, Scott A.; Bringolf, Robert B.

    2013-01-01

The parasitic nature of the association between glochidia of unionoidean bivalves and their host fish (i.e. the role of fish hosts in providing nutritional resources to the developing glochidia) is still uncertain. While previous work has provided descriptions of development of glochidia on fish hosts, earlier studies have not explicitly documented the flow of nutrition from the host fish to the juvenile mussel. Therefore, our objective was to use stable isotope analysis to quantitatively document nutrient flow between fish and glochidia. Glochidia were collected from nine adult Lampsilis cardium and used to inoculate Micropterus salmoides (n = 27; three fish per maternal mussel) that produced juvenile mussels for the experiment. Adult mussel tissue samples, glochidia, transformed juvenile mussels and fish gill tissues were analysed for δ15N and δ13C isotope ratios. We used a linear mixing model to estimate the fraction of juvenile mussel tissue derived from the host fish's tissue during attachment. Our analyses indicate a distinct shift in both C and N isotopic ratios from the glochidial stage to the juvenile stage during mussel attachment and development. Linear mixing model analysis indicated that 57.4% of the δ15N in juvenile tissues was obtained from the host fish. This work provides novel evidence that larval unionoideans are true parasites that derive nutrition from host fish during their metamorphosis into the juvenile stage.
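For a single tracer and two sources, the linear mixing model used above reduces to solving delta_mix = f*delta_fish + (1 - f)*delta_glochidia for the source fraction f. A minimal sketch; the delta values below are illustrative, not the study's data:

```python
def two_source_fraction(delta_mix, delta_a, delta_b):
    """Fraction of the mixture derived from source A in a
    two-source, one-tracer linear mixing model:
        delta_mix = f*delta_a + (1-f)*delta_b
    =>  f = (delta_mix - delta_b) / (delta_a - delta_b)"""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# e.g. a juvenile d15N value halfway between glochidial and host-fish values
f = two_source_fraction(delta_mix=10.0, delta_a=12.0, delta_b=8.0)   # 0.5
```

The study's 57.4% estimate is this f computed from the measured δ15N end members (host fish versus glochidia) and the juvenile tissue value.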

  9. Restructuring in response to case mix reimbursement in nursing homes: A contingency approach

    PubMed Central

    Zinn, Jacqueline; Feng, Zhanlian; Mor, Vincent; Intrator, Orna; Grabowski, David

    2013-01-01

    Background Resident-based case mix reimbursement has become the dominant mechanism for publicly funded nursing home care. In 1998 skilled nursing facility reimbursement changed from cost-based to case mix adjusted payments under the Medicare Prospective Payment System for the costs of all skilled nursing facility care provided to Medicare recipients. In addition, as of 2004, 35 state Medicaid programs had implemented some form of case mix reimbursement. Purpose The purpose of the study is to determine if the implementation of Medicare and Medicaid case mix reimbursement increased the administrative burden on nursing homes, as evidenced by increased levels of nurses in administrative functions. Methodology/Approach The primary data for this study come from the Centers for Medicare and Medicaid Services Online Survey Certification and Reporting database from 1997 through 2004, a national nursing home database containing aggregated facility-level information, including staffing, organizational characteristics and resident conditions, on all Medicare/Medicaid certified nursing facilities in the country. We conducted multivariate regression analyses using a facility fixed-effects model to examine the effects of the implementation of Medicaid case mix reimbursement and Medicare Prospective Payment System on changes in the level of total administrative nurse staffing in nursing homes. Findings Both Medicaid case mix reimbursement and Medicare Prospective Payment System increased the level of administrative nurse staffing, on average by 5.5% and 4.0% respectively. However, lack of evidence for a substitution effect suggests that any decline in direct care staffing after the introduction of case mix reimbursement is not attributable to a shift from clinical nursing resources to administrative functions. Practice Implications Our findings indicate that the administrative burden posed by case mix reimbursement has resource implications for all freestanding facilities. 
At the margin, the increased administrative burden imposed by case mix may become a factor influencing a range of decisions, including resident admission and staff hiring. PMID:18360162
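The facility fixed-effects model used above is typically estimated with the "within" transformation: demeaning each facility's observations over time removes time-invariant facility effects before regressing staffing on the policy indicators. A minimal sketch of that transformation; the variable names and toy numbers are illustrative:

```python
import numpy as np

def within_transform(x, group):
    """Subtract each group's mean from its own observations (the 'within'
    or fixed-effects transformation), removing group-level intercepts."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for g in np.unique(group):
        mask = group == g
        out[mask] = x[mask] - x[mask].mean()
    return out

facility = np.array([1, 1, 2, 2])
staffing = np.array([10.0, 12.0, 20.0, 22.0])
demeaned = within_transform(staffing, facility)   # [-1, 1, -1, 1]
```

After demeaning both staffing and the case-mix-reimbursement indicators this way, ordinary least squares on the transformed data recovers the within-facility policy effects.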

  10. Restructuring in response to case mix reimbursement in nursing homes: a contingency approach.

    PubMed

    Zinn, Jacqueline; Feng, Zhanlian; Mor, Vincent; Intrator, Orna; Grabowski, David

    2008-01-01

    Resident-based case mix reimbursement has become the dominant mechanism for publicly funded nursing home care. In 1998 skilled nursing facility reimbursement changed from cost-based to case mix adjusted payments under the Medicare Prospective Payment System for the costs of all skilled nursing facility care provided to Medicare recipients. In addition, as of 2004, 35 state Medicaid programs had implemented some form of case mix reimbursement. The purpose of the study is to determine if the implementation of Medicare and Medicaid case mix reimbursement increased the administrative burden on nursing homes, as evidenced by increased levels of nurses in administrative functions. The primary data for this study come from the Centers for Medicare and Medicaid Services Online Survey Certification and Reporting database from 1997 through 2004, a national nursing home database containing aggregated facility-level information, including staffing, organizational characteristics and resident conditions, on all Medicare/Medicaid certified nursing facilities in the country. We conducted multivariate regression analyses using a facility fixed-effects model to examine the effects of the implementation of Medicaid case mix reimbursement and Medicare Prospective Payment System on changes in the level of total administrative nurse staffing in nursing homes. Both Medicaid case mix reimbursement and Medicare Prospective Payment System increased the level of administrative nurse staffing, on average by 5.5% and 4.0% respectively. However, lack of evidence for a substitution effect suggests that any decline in direct care staffing after the introduction of case mix reimbursement is not attributable to a shift from clinical nursing resources to administrative functions. Our findings indicate that the administrative burden posed by case mix reimbursement has resource implications for all freestanding facilities. 
At the margin, the increased administrative burden imposed by case mix may become a factor influencing a range of decisions, including resident admission and staff hiring.

  11. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared with other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
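The over-dispersion that the random-effects term above absorbs can be illustrated with a Poisson-lognormal draw: a log-normal multiplier on the Poisson rate pushes the variance well above the mean, which a plain Poisson cannot capture. A minimal simulation sketch, not the MACAU algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n, base_rate = 100_000, 5.0

# Plain Poisson counts: dispersion (variance / mean) is close to 1.
poisson_counts = rng.poisson(base_rate, n)

# Poisson-lognormal counts: a per-sample random effect on the rate
# inflates the variance well beyond the mean (over-dispersion).
rates = base_rate * rng.lognormal(mean=0.0, sigma=0.8, size=n)
od_counts = rng.poisson(rates)

dispersion_plain = poisson_counts.var() / poisson_counts.mean()
dispersion_od = od_counts.var() / od_counts.mean()
```

In the paper's model a second, structured random effect with a kinship or covariance matrix additionally correlates the log rates across related samples; the independent lognormal term here shows only the over-dispersion half.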

  12. Numerical simulation of the roll levelling of third generation fortiform 1050 steel using a nonlinear combined hardening material model

    NASA Astrophysics Data System (ADS)

    Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.; Silvestre, E.

    2017-09-01

Roll levelling is a flattening process used to remove the residual stresses and imperfections of metal strips by means of plastic deformations. During the process, the metal sheet is subjected to cyclic tension-compression deformations leading to a flat product. The process is especially important for avoiding final geometrical errors when coils are cold formed or when thick plates are cut by laser. In recent years, owing to the appearance of high strength materials such as Ultra High Strength Steels, machine design engineers have been demanding reliable tools for the dimensioning of levelling facilities. As in other metal forming fields, finite element analysis appears to be the most widely used approach for understanding the phenomena involved and calculating the processing loads. In this paper, the roll levelling process of the third generation Fortiform 1050 steel is numerically analysed. The process has been studied using the MSC MARC software and two different material laws. A pure isotropic hardening law has been used and set as the baseline study. In the second part, tension-compression tests have been carried out to analyse the cyclic behaviour of the steel. With the obtained data, a new material model using a combined isotropic-kinematic hardening formulation has been fitted. Finally, the influence of the material model on the numerical results has been analysed by comparing the pure isotropic model with the combined isotropic-kinematic hardening model.
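The cyclic tension-compression behaviour that motivates the combined hardening law can be sketched in one dimension with a strain-driven return-mapping update using linear isotropic plus linear kinematic hardening. This is a simplified linear stand-in for the fitted nonlinear model, and all material constants below (the yield stress merely echoes the steel grade's name) are illustrative:

```python
def step(eps, state, E=210e3, sy=1050.0, Hiso=1.0e3, Hkin=5.0e3):
    """One strain-driven return-mapping update of a 1D elastoplastic
    model with combined linear isotropic (Hiso) and linear kinematic
    (Hkin) hardening. state = (plastic strain, backstress, accumulated
    plastic strain); returns (stress, new state)."""
    eps_p, alpha, p = state
    sigma_trial = E * (eps - eps_p)            # elastic predictor
    xi = sigma_trial - alpha                   # shifted (relative) stress
    f = abs(xi) - (sy + Hiso * p)              # yield function
    if f <= 0.0:
        return sigma_trial, (eps_p, alpha, p)  # purely elastic step
    dgamma = f / (E + Hiso + Hkin)             # closed-form 1D return map
    sign = 1.0 if xi > 0.0 else -1.0
    eps_p += dgamma * sign
    alpha += Hkin * dgamma * sign              # backstress shift (Bauschinger effect)
    p += dgamma
    return E * (eps - eps_p), (eps_p, alpha, p)

# Load past yield in tension: stress returns to the expanded, shifted surface.
sigma, st = step(0.02, (0.0, 0.0, 0.0))
```

The kinematic (backstress) term is what reproduces the early re-yielding on load reversal seen in the tension-compression tests, which a pure isotropic law misses.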

  13. Miscibility and Thermodynamics of Mixing of Different Models of Formamide and Water in Computer Simulation.

    PubMed

    Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál

    2017-07-27

The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal mixing, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered: it yields negative energy of mixing values in combination with all three water models considered, while the other formamide models yield positive values. Experimental data support this latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility or, at least, approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.
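Thermodynamic integration as used above evaluates a free-energy difference as ΔA = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ along a coupling parameter. A toy numerical sketch in which the integrand is known analytically, so it stands in for the ensemble averages a Monte Carlo simulation would supply at each λ:

```python
import numpy as np

def thermodynamic_integration(dU_dlambda, n_points=11):
    """Free-energy difference Delta A = integral_0^1 <dU/dlambda> dlambda,
    approximated with the trapezoidal rule; dU_dlambda stands in for the
    ensemble average a simulation would measure at each lambda value."""
    lam = np.linspace(0.0, 1.0, n_points)
    y = np.array([dU_dlambda(l) for l in lam])
    return float(np.sum((y[1:] + y[:-1]) * np.diff(lam)) / 2.0)

# Toy integrand <dU/dlambda> = 2*lambda, whose exact integral is 1.
dA = thermodynamic_integration(lambda l: 2.0 * l)
```

In the actual study each integrand value is itself a simulation average, and the thermodynamic cycle combines several such integrals into the Helmholtz free energy of mixing.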

  14. Neighbourhood walkability, leisure-time and transport-related physical activity in a mixed urban-rural area.

    PubMed

    de Sa, Eric; Ardern, Chris I

    2014-01-01

    Objectives. To develop a walkability index specific to mixed rural/suburban areas, and to explore the relationship between walkability scores and leisure time physical activity. Methods. Respondents were geocoded with 500 m and 1,000 m buffer zones around each address. A walkability index was derived from intersections, residential density, and land-use mix according to built environment measures. Multivariable logistic regression models were used to quantify the association between the index and physical activity levels. Analyses used cross-sectional data from the 2007-2008 Canadian Community Health Survey (n = 1158; ≥18 y). Results. Respondents living in highly walkable 500 m buffer zones (upper quartiles of the walkability index) were more likely to walk or cycle for leisure than those living in low-walkable buffer zones (quartile 1). When a 1,000 m buffer zone was applied, respondents in more walkable neighbourhoods were more likely to walk or cycle for both leisure-time and transport-related purposes. Conclusion. Developing a walkability index can assist in exploring the associations between measures of the built environment and physical activity to prioritize neighborhood change.
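Walkability indices of the kind described commonly combine z-scores of intersection density, residential density, and an entropy-based land-use-mix score H = -Σ pᵢ ln pᵢ / ln k over k land-use categories. A minimal sketch of the entropy component; the exact variables and weights in the study's index may differ:

```python
import math

def land_use_mix(proportions):
    """Entropy-based land-use-mix score, normalised to [0, 1]:
    0 = a single land use, 1 = area split evenly among all k uses."""
    k = len(proportions)
    h = -sum(p * math.log(p) for p in proportions if p > 0)
    return h / math.log(k)

single_use = land_use_mix([1.0, 0.0, 0.0])       # 0.0: one land use only
even_mix = land_use_mix([1 / 3, 1 / 3, 1 / 3])   # 1.0: perfectly mixed
```

Scores like this, computed within each 500 m or 1,000 m buffer, are then summed with the density z-scores and split into quartiles for the logistic regression models.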

  15. Elucidating the Higher Stability of Vanadium (V) Cations in Mixed Acid Based Redox Flow Battery Electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayakumar, M.; Wang, Wei; Nie, Zimin

    2013-11-01

The Vanadium (V) cation structures in mixed acid based electrolyte solution were analysed by density functional theory (DFT) based computational modelling and 51V and 35Cl Nuclear Magnetic Resonance (NMR) spectroscopy. The Vanadium (V) cation exists as a di-nuclear [V2O3Cl2.6H2O]2+ compound at higher vanadium concentrations (≥1.75M). In particular, at high temperatures (>295K) this di-nuclear compound undergoes a ligand exchange process with nearby solvent chlorine molecules and forms a chlorine bonded [V2O3Cl2.6H2O]2+ compound. This chlorine bonded [V2O3Cl2.6H2O]2+ compound might be resistant to the de-protonation reaction, which is the initial step in the precipitation reaction in Vanadium based electrolyte solutions. The combined theoretical and experimental approach reveals that formation of the chlorine bonded [V2O3Cl2.6H2O]2+ compound might be central to the observed higher thermal stability of mixed acid based Vanadium (V) electrolyte solutions.

  16. Transient Ejector Analysis (TEA) code user's guide

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1993-01-01

A FORTRAN computer program for the semi-analytic prediction of unsteady thrust augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA a research code (not a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.

  17. Surface-water radon-222 distribution along the west-central Florida shelf

    USGS Publications Warehouse

    Smith, C.G.; Robbins, L.L.

    2012-01-01

    In February 2009 and August 2009, the spatial distribution of radon-222 in surface water was mapped along the west-central Florida shelf as a collaboration between the Response of Florida Shelf Ecosystems to Climate Change project and a U.S. Geological Survey Mendenhall Research Fellowship project. This report summarizes the surface distribution of radon-222 from two cruises and evaluates potential physical controls on radon-222 fluxes. Radon-222 is an inert gas produced overwhelmingly in sediment and has a short half-life of 3.8 days; activities in surface water ranged between 30 and 170 becquerels per cubic meter. Overall, radon-222 activities were enriched in nearshore surface waters relative to offshore waters. Dilution in offshore waters is expected to be the cause of the low offshore activities. While thermal stratification of the water column during the August survey may explain higher radon-222 activities relative to the February survey, radon-222 activity and integrated surface-water inventories decreased exponentially from the shoreline during both cruises. By estimating radon-222 evasion by wind from nearby buoy data and accounting for internal production from dissolved radium-226, its radiogenic long-lived parent, a simple one-dimensional model was implemented to determine the role that offshore mixing, benthic influx, and decay have on the distribution of excess radon-222 inventories along the west Florida shelf. For multiple statistically based boundary condition scenarios (first quartile, median, third quartile, and maximum radon-222 inshore of 5 kilometers), the cross-shelf mixing rates and average nearshore submarine groundwater discharge (SGD) rates varied from 10^0.38 to 10^-3.4 square kilometers per day and 0.00 to 1.70 centimeters per day, respectively. This dataset and modeling provide the first attempt to assess cross-shelf mixing and SGD on such a large spatial scale.
Such estimates help scale up SGD rates that are often made at 1- to 10-meter resolution to a coarser but more regionally applicable scale of 1- to 10-kilometer resolution. More stringent analyses and model evaluation are required, but results and analyses presented in this report provide the foundation for conducting a more rigorous statistical assessment.
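
    The one-dimensional balance described above (offshore mixing versus decay of excess radon-222) can be illustrated in its simplest form. This is a minimal sketch assuming a steady-state diffusion-decay balance with no benthic influx term; the function names and the 5-kilometer e-folding scale are illustrative, not values from the report.

```python
import numpy as np

# Half-life of radon-222 (days) -> decay constant (1/day)
LAMBDA_RN = np.log(2) / 3.8

def excess_inventory(x_km, i0, k_h):
    """Steady-state excess Rn-222 inventory at offshore distance x (km),
    assuming a 1-D balance between horizontal eddy diffusion (k_h, km^2/day)
    and radioactive decay: k_h * I'' = lambda * I."""
    return i0 * np.exp(-x_km * np.sqrt(LAMBDA_RN / k_h))

def mixing_rate_from_efolding(l_km):
    """Invert the e-folding length scale L = sqrt(k_h / lambda) for k_h."""
    return LAMBDA_RN * l_km ** 2

# Example: an inventory that falls to 1/e of its inshore value within 5 km
kh = mixing_rate_from_efolding(5.0)                      # km^2/day
profile = excess_inventory(np.array([0.0, 5.0, 10.0]), 100.0, kh)
```

Under this assumption a single observed e-folding scale fixes the cross-shelf mixing rate, which is why the report's boundary-condition scenarios translate directly into a range of mixing rates.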

  18. Spectroscopic and physical parameters of Galactic O-type stars. III. Mass discrepancy and rotational mixing

    NASA Astrophysics Data System (ADS)

    Markova, N.; Puls, J.; Langer, N.

    2018-05-01

    Context. Massive stars play a key role in the evolution of galaxies and our Universe. Aims: Our goal is to compare observed and predicted properties of single Galactic O stars to identify and constrain uncertain physical parameters and processes in stellar evolution and atmosphere models. Methods: We used a sample of 53 objects of all luminosity classes and with spectral types from O3 to O9.7. For 30 of these, we determined the main photospheric and wind parameters, including projected rotational rates accounting for macroturbulence, and He and N surface abundances, using optical spectroscopy and applying the model atmosphere code FASTWIND. For the remaining objects, similar data from the literature, based on analyses by means of the CMFGEN code, were used instead. The properties of our sample were then compared to published predictions based on two grids of single massive star evolution models that include rotationally induced mixing. Results: All of the considered model grids face problems in simultaneously reproducing the stellar masses, equatorial gravities, surface abundances, and rotation rates of our sample stars. The spectroscopic masses derived for objects below 30 M⊙ tend to be smaller than the evolutionary ones, no matter which of the two grids has been used as a reference. While this result may indicate the need to improve the model atmosphere calculations (e.g. regarding the treatment of turbulent pressure), our analysis shows that the established mass problem cannot be fully explained in terms of inaccurate parameters obtained by quantitative spectroscopy or inadequate model values of Vrot on the zero age main sequence. Within each luminosity class, we find a close correlation of N surface abundance and luminosity, and a stronger N enrichment in more massive and evolved O stars. Additionally, we also find a correlation of the surface nitrogen and helium abundances.
The large number of nitrogen-enriched stars above 30 M⊙ argues for rotationally induced mixing as the most likely explanation. However, none of the considered models can match the observed trends correctly, especially in the high-mass regime. Conclusions: We confirm the mass discrepancy for objects in the low-mass O-star regime. We conclude that the rotationally induced mixing of helium to the stellar surface is too strong in some of the models. We also suggest that present inadequacies of the models in representing the N enrichment of more massive stars with relatively slow rotation might be related (among other issues) to problematic efficiencies of rotational mixing. We are left with a picture in which invoking binarity and magnetic fields is required to achieve a more complete agreement of the observed surface properties of a population of massive main-sequence stars with corresponding evolutionary models.

  19. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    NASA Astrophysics Data System (ADS)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.

  20. Comparison of bacteriophage and enteric virus removal in pilot scale activated sludge plants.

    PubMed

    Arraj, A; Bohatier, J; Laveran, H; Traore, O

    2005-01-01

    The aim of this experimental study was to determine comparatively the removal of two types of bacteriophages, a somatic coliphage and an F-specific RNA phage, and of three types of enteric viruses, hepatitis A virus (HAV), poliovirus and rotavirus, during sewage treatment by activated sludge using laboratory pilot plants. The cultivable simian rotavirus SA11, the HAV HM 175/18f cytopathic strain and poliovirus were quantified by cell culture. The bacteriophages were quantified by plaque formation on the host bacterium in agar medium. In each experiment, two pilots simulating full-scale activated sludge plants were inoculated with viruses at known concentrations, and mixed liquor and effluent samples were analysed regularly. In the mixed liquor, liquid and solid fractions were analysed separately. The viral behaviour in both the liquid and solid phases was similar between pilots of each experiment. Viral concentrations decreased rapidly following viral injection in the pilots. Ten minutes after the injections, viral concentrations in the liquid phase had decreased from 1.0 +/- 0.4 log to 2.2 +/- 0.3 log. Poliovirus and HAV were predominantly adsorbed on the solid matter of the mixed liquor while rotavirus was not detectable in the solid phase. In our model, the estimated mean log viral reductions after the 3-day experiment were 9.2 +/- 0.4 for rotavirus, 6.6 +/- 2.4 for poliovirus, 5.9 +/- 3.5 for HAV, 3.2 +/- 1.2 for MS2 and 2.3 +/- 0.5 for PhiX174. This study demonstrates that the pilots are useful models to assess the removal of infectious enteric viruses and bacteriophages by activated sludge treatment. Our results show the efficacy of the activated sludge treatment on the five viruses and suggest that coliphages could be an acceptable indicator of viral removal in this treatment system.

  1. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles, which leads to fundamental ambiguities and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data.
Solution error plots are provided for both the simulations and actual data analyses.
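
    The label-switching problem and its removal by a constrained search can be illustrated with a generic two-population mixed exponential fit. The sketch below uses SciPy's differential_evolution on synthetic data; it is not the GoFFRA code, and the bound-ordering trick shown here is just one simple way to keep the two scale parameters from exchanging roles (GoFFRA's actual constraint may differ).

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Synthetic stand-in for MGA data: a two-population mixed exponential,
# with "ground" flashes drawn from the short-scale population.
true_alpha, mu_ground, mu_cloud = 0.3, 1.0, 5.0
n = 2000
is_ground = rng.random(n) < true_alpha
data = np.where(is_ground,
                rng.exponential(mu_ground, n),
                rng.exponential(mu_cloud, n))

def neg_log_likelihood(theta):
    """Negative log-likelihood of a two-component exponential mixture."""
    alpha, mu1, mu2 = theta
    pdf = (alpha / mu1 * np.exp(-data / mu1)
           + (1.0 - alpha) / mu2 * np.exp(-data / mu2))
    return -np.sum(np.log(pdf + 1e-300))

# The bounds force mu1 < mu2, so the two populations cannot exchange
# roles during the search -- one simple guard against label switching.
bounds = [(0.01, 0.99), (0.1, 2.5), (2.5, 20.0)]
result = differential_evolution(neg_log_likelihood, bounds, seed=1)
alpha_hat, mu1_hat, mu2_hat = result.x
```

With the roles of the two components pinned down by the bounds, alpha_hat is an unambiguous estimate of the ground flash fraction.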

  2. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  3. The effort-reward imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women.

    PubMed

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-09-17

    We examined the influence of work-related effort-reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort-Reward Imbalance (ERI) model, on the hypothalamic-pituitary-adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort-reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort-reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort-reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported.
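
    The repeated-measures structure described above (hormone levels at 9:00, 12:00 and 15:00 nested within subjects) is the textbook setting for a linear mixed model. Below is a hedged sketch on synthetic data using statsmodels; the variable names and effect sizes are invented for illustration and are not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic repeated-measures data: 40 subjects sampled at three times,
# with a subject-level random intercept (all effect sizes invented).
n_subj, times = 40, [9.0, 12.0, 15.0]
subject = np.repeat(np.arange(n_subj), len(times))
time = np.tile(times, n_subj)
eri = rng.normal(0.0, 1.0, n_subj)[subject]     # subject-level ERI score
u = rng.normal(0.0, 0.5, n_subj)[subject]       # random intercepts
cortisol = 10.0 - 0.4 * time + 0.1 * eri + u + rng.normal(0.0, 0.3, subject.size)

df = pd.DataFrame({"cortisol": cortisol, "time": time,
                   "eri": eri, "subject": subject})

# Random-intercept linear mixed model: fixed effects for sampling time,
# the ERI indicator and their interaction; subjects as grouping factor.
fit = smf.mixedlm("cortisol ~ time * eri", df, groups=df["subject"]).fit()
```

A non-significant `time:eri` coefficient in such a model corresponds to the study's finding that the ERI indicators did not modify the diurnal pattern.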

  4. The Cloud Ice Mountain Experiment (CIME) 1998: experiment overview and modelling of the microphysical processes during the seeding by isentropic gas expansion

    NASA Astrophysics Data System (ADS)

    Wobrock, Wolfram; Flossmann, Andrea I.; Monier, Marie; Pichon, Jean-Marc; Cortez, Laurent; Fournol, Jean-François; Schwarzenböck, Alfons; Mertes, Stephan; Heintzenberg, Jost; Laj, Paolo; Orsi, Giordano; Ricci, Loretta; Fuzzi, Sandro; Brink, Harry Ten; Jongejan, Piet; Otjes, René

    The second field campaign of the Cloud Ice Mountain Experiment (CIME) project took place in February 1998 on the mountain Puy de Dôme in the centre of France. The content of residual aerosol particles, of H2O2 and NH3 in cloud droplets was evaluated by evaporating the drops larger than 5 μm in a Counterflow Virtual Impactor (CVI) and by measuring the residual particle concentration and the released gas content. The same trace species were studied behind a round jet impactor for the complementary interstitial aerosol particles smaller than 5 μm diameter. In a second step of experiments, the ambient supercooled cloud was converted to a mixed phase cloud by seeding the cloud with ice particles produced by the gas release from pressurised gas bottles. A comparison between the physical and chemical characteristics of liquid drops and ice particles allows a study of the fate of the trace constituents during the presence of ice crystals in the cloud. In the present paper, an overview is given of the CIME 98 experiment and the instrumentation deployed. The meteorological situation during the experiment was analysed with the help of a cloud scale model. The microphysics processes and the behaviour of the scavenged aerosol particles before and during seeding are analysed with the detailed microphysical model ExMix. The simulation results agreed well with the observations and confirmed the assumption that the Bergeron-Findeisen process was dominating during seeding and was influencing the partitioning of aerosol particles between drops and ice crystals. The results of the CIME 98 experiment give an insight into microphysical changes, redistribution of aerosol particles and cloud chemistry during the Bergeron-Findeisen process when acting also in natural clouds.

  5. Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.

    PubMed

    Roman, A; Ahmed, K; Challacombe, B

    2016-05-01

    Although Robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, this approach is technically demanding. To date, there is limited data on the nature and progression of the learning curve in RPN. We aimed to analyse the impact of case mix on the RPN learning curve and to model the learning curve. The records of the first 100 RPNs performed at our institution by a single surgeon (B.C.) were analysed (June 2010-December 2013). Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical classification (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R(2) values were calculated for each group. Of 100 patients (F:28, M:72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7 respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case, 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. No single well-fitting model can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
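
    The curve-of-best-fit and R(2) computation mentioned above can be sketched generically. The paper does not state its functional form; a power-law learning curve fitted on log-log axes is a common choice, so the snippet below is an illustration under that assumption, with invented operative-time data.

```python
import numpy as np

def fit_power_law(case_no, op_time):
    """Fit op_time = a * case_no**b by least squares on log-log axes
    and return (a, b, r_squared) for the back-transformed curve."""
    slope, intercept = np.polyfit(np.log(case_no), np.log(op_time), 1)
    a, b = np.exp(intercept), slope
    pred = a * case_no ** b
    ss_res = np.sum((op_time - pred) ** 2)
    ss_tot = np.sum((op_time - op_time.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Illustrative data: 20 consecutive cases whose operative time declines
cases = np.arange(1, 21, dtype=float)
times = 200.0 * cases ** -0.1          # minutes, invented
a, b, r2 = fit_power_law(cases, times)
```

Fitting each PADUA group separately and comparing the exponents b is one way to see whether case complexity shifts the learning curve.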

  6. Towards Time-Scaling of Mixing for the Campanian Ignimbrite: Systemic Variation in Sr-Isotopic Composition from Mixing Experiments

    NASA Astrophysics Data System (ADS)

    de Campos, Cristina; Civetta, Lucia; Perugini, Diego; Dingwell, Donald B.

    2010-05-01

    Eruptions in the Campi Flegrei caldera, the most dangerous volcanic setting in Europe, are thought to be triggered by short-term pre-eruptive mixing of trachytic to trachydacitic resident magma and new basaltic, trachyandesitic (=shoshonitic) magma in shallow magma chambers (e.g. Arienzo et al., 2008, Bull. Volcanol.). Previous geochemical and volcanological data on the Campanian Ignimbrite (>150 km3, 39 ka) in Campi Flegrei point towards a layered reservoir, which evolved from the replenishment of the magma chamber with shoshonitic magma and short-term pre-eruptive mixing between a trachytic and a phonolitic trachytic magma. To experimentally study the mobility and homogenization of Rb-Sr isotopes in this system, we performed mixing experiments using natural phonolitic trachytic (end-member A - S. Nicola type) and trachytic (end-member B - Mondragone-type) samples, representing the two end-members involved in the origin of the Campanian Ignimbrite. Resultant glasses from a time series, ranging from 1 hour up to 1 week under constant flow velocity (0.5 rotations per minute; after De Campos et al., 2008, Chem. Geol.), have been analysed with respect to Rb and Sr systematics. Our results reveal a progressive homogenization of the contrasting Sr isotopes towards a hybrid value. With increasing experimental duration a clear decrease in the standard deviation of isotopic ratios has been observed, reflecting progressive isotopic homogenization. Our results also support the effectiveness of mixing in the Campi Flegrei reservoirs above the liquidus, at high temperature, before the onset of fractional crystallization.
Since different eruptive events from Campi Flegrei can be well characterized by means of isotopic composition, the main goal for the present study will be to use experimental data and numerical modeling in order to estimate time scales of mixing associated with the eruption of the Campanian Ignimbrite, and then compare them to the several other volcanic events in Campi Flegrei. The results to be presented will be corrected according to the recently developed numerical modeling by Perugini et al. (in print, Bull. Volcanol.).

  7. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models could be as large as 0.3 PSU and 0.4 °C, respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  8. Prediction of stock markets by the evolutionary mix-game model

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

    This paper presents the efforts of using the evolutionary mix-game model, which is a modified form of the agent-based mix-game model, to predict financial time series. Here, we improve the original mix-game model in three ways by adding strategy-evolution abilities to the agents, and then apply the new model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can improve the accuracy of prediction greatly when proper parameters are chosen.

  9. Size-dependence of tree growth response to drought for Norway spruce and European beech individuals in monospecific and mixed-species stands.

    PubMed

    Ding, H; Pretzsch, H; Schütze, G; Rötzer, T

    2017-09-01

    Climate anomalies have resulted in changing forest productivity and increasing tree mortality in Central and Southern Europe, leading to more severe and frequent ecological disturbances to forest stands. This study analysed the size-dependence of growth response to drought years based on 384 tree individuals of Norway spruce [Picea abies (L.) Karst.] and European beech [Fagus sylvatica (L.)] in Bavaria, Germany. Samples were collected in both monospecific and mixed-species stands. To quantify the growth response to drought stress, indices for basal area increment, resistance, recovery and resilience were calculated from tree-ring measurements of increment cores. Linear mixed models were developed to estimate the influence of drought periods. The results show that ageing-related growth decline is significant in drought years. Drought resilience and resistance decrease significantly with tree size among Norway spruce individuals. Evidence is also provided for robustness in the resilience capacity of European beech during drought stress. Spruce benefits from species mixing with deciduous beech, over-yielding spruce grown in pure stands. The importance of size-dependence within tree growth studies during disturbances is highlighted and should be considered in future studies of disturbances, including drought. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
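
    The resistance, recovery and resilience indices mentioned above are conventionally computed from mean basal area increments in pre-drought, drought and post-drought windows. The sketch below follows the widely used Lloret-style definitions of those ratios; it is an illustration of the convention, not the paper's exact code, and the increment values are invented.

```python
def drought_indices(pre, dr, post):
    """Lloret-style drought response indices from basal area increments:
    pre/post are mean increments in the pre-/post-drought windows,
    dr is the increment during the drought year(s)."""
    resistance = dr / pre     # growth maintained during drought
    recovery = post / dr      # rebound relative to drought growth
    resilience = post / pre   # recovery of the pre-drought growth level
    return resistance, recovery, resilience

# Illustrative increments (cm^2/yr): growth halves in the drought year
rt, rc, rs = drought_indices(pre=10.0, dr=5.0, post=8.0)
```

Computing these ratios per tree, then modelling them against tree size with a linear mixed model (trees nested in plots), mirrors the analysis structure the abstract describes.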

  10. Synergistic and Antagonistic Effects of Salinity and pH on Germination in Switchgrass (Panicum virgatum L.)

    PubMed Central

    Liu, Yuan; Wang, Quanzhen; Zhang, Yunwei; Cui, Jian; Chen, Guo; Xie, Bao; Wu, Chunhui; Liu, Haitao

    2014-01-01

    The effects of salt-alkaline mixed stress on switchgrass were investigated by evaluating seed germination and the proline, malondialdehyde (MDA) and soluble sugar contents in three switchgrass (Panicum virgatum L.) cultivars in order to identify which can be successfully produced on marginal lands affected by salt-alkaline mixed stress. The experimental conditions consisted of four levels of salinity (10, 60, 110 and 160 mM) and four pH levels (7.1, 8.3, 9.5 and 10.7). The effects of salt-alkaline mixed stress with equivalent coupling of the salinity and pH level on the switchgrass were explored via model analyses. Switchgrass was capable of germinating and surviving well in all treatments under low-alkaline pH (pH≤8.3), regardless of the salinity. However, seed germination and seedling growth were sharply reduced at higher pH values in conjunction with salinity. The salinity and pH had synergistic effects on the germination percentage, germination index, plumular length and the soluble sugar and proline contents in switchgrass. However, these two factors exhibited antagonistic effects on the radicular length of switchgrass. The combined effects of salinity and pH and the interactions between them should be considered when evaluating the strength of salt-alkaline mixed stress. PMID:24454834

  11. Mixed reality framework for collective motion patterns of swarms with delay coupling

    NASA Astrophysics Data System (ADS)

    Szwaykowska, Klementyna; Schwartz, Ira

    The formation of coherent patterns in swarms of interacting self-propelled autonomous agents is an important subject for many applications within the field of distributed robotic systems. However, there are significant logistical challenges associated with testing fully distributed systems in real-world settings. In this paper, we provide a rigorous theoretical justification for the use of mixed-reality experiments as a stepping stone to fully physical testing of distributed robotic systems. We also model and experimentally realize a mixed-reality large-scale swarm of delay-coupled agents. Our analyses, assuming agents communicating over an Erdos-Renyi network, demonstrate the existence of stable coherent patterns that can be achieved only with delay coupling and that are robust to decreasing network connectivity and heterogeneity in agent dynamics. We show how the bifurcation structure for the emergence of different patterns changes with heterogeneity in agent acceleration capabilities and limited connectivity in the network as a function of coupling strength and delay. Our results are verified through simulation as well as preliminary experimental results of delay-induced pattern formation in a mixed-reality swarm. K.S. was a National Research Council postdoctoral fellow. I.B.S. was supported by U.S. Naval Research Laboratory funding (N0001414WX00023) and the Office of Naval Research (N0001414WX20610).

  12. Walking through the statistical black boxes of plant breeding.

    PubMed

    Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin

    2016-10-01

    The main statistical procedures in plant breeding are based on Gaussian process and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
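
    Aim (3) above, solving mixed models, usually centres on Henderson's mixed model equations. The numpy sketch below solves a random-intercept example with the variance ratio λ = σ²e/σ²u assumed known; the design matrices and effect sizes are invented for illustration.

```python
import numpy as np

def solve_mme(X, Z, y, lam):
    """Solve Henderson's mixed model equations for fixed effects b and
    random effects u, given the variance ratio lam = sigma_e^2 / sigma_u^2:
        [X'X   X'Z          ] [b]   [X'y]
        [Z'X   Z'Z + lam * I] [u] = [Z'y]"""
    p, q = X.shape[1], Z.shape[1]
    lhs = np.block([[X.T @ X, X.T @ Z],
                    [Z.T @ X, Z.T @ Z + lam * np.eye(q)]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:p], sol[p:]

rng = np.random.default_rng(7)
n, q = 60, 5
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
Z = np.zeros((n, q))
Z[np.arange(n), np.arange(n) % q] = 1.0                # balanced group design
u_true = rng.normal(0.0, 1.0, q)
y = X @ np.array([2.0, 0.5]) + Z @ u_true + rng.normal(0.0, 0.5, n)
b_hat, u_hat = solve_mme(X, Z, y, lam=0.25)            # lam = 0.5**2 / 1.0**2
```

Replacing lam * I with lam * A⁻¹, where A is a pedigree or genomic relationship matrix, is the step that extends this solve to pedigree-based and genomic prediction.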

  13. An Estimation of a Nonlinear Dynamic Process Using Latent Class Extended Mixed Models: Affect Profiles After Terrorist Attacks.

    PubMed

    Burro, Roberto; Raccanello, Daniela; Pasini, Margherita; Brondino, Margherita

    2018-01-01

    Conceptualizing affect as a complex nonlinear dynamic process, we used latent class extended mixed models (LCMM) to understand whether there were unobserved groupings in a dataset including longitudinal measures. Our aim was to identify affect profiles over time in people vicariously exposed to terrorism, studying their relations with personality traits. The participants were 193 university students who completed online measures of affect during the seven days following two terrorist attacks (Paris, November 13, 2015; Brussels, March 22, 2016); Big Five personality traits; and antecedents of affect. After selecting students whose negative affect was influenced by the two attacks (33%), we analysed the data with the LCMM package of R. We identified two affect profiles, characterized by different trends over time: The first profile comprised students with lower positive affect and higher negative affect compared to the second profile. Concerning personality traits, conscientiousness was lower for the first profile compared to the second profile, and vice versa for neuroticism. Findings are discussed for both their theoretical and applied relevance.

  14. The Martian atmospheric planetary boundary layer stability, fluxes, spectra, and similarity

    NASA Technical Reports Server (NTRS)

    Tillman, James E.

    1994-01-01

    This is the first analysis of the high-frequency data from the Viking lander: spectra of wind in the Martian atmospheric surface layer, along with the diurnal variation of the height of the mixed surface layer, are calculated for the first time for Mars. Heat and momentum fluxes, stability, and z(sub O) are estimated for early spring from a surface temperature model and from Viking Lander 2 temperatures and winds at 44 deg N, using Monin-Obukhov similarity theory. The afternoon maximum height of the mixed layer for these seasons and conditions is estimated to lie between 3.6 and 9.2 km. Estimation of this height is of primary importance to all models of the boundary layer and Martian General Circulation Models (GCMs). Model spectra for two measuring heights and three surface roughnesses are calculated using the depth of the mixed layer and the surface layer parameters; flow distortion by the lander is also taken into account. These experiments indicate that z(sub O) probably lies between 1.0 and 3.0 cm, and most likely is closer to 1.0 cm. The spectra are adjusted to simulate aliasing and high-frequency rolloff, the latter caused both by the sensor response and the large Kolmogorov length on Mars. Since the spectral models depend on the surface parameters, including the estimated surface temperature, their agreement with the calculated spectra indicates that the surface layer estimates are self-consistent. This agreement is especially noteworthy in that the inertial subrange is virtually absent in the Martian atmosphere at this height, due to the large Kolmogorov length scale. These analyses extend the range of applicability of terrestrial results and demonstrate that it is possible to estimate the effects of severe aliasing of wind measurements and to produce models which agree well with the measured spectra. The results show that similarity theory developed for Earth applies to Mars, and that the spectral models are universal.
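
    The flux estimates above rest on Monin-Obukhov similarity theory. A minimal neutral-limit sketch is below; the wind speed, measurement height, surface temperature and heat flux are illustrative numbers, not the paper's values, and only Mars' surface gravity is taken as a known constant.

```python
import numpy as np

KARMAN, G_MARS = 0.4, 3.71  # von Karman constant; Mars surface gravity (m/s^2)

def friction_velocity(u, z, z0):
    """Neutral-stability log-law estimate of u* from wind speed u (m/s)
    measured at height z (m) over roughness length z0 (m)."""
    return KARMAN * u / np.log(z / z0)

def obukhov_length(u_star, temp, kinematic_heat_flux):
    """Obukhov length L = -u*^3 T / (k g <w'T'>); L < 0 means unstable
    (convective) conditions."""
    return -u_star ** 3 * temp / (KARMAN * G_MARS * kinematic_heat_flux)

# Illustrative afternoon case: 7 m/s wind at 1.6 m, z0 = 1 cm, 210 K air,
# upward kinematic heat flux 0.05 K m/s (all numbers invented)
u_star = friction_velocity(7.0, 1.6, 0.01)
L = obukhov_length(u_star, 210.0, 0.05)
```

In full similarity theory, stability correction functions of z/L would be iterated with these estimates until u*, the heat flux and L are mutually consistent; the neutral forms above are only the starting point of that iteration.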

  15. Hydrothermal gases in a shallow aquifer at Mt. Amiata, Italy: insights from stable isotopes and geochemical modelling.

    PubMed

    Pierotti, Lisa; Cortecci, Gianni; Gherardi, Fabrizio

    2016-01-01

    We investigate the interaction between hydrothermal gases and groundwater in a major aquifer exploited for potable supply in the geothermal-volcanic area of Mt. Amiata, Central Italy. Two springs and two wells located on different sides of the volcanic edifice have been repeatedly sampled over the last 11 years. More than 160 chemical analyses and 10 isotopic analyses of total dissolved inorganic carbon (δ(13)C-TDIC = -15.9 to -7.8‰ vs. V-PDB) and sulphate (δ(34)S-SO4 = -6.9 to 5.1‰ vs. V-CDT) have been processed with geochemical modelling techniques. Best-fitting conditions between analytical data and model outputs have been achieved by numerical optimization, allowing for a quantitative description of the gas-water-rock interactions occurring in this aquifer. Numerical calculations support a conceptual model in which water-rock interactions occur in the volcanic aquifer after inflow of deep-seated gases (CO2(g) and H2S(g)) and total conversion of H2S(g) to SO4, in the absence of mixing with geothermal waters from reservoirs currently exploited for electricity generation.

  16. Diet of bottlenose dolphins (Tursiops truncatus) from the Gulf of Cadiz: Insights from stomach content and stable isotope analyses.

    PubMed

    Giménez, Joan; Marçalo, Ana; Ramírez, Francisco; Verborgh, Philippe; Gauffier, Pauline; Esteban, Ruth; Nicolau, Lídia; González-Ortegón, Enrique; Baldó, Francisco; Vilas, César; Vingada, José; G Forero, Manuela; de Stephanis, Renaud

    2017-01-01

    The ecological role of species can vary among populations depending on local and regional differences in diet. This is particularly true for top predators such as the bottlenose dolphin (Tursiops truncatus), which exhibits a highly varied diet throughout its distribution range. Local dietary assessments are therefore critical to fully understand the role of this species within marine ecosystems, as well as its interaction with important ecosystem services such as fisheries. Here, we combined stomach content analyses (SCA) and stable isotope analyses (SIA) to describe the diet of bottlenose dolphins in the Gulf of Cadiz (North Atlantic Ocean). Prey items identified using SCA included European conger (Conger conger) and European hake (Merluccius merluccius) as the most important ingested prey. However, a mass-balance isotopic mixing model (MixSIAR) using δ13C and δ15N indicated that the assimilated diet consisted mainly of Sparidae species (e.g. the seabreams Diplodus annularis and D. bellottii, rubberlip grunt, Plectorhinchus mediterraneus, and common pandora, Pagellus erythrinus) and a mixture of other species including European hake, mackerels (Scomber colias, S. japonicus and S. scombrus), European conger, red bandfish (Cepola macrophthalma) and European pilchard (Sardina pilchardus). These contrasting results highlight differences in the temporal and taxonomic resolution of each approach, but also point to potential differences between the ingested (SCA) and assimilated (SIA) diets. Both approaches provide different insights, e.g. determination of consumed fish biomass for the management of fish stocks (SCA) or identification of the prey species important to the consumer's assimilated diet (SIA).
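    MixSIAR estimates source proportions in a Bayesian framework, but the underlying idea is linear isotope mass balance. A stripped-down sketch for the simplest case, two sources and one tracer (the δ13C values below are hypothetical illustrations, not the study's data):

```python
def two_source_mix(d_mix, d_src_a, d_src_b):
    """Fraction of source A in a two-source, single-isotope linear mixing model:
    d_mix = f * d_src_a + (1 - f) * d_src_b, solved for f."""
    return (d_mix - d_src_b) / (d_src_a - d_src_b)

# Hypothetical delta-13C values (permil): a consumer at -16 with sources at
# -15 (A) and -18 (B) implies source A supplies two thirds of assimilated carbon.
f_a = two_source_mix(-16.0, -15.0, -18.0)
```

    With n tracers, up to n+1 sources can be resolved exactly; MixSIAR additionally propagates trophic discrimination factors and source variability, which this sketch omits.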

  17. Payment schemes and cost efficiency: evidence from Swiss public hospitals.

    PubMed

    Meyer, Stefan

    2015-03-01

    This paper aims at analysing the impact of prospective payment schemes on the cost efficiency of acute care hospitals in Switzerland. We study a panel of 121 public hospitals subject to one of four payment schemes. While several hospitals are still reimbursed on a per diem basis for the treatment of patients, most face flat per-case rates or mixed schemes, which combine both elements of reimbursement. Thus, unlike previous studies, we are able to simultaneously analyse and isolate the cost-efficiency effects of different payment schemes. By means of stochastic frontier analysis, we first estimate a hospital cost frontier. Using the two-stage approach proposed by Battese and Coelli (Empir Econ 20:325-332, 1995), we then analyse the impact of these payment schemes on the cost efficiency of hospitals. Controlling for hospital characteristics, local market conditions in the 26 Swiss states (cantons), and a time trend, we show that, compared to per diem reimbursement, hospitals reimbursed by flat payment schemes perform better in terms of cost efficiency. Our results suggest that mixed schemes create incentives for cost containment as well, although to a lesser extent. In addition, our findings indicate that cost-efficient hospitals are primarily located in cantons with competitive markets, as measured by the Herfindahl-Hirschman index for inpatient care. Furthermore, our econometric model shows that we obtain biased estimates from frontier analysis if we do not account for heteroscedasticity in the inefficiency term.

  18. Computational and Experimental Flow Field Analyses of Separate Flow Chevron Nozzles and Pylon Interaction

    NASA Technical Reports Server (NTRS)

    Massey, Steven J.; Thomas, Russell H.; AbdolHamid, Khaled S.; Elmiligui, Alaa A.

    2003-01-01

    Computational and experimental flow field analyses of separate flow chevron nozzles are presented. The goal of this study is to identify the important flow physics and modeling issues required to provide highly accurate flow field data, which will later serve as input to the Jet3D acoustic prediction code. Four configurations are considered: a baseline round nozzle with and without a pylon, and a chevron core nozzle with and without a pylon. The flow is simulated by solving the asymptotically steady, compressible, Reynolds-averaged Navier-Stokes equations using an implicit, upwind, flux-difference splitting finite volume scheme and a standard two-equation kappa-epsilon turbulence model with a linear stress representation and the addition of an eddy viscosity dependence on the total temperature gradient normalized by the local turbulence length scale. The current CFD results are in excellent agreement with Jet Noise Lab data and show great improvement over previous computations, which did not compensate for enhanced mixing due to high temperature gradients.

  19. How are the Concepts and Theories of Acid Base Reactions Presented? Chemistry in Textbooks and as Presented by Teachers

    NASA Astrophysics Data System (ADS)

    Furió-Más, Carlos; Calatayud, María Luisa; Guisasola, Jenaro; Furió-Gómez, Cristina

    2005-09-01

    This paper investigates the views of science and scientific activity that can be found in chemistry textbooks and heard from teachers when acid-base reactions are introduced to grade 12 and university chemistry students. First, the main macroscopic and microscopic conceptual models are developed. Second, we attempt to show how the views of science present in textbooks and held by chemistry teachers contribute to an impoverished image of chemistry. A varied design was elaborated to analyse epistemological deficiencies in the teaching of acid-base reactions: textbooks were analysed and teachers were interviewed. The results show that the teaching process does not emphasize the macroscopic presentation of acids and bases. The macroscopic and microscopic conceptual models involved in explaining acid-base processes are mixed together in textbooks and by teachers. Furthermore, the non-problematized introduction of concepts such as hydrolysis, and a linear, cumulative view of acid-base theories (Arrhenius and Brønsted), were detected.

  20. Diurnal rhythms in peripheral blood immune cell numbers of domestic pigs.

    PubMed

    Engert, Larissa C; Weiler, Ulrike; Pfaffinger, Birgit; Stefanski, Volker; Schmucker, Sonja S

    2018-02-01

    Diurnal rhythms within the immune system are considered important for immune competence. Until now, they have mostly been studied in humans and rodents. However, as the domestic pig is regarded as a suitable animal model and is of importance in agriculture, this study aimed to characterize diurnal rhythmicity in porcine circulating leukocyte numbers. Eighteen pigs were studied over periods of up to 50 h. Cosinor analyses revealed diurnal rhythms in cell numbers of most investigated immune cell populations in blood. Whereas T cell, dendritic cell, and eosinophil counts peaked during nighttime, NK cell and neutrophil counts peaked during daytime. The relative amplitudes of cell numbers in blood differed between T helper cell subtypes at distinct differentiation states. Mixed model analyses revealed that plasma cortisol concentration was negatively associated with cell numbers of most leukocyte types, except NK cells and neutrophils. The observed rhythms largely resemble those found in humans and rodents. Copyright © 2017 Elsevier Ltd. All rights reserved.
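    Cosinor analysis fits a sinusoid of known period (here 24 h) by least squares. For samples evenly spaced over whole periods the regressors are orthogonal, so the fit reduces to simple projections; a minimal sketch of that special case (not the mixed-model cosinor variant a full study would use):

```python
import math

def cosinor_fit(times_h, values, period_h=24.0):
    """Single-component cosinor fit y(t) ~ M + A*cos(w*t + phi), for samples
    evenly spaced over whole periods. There the least-squares solution is:
      M (MESOR) = mean(y)
      beta      = (2/n) * sum(y * cos(w*t))   # = A*cos(phi)
      gamma     = (2/n) * sum(y * sin(w*t))   # = -A*sin(phi)
    Returns (MESOR, amplitude, acrophase in radians).
    """
    n = len(values)
    w = 2.0 * math.pi / period_h
    mesor = sum(values) / n
    beta = 2.0 / n * sum(y * math.cos(w * t) for t, y in zip(times_h, values))
    gamma = 2.0 / n * sum(y * math.sin(w * t) for t, y in zip(times_h, values))
    amplitude = math.hypot(beta, gamma)
    acrophase = math.atan2(-gamma, beta)  # peak occurs at t = -acrophase / w
    return mesor, amplitude, acrophase
```

    For unevenly spaced or gapped sampling (as in most animal studies), the same model is instead fitted by general least squares or within a mixed model.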

  1. Investigation of the Asphalt Pavement Analyzer (APA) testing program in Nebraska.

    DOT National Transportation Integrated Search

    2008-03-01

    The asphalt pavement analyzer (APA) has been widely used to evaluate hot-mix asphalt (HMA) rutting potential in mix design and quality control-quality assurance (QC-QA) applications, because the APA testing and its data analyses are relatively si...

  2. Application of Mixed-Methods Approaches to Higher Education and Intersectional Analyses

    ERIC Educational Resources Information Center

    Griffin, Kimberly A.; Museus, Samuel D.

    2011-01-01

    In this article, the authors discuss the utility of combining quantitative and qualitative methods in conducting intersectional analyses. First, they discuss some of the paradigmatic underpinnings of qualitative and quantitative research, and how these methods can be used in intersectional analyses. They then consider how paradigmatic pragmatism…

  3. Competition for light and light use efficiency for Acacia mangium and Eucalyptus grandis trees in mono-specific and mixed-species plantations in Brazil

    NASA Astrophysics Data System (ADS)

    Le Maire, G.; Nouvellon, Y.; Gonçalves, J.; Bouillet, J.; Laclau, J.

    2010-12-01

    Mixed plantations with N-fixing species might be an attractive option for limiting the use of fertilizer in highly productive Eucalyptus plantations. A randomized block design was set up in southern Brazil, including a replacement series and an additive series design, as well as a nitrogen fertilization treatment, and conducted during a full 6-year rotation. The gradient of competition between Eucalyptus and Acacia in this design resulted in very different growth conditions for Acacia, from totally dominated to dominant canopies. We used the MAESTRA model to estimate the amount of absorbed photosynthetically active radiation (APAR) at tree level. This model requires a description of the scene and distinct structural variables for the two species, and their evolution with time. Competition for light is analysed by comparing the inter-specific values of APAR during a period of 2 years at the end of the rotation. APAR is further compared to the measured increment in stem wood biomass of each tree, and their ratio is an estimate of the light use efficiency (LUE) for stemwood production at tree scale. The variability of these LUE values is analysed with respect to species, tree size, and competition level at plot scale. Stemwood production was 3400, 3900 and 2400 gDM/m2, while APAR was 1640, 2280 and 2900 MJ/y, for the pure Eucalyptus, pure Acacia and 50/50 mixed plantations, respectively, with average LAI of 3.7, 3.3 and 4.5. Individual LUE for stemwood was estimated at an average of 1.72 and 1.41 gDM/MJ/tree for Eucalyptus and Acacia, respectively, and at 0.92 and 0.40 gDM/MJ/tree when they were planted in the mixed 50/50 plantation. LUE was highly dependent on tree size for both species. At plot scale, LUE for stemwood was 2.1 and 1.75 gDM/MJ for Eucalyptus and Acacia, respectively, and 0.85 for the mixed 50/50 plantation.
    These results suggest that the mixed 50/50 plantation, which absorbed a larger amount of light, produced less stemwood because half of the canopy (the acacias) was dominated, while the other half did not benefit much in terms of tree growth relative to the light it absorbed. The potential benefit of the nitrogen-fixing species is not visible in the 50/50 mixture. More attention should be paid to introducing acacias in an additive series with the same density of eucalyptus trees as in the monospecific stands.
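    The plot-scale LUE figures are simply the ratio of stemwood production to APAR; a quick check with the numbers quoted in the abstract reproduces the reported values of roughly 2.1, 1.75 and 0.85 gDM/MJ (small differences reflect rounding in the original):

```python
def light_use_efficiency(stemwood_gdm_m2, apar_mj):
    """Plot-scale light use efficiency for stemwood: production per unit APAR."""
    return stemwood_gdm_m2 / apar_mj

# (stemwood production in gDM/m2, APAR in MJ) pairs from the abstract
plots = {
    "pure Eucalyptus": (3400, 1640),
    "pure Acacia": (3900, 2280),
    "50/50 mixture": (2400, 2900),
}
lue = {name: round(light_use_efficiency(p, a), 2) for name, (p, a) in plots.items()}
# lue -> {'pure Eucalyptus': 2.07, 'pure Acacia': 1.71, '50/50 mixture': 0.83}
```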

  4. The early identification of risk factors on the pathway to school dropout in the SIODO study: a sequential mixed-methods study.

    PubMed

    Theunissen, Marie-José; Griensven van, Ilse; Verdonk, Petra; Feron, Frans; Bosma, Hans

    2012-11-27

    School dropout is a persistent problem with major socioeconomic consequences. Although poor health probably contributes to the pathways leading to school dropout, and health is likely negatively affected by dropout, these issues are relatively absent from the public health agenda. This emphasises the importance of integrative research aimed at identifying children at risk of school dropout at an early stage, discovering how socioeconomic status and gender affect the health-related pathways that lead to dropout, and developing a prevention tool that can be used in public health services for youth. The SIODO study is a sequential mixed-methods study. A case-control study will be conducted among 18- to 24-year-olds in the south of the Netherlands (n = 580). Data are currently being collected from compulsory education departments at municipalities (dropout data) and regional public health services (developmental data from birth onwards), and an additional questionnaire has been sent to participants (e.g. personality data). Advanced analyses, including cluster and factor analyses, will be used to identify children at risk at an early stage. Building on the quantitative data, we have planned individual interviews with participants and focus groups with important stakeholders such as parents, teachers and public health professionals. A thematic content analysis will be used to analyse the qualitative data. The SIODO study will use a life-course perspective, the ICF-CY model to group the determinants, and a mixed-methods design. In this respect, the SIODO study is innovative because it both broadens and deepens the study of health-related determinants of school dropout. It examines how these determinants contribute to socioeconomic and gender differences in health and contributes to the development of a tool that can be used in public health practice to tackle the problem of school dropout at its roots.

  5. Inclusion of surface gravity wave effects in vertical mixing parameterizations with application to Chesapeake Bay, USA

    NASA Astrophysics Data System (ADS)

    Fisher, A. W.; Sanford, L. P.; Scully, M. E.; Suttles, S. E.

    2016-02-01

    Enhancement of wind-driven mixing by Langmuir turbulence (LT) may have important implications for exchanges of mass and momentum in estuarine and coastal waters, but the transient nature of LT and observational constraints make quantifying its impact on vertical exchange difficult. Recent studies have shown that wind events can be of first-order importance to circulation and mixing in estuaries, prompting this investigation into the ability of second-moment turbulence closure schemes to model wind-wave-enhanced mixing in an estuarine environment. An instrumented turbulence tower was deployed in the middle reaches of Chesapeake Bay in 2013 and collected observations of coherent structures, consistent with LT, that occurred under regions of breaking waves. Wave and turbulence measurements collected from a vertical array of Acoustic Doppler Velocimeters (ADVs) provided direct estimates of TKE, dissipation, turbulent length scale, and the surface wave field. Direct measurements of air-sea momentum and sensible heat fluxes were collected by a co-located ultrasonic anemometer deployed 3 m above the water surface. Analyses of the data indicate that the combined presence of breaking waves and LT significantly influences air-sea momentum transfer, enhancing vertical mixing and acting to align stress in the surface mixed layer with the direction of Lagrangian shear. Here these observations are compared to the predictions of commonly used second-moment turbulence closure schemes, modified to account for the influence of wave breaking and LT. LT parameterizations are evaluated under neutrally stratified conditions and buoyancy damping parameterizations under stably stratified conditions. We compare predicted turbulent quantities to observations for a variety of wind, wave, and stratification conditions. The effects of fetch-limited wave growth, surface buoyancy flux, and tidal distortion on wave mixing parameterizations will also be discussed.

  6. Lead Isotope Compositions of Acid Residues from Olivine-Phyric Shergottite Tissint: Implications for Heterogeneous Shergottite Source Reservoirs

    NASA Technical Reports Server (NTRS)

    Moriwaki, R.; Usui, T.; Yokoyama, T.; Simon, J. I.; Jones, J. H.

    2015-01-01

    Geochemical studies of shergottites suggest that their parental magmas reflect mixtures between at least two distinct geochemical source reservoirs, producing correlations between radiogenic isotope compositions and trace element abundances. These correlations have been interpreted as indicating the presence of a reduced, incompatible-element-depleted reservoir and an oxidized, incompatible-element-enriched reservoir. The former is clearly a depleted mantle source, but there is ongoing debate regarding the origin of the enriched reservoir. Two contrasting models have been proposed regarding the location and mixing process of the two geochemical source reservoirs: (1) assimilation of oxidized crust by mantle-derived, reduced magmas, or (2) mixing of two distinct mantle reservoirs during melting. The former requires the ancient Martian crust to be the enriched source (crustal assimilation), whereas the latter requires isolation of a long-lived enriched mantle domain that probably originated from residual melts formed during solidification of a magma ocean (heterogeneous mantle model). This study conducts Pb isotope and trace element concentration analyses of sequential acid-leaching fractions (leachates and the final residues) from the geochemically depleted olivine-phyric shergottite Tissint. The results suggest that the Tissint magma was not isotopically uniform and sampled at least two geochemical source reservoirs, implying that either crustal assimilation or magma mixing played a role in the Tissint petrogenesis.

  7. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed combining well-established response surface methodology with time series modeling to facilitate formulation development with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Tablet gelation and drug release were evaluated as functions of two factors: a formulation factor x₁ (the amount of magnesium stearate) and a processing factor x₂ (mixing time). Moreover, different batch sizes (100 and 500 tablet batches) were evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The optimal settings for gelation were 0.46 g of magnesium stearate and 2.76 min of mixing for a 100 tablet batch, and 1.54 g and 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g and 7.99 min for a 100 tablet batch and 1.54 g and 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could thus be chosen according to the desired hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing the significant factors and hence for obtaining optimal formulations through a systematic and reliable experimental design process.
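    The abstract does not give the study's full 13-run design matrix, but the basic {q, m} simplex-lattice mixture design it builds on is easy to enumerate: all q-component blends whose proportions are multiples of 1/m and sum to 1. A generic sketch (the actual study design additionally crosses mixture points with process-factor levels):

```python
from itertools import product

def simplex_lattice(q, m):
    """All points of a {q, m} simplex-lattice mixture design: q component
    proportions, each a multiple of 1/m, summing to exactly 1."""
    return [tuple(c / m for c in combo)
            for combo in product(range(m + 1), repeat=q)
            if sum(combo) == m]

# A {3, 2} lattice has 6 points: the 3 pure blends and the 3 binary 50/50 blends.
points = simplex_lattice(3, 2)
```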

  8. Efficiency of RNA extraction from selected bacteria in the context of biogas production and metatranscriptomics.

    PubMed

    Stark, Lucy; Giersch, Tina; Wünschiers, Röbbe

    2014-10-01

    Understanding the microbial population in anaerobic digestion is an essential task for increasing efficient substrate use and process stability. The metabolic state of a fermenting system, represented e.g. by the transcriptome, can help to find markers for monitoring industrial biogas production to prevent failures, or to model the whole process. Advances in next-generation sequencing make transcriptomes accessible for large-scale analyses. To analyze the metatranscriptome of a mixed-species sample, isolation of high-quality RNA is the first step. However, different extraction methods may yield different efficiencies in different species. Especially in mixed-species environmental samples, unbiased isolation of transcripts is important for meaningful conclusions. We applied five different RNA-extraction protocols to nine taxonomically diverse bacterial species. The chosen methods are based on various lysis and extraction principles. We found that the extraction efficiency of the different methods depends strongly on the target organism: RNA isolation from gram-positive bacteria was characterized by low yields, whereas higher concentrations could be obtained from gram-negative species. Transferring our results to mixed-species investigations, such as metatranscriptomics of biofilms or biogas plants, leads to the conclusion that particular microorganisms might be over- or underrepresented depending on the method applied. Special care must therefore be taken when using such metatranscriptomics data for, e.g., process modeling. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    PubMed

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

    The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its application two linear models were developed to predict the numbers of coliphages reliably from the PFU counts determined by the ISO method after only 3 hours of incubation. For counts between 4 and 26 PFU at 3 hours, the linear fit was (1.48 × Counts 3 h + 1.97); for counts >26 PFU, it was (1.18 × Counts 3 h + 2.95). If fewer than 4 PFU are detected after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
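    The two reported linear fits can be combined into a single piecewise predictor; a sketch assuming the 4-26 PFU range is inclusive at both ends (the abstract does not state how the boundary is handled):

```python
def predict_final_pfu(count_3h):
    """Predict the standard (overnight) somatic coliphage PFU count from the
    3 h plaque count, using the two linear fits reported in the study."""
    if count_3h < 4:
        # Too few plaques for a reliable early read: incubate (18 +/- 3) h instead.
        return None
    if count_3h <= 26:
        return 1.48 * count_3h + 1.97
    return 1.18 * count_3h + 2.95

# e.g. 10 plaques at 3 h -> a predicted final count of about 16.8 PFU
```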

  10. A review of 241 subjects who were patch tested twice: could fragrance mix I cause active sensitization?

    PubMed

    White, J M L; McFadden, J P; White, I R

    2008-03-01

    Active patch test sensitization is an uncommon phenomenon which may have undesirable consequences for those undergoing this gold-standard investigation for contact allergy. We performed a retrospective analysis of the results of 241 subjects who were patch tested twice at a single centre evaluating approximately 1500 subjects per year. Positivity to 11 common allergens in the recommended (European) Baseline Series of contact allergens was analysed: nickel sulphate; Myroxylon pereirae; fragrance mix I; para-phenylenediamine; colophonium; epoxy resin; neomycin; quaternium-15; thiuram mix; sesquiterpene lactone mix; and para-tert-butylphenol resin. Only fragrance mix I gave a statistically significant increased rate of positivity on the second reading compared with the first (P=0.011). This trend was maintained when separately analysing a subgroup of 42 subjects who had been repeat patch tested within 1 year; this analysis was done to minimize the potential confounding factor of increased usage of fragrances over a wide interval between the two tests. To reduce the confounding effect of age on our data, we calculated expected frequencies of positivity to fragrance mix I based on previously published data from our centre. This showed a marked excess of observed cases over predicted ones, particularly in women in the age range 40-60 years. We suspect that active sensitization to fragrance mix I may occur. A similar published analysis from another large group using standard methodology supports our data.

  11. Epidemiology of antibiotic-resistant wound infections from six countries in Africa

    PubMed Central

    Bebell, Lisa M; Meney, Carron; Valeri, Linda

    2017-01-01

    Introduction Little is known about the antimicrobial susceptibility of common bacteria responsible for wound infections from many countries in sub-Saharan Africa. Methods We performed a retrospective review of microbial isolates collected based on clinical suspicion of wound infection between 2004 and 2016 from Mercy Ships, a non-governmental organisation operating a single mobile surgical unit in Benin, Congo, Liberia, Madagascar, Sierra Leone and Togo. Antimicrobial resistant organisms of interest were defined as methicillin-resistant Staphylococcus aureus (MRSA) or Enterobacteriaceae resistant to third-generation cephalosporins. Generalised mixed-effects models accounting for repeated isolates in a patient, potential clustering by case mix for each field service, age, gender and country were used to test the hypothesis that rates of antimicrobial resistance differed between countries. Results 3145 isolates from repeated field services in six countries were reviewed. In univariate analyses, the highest proportion of MRSA was found in Benin (34.6%) and Congo (31.9%), while the lowest proportion was found in Togo (14.3%) and Madagascar (14.5%); country remained a significant predictor in multivariate analyses (P=0.002). In univariate analyses, the highest proportion of third-generation cephalosporin-resistant Enterobacteriaceae was found in Benin (35.8%) and lowest in Togo (14.3%) and Madagascar (16.3%). Country remained a significant predictor for antimicrobial-resistant isolates in multivariate analyses (P=0.009). Conclusion A significant proportion of isolates from wound cultures were resistant to first-line antimicrobials in each country. 
    Though antimicrobial-resistant isolates were not verified in a reference laboratory and these data may not be representative of all regions of the countries studied, the differences in the proportion of antimicrobial-resistant isolates and in resistance profiles between countries suggest that site-specific surveillance should be a priority and that local antimicrobial resistance profiles should be used to guide empiric antibiotic selection. PMID:29588863

  12. Chemical Mixing Model and K-Th-Ti Systematics and HED Meteorites for the Dawn Mission

    NASA Technical Reports Server (NTRS)

    Usui, T.; McSween, H. Y., Jr.; Mittlefehldt, D. W.; Prettyman, T. H.

    2009-01-01

    The Dawn mission will explore 4 Vesta, a large differentiated asteroid believed to be the parent body of the howardite, eucrite and diogenite (HED) meteorite suite. The Dawn spacecraft carries a gamma-ray and neutron detector (GRaND), which will measure the abundances of selected elements on the surface of Vesta. This study provides ways to leverage the large geochemical database on HED meteorites as a tool for interpreting chemical analyses by GRaND of mapped units on the surface of Vesta.

  13. Search for Dark Gauge Bosons Decaying into Displaced Lepton-Jets in Proton-Proton Collisions at √s = 13 TeV with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Diamond, Miriam

    The dark photon (A'), the gauge boson carrier of a hypothetical new force, has been proposed in a wide range of Beyond the Standard Model (BSM) theories, and could serve as our window to an entire dark sector. A massive A' could decay back to the Standard Model (SM) with a significant branching fraction, through kinetic mixing with the SM photon. If this A' can be produced from decays of a dark scalar that mixes with the SM Higgs boson, collider searches involving leptonic final states provide promising discovery prospects with rich phenomenology. This work presents the results of a search for dark photons in the mass range 0.2 ≤ mA' ≤ 10 GeV decaying into collimated jets of light leptons and mesons, so-called "lepton-jets". It employs 3.57 fb-1 of data from proton-proton collisions at a centre-of-mass energy of √s = 13 TeV, collected during 2015 with the ATLAS detector at the LHC. No deviations from SM expectations are observed. Limits on benchmark models predicting Higgs boson decays to A's are derived as a function of the A' lifetime; limits are also established in the parameter space of mA' vs. the kinetic mixing parameter ε. These extend the limits obtained in a similar search previously performed during Run 1 of the LHC, to include dark photon masses 2 ≤ mA' ≤ 10 GeV and to cover higher ε values for 0.2 ≤ mA' ≤ 2 GeV, and are complementary to various other ATLAS A' searches. As data-taking continues at the LHC, the reach of lepton-jet analyses will continue to expand in model coverage and in parameter space.

  14. 3D simulation of the influence of internal mixing dynamics on the propagation of river plumes in Lake Constance

    NASA Astrophysics Data System (ADS)

    Pflugbeil, Thomas; Pöschke, Franziska; Noffke, Anna; Winde, Vera; Wolf, Thomas

    2017-04-01

    Lake Constance is one of the most important drinking water resources in southern Germany. Furthermore, the lake and its catchment are a valuable natural habitat as well as an economic and cultural area. In this context, sustainable development and conservation of the lake ecosystem and drinking water quality are of high importance. However, anthropogenic pressures (e.g. waste water, land use, industry in the catchment area) on the lake itself and its external inflows are high. The project "SeeZeichen" (ReWaM project cluster funded by BMBF, funding number 02WRM1365) is investigating different immission pathways (groundwater, rivers, surface inputs) and their impact on the water quality of Lake Constance. The investigation covers the direct inflow areas as well as the lake-wide context. The present simulation study investigates the mixing dynamics of Lake Constance and their interaction with river inflows, and vice versa. It considers different seasonal (mixing and stratification periods), hydrological (flood events, average and low discharge) and transport conditions (sediment loads). The simulations focus on two rivers: the River Alpenrhein, which delivers about 60 % of the water and material input into Lake Constance, and the River Schussen, which was chosen because it is highly anthropogenically influenced. For this purpose, a high-resolution three-dimensional hydrodynamic model of Lake Constance is set up with the Delft3D-FLOW model system. The model is calibrated and validated against long-term data sets of water levels, discharges and temperatures. The model results will be analysed for residence times of river water within the lake and for particle distributions, to evaluate potential impacts of river plume water constituents on the general water quality of the lake.

  15. Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures.

    PubMed

    Palva, J Matias; Wang, Sheng H; Palva, Satu; Zhigalov, Alexander; Monto, Simo; Brookes, Matthew J; Schoffelen, Jan-Mathijs; Jerbi, Karim

    2018-06-01

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or "ghost" interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
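    The zero-lag-insensitive measures discussed above can be illustrated with a minimal sketch (hypothetical signals and parameters, using scipy's spectral estimators): purely instantaneous mixing of a common source produces near-zero imaginary coherence, while genuinely lagged coupling does not.

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, nperseg=256):
    """|Im(coherency)|: by construction insensitive to zero-lag
    (instantaneous) linear mixing such as field spread."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    return f, np.abs(np.imag(sxy / np.sqrt(sxx * syy)))

rng = np.random.default_rng(0)
fs, n = 200, 40000
s = rng.standard_normal(n)

# Case 1: purely instantaneous mixing -- both "sensors" see the same
# source with no delay, plus independent noise.
x = s + 0.5 * rng.standard_normal(n)
y_mixed = 0.8 * s + 0.5 * rng.standard_normal(n)
_, icoh_mixed = imaginary_coherence(x, y_mixed, fs)

# Case 2: genuinely lagged coupling (5 samples = 25 ms at fs = 200 Hz).
y_lagged = 0.8 * np.roll(s, 5) + 0.5 * rng.standard_normal(n)
_, icoh_lagged = imaginary_coherence(x, y_lagged, fs)
```

    The caveat of the abstract still applies: near a true lagged interaction, linearly mixed copies of the coupled sources can themselves show spurious lagged coupling ("ghost" interactions), so a nonzero imaginary coherence is not by itself proof of a direct connection.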

  16. Modeling optimal treatment strategies in a heterogeneous mixing model.

    PubMed

    Choe, Seoyun; Lee, Sunmi

    2015-11-25

    Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of the heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases including influenza. Mixing patterns are considered to be one of the critical factors for infectious disease modeling. A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with two different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, the optimal control problem is formulated in this two-group influenza model to identify the group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios. The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing pattern approaches proportionate mixing, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is due to the fact that the number of infected people increases only slightly in the higher activity level group, while the number of infected people increases more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective treatment regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of different group activity levels and group population sizes. Mixing patterns can play a critical role in the effectiveness of optimal treatments.
As the mixing becomes more like-with-like, treating the higher activity group in the population is almost as effective as treating the entire population, since it reduces the number of disease cases effectively while requiring a similar amount of treatment. The gain becomes more pronounced as the basic reproduction number increases. This can be a critical issue which must be considered for future pandemic influenza interventions, especially when there are limited resources available.
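    The qualitative claim that proportionate mixing yields a smaller basic reproduction number than like-with-like mixing can be checked with a small next-generation-matrix sketch. All parameter values below are hypothetical, and the preferred-mixing form with a within-group fraction eps is one standard parameterization, not necessarily the exact one used in the paper:

```python
import numpy as np

def mixing_matrix(activity, pop, eps):
    """Preferred mixing: a fraction eps of contacts is reserved for one's
    own group; the remainder is distributed proportionately to activity."""
    c = activity * pop                  # total contacts made by each group
    prop = c / c.sum()
    n = len(pop)
    return np.array([[eps * (i == j) + (1.0 - eps) * prop[j]
                      for j in range(n)] for i in range(n)])

def r0(activity, pop, eps, beta, gamma):
    """Spectral radius of the next-generation matrix K, where K[i, j] is
    the expected number of infections in group i caused by one infective
    in group j over its infectious period 1/gamma."""
    M = mixing_matrix(activity, pop, eps)
    n = len(pop)
    K = np.array([[beta * activity[i] * M[i, j] * pop[i] / (gamma * pop[j])
                   for j in range(n)] for i in range(n)])
    return float(np.max(np.abs(np.linalg.eigvals(K))))

activity = np.array([2.0, 0.5])    # hypothetical contact rates (high, low)
pop = np.array([3000.0, 7000.0])   # hypothetical group sizes
beta, gamma = 0.05, 0.2            # hypothetical transmission, recovery rates

r0_prop = r0(activity, pop, 0.0, beta, gamma)  # proportionate mixing
r0_like = r0(activity, pop, 0.9, beta, gamma)  # mostly like-with-like
```

    Here eps = 0 recovers proportionate mixing and eps → 1 approaches like-with-like mixing; sweeping eps traces how the basic reproduction number grows as contacts concentrate within the high-activity group.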

  17. Subjective Social Status and Self-Reported Health Among US-born and Immigrant Latinos.

    PubMed

    Garza, Jeremiah R; Glenn, Beth A; Mistry, Rashmita S; Ponce, Ninez A; Zimmerman, Frederick J

    2017-02-01

    Subjective social status is associated with a range of health outcomes. Few studies have tested the relevance of subjective social status among Latinos in the U.S., and those that have done so yielded mixed results. Data come from the Latino subsample of the 2003 National Latino and Asian American Study (N = 2554). Regression models adjusted for socioeconomic and demographic factors. Stratified analyses tested whether nativity status modifies the effect of subjective social status on health. Subjective social status was associated with better health. Income and education mattered more for health than subjective social status among U.S.-born Latinos. However, the picture was mixed among immigrant Latinos, with subjective social status more strongly predictive than income but less so than education. Subjective social status may tap into stressful immigrant experiences that affect one's perceived self-worth and capture psychosocial consequences and social disadvantage left out by conventional socioeconomic measures.

  18. Kelvin-Helmholtz instability of counter-rotating discs

    NASA Astrophysics Data System (ADS)

    Quach, Dan; Dyda, Sergei; Lovelace, Richard V. E.

    2015-01-01

    Observations of galaxies and models of accreting systems point to the occurrence of counter-rotating discs where the inner part of the disc (r < r0) is corotating and the outer part is counter-rotating. This work analyses the linear stability of radially separated co- and counter-rotating thin discs. The strong instability found is the supersonic Kelvin-Helmholtz instability. The growth rates are of the order of or larger than the angular rotation rate at the interface. The instability is absent if there is no vertical dependence of the perturbation; that is, the instability is essentially three dimensional. The non-linear evolution of the instability is predicted to lead to a mixing of the two components, strong heating of the mixed gas, vertical expansion of the gas, and annihilation of the angular momenta of the two components. As a result, the heated gas will free-fall towards the disc's centre over the surface of the inner disc.

  19. Bulk and surface properties of liquid Al-Cr and Cr-Ni alloys.

    PubMed

    Novakovic, R

    2011-06-15

    The energetics of mixing and structural arrangement in liquid Al-Cr and Cr-Ni alloys have been analysed through the study of surface properties (surface tension and surface segregation), dynamic properties (chemical diffusion) and microscopic functions (concentration fluctuations in the long-wavelength limit and the chemical short-range order parameter) in the framework of statistical mechanical theory in conjunction with quasi-lattice theory. The Al-Cr phase diagram exhibits different intermetallic compounds in the solid state, while that of Cr-Ni is a simple eutectic-type phase diagram at high temperatures and includes a low-temperature peritectoid reaction near the CrNi(2) composition. Accordingly, the mixing behaviour of Al-Cr and Cr-Ni alloy melts was studied using the complex formation model in the weak interaction approximation, postulating Al(8)Cr(5) and CrNi(2) chemical complexes, respectively, as energetically favoured.
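    The concentration fluctuations in the long-wavelength limit, Scc(0), mentioned in this abstract have a simple closed form for a regular-solution model, which illustrates how compound-forming (ordering) melts fall below the ideal-mixing value. The interaction parameter and temperature below are illustrative, not fitted values for these alloys:

```python
R = 8.314  # gas constant, J/(mol K)

def scc0_regular(c, omega, T):
    """Bhatia-Thornton Scc(0) = RT / (d2G/dc2) for a regular solution:
    Scc(0) = c(1-c) / (1 - 2*omega*c*(1-c)/(RT)).
    omega < 0 signals ordering (compound formation), omega > 0 segregation."""
    return c * (1.0 - c) / (1.0 - 2.0 * omega * c * (1.0 - c) / (R * T))

c, T = 0.5, 2000.0                    # illustrative composition, temperature
ideal = c * (1.0 - c)                 # ideal mixing: Scc(0) = c(1-c)
ordering = scc0_regular(c, -20e3, T)  # omega = -20 kJ/mol: ordering melt
```

    A measured or computed Scc(0) below c(1-c) is the usual signature of chemical short-range order of the kind attributed here to the Al(8)Cr(5) and CrNi(2) complexes.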

  20. Assessing power of large river fish monitoring programs to detect population changes: the Missouri River sturgeon example

    USGS Publications Warehouse

    Wildhaber, M.L.; Holan, S.H.; Bryan, J.L.; Gladish, D.W.; Ellersieck, M.

    2011-01-01

    In 2003, the US Army Corps of Engineers initiated the Pallid Sturgeon Population Assessment Program (PSPAP) to monitor pallid sturgeon and the fish community of the Missouri River. The power analysis of PSPAP presented here was conducted to guide sampling design and effort decisions. The PSPAP sampling design has a nested structure with multiple gear subsamples within a river bend. Power analyses were based on a normal linear mixed model, using a mixed cell means approach, with variance estimates from the original data. It was found that, at current effort levels, at least 20 years of monitoring for pallid sturgeon and 10 years for shovelnose sturgeon are needed to detect a 5% annual decline. Modified bootstrap simulations suggest that power estimates from the original data are conservative due to excessive zero fish counts. In general, the approach presented is applicable to a wide array of animal monitoring programs.
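    A stripped-down version of such a power analysis can be sketched by Monte Carlo simulation: generate declining catch data with sampling noise, fit a log-linear trend, and count how often the decline is detected. This sketch uses a simple lognormal error model and ordinary regression in place of the study's nested normal linear mixed model; all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

def trend_power(n_years, decline=0.05, cv=0.3, n_sites=20,
                n_reps=500, alpha=0.05, seed=1):
    """Monte Carlo power to detect a `decline` annual decline from
    lognormal catch counts at n_sites sites (simplified sketch; the
    published analysis used a mixed model with nested gear/bend terms)."""
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    mu = np.log(100.0) + years * np.log(1.0 - decline)  # true mean trend
    hits = 0
    for _ in range(n_reps):
        # site-level lognormal noise, averaged to an annual index
        counts = rng.lognormal(mu[None, :], cv, size=(n_sites, n_years))
        y = np.log(counts.mean(axis=0))
        res = stats.linregress(years, y)
        if res.slope < 0 and res.pvalue < alpha:
            hits += 1
    return hits / n_reps
```

    Running `trend_power` over a grid of monitoring durations and effort levels (n_sites) reproduces the qualitative trade-off the abstract describes: short series have little power to separate a 5% annual decline from sampling noise.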

  1. Patterns and partners for chiral symmetry restoration

    NASA Astrophysics Data System (ADS)

    Gómez Nicola, A.; Ruiz de Elvira, J.

    2018-04-01

    We present and analyze a new set of Ward identities which shed light on the distinction between different patterns of chiral symmetry restoration in QCD, namely O(4) vs O(4)×U(1)A. The degeneracy of chiral partners for all scalar and pseudoscalar meson nonet members is studied through their corresponding correlators. Around chiral symmetry degeneration of O(4) partners, our analysis predicts that U(1)A partners are also degenerate. Our analysis also leads to I = 1/2 scalar-pseudoscalar partner degeneration at exact chiral restoration and supports ideal mixing between the η-η' and the f0(500)-f0(980) mesons at O(4)×U(1)A restoration, with a possible range where the pseudoscalar mixing vanishes if the two transitions are well separated. We test our results with lattice data and provide further relevant observables regarding chiral and U(1)A restoration for future lattice and model analyses.

  2. The extent of lunar regolith mixing

    NASA Technical Reports Server (NTRS)

    Nishiizumi, K.; Imamura, M.; Kohl, C. P.; Murrell, M. T.; Arnold, J. R.; Russ, G. P., III

    1979-01-01

    The activity of solar cosmic-ray-produced Mn-53 measured as a function of depth in the upper 100 g/sq cm of lunar cores 60009-60010 and 12025-12028 is discussed. Analyses of samples from the Apollo 15 and 16 drill stems, together with the authors' previously published results (1974, 1976) and the Battelle Na-22 and Al-26 data, indicate that in three of the four cases studied the regolith was measurably disturbed within the last 10 m.y. Activities measured in the uppermost 2 g/sq cm indicate frequent mixing within this depth range. The Monte Carlo gardening model of Arnold (1975) was used to derive profiles for the gardened moon-wide average of Mn-53 and Al-26 as a function of depth. The Mn-53 and Al-26 experimental results agreed with theoretical predictions, but the calculated depths of disturbance appeared too low.

  3. Development of Tripropellant CFD Design Code

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Cheng, Gary C.; Anderson, Peter G.

    1998-01-01

    A tripropellant (e.g. GO2/H2/RP-1) CFD design code has been developed to predict the local mixing of multiple propellant streams as they are injected into a rocket motor. The code utilizes real fluid properties to account for the mixing and finite-rate combustion processes which occur near an injector faceplate; thus the analysis serves as a multi-phase homogeneous spray combustion model. Proper accounting of the combustion allows accurate gas-side temperature predictions, which are essential for accurate wall heating analyses. The complex secondary flows which are predicted to occur near a faceplate cannot be quantitatively predicted by less accurate methodology. Test cases have been simulated to describe an axisymmetric tripropellant coaxial injector and a 3-dimensional RP-1/LO2 impinger injector system. The analysis has been shown to realistically describe such injector combustion flowfields. The code is also valuable for designing meaningful future experiments by determining the critical location and type of measurements needed.

  4. The regolith at the Apollo 15 site and its stratigraphic implications

    USGS Publications Warehouse

    Carr, M.H.; Meyer, C.E.

    1974-01-01

    Regolith samples from the Apollo 15 landing site are described in terms of two major fractions, a homogeneous glass fraction and a non-homogeneous glass fraction. The proportions of different components in the homogeneous glass fraction were determined directly by chemical analyses of individual particles. They are mainly green glass, a mare-like glass, and different types of Fra Mauro and Highland type glasses. The proportions of various components in the remainder of each of the soils were determined indirectly by finding the mix of components that best fits their bulk compositions. The mixing model suggests that the Apennine Front consists mainly of rocks of low-K Fra Mauro basalt composition. These may overlie rocks with the composition of anorthositic gabbro. Green glass, which occurs widely throughout the site, is believed to be derived from a green glass layer which darkens upland surfaces and lies beneath the local mare surface. © 1974.
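    Finding "the mix of components that best fits their bulk compositions" is a constrained linear least-squares problem. A minimal sketch (with made-up end-member compositions, not the Apollo 15 data) uses non-negative least squares, with the unit-sum constraint appended as a weighted extra row:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: candidate end-member compositions (hypothetical oxide wt%,
# rows = SiO2, Al2O3, FeO, MgO); these numbers are illustrative only.
endmembers = np.array([
    [45.0, 48.0, 44.0],   # SiO2
    [ 8.0, 17.0, 26.0],   # Al2O3
    [20.0,  9.0,  5.0],   # FeO
    [17.0,  8.0,  7.0],   # MgO
])
bulk = np.array([46.2, 14.8, 11.6, 10.5])  # measured bulk soil composition

# Append a weighted row of ones to softly enforce sum(f) = 1, and
# require f >= 0 via non-negative least squares.
w = 10.0
A = np.vstack([endmembers, w * np.ones(3)])
b = np.append(bulk, w * 1.0)
f, misfit = nnls(A, b)   # f: estimated fraction of each end-member
```

    Raising the weight w tightens the mass-balance constraint at the expense of the compositional fit; a Lagrange-multiplier or quadratic-programming formulation would enforce the constraint exactly.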

  5. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipation rates of the mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty of choosing optimal model constants when using conventional mixing frequency models. The model is implemented in combination with the interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and a turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted by the new model without solving additional transport equations, and good agreement with experimental data is observed.
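    For reference, the IEM model used above relaxes every particle's composition toward the ensemble mean at rate (1/2)·C_φ·ω, so the scalar variance decays as exp(-C_φ·ω·t) while the shape of the PDF is preserved. A minimal particle sketch (all constants hypothetical):

```python
import numpy as np

def iem_step(phi, omega, c_phi=2.0, dt=1e-3):
    """One explicit-Euler step of the IEM mixing model:
    dphi/dt = -0.5 * c_phi * omega * (phi - <phi>)."""
    return phi - 0.5 * c_phi * omega * dt * (phi - phi.mean())

rng = np.random.default_rng(0)
phi = rng.choice([0.0, 1.0], size=20000)  # double-delta initial scalar PDF
omega, t_end, dt = 2.0, 1.0, 1e-3         # hypothetical mixing frequency
mean0, var0 = phi.mean(), phi.var()
for _ in range(int(t_end / dt)):          # march to t = t_end
    phi = iem_step(phi, omega, dt=dt)
```

    Because IEM leaves the PDF shape unchanged (two deltas shrinking toward the mean), the quality of a transported-PDF simulation hinges strongly on the mixing frequency ω, which is exactly the quantity the abstract's dissipation-based model supplies dynamically.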

  6. Revision of the LHCb limit on Majorana neutrinos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuve, Brian; Peskin, Michael E.

    2016-12-16

    We revisit the recent limits from LHCb on a Majorana neutrino N in the mass range 250–5000 MeV [R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 112, 131802 (2014).]. These limits are among the best currently available, and they will be improved soon by the addition of data from Run 2 of the LHC. LHCb presented a model-independent constraint on the rate of like-sign leptonic decays, and then derived a constraint on the mixing angle Vμ4 based on a theoretical model for the B decay width to N and the N lifetime. The model used is unfortunately unsound. We revise the conclusions of the paper based on a decay model similar to the one used for the τ lepton and provide formulas useful for future analyses.

  7. Women and Men Together in Recruit Training.

    PubMed

    Orme, Geoffrey J; Kehoe, E James

    2018-05-01

    Although men and women recruits to the Australian Army have trained in mixed-gender platoons since 1995, restrictions on women joining the combat arms were only removed in 2016. As part of a longitudinal study starting with recruit training, this article examined recruit records collected before 2016 with the aims of delineating (1) the relative performance of women versus men in mixed-gender platoons and (2) the relative performance of men in mixed-gender platoons versus all-male platoons. De-identified instructor ratings for 630 females and 4,505 males who completed training between 2011 and 2015 were obtained. Recruits were distributed across 128 platoons (averaging 41.6 members, SD = 8.3) of which 75% contained females, in proportions from 5% to 45%. These analyses were conducted under defense ethics approval DPR-LREP 069-15. Factor analyses revealed that instructor ratings generally loaded onto a single factor, accounting for 77.2% of the variance. Consequently, a composite recruit performance score (range 1-5) was computed for 16 of 19 competencies. Analyses of the scores revealed that the distributions of the scores for females and males overlapped considerably. Observed effects were negligible to small in size. The distributions were all centered between 3.0 and 3.5. In mixed-gender platoons, 51% of the females and 52% of the males fell in this band, and 44% of recruits in all-male platoons had scores in this band. The lower three bands (1.0-3.0) contained a slightly greater proportion of females (18%) than males in either mixed-gender platoons (12%) or all-male platoons (12%). Conversely, the upper three bands (3.5-5.0) contained a slightly smaller percentage of females (31%) than males in either mixed-gender platoons (36%) or all-male platoons (44%). Although scores for females were reliably lower than those of males in mixed-gender platoons, χ2 (4) = 16.01, p < 0.01, the effect size (V = 0.07) did not reach the criterion for even a small effect (0.10).
For male recruits, those in mixed-gender platoons had scores that were reliably lower than in all-male platoons, χ2 (4) = 48.38, p < 0.001; its effect size (V = 0.11) just exceeded the criterion for a small effect (0.10). Further analyses revealed that male scores had a near-zero correlation (r = -0.033) with the proportion of females in platoons (0-45%). This large-scale secondary analysis of instructor ratings of female and male recruits provides a platform for monitoring the integration of women into the combat arms. The analyses revealed nearly complete overlap in the performance of female versus male recruits. The detected gender-related differences were negligible to small in size. These small differences must be viewed with considerable caution. They may be artifacts of rater bias or other uncontrolled features of the rating system, which was designed for reporting individual recruit performance rather than aggregate analyses. Even with these limitations, this baseline snapshot of recruit performance suggests that, at recruit training, women and men are already working well together, which bodes well for their subsequent integration into the combat arms.
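    The effect sizes quoted above are Cramér's V, computed from a chi-square test of a groups-by-score-bands contingency table. A short sketch with a made-up 2 × 5 table (not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for an r x c contingency table:
    V = sqrt(chi2 / (n * (min(r, c) - 1)))."""
    table = np.asarray(table, dtype=float)
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# Hypothetical counts: two recruit groups by five score bands.
table = [[36, 50, 120, 250, 174],
         [25, 45, 110, 255, 195]]
v = cramers_v(table)
```

    By the conventional benchmark used in the abstract, V below 0.10 does not reach even a small effect, which is why a statistically reliable chi-square difference can still be practically negligible at these sample sizes.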

  8. The mixing effects for real gases and their mixtures

    NASA Astrophysics Data System (ADS)

    Gong, M. Q.; Luo, E. C.; Wu, J. F.

    2004-10-01

    The definitions of the adiabatic and isothermal mixing effects in the mixing processes of real gases were presented in this paper. Eight substances with boiling-point temperatures ranging from cryogenic to ambient were selected, motivated by low-temperature refrigeration, to study their binary and multicomponent mixing effects. Detailed analyses were made of the parameters of the mixing process to determine their influence on the mixing effects. These parameters include the temperatures, pressures, and mole fraction ratios of the pure substances before mixing. The results show that the maximum temperature variation occurs at the saturation state of each component in the mixing process. Components with higher boiling-point temperatures have higher isothermal mixing effects. The maximum temperature variation, defined as the adiabatic mixing effect, can reach up to 50 K, and the isothermal mixing effect can reach about 20 kJ/mol. The possible applications of the mixing cooling effect in both open-cycle and closed-cycle refrigeration systems were also discussed.

  9. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    NASA Astrophysics Data System (ADS)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
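    The assumed β-PDF model that the abstract compares against parameterizes the mixture-fraction PDF entirely by its first two moments. Matching a β distribution to a given mean and variance is a two-line calculation:

```python
from scipy import stats

def beta_pdf_params(mean, var):
    """Shape parameters of the assumed beta-PDF for a scalar on [0, 1]
    with the given mean and variance (requires var < mean*(1-mean))."""
    if not 0.0 < var < mean * (1.0 - mean):
        raise ValueError("variance must lie in (0, mean*(1-mean))")
    s = mean * (1.0 - mean) / var - 1.0
    return mean * s, (1.0 - mean) * s

a, b = beta_pdf_params(0.3, 0.05)  # illustrative mean and variance
dist = stats.beta(a, b)
```

    Because this closure fixes the PDF shape from two moments alone, it cannot track transient shapes during early mixing; carrying higher moments (up to fourth order here) is precisely what the FP/β-EQMOM approach adds.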

  10. Non Linear Analyses for the Evaluation of Seismic Behavior of Mixed R.C.-Masonry Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liberatore, Laura; Tocci, Cesare; Masiani, Renato

    2008-07-08

    In this work the seismic behavior of masonry buildings with a mixed structural system, consisting of perimeter masonry walls and internal r.c. frames, is studied by means of non-linear static (pushover) analyses. Several aspects, such as the distribution of the seismic action between masonry and r.c. elements, the local and global behavior of the structure, failure of the connections and the attainment of the ultimate strength of the whole structure, are examined. The influence of some parameters, such as the masonry compressive and tensile strength, on the structural behavior is investigated. The numerical analyses are also repeated on a building in which the internal r.c. frames are replaced with masonry walls.

  11. Design and analysis of flow velocity distribution inside a raceway pond using computational fluid dynamics.

    PubMed

    Pandey, Ramakant; Premalatha, M

    2017-03-01

    Open raceway ponds are widely adopted for cultivating microalgae on a large scale. The working depth of the raceway pond is the major parameter to be analysed for increasing the volume-to-surface-area ratio. The working depth is limited to 5-15 cm in conventional ponds, but in this analysis the working depth is taken as 25 cm. In this work, the positioning of the paddle wheel is analysed and the corresponding Vertical Mixing Index values are calculated using CFD. The flow pattern along the length of the raceway pond at three different paddle wheel speeds is analysed for L/W ratios of 6, 8 and 10. The effect of the clearance (C) between the rotor blade tip and the bottom surface is also analysed for four clearance conditions, i.e. C = 2, 5, 10 and 15. The moving reference frame method of Fluent is used for modelling the six-blade paddle wheel, and the realizable k-ε model is used for capturing turbulence characteristics. The overall objective of this work is to determine the geometry required to maintain a minimum flow velocity that avoids settling of algae at the 25 cm working depth. The geometry given in [13] is built using ANSYS DesignModeler and CFD results are generated using ANSYS FLUENT for validation. Good agreement is observed between the CFD results and the experimental particle image velocimetry results, with a deviation of 7.23%.

  12. Association analysis for feet and legs disorders with whole-genome sequence variants in 3 dairy cattle breeds.

    PubMed

    Wu, Xiaoping; Guldbrandtsen, Bernt; Lund, Mogens Sandø; Sahana, Goutam

    2016-09-01

    Identification of genetic variants associated with feet and legs disorders (FLD) will aid in the genetic improvement of these traits by providing knowledge on genes that influence trait variations. In Denmark, FLD in cattle has been recorded since the 1990s. In this report, we used deregressed breeding values as response variables for a genome-wide association study. Bulls (5,334 Danish Holstein, 4,237 Nordic Red Dairy Cattle, and 1,180 Danish Jersey) with deregressed estimated breeding values were genotyped with the Illumina Bovine 54k single nucleotide polymorphism (SNP) genotyping array. Genotypes were imputed to whole-genome sequence variants, and then 22,751,039 SNP on 29 autosomes were used for an association analysis. A modified linear mixed-model approach (efficient mixed-model association eXpedited, EMMAX) and a linear mixed model were used for association analysis. We identified 5 (3,854 SNP), 3 (13,642 SNP), and 0 quantitative trait locus (QTL) regions associated with the FLD index in Danish Holstein, Nordic Red Dairy Cattle, and Danish Jersey populations, respectively. We did not identify any QTL that were common among the 3 breeds. In a meta-analysis of the 3 breeds, 4 QTL regions were significant, but no additional QTL region was identified compared with within-breed analyses. Comparison between top SNP locations within these QTL regions and known genes suggested that RASGRP1, LCORL, MOS, and MITF may be candidate genes for FLD in dairy cattle. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
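    The EMMAX approach mentioned above avoids refitting the mixed model for every SNP: variance components are estimated once, and each of the millions of SNPs is then tested by generalized least squares under the fixed covariance Σ = σg²K + σe²I. A self-contained sketch on simulated data (hypothetical sizes and effect size; K here is a toy block kinship, not a real genomic relationship matrix):

```python
import numpy as np

def gls_snp_test(y, snp, Sigma):
    """Test one SNP by GLS under a fixed covariance Sigma: whiten with
    the Cholesky factor of Sigma, then run ordinary least squares."""
    L = np.linalg.cholesky(Sigma)
    yt = np.linalg.solve(L, y)                       # whitened phenotype
    X = np.column_stack([np.ones_like(snp), snp])    # intercept + genotype
    Xt = np.linalg.solve(L, X)                       # whitened design
    beta, rss, _, _ = np.linalg.lstsq(Xt, yt, rcond=None)
    sigma2 = rss[0] / (len(y) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(Xt.T @ Xt)[1, 1])
    return beta[1], beta[1] / se       # SNP effect estimate, t-statistic

rng = np.random.default_rng(3)
n = 200
# Toy kinship: unrelated pairs of "siblings" with relatedness 0.5.
K = np.kron(np.eye(n // 2), np.array([[1.0, 0.5], [0.5, 1.0]]))
Sigma = 0.4 * K + 0.6 * np.eye(n)      # sigma_g^2 = 0.4, sigma_e^2 = 0.6
snp = rng.binomial(2, 0.3, size=n).astype(float)
y = 0.5 * snp + rng.multivariate_normal(np.zeros(n), Sigma)
beta_hat, t_stat = gls_snp_test(y, snp, Sigma)
```

    The Cholesky factorization and the variance-component estimates are computed once and reused across all SNPs, which is what makes the approximation fast enough for the ~22.8 million imputed variants tested in this study.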

  13. A new approach to the solution of the linear mixing model for a single isotope: application to the case of an opportunistic predator.

    PubMed

    Hall-Aspland, S A; Hall, A P; Rogers, T L

    2005-03-01

    Mixing models are used to determine diets when the number of prey items is greater than one; however, the limitation of the linear mixing method is the lack of a unique solution when the number of potential sources is greater than the number (n) of isotopic signatures + 1. Using the IsoSource program, all possible combinations of each source contribution (0-100%) in preselected small increments can be examined and a range of values produced for each sample analysed. We propose the use of the Moore-Penrose (M-P) pseudoinverse, which involves the inverse of a 2x2 matrix. This is easily generalized to the case of a single isotope with (p) prey sources and produces a specific solution. The Antarctic leopard seal (Hydrurga leptonyx) was used as a model species to test this method. This seal is an opportunistic predator, which preys on a wide range of species including seals, penguins, fish and krill. The M-P method was used to determine the contribution to the diet from each of the four prey types based on blood and fur samples collected over three consecutive austral summers. The advantage of the M-P method was the production of a vector of fractions f for each predator isotopic value, allowing us to identify the relative variation in dietary proportions. Comparison of the calculated fractions from this method with 'means' from IsoSource allowed confidence in the new approach for the case of a single isotope, N.
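    For one isotope and p sources, the system described above is a 2 × p linear system (one isotope balance plus the mass balance), and the Moore-Penrose pseudoinverse returns its minimum-norm solution. A sketch with hypothetical δ15N values (not the leopard seal data):

```python
import numpy as np

# Hypothetical delta-15N source signatures (per mil), assumed already
# corrected for trophic enrichment; illustrative values only.
sources = np.array([12.0, 9.5, 8.0, 4.0])   # e.g. seal, penguin, fish, krill
mixture = 9.0                               # predator signature

# One isotope balance plus the mass balance gives a 2 x p system:
#   sum_j d_j f_j = d_mix,   sum_j f_j = 1.
A = np.vstack([sources, np.ones_like(sources)])
b = np.array([mixture, 1.0])
f = np.linalg.pinv(A) @ b    # Moore-Penrose minimum-norm solution
```

    Unlike IsoSource's enumeration, the pseudoinverse solution is unique, but its components are not constrained to lie in [0, 1], so they should be checked before being interpreted as dietary proportions.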

  14. Using Solution- and Solid-State S K-edge X-ray Absorption Spectroscopy with Density Functional Theory to Evaluate M–S Bonding for MS4(2-) (M = Cr, Mo, W) Dianions

    PubMed Central

    Olson, Angela C.; Keith, Jason M.; Batista, Enrique R.; Boland, Kevin S.; Daly, Scott R.; Kozimor, Stosh A.; MacInnes, Molly M.; Martin, Richard L.; Scott, Brian L.

    2014-01-01

    Herein, we have evaluated relative changes in M–S electronic structure and orbital mixing in Group 6 MS4(2-) dianions using solid- and solution-phase S K-edge X-ray absorption spectroscopy (XAS; M = Mo, W), as well as density functional theory (DFT; M = Cr, Mo, W) and time-dependent density functional theory (TDDFT) calculations. To facilitate comparison with solution measurements (conducted in acetonitrile), theoretical models included gas-phase calculations as well as those that incorporated an acetonitrile dielectric, the latter of which provided better agreement with experiment. Two pre-edge features arising from S 1s → e* and t2* electron excitations were observed in the S K-edge XAS spectra and were reasonably assigned as (1)A1 → (1)T2 transitions. For MoS4(2-), both solution-phase pre-edge peak intensities were consistent with results from the solid-state spectra. For WS4(2-), solution- and solid-state pre-edge peak intensities for transitions involving e* were equivalent, while transitions involving the t2* orbitals were less intense in solution. Experimental and computational results have been presented in comparison to recent analyses of MO4(2-) dianions, which allowed M–S and M–O orbital mixing to be evaluated as the principal quantum number (n) for the metal valence d orbitals increased (3d, 4d, 5d). Overall, the M–E (E = O, S) analyses revealed distinct trends in orbital mixing. For example, as the Group 6 triad was descended, e* (π*) orbital mixing remained constant in the M–S bonds, but increased appreciably for M–O interactions. For the t2* orbitals (σ* + π*), mixing decreased slightly for M–S bonding and increased only slightly for the M–O interactions. These results suggested that the metal and ligand valence orbital energies and radial extensions delicately influenced the orbital compositions for isoelectronic ME4(2-) (E = O, S) dianions. PMID:25311904

  15. Using solution- and solid-state S K-edge X-ray absorption spectroscopy with density functional theory to evaluate M-S bonding for MS4(2-) (M = Cr, Mo, W) dianions.

    PubMed

    Olson, Angela C; Keith, Jason M; Batista, Enrique R; Boland, Kevin S; Daly, Scott R; Kozimor, Stosh A; MacInnes, Molly M; Martin, Richard L; Scott, Brian L

    2014-12-14

    Herein, we have evaluated relative changes in M-S electronic structure and orbital mixing in Group 6 MS4(2-) dianions using solid- and solution-phase S K-edge X-ray absorption spectroscopy (XAS; M = Mo, W), as well as density functional theory (DFT; M = Cr, Mo, W) and time-dependent density functional theory (TDDFT) calculations. To facilitate comparison with solution measurements (conducted in acetonitrile), theoretical models included gas-phase calculations as well as those that incorporated an acetonitrile dielectric, the latter of which provided better agreement with experiment. Two pre-edge features arising from S 1s → e* and t2* electron excitations were observed in the S K-edge XAS spectra and were reasonably assigned as (1)A1 → (1)T2 transitions. For MoS4(2-), both solution-phase pre-edge peak intensities were consistent with results from the solid-state spectra. For WS4(2-), solution- and solid-state pre-edge peak intensities for transitions involving e* were equivalent, while transitions involving the t2* orbitals were less intense in solution. Experimental and computational results have been presented in comparison to recent analyses of MO4(2-) dianions, which allowed M-S and M-O orbital mixing to be evaluated as the principal quantum number (n) for the metal valence d orbitals increased (3d, 4d, 5d). Overall, the M-E (E = O, S) analyses revealed distinct trends in orbital mixing. For example, as the Group 6 triad was descended, e* (π*) orbital mixing remained constant in the M-S bonds, but increased appreciably for M-O interactions. For the t2* orbitals (σ* + π*), mixing decreased slightly for M-S bonding and increased only slightly for the M-O interactions. These results suggested that the metal and ligand valence orbital energies and radial extensions delicately influenced the orbital compositions for isoelectronic ME4(2-) (E = O, S) dianions.

  16. A New Long-Term Care Facilities Model in Nova Scotia, Canada: Protocol for a Mixed Methods Study of Care by Design

    PubMed Central

    Boudreau, Michelle Anne; Jensen, Jan L; Edgecombe, Nancy; Clarke, Barry; Burge, Frederick; Archibald, Greg; Taylor, Anthony; Andrew, Melissa K

    2013-01-01

    Background Prior to the implementation of a new model of care in long-term care facilities in the Capital District Health Authority, Halifax, Nova Scotia, residents entering long-term care were responsible for finding their own family physician. As a result, care was provided by many family physicians responsible for a few residents leading to care coordination and continuity challenges. In 2009, Capital District Health Authority (CDHA) implemented a new model of long-term care called “Care by Design” which includes: a dedicated family physician per floor, 24/7 on-call physician coverage, implementation of a standardized geriatric assessment tool, and an interdisciplinary team approach to care. In addition, a new Emergency Health Services program was implemented shortly after, in which specially trained paramedics dedicated to long-term care responses are able to address urgent care needs. These changes were implemented to improve primary and emergency care for vulnerable residents. Here we describe a comprehensive mixed methods research study designed to assess the impact of these programs on care delivery and resident outcomes. The results of this research will be important to guide primary care policy for long-term care. Objective We aim to evaluate the impact of introducing a new model of a dedicated primary care physician and team approach to long-term care facilities in the CDHA using a mixed methods approach. As a mixed methods study, the quantitative and qualitative data findings will inform each other. Quantitatively we will measure a number of indicators of care in CDHA long-term care facilities pre and post-implementation of the new model. In the qualitative phase of the study we will explore the experience under the new model from the perspectives of stakeholders including family doctors, nurses, administration and staff as well as residents and family members. 
The proposed mixed method study seeks to evaluate and make policy recommendations related to primary care in long-term care facilities with a focus on end-of-life care and dementia. Methods This is a mixed methods study with concurrent quantitative and qualitative phases. In the quantitative phase, a retrospective time series study is being conducted. Planned analyses will measure indicators of clinical, system, and health outcomes across three time periods and assess the effect of Care by Design as a whole and its component parts. The qualitative methods explore the experiences of stakeholders (ie, physicians, nurses, paramedics, care assistants, administrators, residents, and family members) through focus groups and in depth individual interviews. Results Data collection will be completed in fall 2013. Conclusions This study will generate a considerable amount of outcome data with applications for care providers, health care systems, and applications for program evaluation and quality improvement. Using the mixed methods design, this study will provide important results for stakeholders, as well as other health systems considering similar programs. In addition, this study will advance methods used to research new multifaceted interdisciplinary health delivery models using multiple and varied data sources and contribute to the discussion on evidence based health policy and program development. PMID:24292200

  17. A new long-term care facilities model in nova scotia, Canada: protocol for a mixed methods study of care by design.

    PubMed

    Marshall, Emily Gard; Boudreau, Michelle Anne; Jensen, Jan L; Edgecombe, Nancy; Clarke, Barry; Burge, Frederick; Archibald, Greg; Taylor, Anthony; Andrew, Melissa K

    2013-11-29

    Prior to the implementation of a new model of care in long-term care facilities in the Capital District Health Authority, Halifax, Nova Scotia, residents entering long-term care were responsible for finding their own family physician. As a result, care was provided by many family physicians responsible for a few residents leading to care coordination and continuity challenges. In 2009, Capital District Health Authority (CDHA) implemented a new model of long-term care called "Care by Design" which includes: a dedicated family physician per floor, 24/7 on-call physician coverage, implementation of a standardized geriatric assessment tool, and an interdisciplinary team approach to care. In addition, a new Emergency Health Services program was implemented shortly after, in which specially trained paramedics dedicated to long-term care responses are able to address urgent care needs. These changes were implemented to improve primary and emergency care for vulnerable residents. Here we describe a comprehensive mixed methods research study designed to assess the impact of these programs on care delivery and resident outcomes. The results of this research will be important to guide primary care policy for long-term care. We aim to evaluate the impact of introducing a new model of a dedicated primary care physician and team approach to long-term care facilities in the CDHA using a mixed methods approach. As a mixed methods study, the quantitative and qualitative data findings will inform each other. Quantitatively we will measure a number of indicators of care in CDHA long-term care facilities pre and post-implementation of the new model. In the qualitative phase of the study we will explore the experience under the new model from the perspectives of stakeholders including family doctors, nurses, administration and staff as well as residents and family members. 
The proposed mixed method study seeks to evaluate and make policy recommendations related to primary care in long-term care facilities with a focus on end-of-life care and dementia. This is a mixed methods study with concurrent quantitative and qualitative phases. In the quantitative phase, a retrospective time series study is being conducted. Planned analyses will measure indicators of clinical, system, and health outcomes across three time periods and assess the effect of Care by Design as a whole and its component parts. The qualitative methods explore the experiences of stakeholders (ie, physicians, nurses, paramedics, care assistants, administrators, residents, and family members) through focus groups and in depth individual interviews. Data collection will be completed in fall 2013. This study will generate a considerable amount of outcome data with applications for care providers, health care systems, and applications for program evaluation and quality improvement. Using the mixed methods design, this study will provide important results for stakeholders, as well as other health systems considering similar programs. In addition, this study will advance methods used to research new multifaceted interdisciplinary health delivery models using multiple and varied data sources and contribute to the discussion on evidence based health policy and program development.

  18. Study on forced convective heat transfer of non-newtonian nanofluids

    NASA Astrophysics Data System (ADS)

    He, Yurong; Men, Yubin; Liu, Xing; Lu, Huilin; Chen, Haisheng; Ding, Yulong

    2009-03-01

    This paper is concerned with the forced convective heat transfer of dilute liquid suspensions of nanoparticles (nanofluids) flowing through a straight pipe under laminar conditions. Stable nanofluids are formulated by using the high shear mixing and ultrasonication methods. They are then characterised for their size, surface charge, thermal and rheological properties and tested for their convective heat transfer behaviour. Mathematical modelling is performed to simulate the convective heat transfer of nanofluids using a single phase flow model and considering nanofluids as both Newtonian and non-Newtonian fluids. Both experiments and mathematical modelling show that nanofluids can substantially enhance the convective heat transfer. Analyses of the results suggest that the non-Newtonian character of nanofluids influences the overall enhancement, especially for nanofluids with an obvious non-Newtonian character.
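
The non-Newtonian rheology discussed above is commonly described with a power-law (Ostwald-de Waele) model. A minimal sketch of that model follows; the consistency index K and flow index n used here are illustrative placeholders, not values reported in the study:

```python
def apparent_viscosity(k_consistency, n_index, shear_rate):
    """Power-law (Ostwald-de Waele) model: mu_app = K * gamma_dot**(n - 1).

    n < 1 gives shear-thinning behaviour (apparent viscosity falls as the
    shear rate rises); n = 1 recovers a Newtonian fluid with mu_app = K.
    """
    return k_consistency * shear_rate ** (n_index - 1.0)

# Newtonian limit: viscosity is independent of shear rate.
mu_newtonian = apparent_viscosity(0.01, 1.0, 500.0)

# Shear-thinning suspension: viscosity drops as the shear rate increases.
mu_low_shear = apparent_viscosity(0.05, 0.8, 10.0)
mu_high_shear = apparent_viscosity(0.05, 0.8, 1000.0)
```

Because wall shear rates in laminar pipe flow are highest at the wall, a shear-thinning suspension has a locally reduced near-wall viscosity, which is one mechanism by which the non-Newtonian character can influence the overall heat-transfer enhancement.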

  19. The response of plasma density to breaking inertial gravity wave in the lower regions of ionosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Wenbo, E-mail: Wenbo.Tang@asu.edu; Mahalov, Alex, E-mail: Alex.Mahalov@asu.edu

    2014-04-15

    We present a three-dimensional numerical study for the E and lower F region ionosphere coupled with the neutral atmosphere dynamics. This model is developed based on a previous ionospheric model that examines the transport patterns of plasma density given a prescribed neutral atmospheric flow. Inclusion of neutral dynamics in the model allows us to examine the charge-neutral interactions over the full evolution cycle of an inertial gravity wave when the background flow spins up from rest, saturates and eventually breaks. Using Lagrangian analyses, we show the mixing patterns of the ionospheric responses and the formation of ionospheric layers. The corresponding plasma density in this flow develops complex wave structures and small-scale patches during the gravity wave breaking event.

  20. Reconstructing the regulatory network controlling commitment and sporulation in Physarum polycephalum based on hierarchical Petri Net modelling and simulation.

    PubMed

    Marwan, Wolfgang; Sujatha, Arumugam; Starostzik, Christine

    2005-10-21

    We reconstruct the regulatory network controlling commitment and sporulation of Physarum polycephalum from experimental results using a hierarchical Petri Net-based modelling and simulation framework. The stochastic Petri Net consistently describes the structure and simulates the dynamics of the molecular network as analysed by genetic, biochemical and physiological experiments within a single coherent model. The Petri Net is then extended to simulate time-resolved somatic complementation experiments performed by mixing the cytoplasms of mutants altered in the sporulation response, to systematically explore the network structure and to probe its dynamics. This reverse engineering approach presumably can be employed to explore other molecular or genetic signalling systems where the activity of genes or their products can be experimentally controlled in a time-resolved manner.

  1. Laboratory-generated mixtures of mineral dust particles with biological substances: characterization of the particle mixing state and immersion freezing behavior

    NASA Astrophysics Data System (ADS)

    Augustin-Bauditz, Stefanie; Wex, Heike; Denjean, Cyrielle; Hartmann, Susan; Schneider, Johannes; Schmidt, Susann; Ebert, Martin; Stratmann, Frank

    2016-05-01

    Biological particles such as bacteria, fungal spores or pollen are known to be efficient ice nucleating particles. Their ability to nucleate ice is due to ice nucleation active macromolecules (INMs). It has been suggested that these INMs maintain their nucleating ability even when they are separated from their original carriers. This opens the possibility of an accumulation of such INMs in soils, resulting in an internal mixture of mineral dust and INMs. If particles from such soils which contain biological INMs are then dispersed into the atmosphere due to wind erosion or agricultural processes, they could induce ice nucleation at temperatures typical for biological substances, i.e., above -20 up to almost 0 °C, while they might be characterized as mineral dust particles due to a possibly low content of biological material. We conducted a study within the research unit INUIT (Ice Nucleation research UnIT), where we investigated the ice nucleation behavior of mineral dust particles internally mixed with INM. Specifically, we mixed a pure mineral dust sample (illite-NX) with ice active biological material (birch pollen washing water) and quantified the immersion freezing behavior of the resulting particles utilizing the Leipzig Aerosol Cloud Interaction Simulator (LACIS). A very important topic concerning the investigations presented here as well as for atmospheric application is the characterization of the mixing state of aerosol particles. In the present study we used different methods like single-particle aerosol mass spectrometry, Scanning Electron Microscopy (SEM), Energy Dispersive X-ray analysis (EDX), and a Volatility-Hygroscopicity Tandem Differential Mobility Analyser (VH-TDMA) to investigate the mixing state of our generated aerosol. Not all applied methods performed similarly well in detecting small amounts of biological material on the mineral dust particles. 
Measuring the hygroscopicity/volatility of the mixed particles with the VH-TDMA was the most sensitive method. We found that internally mixed particles, containing ice active biological material, follow the ice nucleation behavior observed for the pure biological particles. We verified this by modeling the freezing behavior of the mixed particles with the Soccerball model (SBM). It can be concluded that a single INM located on a mineral dust particle determines the freezing behavior of that particle with the result that freezing occurs at temperatures at which pure mineral dust particles are not yet ice active.
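
The conclusion that a single ice-nucleation-active macromolecule on a dust particle controls its freezing is often expressed, in a much simpler framework than the Soccerball model used in the study, through an ice-active surface-site density n_s. The sketch below uses that common singular description purely for illustration (the parameter values are invented, not from the study):

```python
import math

def frozen_fraction(n_s, surface_area):
    """Singular description of immersion freezing:

        f_ice(T) = 1 - exp(-n_s(T) * A)

    where n_s(T) is the temperature-dependent ice-active surface-site
    density (m^-2) and A is the particle surface area (m^2).
    """
    return 1.0 - math.exp(-n_s * surface_area)

# Once n_s * A reaches order one, the particle freezes with high
# probability - a single potent biological site can therefore dominate
# the freezing behaviour of a mostly mineral particle.
f = frozen_fraction(1.0e12, 3.0e-12)  # illustrative n_s and A
```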

  2. Biomass viability: An experimental study and the development of an empirical mathematical model for submerged membrane bioreactor.

    PubMed

    Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X

    2015-08-01

    This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. On the Multilevel Nature of Meta-Analysis: A Tutorial, Comparison of Software Programs, and Discussion of Analytic Choices.

    PubMed

    Pastor, Dena A; Lazowski, Rory A

    2018-01-01

    The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
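
The equivalence across programs noted above rests on all of them fitting the same underlying random-effects model. As a language-neutral illustration (Python rather than the R, SPSS, SAS, or HLM tools named in the abstract), the classic DerSimonian-Laird random-effects estimate can be sketched as:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis via the DerSimonian-Laird estimator.

    effects:   per-study effect sizes
    variances: per-study sampling variances
    Returns (pooled_effect, standard_error, tau_squared).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    mu_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, effects))  # Q statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    mu_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return mu_re, se, tau2
```

In the multilevel formulation the same tau-squared appears as the level-2 (between-study) variance component, which is why meta-analytic and multilevel software recover identical parameter estimates; programs differ mainly in which estimators (e.g., REML versus method-of-moments as above) they offer.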

  4. Does case-mix based reimbursement stimulate the development of process-oriented care delivery?

    PubMed

    Vos, Leti; Dückers, Michel L A; Wagner, Cordula; van Merode, Godefridus G

    2010-11-01

    Reimbursement based on the total care of a patient during an acute episode of illness is believed to stimulate management and clinicians to reduce quality problems like waiting times and poor coordination of care delivery. Although many studies already show that this kind of case-mix based reimbursement leads to more efficiency, it remains unclear whether care coordination improved as well. This study aims to explore whether case-mix based reimbursement stimulates development of care coordination by the use of care programmes, and a process-oriented way of working. Data for this study were gathered during the winter of 2007/2008 in a survey involving all Dutch hospitals. Descriptive and structural equation modelling (SEM) analyses were conducted. SEM reveals that adoption of the case-mix reimbursement within hospitals' budgeting processes stimulates hospitals to establish care programmes by the use of process-oriented performance measures. However, the implementation of care programmes is not (yet) accompanied by a change in focus from function (the delivery of independent care activities) to process (the delivery of care activities as being connected to a chain of interdependent care activities). This study demonstrates that hospital management can stimulate the development of care programmes by the adoption of case-mix reimbursement within hospitals' budgeting processes. Future research is recommended to confirm this finding and to determine whether the establishment of care programmes will in time indeed lead to a more process-oriented view of professionals. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Assumptions about footprint layer heights influence the quantification of emission sources: a case study for Cyprus

    NASA Astrophysics Data System (ADS)

    Hüser, Imke; Harder, Hartwig; Heil, Angelika; Kaiser, Johannes W.

    2017-09-01

    Lagrangian particle dispersion models (LPDMs) in backward mode are widely used to quantify the impact of transboundary pollution on downwind sites. Most LPDM applications count particles with a technique that introduces a so-called footprint layer (FL) with constant height, in which passing air tracer particles are assumed to be affected by surface emissions. The mixing layer dynamics are represented by the underlying meteorological model. This particle counting technique implicitly assumes that the atmosphere is well mixed in the FL. We have performed backward trajectory simulations with the FLEXPART model starting at Cyprus to calculate the sensitivity to emissions of upwind pollution sources. The emission sensitivity is used to quantify source contributions at the receptor and support the interpretation of ground measurements carried out during the CYPHEX campaign in July 2014. Here we analyse the effects of different constant and dynamic FL height assumptions. The results show that calculations with FL heights of 100 and 300 m yield similar but still discernible results. Comparison of calculations with FL heights constant at 300 m and dynamically following the planetary boundary layer (PBL) height exhibits systematic differences, with daytime and night-time sensitivity differences compensating for each other. The differences at daytime when a well-mixed PBL can be assumed indicate that residual inaccuracies in the representation of the mixing layer dynamics in the trajectories may introduce errors in the impact assessment on downwind sites. Emissions from vegetation fires are mixed up by pyrogenic convection which is not represented in FLEXPART. Neglecting this convection may lead to severe over- or underestimations of the downwind smoke concentrations. 
Introducing an extreme fire source from a different year in our study period and using fire-observation-based plume heights as reference, we find an overestimation of more than 60 % by the constant FL height assumptions used for surface emissions. Assuming a FL that follows the PBL may reproduce the peak of the smoke plume passing through but erroneously elevates the background for shallow stable PBL heights. It might thus be a reasonable assumption for open biomass burning emissions wherever observation-based injection heights are not available.
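
The particle-counting technique described above can be illustrated with a toy sketch. This is not the actual FLEXPART kernel; the function and variable names are hypothetical, and the heights are invented:

```python
def footprint_fractions(particle_heights, fl_height):
    """Fraction of backward-trajectory particles inside the footprint layer
    (FL) at each timestep; only these particles are assumed to be affected
    by surface emissions.

    particle_heights: per-timestep lists of particle heights above ground (m)
    fl_height:        a constant height (m), or a per-timestep sequence of
                      heights for a dynamic, PBL-following FL
    """
    fractions = []
    for t, heights in enumerate(particle_heights):
        top = fl_height[t] if isinstance(fl_height, (list, tuple)) else fl_height
        fractions.append(sum(1 for z in heights if z <= top) / len(heights))
    return fractions

heights = [[50.0, 150.0, 400.0], [80.0, 250.0, 900.0]]
constant_fl = footprint_fractions(heights, 100.0)            # shallow constant FL
dynamic_fl = footprint_fractions(heights, [300.0, 1000.0])   # PBL-following FL
```

The sketch makes the paper's point concrete: the same particle cloud yields different emission sensitivities depending on whether the FL is a shallow constant layer or tracks the evolving PBL height.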

  6. Impact of case-mix on comparisons of patient-reported experience in NHS acute hospital trusts in England.

    PubMed

    Raleigh, Veena; Sizmur, Steve; Tian, Yang; Thompson, James

    2015-04-01

    To examine the impact of patient-mix on National Health Service (NHS) acute hospital trust scores in two national NHS patient surveys. Secondary analysis of 2012 patient survey data for 57,915 adult inpatients at 142 NHS acute hospital trusts and 45,263 adult emergency department attendees at 146 NHS acute hospital trusts in England. Changes in trust scores for selected questions, ranks, inter-trust variance and score-based performance bands were examined using three methods: no adjustment for case-mix; the current standardization method with weighting for age, sex and, for inpatients only, admission method; and a regression model adjusting in addition for ethnicity, presence of a long-term condition, proxy response (inpatients only) and previous emergency attendances (emergency department survey only). For both surveys, all the variables examined were associated with patients' responses and affected inter-trust variance in scores, although the direction and strength of impact differed between variables. Inter-trust variance was generally greatest for the unadjusted scores and lowest for scores derived from the full regression model. Although trust scores derived from the three methods were highly correlated (Kendall's tau coefficients 0.70-0.94), up to 14% of trusts had discordant ranks when the standardization and regression methods were compared. Depending on the survey and question, up to 14 trusts changed performance bands when the regression model with its fuller case-mix adjustment was used rather than the current standardization method. More comprehensive case-mix adjustment of patient survey data than the current limited adjustment reduces performance variation between NHS acute hospital trusts and alters the comparative performance bands of some trusts.
Given the use of these data for high-impact purposes such as performance assessment, regulation, commissioning, quality improvement and patient choice, a review of the long-standing method for analysing patient survey data would be timely, and could improve rigour and comparability across the NHS. Performance comparisons need to be perceived as fair and scientifically robust to maintain confidence in publicly reported data, and to support their use by both the public and the NHS. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  7. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

    PubMed

    Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

    2017-09-25

    It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multi-domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18-, and 36-month. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and also the predictive accuracy was evaluated based on the root of mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor (RMSE and AB: 2.89 and 2.20) compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, the individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate general mixed models hold promise to predict clinical progression of individual outcomes in PD. The data was obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of NINDS PD Biomarker Program (PDBP). All data was entered within 24 h of collection to the Data Management Repository (DMR), which is publicly available (https://pdbp.ninds.nih.gov/data-management).
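
The accuracy metrics quoted above are straightforward to compute. A short sketch follows, reading "absolute bias" as the mean absolute deviation of predictions from observations (one common definition; the study may define it differently):

```python
import math

def rmse(predicted, observed):
    """Root mean square error between predictions and observations."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

def absolute_bias(predicted, observed):
    """Mean absolute deviation of predictions from observations."""
    n = len(observed)
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / n
```

RMSE penalises large individual errors more heavily than AB does, so reporting both (as the abstract does) distinguishes a model with a few large misses from one with many small ones.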

  8. Contributions of the atmosphere-land and ocean-sea ice model components to the tropical Atlantic SST bias in CESM1

    NASA Astrophysics Data System (ADS)

    Song, Z.; Lee, S. K.; Wang, C.; Kirtman, B. P.; Qiao, F.

    2016-02-01

    In order to identify and quantify intrinsic errors in the atmosphere-land and ocean-sea ice model components of the Community Earth System Model version 1 (CESM1) and their contributions to the tropical Atlantic sea surface temperature (SST) bias in CESM1, we propose a new method of diagnosis and apply it to a set of CESM1 simulations. Our analyses of the model simulations indicate that both the atmosphere-land and ocean-sea ice model components of CESM1 contain large errors in the tropical Atlantic. When the two model components are fully coupled, the intrinsic errors in the two components emerge quickly within a year with strong seasonality in their growth rates. In particular, the ocean-sea ice model contributes significantly in forcing the eastern equatorial Atlantic warm SST bias in early boreal summer. Further analysis shows that the upper thermocline water underneath the eastern equatorial Atlantic surface mixed layer is too warm in a stand-alone ocean-sea ice simulation of CESM1 forced with observed surface flux fields, suggesting that the mixed layer cooling associated with the entrainment of upper thermocline water is too weak in early boreal summer. Therefore, although we acknowledge the potential importance of the westerly wind bias in the western equatorial Atlantic and the low-level stratus cloud bias in the southeastern tropical Atlantic, both of which originate from the atmosphere-land model, we emphasize here that solving those problems in the atmosphere-land model alone does not resolve the equatorial Atlantic warm bias in CESM1.

  9. Acceptability and Feasibility of a Shared Decision-Making Model in Work Rehabilitation: A Mixed-Methods Study of Stakeholders' Perspectives.

    PubMed

    Coutu, Marie-France; Légaré, France; Durand, Marie-José; Stacey, Dawn; Labrecque, Marie-Elise; Corbière, Marc; Bainbridge, Lesley

    2018-04-16

    Purpose To establish the acceptability and feasibility of implementing a shared decision-making (SDM) model in work rehabilitation. Methods We used a sequential mixed-methods design with diverse stakeholder groups (representatives of private and public employers, insurers, and unions, as well as workers having participated in a work rehabilitation program). First, a survey using a self-administered questionnaire enabled stakeholders to rate their level of agreement with the model's acceptability and feasibility and propose modifications, if necessary. Second, eight focus groups representing key stakeholders (n = 34) and four one-on-one interviews with workers were conducted, based on the questionnaire results. For each stakeholder group, we computed the percentage of agreement with the model's acceptability and feasibility and performed thematic analyses of the transcripts. Results Less than 50% of each stakeholder group initially agreed with the overall acceptability and feasibility of the model. Stakeholders proposed 37 modifications to the objectives, 17 to the activities, and 39 to improve the model's feasibility. Based on in-depth analysis of the transcripts, indicators were added to one objective, an interview guide was added as proposed by insurers to ensure compliance of the SDM process with insurance contract requirements, and one objective was reformulated. Conclusion Despite initially low agreement with the model's acceptability on the survey, subsequent discussions led to three minor changes and contributed to the model's ultimate acceptability and feasibility. Later steps will involve assessing the extent of implementation of the model in real rehabilitation settings to see if other modifications are necessary before assessing its impact.

  10. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using the Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  11. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.

  12. Isotopic signatures of vegetation change on northern mixed grass prairie

USDA-ARS's Scientific Manuscript database

    National analyses have shown invasion of northern mixed-grass prairie by nonnative grasses such as Kentucky bluegrass (Poa pratensis L.). Invasion of native prairie by nonnative grasses may compromise ecosystem function and limit potential ecosystem services. Recent data from a long-term (100 year) ...

  13. Investigating Learning with an Interactive Tutorial: A Mixed-Methods Strategy

    ERIC Educational Resources Information Center

    de Villiers, M. R.; Becker, Daphne

    2017-01-01

    From the perspective of parallel mixed-methods research, this paper describes interactivity research that employed usability-testing technology to analyse cognitive learning processes; personal learning styles and times; and errors-and-recovery of learners using an interactive e-learning tutorial called "Relations." "Relations"…

  14. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
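For readers unfamiliar with the IEM family of models, a minimal particle sketch (parameter values invented; this is not the joint velocity-concentration PDF method used in the paper) shows the defining behaviour: each particle's scalar relaxes toward the ensemble mean, so the scalar variance decays exponentially while the mean is preserved:

```python
import numpy as np

# Minimal particle sketch of the IEM mixing model (illustrative parameters;
# not the paper's joint velocity-concentration PDF method). Each particle's
# scalar relaxes toward the ensemble mean,
#   d(phi)/dt = -0.5 * c_phi * omega * (phi - <phi>),
# so the scalar variance decays as exp(-c_phi * omega * t) while the mean
# is preserved.
rng = np.random.default_rng(0)
n, c_phi, omega, dt, nsteps = 10_000, 2.0, 1.0, 0.01, 200
phi = rng.choice([0.0, 1.0], size=n)   # bimodal (unmixed) initial scalar

var0 = phi.var()
for _ in range(nsteps):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

decay = phi.var() / var0               # ~= exp(-c_phi * omega * t)
expected = np.exp(-c_phi * omega * nsteps * dt)
```

A known limitation visible here is that IEM shifts all particles toward the mean without ever merging the two modes into a realistic PDF shape, which is one motivation for velocity-conditioned (IECM) and particle-interaction variants.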

  15. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
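A hedged sketch of the "decorrelate, then regress" idea underlying GRAMMAR (all names and variance components below are invented for illustration; the actual method estimates a kinship-based covariance first): simulate a random-intercept model and apply the quasi-demeaning transform so ordinary least squares on the transformed data recovers the fixed effect:

```python
import numpy as np

# Illustrative random-intercept simulation, not the workshop data.
# Model: y = b0 + b1*x + u_i + e_ij, with a shared intercept u_i per subject.
rng = np.random.default_rng(1)
n_id, n_rep = 200, 5
sig_u, sig_e, beta = 1.0, 0.5, 0.8

ids = np.repeat(np.arange(n_id), n_rep)
x = rng.normal(size=n_id * n_rep)
u = rng.normal(scale=sig_u, size=n_id)[ids]    # per-subject random intercept
y = 2.0 + beta * x + u + rng.normal(scale=sig_e, size=n_id * n_rep)

# Quasi-demeaning factor: lambda = 1 - sqrt(sig_e^2 / (sig_e^2 + n_rep*sig_u^2))
lam = 1.0 - np.sqrt(sig_e**2 / (sig_e**2 + n_rep * sig_u**2))
ybar = np.bincount(ids, weights=y) / n_rep     # subject means
xbar = np.bincount(ids, weights=x) / n_rep
y_star = y - lam * ybar[ids]                   # decorrelated outcome
x_star = x - lam * xbar[ids]

# Plain OLS on the decorrelated data recovers the fixed effect b1
X = np.column_stack([np.ones_like(x_star), x_star])
beta_hat = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
```

Here the variance components are assumed known; in practice (as in GRAMMAR) they are estimated once from a null mixed model before the per-variant regressions.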

  16. MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation

    EPA Science Inventory

    Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...
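The core mass-balance idea behind isotope mixing models such as MixSIAR can be shown with the deterministic two-source case (the delta values below are made up; MixSIAR generalizes this to many sources, tracers, and full Bayesian uncertainty):

```python
# Deterministic two-source isotope mixing by mass balance. The delta values
# are invented for illustration; MixSIAR fits the Bayesian generalization
# with many sources, fractionation terms, and uncertainty.
d_mix, d_a, d_b = -24.0, -28.0, -12.0  # consumer and two source d13C values

# d_mix = p_a*d_a + (1 - p_a)*d_b  =>  p_a = (d_mix - d_b) / (d_a - d_b)
p_a = (d_mix - d_b) / (d_a - d_b)
p_b = 1.0 - p_a
```

With more sources than tracers the system is underdetermined, which is precisely why the Bayesian formulation (posterior distributions over source proportions) is needed.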

  17. Ionic conductivity and mixed-ion effect in mixed alkali metaphosphate glasses.

    PubMed

    Tsuchida, Jefferson Esquina; Ferri, Fabio Aparecido; Pizani, Paulo Sergio; Martins Rodrigues, Ana Candida; Kundu, Swarup; Schneider, José Fabián; Zanotto, Edgar Dutra

    2017-03-01

In this work, mixed alkali metaphosphate glasses based on K-Na, Rb-Na, Rb-Li, Cs-Na and Cs-Li combinations were studied by differential scanning calorimetry (DSC), complex impedance spectroscopy, and Raman spectroscopy. DSC analyses show that both the glass transition (Tg) and melting (Tm) temperatures exhibit a clear mixed-ion effect. The ionic conductivity shows a strong mixed-ion effect and decreases by more than six orders of magnitude at room temperature for Rb-Na or Cs-Li alkali pairs. This study confirms that the mixed-ion effect may be explained as a natural consequence of random ion mixing because ion transport is favoured between well-matched energy sites and is impeded due to the structural mismatch between neighbouring sites for dissimilar ions.

  18. Assessing vaccination as a control strategy in an ongoing epidemic: Bovine tuberculosis in African buffalo

    USGS Publications Warehouse

    Cross, Paul C.; Getz, W.M.

    2006-01-01

    Bovine tuberculosis (BTB) is an exotic disease invading the buffalo population (Syncerus caffer) of the Kruger National Park (KNP), South Africa. We used a sex and age-structured epidemiological model to assess the effectiveness of a vaccination program and define important research directions. The model allows for dispersal between a focal herd and background population and was parameterized with a combination of published data and analyses of over 130 radio-collared buffalo in the central region of the KNP. Radio-tracking data indicated that all sex and age categories move between mixed herds, and males over 8 years old had higher mortality and dispersal rates than any other sex or age category. In part due to the high dispersal rates of buffalo, sensitivity analyses indicate that disease prevalence in the background population accounts for the most variability in the BTB prevalence and quasi-eradication within the focal herd. Vaccination rate and the transmission coefficient were the second and third most important parameters of the sensitivity analyses. Further analyses of the model without dispersal suggest that the amount of vaccination necessary for quasi-eradication (i.e. prevalence < 5%) depends upon the duration that a vaccine grants protection. Vaccination programs are more efficient (i.e. fewer wasted doses) when they focus on younger individuals. However, even with a lifelong vaccine and a closed population, the model suggests that >70% of the calf population would have to be vaccinated every year to reduce the prevalence to less than 1%. If the half-life of the vaccine is less than 5 years, even vaccinating every calf for 50 years may not eradicate BTB. Thus, although vaccination provides a means of controlling BTB prevalence it should be combined with other control measures if eradication is the objective.
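A toy compartmental sketch (all parameters invented, and much simpler than the paper's sex- and age-structured model with dispersal) illustrates the qualitative point that vaccinating a fraction of new recruits lowers endemic prevalence, and that sufficient coverage drives it toward zero:

```python
# Toy susceptible-infected model with vaccination of new recruits.
# beta: transmission rate, mu: turnover (birth = death) rate, v: fraction
# of recruits vaccinated. Parameters are illustrative, not from the paper.
def endemic_prevalence(v, beta=0.9, mu=0.1, years=2000):
    s, i = 0.99, 0.01                 # susceptible / infected fractions
    for _ in range(years):
        new_inf = beta * s * i        # new infections this year
        s += mu * (1 - v) - new_inf - mu * s
        i += new_inf - mu * i
    return i

prev_none = endemic_prevalence(0.0)   # no vaccination: high endemic prevalence
prev_high = endemic_prevalence(0.9)   # coverage pushes R_eff below 1
```

In this toy model the infection persists only while the effective reproduction number beta*(1 - v)/mu exceeds 1; the paper's central complication is that dispersal from an infected background population keeps reseeding the focal herd, which this closed sketch omits.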

  19. Hot HB Stars in Globular Clusters: Physical Parameters and Consequences for Theory. 5; Radiative Levitation Versus Helium Mixing

    NASA Technical Reports Server (NTRS)

    Moehler, S.; Sweigart, A. V.; Landsman, W. B.; Heber, U.

    2000-01-01

Atmospheric parameters (T(sub eff), log g), masses and helium abundances are derived for 42 hot horizontal branch (HB) stars in the globular cluster NGC6752. For 19 stars we derive magnesium and iron abundances as well and find that iron is enriched by a factor of 50 on average with respect to the cluster abundance, whereas the magnesium abundances are consistent with the cluster abundance. Radiation pressure may levitate heavy elements like iron to the surface of the star in a diffusive process. Taking into account the enrichment of heavy elements in our spectroscopic analyses, we find that high iron abundances can explain part, but not all, of the problem of anomalously low gravities along the blue HB. The blue HB stars cooler than about 15,100 K and the sdB stars (T(sub eff) greater than or equal to 20,000 K) agree well with canonical theory when analysed with metal-rich ([M/H] = +0.5) model atmospheres, but the stars in between these two groups remain offset towards lower gravities and masses. Deep mixing in the red giant progenitor phase is discussed as another mechanism that may influence the position of the blue HB stars in the (T(sub eff), log g)-plane but not their masses.

  20. Phenotypic and Genetic Divergence among Poison Frog Populations in a Mimetic Radiation

    PubMed Central

    Twomey, Evan; Yeager, Justin; Brown, Jason Lee; Morales, Victor; Cummings, Molly; Summers, Kyle

    2013-01-01

The evolution of Müllerian mimicry is, paradoxically, associated with high levels of diversity in color and pattern. In a mimetic radiation, different populations of a species evolve to resemble different models, which can lead to speciation. Yet there are circumstances under which initial selection for divergence under mimicry may be reversed. Here we provide evidence for the evolution of extensive phenotypic divergence in a mimetic radiation in Ranitomeya imitator, the mimic poison frog, in Peru. Analyses of color hue (spectral reflectance) and pattern reveal substantial divergence between morphs. However, we also report that there is a “transition-zone” with mixed phenotypes. Analyses of genetic structure using microsatellite variation reveal some differentiation between populations, but this does not strictly correspond to color pattern divergence. Analyses of gene flow between populations suggest that, while historical levels of gene flow were low, recent levels are high in some cases, including substantial gene flow between some color pattern morphs. We discuss possible explanations for these observations. PMID:23405150
