Sample records for mixed model based

  1. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    NASA Astrophysics Data System (ADS)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the bottom blowing is simulated with a Lagrange-based discrete phase model that embeds the local volume change of rising bubbles. A filter-based turbulence method, keyed to the local mesh resolution, is proposed to improve the modeling of turbulent eddy viscosities. The model's validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the reasons for the observed mixing behavior are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and changes in the bath flow structure.

  2. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

    This report documents the initial progress on reduced-order flow model development in SAM for modeling thermal stratification and mixing. Two different modeling approaches are pursued. The first is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulation and turbulent mixing. The second is based on a three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.

  3. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model, together with a set of input-output relations, uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another widely used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare, and contrast the three methods, they are applied to a set of mixed-effects models. As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite the wide use of such models, the three methods presented in this paper provide a previously unavailable means of handling structural identifiability in mixed-effects models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO), where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete and the continuous domains. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, which comprises both discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows practical use of the algorithm without the need to explicitly specify any parameters, and we contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite, on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.

  5. An improved NSGA-II algorithm for mixed model assembly line balancing

    NASA Astrophysics Data System (ADS)

    Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong

    2018-05-01

    Aiming at the problems of assembly line balancing and path optimization for material vehicles in a mixed-model manufacturing system, a multi-objective optimization model of the mixed model assembly line (MMAL), based on the optimization objectives, influencing factors, and constraints, is established. For this situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. The algorithm adopts an environment self-detecting operator to detect whether the environment has changed. Finally, the effectiveness of the proposed model and algorithm is verified by examples in a concrete mixing system.

  6. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    Using branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation at Mengjiagang Forest Farm in Heilongjiang Province, Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering the tree effect, the MIXED module of the SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Then, correlation structures including the compound symmetry structure (CS), the first-order autoregressive structure [AR(1)], and the first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. AR(1) significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To account for heteroscedasticity when building the mixed-effect models, the CF1 and CF2 functions were added to the branch mixed-effect models. The CF1 function significantly improved the fitting of the branch angle mixed model, whereas the CF2 function significantly improved the fitting of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect models improve prediction precision, as compared to traditional regression models, for branch size prediction in Pinus koraiensis plantations.
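    The variance components underlying such a linear mixed-effect model can be illustrated with a minimal one-way ANOVA (method-of-moments) sketch. This is a generic illustration with assumed parameter values, not the SAS MIXED fit or the branch data used in the paper:

```python
import random
import statistics

random.seed(42)

# Simulate grouped data resembling branches nested in trees:
# y_ij = beta + u_i + e_ij, with tree-level random effect
# u_i ~ N(0, su^2) and residual e_ij ~ N(0, se^2). All values assumed.
beta, su, se = 10.0, 2.0, 1.0
k, n = 200, 10  # number of trees (groups), branches per tree
data = []
for _ in range(k):
    u = random.gauss(0.0, su)
    data.append([beta + u + random.gauss(0.0, se) for _ in range(n)])

group_means = [statistics.fmean(g) for g in data]
grand_mean = statistics.fmean(group_means)

# One-way ANOVA mean squares.
ms_within = sum((y - gm) ** 2 for g, gm in zip(data, group_means)
                for y in g) / (k * (n - 1))
ms_between = n * sum((gm - grand_mean) ** 2 for gm in group_means) / (k - 1)

# Method-of-moments estimators of the variance components:
# E[MS_within] = se^2, E[MS_between] = se^2 + n * su^2.
se2_hat = ms_within
su2_hat = (ms_between - ms_within) / n
print(se2_hat, su2_hat)
```

    In practice one would fit such a model by (restricted) maximum likelihood, as SAS MIXED does, but the moment estimators make the roles of the within-group and between-group variability explicit.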

  7. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  8. Prediction of stock markets by the evolutionary mix-game model

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

    This paper presents our efforts to use the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We improve the original mix-game model in three ways by adding strategy-evolution abilities to agents, and then apply the new model, referred to as the evolutionary mix-game model, to forecasting the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve the accuracy of prediction when proper parameters are chosen.

  9. VISUALIZATION-BASED ANALYSIS FOR A MIXED-INHIBITION BINARY PBPK MODEL: DETERMINATION OF INHIBITION MECHANISM

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of metabolic interactions occurring during simultaneous exposures to the organic solvents chloroform and trichloroethylene (TCE). Visualization-based se...

  10. Correlations and risk contagion between mixed assets and mixed-asset portfolio VaR measurements in a dynamic view: An application based on time varying copula models

    NASA Astrophysics Data System (ADS)

    Han, Yingying; Gong, Pu; Zhou, Xiang

    2016-02-01

    In this paper, we first apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate, and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time-varying copulas; this dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimations fit much better than the static models, both for the correlations and risk contagion based on time-varying copulas and for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than the VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification, and that risk contagion and flight to quality occur between mixed assets in extreme cases; however, if mixed-asset portfolio strategies are adapted as time and the environment vary, portfolio risk can be reduced.
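    The copula-based VaR idea can be sketched in miniature: simulate a two-asset portfolio whose dependence is a Gaussian copula and read off the loss quantile. All parameter values (correlation, margins, weights) are hypothetical, and with normal margins the Monte Carlo VaR can be checked against the closed form:

```python
import math
import random
from statistics import NormalDist

random.seed(7)
N = 100_000
rho = 0.6                 # assumed stock/gold return correlation
m1, s1 = 0.0005, 0.02     # hypothetical daily return margins (normal)
m2, s2 = 0.0002, 0.01
w1, w2 = 0.5, 0.5         # equally weighted portfolio
alpha = 0.99

nd = NormalDist()
nd1, nd2 = NormalDist(m1, s1), NormalDist(m2, s2)

losses = []
for _ in range(N):
    # Correlated standard normals via a 2x2 Cholesky factor.
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
    # Gaussian copula: normals -> uniforms -> marginal quantiles.
    r1 = nd1.inv_cdf(nd.cdf(z1))
    r2 = nd2.inv_cdf(nd.cdf(z2))
    losses.append(-(w1 * r1 + w2 * r2))

losses.sort()
var_mc = losses[int(alpha * N)]  # empirical 99% loss quantile

# With normal margins the portfolio return is itself normal,
# so the Monte Carlo VaR can be verified analytically.
sp = math.sqrt((w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * rho * w1 * w2 * s1 * s2)
var_exact = -(w1 * m1 + w2 * m2) + sp * nd.inv_cdf(alpha)
print(var_mc, var_exact)
```

    A time-varying copula, as in the paper, would replace the constant rho with a value re-estimated at each date; an SJC copula would additionally capture asymmetric tail dependence, which the Gaussian copula cannot.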

  11. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...
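    For one isotope and two food sources, the mass-balance mixing model mentioned above reduces to a single linear equation. A minimal sketch with hypothetical δ13C signatures:

```python
def two_source_mixing(d_mix, d_a, d_b):
    """Mass-balance mixing model for one isotope and two sources.

    Solves f * d_a + (1 - f) * d_b = d_mix for f, the dietary
    fraction of source A.
    """
    if d_a == d_b:
        raise ValueError("source signatures must differ")
    return (d_mix - d_b) / (d_a - d_b)

# Hypothetical delta-13C values (per mil) for two food sources
# and the consumer's tissue.
f_a = two_source_mixing(d_mix=-20.0, d_a=-25.0, d_b=-12.0)
print(round(f_a, 3))
```

    With more sources than isotope systems the problem becomes underdetermined, which is why multi-source mixing models report ranges of feasible diet compositions rather than unique solutions.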

  12. Joint physical and numerical modeling of water distribution networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.

    2009-01-01

    This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network, conducted during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume-based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on a flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. A preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.
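    The effect of a storage tank on solute transport can be sketched with the standard complete-mix (CSTR) tank model, dC/dt = (Q/V)(C_in − C), which is the usual fully mixed tank assumption in network solvers such as EPANET. The parameter values here are hypothetical, not the pilot-network values from the report:

```python
import math

# Hypothetical complete-mix storage tank during a fill cycle.
Q = 0.5      # inflow rate, m^3/min
V = 100.0    # tank volume, m^3
C_in = 1.0   # inlet solute concentration, mg/L
C0 = 0.0     # initial tank concentration, mg/L

def simulate(t_end, dt=0.01):
    """Forward-Euler integration of dC/dt = (Q/V) * (C_in - C)."""
    c, t = C0, 0.0
    while t < t_end - 1e-12:
        c += dt * (Q / V) * (C_in - c)
        t += dt
    return c

t_end = 240.0  # minutes
c_num = simulate(t_end)

# The linear ODE has an exact exponential solution to check against.
c_exact = C_in + (C0 - C_in) * math.exp(-(Q / V) * t_end)
print(c_num, c_exact)
```

    The exponential approach to the inlet concentration is what smooths and delays solute pulses passing through a tank, which is why the diurnal fill-and-drain cycles studied in the report matter for network water quality.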

  13. VISUALIZATION-BASED ANALYSIS FOR A MIXED-INHIBITION BINARY PBPK MODEL: DETERMINATION OF INHIBITION MECHANISM

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of the metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE).

    V...

  14. ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wohltmann, I.; Rex, M.; Lehmann, R.

    2009-04-01

    We present ATLAS, a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing. Lagrangian models have some crucial advantages over Eulerian grid-box-based models: no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, or hybrid coordinate), and the time step of the model are flexible, so that the model can be used both for process studies and for long runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow, with a method similar to that used in the CLaMS model but with some modifications, such as a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species, 170 reactions, and a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept, and some validation results. Comparison of model results with tracer data from flights of the ER-2 aircraft in the stratospheric polar vortex in 1999/2000, which are able to resolve fine tracer filaments, shows that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.

  15. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    NASA Astrophysics Data System (ADS)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimentally observed hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data and the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and are consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.

  16. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorses in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the resulting estimates of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
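    The notion of an effective numerical diffusivity can be demonstrated at a much smaller scale than an ocean model. For first-order upwind advection, the modified equation predicts a numerical diffusivity κ_num = ½·u·Δx·(1 − CFL), which can be recovered from the growth of a tracer's spatial variance. This is a generic illustration of advection-scheme mixing, not the watermass analysis of Lee et al. (2002) used in the paper:

```python
import math

# First-order upwind advection of a Gaussian tracer on a periodic grid.
# The modified equation predicts kappa_num = 0.5 * u * dx * (1 - cfl);
# here we recover that value from the tracer variance growth rate.
nx, dx = 2000, 0.01
u, cfl = 1.0, 0.5
dt = cfl * dx / u

x = [i * dx for i in range(nx)]
sigma0, x0 = 0.05, 5.0
phi = [math.exp(-((xi - x0) ** 2) / (2 * sigma0 ** 2)) for xi in x]

def variance(field):
    """Tracer-weighted spatial variance (second central moment)."""
    m0 = sum(field)
    m1 = sum(p * xi for p, xi in zip(field, x))
    m2 = sum(p * xi * xi for p, xi in zip(field, x))
    mean = m1 / m0
    return m2 / m0 - mean * mean

v_start = variance(phi)
nsteps = 1000
for _ in range(nsteps):
    # Upwind update; phi[i - 1] wraps periodically at i = 0.
    phi = [phi[i] - cfl * (phi[i] - phi[i - 1]) for i in range(nx)]
v_end = variance(phi)

# For pure diffusion, d(variance)/dt = 2 * kappa.
kappa_est = (v_end - v_start) / (2 * nsteps * dt)
kappa_theory = 0.5 * u * dx * (1 - cfl)
print(kappa_est, kappa_theory)
```

    The same logic underlies watermass-based estimates in ocean models: the spreading of the tracer (there, across isopycnals) in excess of the explicit diffusivity is attributed to the advection scheme.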

  17. Advancing Physically-Based Flow Simulations of Alluvial Systems Through Atmospheric Noble Gases and the Novel 37Ar Tracer Method

    NASA Astrophysics Data System (ADS)

    Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip

    2017-12-01

    To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking-water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar made it possible to close a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of groundwater residence times in alluvial systems revealed that atmospheric noble gases and helium isotopes are tracers well suited for end-member mixing analysis, which quantifies the fractions of water from different sources in such systems. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully integrated, physically-based flow model showed that models calibrated only against hydraulic heads cannot reliably reproduce the mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend combining multitracer studies that cover all relevant residence times with fully coupled, physically-based flow modeling to better characterize the complex interactions of river-aquifer systems.

  18. Multi-gas interaction modeling on decorated semiconductor interfaces: A novel Fermi distribution-based response isotherm and the inverse hard/soft acid/base concept

    NASA Astrophysics Data System (ADS)

    Laminack, William; Gole, James

    2015-12-01

    A unique MEMS/NEMS approach is presented for modeling a detection platform for mixed-gas interactions. Mixed gas analytes interact with nanostructured decorating metal oxide island sites supported on a microporous silicon substrate. The Inverse Hard/Soft Acid/Base (IHSAB) concept is used to assess a diversity of conductometric responses for mixed-gas interactions as a function of these nanostructured metal oxides. The analyte conductometric responses are well represented using a combined diffusion/absorption-based model for multi-gas interactions, in which a newly developed response absorption isotherm based on the Fermi distribution function is applied. A further coupling of this model with the IHSAB concept describes the considerations in modeling multi-gas analyte-interface and analyte-analyte interactions. Taking into account the molecular electronic interactions of the analytes with each other and with an extrinsic semiconductor interface, we demonstrate how the presence of one gas can enhance or diminish the reversible interaction of a second gas with the interface. These concepts demonstrate important considerations for array-based formats for multi-gas sensing and its applications.

  19. Investigation of Compressibility Effect for Aeropropulsive Shear Flows

    NASA Technical Reports Server (NTRS)

    Balasubramanyam, M. S.; Chen, C. P.

    2005-01-01

    Rocket-Based Combined Cycle (RBCC) engines operate within a wide range of Mach numbers and altitudes. The fundamental fluid dynamic mechanisms involve complex choking, mass entrainment, stream mixing, and wall interactions. The Propulsion Research Center at the University of Alabama in Huntsville is involved in an ongoing experimental and numerical modeling study of non-axisymmetric ejector-based combined cycle propulsion systems. This paper addresses the modeling issues related to mixing and shear layer/wall interaction in a supersonic Strutjet/ejector flow field. Reynolds-Averaged Navier-Stokes (RANS) solutions incorporating turbulence models are obtained and compared to experimental measurements to characterize the detailed flow dynamics. The effect of compressibility on fluid mixing and wall interactions was investigated using an existing CFD methodology. Based on 2-D simulation results, a compressibility correction to conventional incompressible two-equation models is found to be necessary for the supersonic mixing aspect of the ejector flows. 3-D strut-base flows involving flow separation were also investigated.

  20. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

    The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparing the model with the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations (DNS) of planar jets. The MVM is shown to predict the mean effects of molecular diffusion well under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts the decay of the scalar variance in planar jets well. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high-performance computing system (NEC SX-ACE) at the Japan Agency for Marine-Earth Science and Technology.

  1. Modelling, fabrication and characterization of a polymeric micromixer based on sequential segmentation.

    PubMed

    Nguyen, Nam-Trung; Huang, Xiaoyang

    2006-06-01

    Effective and fast mixing is important for many microfluidic applications. In many cases, mixing is limited by molecular diffusion due to the constraints of laminar flow in the microscale regime. According to the scaling law, decreasing the mixing path can shorten the mixing time and enhance mixing quality. One technique for reducing the mixing path is sequential segmentation, which divides solvent and solute into segments in the axial direction. The so-called Taylor-Aris dispersion can improve axial transport by three orders of magnitude. The mixing path can be controlled by the switching frequency and the mean velocity of the flow, and the mixing ratio can be controlled by pulse-width modulation of the switching signal. This paper first presents a simple time-dependent one-dimensional analytical model for sequential segmentation. The model considers an arbitrary mixing ratio between solute and solvent as well as axial Taylor-Aris dispersion. Next, a micromixer was designed and fabricated based on polymeric micromachining. The micromixer was formed by laminating four polymer layers, which were micromachined by a CO2 laser. Switching of the fluid flows was realized by two piezoelectric valves. Mixing experiments were evaluated optically. The concentration profile along the mixing channel agrees qualitatively well with the analytical model. Furthermore, mixing results at different switching frequencies were investigated. Due to the dynamic behavior of the valves and the fluidic system, mixing quality decreases with increasing switching frequency.
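    The Taylor-Aris enhancement of axial transport invoked above has a classic closed form for a circular capillary, D_eff = D(1 + Pe²/48). A small sketch with hypothetical values; the paper's rectangular channel geometry would carry a different numerical prefactor:

```python
def taylor_aris_deff(d_mol, u_mean, radius):
    """Effective axial dispersion coefficient in a circular capillary.

    Taylor-Aris result: D_eff = D * (1 + Pe^2 / 48),
    with Peclet number Pe = U * a / D. Illustrative geometry only.
    """
    pe = u_mean * radius / d_mol
    return d_mol * (1.0 + pe * pe / 48.0), pe

# Hypothetical values: a small molecule in water (D ~ 1e-9 m^2/s),
# mean velocity 1 mm/s, capillary radius 50 um.
d_eff, pe = taylor_aris_deff(d_mol=1e-9, u_mean=1e-3, radius=50e-6)
print(pe, d_eff / 1e-9)
```

    Even at this modest Peclet number the effective axial dispersion exceeds molecular diffusion by more than an order of magnitude, which is the mechanism sequential segmentation exploits to homogenize consecutive solute/solvent segments.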

  2. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within-subject correlation was addressed in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers; this was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator, with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased, and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model, but not vice versa.
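    The object being approximated here can be reproduced in miniature: for a single subject with two Bernoulli observations under a logistic random-intercept model, the marginal likelihood is a one-dimensional integral over the random effect, and the Laplace approximation can be compared against fine quadrature. All parameter values are illustrative, not those of the Yano et al. experiments:

```python
import math

beta, sigma = 0.0, 2.0   # illustrative fixed effect and random-effect SD
y = (1, 1)               # two binary observations for one subject

def log_integrand(b):
    """log of p(y | b) * N(b; 0, sigma^2) for a logistic random intercept."""
    p = 1.0 / (1.0 + math.exp(-(beta + b)))
    ll = sum(math.log(p) if yi == 1 else math.log(1.0 - p) for yi in y)
    return ll - b * b / (2 * sigma ** 2) - math.log(sigma * math.sqrt(2 * math.pi))

# Locate the mode of the integrand by Newton's method.
b = 0.0
for _ in range(50):
    p = 1.0 / (1.0 + math.exp(-(beta + b)))
    grad = sum(yi - p for yi in y) - b / sigma ** 2
    hess = -len(y) * p * (1.0 - p) - 1.0 / sigma ** 2
    b -= grad / hess

# Laplace approximation: f(b_hat) * sqrt(2*pi / |(log f)''(b_hat)|).
p = 1.0 / (1.0 + math.exp(-(beta + b)))
hess = -len(y) * p * (1.0 - p) - 1.0 / sigma ** 2
lik_laplace = math.exp(log_integrand(b)) * math.sqrt(2 * math.pi / -hess)

# Reference value by fine trapezoidal quadrature over b.
n, lo, hi = 20000, -10 * sigma, 10 * sigma
h = (hi - lo) / n
vals = [math.exp(log_integrand(lo + i * h)) for i in range(n + 1)]
lik_exact = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(lik_laplace, lik_exact)
```

    With only two observations per subject the integrand is far from Gaussian, so the Laplace value deviates noticeably from the quadrature value; adaptive Gaussian quadrature, as used in the paper, centers and scales its nodes at the same mode but integrates the remaining non-Gaussian shape much more accurately.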

  3. Marketing for a Web-Based Master's Degree Program in Light of Marketing Mix Model

    ERIC Educational Resources Information Center

    Pan, Cheng-Chang

    2012-01-01

    The marketing mix model was applied, with a focus on Web media, to re-strategize a Web-based Master's program at a southern state university in the U.S. The program's existing marketing strategy was examined using the four components of the model: product, price, place, and promotion, in hopes of repackaging the program (product) to prospective students…

  4. Coding response to a case-mix measurement system based on multiple diagnoses.

    PubMed

    Preyra, Colin

    2004-08-01

    To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses, and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short-stay hospitals for the years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for the five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses, and resulted in growth in highest-complexity cases that was not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost, nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.

  5. Development of a Mixed Methods Investigation of Process and Outcomes of Community-Based Participatory Research.

    PubMed

    Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R

    2018-01-01

    This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided.

  6. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
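The Taylor series route described above can be sketched on a toy, non-mixed-effects example (a one-compartment model; the symbols `theta`, `c`, `x0` are illustrative and not taken from the article), assuming SymPy is available:

```python
# Structural identifiability via Taylor series coefficients (toy example).
# Model: dx/dt = -theta * x, x(0) = x0, observation y = c * x.
# The "exhaustive summary" is the set of output derivatives at t = 0.
import sympy as sp

t, theta, c, x0 = sp.symbols("t theta c x0", positive=True)
x = x0 * sp.exp(-theta * t)          # closed-form state trajectory
y = c * x                            # observation function

# First three Taylor coefficients of y at t = 0.
summary = [sp.simplify(sp.diff(y, t, k).subs(t, 0)) for k in range(3)]
print(summary)

# theta is recovered uniquely from -summary[1]/summary[0]; c and x0 appear
# only as the product c*x0, so they are not individually identifiable.
theta_recovered = sp.simplify(-summary[1] / summary[0])
print(theta_recovered)   # theta
```

In a mixed-effects variant, `theta` would become a random variable, and the uniqueness argument would be applied to the statistical moments of the resulting functions of random variables, as the abstract describes.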

  7. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  8. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    NASA Astrophysics Data System (ADS)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorses in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al., 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from an analysis of the GO5.0 model, based on the isopycnal watermass analysis of Lee et al. (2002), indicating that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  9. Development of a Medicaid Behavioral Health Case-Mix Model

    ERIC Educational Resources Information Center

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  10. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipation rates of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constants when using conventional mixing frequency models. The model is implemented in combination with the interaction-by-exchange-with-the-mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
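As a rough illustration of the IEM closure named in the abstract (a generic sketch, not the authors' pdfFOAM implementation; the constant mixing frequency `omega` is precisely the simplification their model replaces with a dynamic value):

```python
# Interaction-by-Exchange-with-the-Mean (IEM) mixing model sketch:
# d(phi_i)/dt = -0.5 * omega * (phi_i - <phi>), applied to an ensemble
# of notional particles with a (here constant) mixing frequency omega.
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 1.0, size=10_000)   # particle compositions
omega = 2.0                                # mixing frequency [1/s]
dt = 1e-3

mean0, var0 = phi.mean(), phi.var()
for _ in range(1000):                      # integrate for 1 s
    phi += -0.5 * omega * (phi - phi.mean()) * dt

# IEM conserves the mean and decays the scalar variance as exp(-omega * t).
print(phi.mean())                 # ~ mean0 (unchanged)
print(phi.var() / var0)           # ~ exp(-2.0) for omega = 2, t = 1
```

With a constant `omega`, the scalar variance decays at a fixed exponential rate; the Lagrangian mixing frequency model described above aims to set this rate dynamically from particle-level dissipation information rather than from a tuned constant.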

  11. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. Beyond the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model so that the reliability estimate is more accurate; the precision of the mixed-distribution reliability model is thereby greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
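A minimal numerical sketch of a two-component mixed Weibull reliability model (all parameter values are illustrative, not the aero-engine data; the weights play the role of the weight coefficients mentioned above):

```python
# Two-component mixed Weibull reliability model:
# R(t) = w1 * exp(-(t/eta1)**beta1) + w2 * exp(-(t/eta2)**beta2), w1 + w2 = 1.
import numpy as np

def weibull_survival(t, beta, eta):
    return np.exp(-(t / eta) ** beta)

def mixed_reliability(t, weights, betas, etas):
    t = np.asarray(t, dtype=float)
    return sum(w * weibull_survival(t, b, e)
               for w, b, e in zip(weights, betas, etas))

# Illustrative failure modes: early random failures vs. late fatigue.
weights = (0.3, 0.7)
betas   = (1.2, 3.5)       # shape parameters, one per failure mode
etas    = (500.0, 2000.0)  # characteristic lives [h]

t = np.array([0.0, 100.0, 1000.0, 3000.0])
R = mixed_reliability(t, weights, betas, etas)
print(R)   # starts at R(0) = 1 and decreases monotonically
```

In practice the weights and the per-mode shape/scale parameters would be fitted to failure data, which is where the correlation-coefficient optimization described in the abstract enters.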

  12. [Three-dimensional finite analysis of the stress in first mandibular molar with composite class I restoration when various mixing ratios of bases were used].

    PubMed

    Zhou, Lan; Yang, Jin-Bo; Liu, Dan; Liu, Zhan; Chen, Ying; Gao, Bo

    2008-06-01

    To analyze the possible damage to the remaining tooth and composite restorations when various mixing ratios of bases were used. The elastic modulus and Poisson's ratio of the glass-ionomer Vitrebond and the self-cured calcium hydroxide Dycal were tested at mixing ratios of 1:1, 3:4, and 4:3. Micro-CT was used to scan the first mandibular molar, and a three-dimensional finite element model of the first permanent mandibular molar with a class I cavity was established. The stress in the tooth structure, composite, and base cement under physiological load was analyzed for the different mixing ratios of base cement. The elastic modulus of the base cement differed among the mixing ratios, and the difference was statistically significant. The magnitude and location of stress in the restored tooth did not change when the mixing ratios of Vitrebond and Dycal were varied. The peak stress and its spreading area were greater in the model with Dycal than in that with Vitrebond. Changing the mixing ratio of the base cement can partially influence its mechanical character but makes no difference to the magnitude and location of stress in the restored tooth. During the treatment of deep caries, a base cement whose elastic modulus is close to that of the dentin and restoration should be chosen to avoid fracture of the tooth or restoration.

  13. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  14. Development of a Mixed Methods Investigation of Process and Outcomes of Community-Based Participatory Research

    PubMed Central

    Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R.

    2017-01-01

    This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided. PMID:29230152

  15. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  16. Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

    PubMed Central

    Preyra, Colin

    2004-01-01

    Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940

  17. Computation of turbulent high speed mixing layers using a two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Narayan, J. R.; Sekar, B.

    1991-01-01

    A two-equation turbulence model was extended to be applicable to compressible flows. A compressibility correction based on modelling the dilatational terms in the Reynolds stress equations was included in the model. The model is used in conjunction with the SPARK code for the computation of high speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and with the results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for reacting mixing layers are included.

  18. An adjoint-based framework for maximizing mixing in binary fluids

    NASA Astrophysics Data System (ADS)

    Eggl, Maximilian; Schmid, Peter

    2017-11-01

    Mixing in the inertial, but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above mentioned regime (Reynolds number Re = 1000 , Peclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing shape and rotational speed of the stirrers will be demonstrated.

  19. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random-intercept linear mixed models with mean measures as the outcome, and (c) random-intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
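The GRAMMAR-style decorrelation strategy can be sketched as follows (a toy block-family kinship structure and known variance components, not the GAW18 data; in practice the variance components are themselves estimated):

```python
# GRAMMAR-style decorrelation sketch: given a phenotype covariance
# V = sigma_g^2 * K + sigma_e^2 * I built from a relatedness matrix K,
# whiten y and X with the Cholesky factor of V, then run ordinary least
# squares on the decorrelated data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Toy kinship: families of size 4 with within-family relatedness 0.5.
K = np.kron(np.eye(n // 4), np.full((4, 4), 0.5)) + 0.5 * np.eye(n)
V = 1.0 * K + 1.0 * np.eye(n)             # assumed variance components

x = rng.normal(size=n)                     # SNP covariate (fixed effect)
beta_true = 0.8
L_chol = np.linalg.cholesky(V)
y = beta_true * x + L_chol @ rng.normal(size=n)   # correlated residuals

# Whitening: solving L z = v makes the residuals effectively iid.
y_w = np.linalg.solve(L_chol, y)
X_w = np.linalg.solve(L_chol, np.column_stack([np.ones(n), x]))
beta_hat = np.linalg.lstsq(X_w, y_w, rcond=None)[0]
print(beta_hat[1])   # ≈ 0.8
```

The whitened regression is equivalent to generalized least squares with covariance V, which is why the fixed-effect estimate remains efficient despite relatedness among individuals.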

  20. Prevalence of Mixed-Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.

    2006-01-01

    The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…

  1. Comparing the Construct and Criterion-Related Validity of Ability-Based and Mixed-Model Measures of Emotional Intelligence

    ERIC Educational Resources Information Center

    Livingstone, Holly A.; Day, Arla L.

    2005-01-01

    Despite the popularity of the concept of emotional intelligence(EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…

  2. How we compute N matters to estimates of mixing in stratified flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.

    Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency $N$, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of $N$ on turbulence quantities. It is shown that how $N$ is calculated changes not only the flux Richardson number $R_{f}$, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number $Gi$, leading to potential errors in estimates of the mixing efficiency using $Gi$-based parameterizations.

  3. How we compute N matters to estimates of mixing in stratified flows

    DOE PAGES

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.; ...

    2017-10-13

    Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency $N$, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of $N$ on turbulence quantities. It is shown that how $N$ is calculated changes not only the flux Richardson number $R_{f}$, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number $Gi$, leading to potential errors in estimates of the mixing efficiency using $Gi$-based parameterizations.
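The two ways of computing $N$ contrasted above can be sketched on a synthetic density field (the field, grid, and noise level below are illustrative, not the DNS data):

```python
# Two estimates of the squared buoyancy frequency N^2 = -(g/rho0) d(rho)/dz:
# (a) one background profile from sorting the full 3-D density field
#     (common in numerical models), and
# (b) locally sorted vertical profiles, averaged (common in field work).
import numpy as np

g, rho0 = 9.81, 1000.0
nz, nx = 64, 32
z = np.linspace(0.0, -10.0, nz)            # depth [m], surface to bottom
rng = np.random.default_rng(2)

# Synthetic density: stable background plus overturning perturbations.
rho = 1025.0 - 0.1 * z[:, None] + rng.normal(0.0, 0.05, size=(nz, nx))

def n2_from_profile(rho_prof, z):
    # Sort so density increases with depth, then differentiate in z.
    rho_sorted = np.sort(rho_prof)
    return -(g / rho0) * np.gradient(rho_sorted, z)

# (a) background profile from sorting ALL points in the domain.
rho_bg = np.sort(rho.ravel()).reshape(nz, nx).mean(axis=1)
n2_3d = -(g / rho0) * np.gradient(rho_bg, z)

# (b) a local N^2 per column, then averaged across columns.
n2_local = np.mean([n2_from_profile(rho[:, j], z) for j in range(nx)], axis=0)

print(n2_3d.mean(), n2_local.mean())   # both stable (positive), but unequal
```

Both estimates yield a stable stratification here, but they weight overturns differently, which is the root of the discrepancy in derived quantities such as $R_f$ and $Gi$.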

  4. A size-composition resolved aerosol model for simulating the dynamics of externally mixed particles: SCRAM (v 1.0)

    NASA Astrophysics Data System (ADS)

    Zhu, S.; Sartelet, K. N.; Seigneur, C.

    2015-06-01

    The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.

  5. Model for compressible turbulence in hypersonic wall boundary and high-speed mixing layers

    NASA Astrophysics Data System (ADS)

    Bowersox, Rodney D. W.; Schetz, Joseph A.

    1994-07-01

    The most common approach to Navier-Stokes predictions of turbulent flows is based on the classical Reynolds- or Favre-averaged Navier-Stokes equations, or some combination of the two. The main goal of the current work was to numerically assess the effects of the compressible turbulence terms that were experimentally found to be important. The compressible apparent mass mixing length extension (CAMMLE) model, which was based on measured experimental data, was found to produce accurate predictions of the measured compressible turbulence data for both the wall-bounded and free mixing layers. Hence, that model was incorporated into a finite volume Navier-Stokes code.

  6. Mixing-controlled reactive transport on travel times in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Luo, J.; Cirpka, O.

    2008-05-01

    Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters, including mixing-related quantities such as dispersivities and kinetic mass-transfer coefficients. In most applications, breakthrough curves of conservative and reactive compounds are measured at only a few locations, and models are calibrated by matching these breakthrough curves, which is an ill-posed inverse problem. By contrast, travel-time based transport models avoid costly aquifer characterization. By considering breakthrough curves measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the travel-time based framework, the breakthrough curve of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct travel-time value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of travel times, which also determines the weight associated with each streamtube. Key issues in using the travel-time based framework include the description of mixing mechanisms and the estimation of the travel-time distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach for determining the travel-time distribution, given a breakthrough curve integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases where the true travel-time distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom in the combinations of mixing parameters and travel-time distributions that fit conservative breakthrough curves and describe the tailing. Reactive transport cases with a bimolecular instantaneous irreversible reaction and a dual Michaelis-Menten problem demonstrate that the mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated from local breakthrough curves.
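The streamtube superposition at the core of the travel-time framework can be sketched numerically (Peclet number, travel times, and weights below are illustrative); each advective-dispersive streamtube contributes an inverse-Gaussian flux breakthrough curve:

```python
# Travel-time-based transport sketch: the breakthrough curve (BTC) at an
# observation plane as a weighted superposition of streamtube BTCs. A
# streamtube with mean travel time tau and Peclet number Pe contributes
#   f(t; tau) = sqrt(Pe*tau / (4*pi*t**3)) * exp(-Pe*(t - tau)**2 / (4*t*tau))
import numpy as np

def streamtube_btc(t, tau, pe):
    return np.sqrt(pe * tau / (4.0 * np.pi * t**3)) * \
           np.exp(-pe * (t - tau) ** 2 / (4.0 * t * tau))

t = np.linspace(0.01, 50.0, 5000)
pe = 50.0                                    # streamtube Peclet number

# Discrete travel-time distribution (e.g. two preferential flow paths).
taus    = np.array([2.0, 8.0])
weights = np.array([0.4, 0.6])               # must sum to 1

btc = sum(w * streamtube_btc(t, tau, pe) for w, tau in zip(weights, taus))

dt = t[1] - t[0]
mass = btc.sum() * dt                        # ≈ 1 (full mass recovery)
mean_arrival = (t * btc).sum() * dt          # ≈ 0.4*2 + 0.6*8 = 5.6
print(mass, mean_arrival)
```

Spreading enters only through the travel-time weights, while mixing enters through the within-streamtube dispersion (here, Pe), which is exactly the separation the framework exploits.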

  7. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate for analyzing such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend the use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness of fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  8. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach to solving the flow problem on a coarse grid while obtaining a velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated with the proposed sparse representation method.
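One of the two sampling strategies named above, proper orthogonal decomposition, can be sketched via an SVD of a snapshot matrix (synthetic snapshots, not the paper's flow solver):

```python
# Proper-orthogonal-decomposition (POD) sketch for building a reduced basis
# from solution snapshots, as used to construct reduced mixed GMsFE bases
# (illustrative snapshot data only).
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_snap = 400, 60

# Synthetic snapshots that genuinely live in a 5-dimensional subspace.
latent = rng.normal(size=(5, n_snap))
modes_true = np.linalg.qr(rng.normal(size=(n_dof, 5)))[0]
snapshots = modes_true @ latent + 1e-6 * rng.normal(size=(n_dof, n_snap))

# POD: left singular vectors of the snapshot matrix, truncated by energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # number of retained modes
basis = U[:, :r]

# Reduced representation: project onto the basis, then reconstruct.
recon = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(r, rel_err)   # small r (here ~5) and a tiny reconstruction error
```

The retained modes form an orthonormal, parameter-independent basis, which is the property the reduced mixed GMsFE construction relies on to decouple the basis from the random inputs.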

  9. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a sparse representation of a model's outputs based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach for solving the flow problem on a coarse grid while obtaining a velocity field with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation of the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
    In particular, a two-phase flow model in random porous media is simulated with the proposed sparse representation method.
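
    As a rough illustration of the proper-orthogonal-decomposition sampling strategy mentioned above, the following sketch builds a parameter-independent reduced basis from snapshots of a toy parameterized field. The snapshot family, energy tolerance, and all numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy POD sketch: snapshots of a parameterized solution are collected
# column-wise, and an SVD extracts a low-dimensional basis that no longer
# depends on the random parameter.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
params = rng.uniform(0.5, 2.0, size=50)          # training set in parameter space
snapshots = np.column_stack(
    [np.sin(np.pi * x) / k + 0.05 * np.sin(3 * np.pi * x) * k for k in params]
)                                                # 200 x 50 snapshot matrix

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1     # truncate at 99.99% energy
basis = U[:, :r]                                 # reduced, parameter-independent basis

# A new parameter instance is well approximated in the reduced space.
new = np.sin(np.pi * x) / 1.3 + 0.05 * np.sin(3 * np.pi * x) * 1.3
err = np.linalg.norm(new - basis @ (basis.T @ new)) / np.linalg.norm(new)
print(r, err)
```

    Because the toy snapshots live in a two-dimensional subspace, the basis stays tiny while the projection error is near machine precision; the real reduced mixed GMsFE basis plays the same role for the coarse-grid velocity and pressure fields.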

  10. Impacts of Subgrid Heterogeneous Mixing between Cloud Liquid and Ice on the Wegener-Bergeron-Findeisen Process and Mixed-phase Clouds in NCAR CAM5

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.

    2017-12-01

    Mixed-phase clouds are persistently observed over the Arctic, and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) in its single-column and weather-forecast configurations and evaluate the model performance against observations from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and against long-term ground-based multi-sensor remote sensing measurements. Like most global climate models, CAM5 poorly simulates the phase partitioning in mixed-phase clouds, significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible way to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in CAM5 is achieved by applying a stochastic perturbation to the time scale of the WBF process for both ice and snow, to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of the perturbations.
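
    The effect of stochastically perturbing the WBF time scale can be sketched in a few lines. The exponential depletion law, the lognormal perturbation, and its amplitude below are illustrative assumptions, not the CAM5 implementation.

```python
import numpy as np

# Minimal sketch (not the CAM5 code): deplete cloud liquid by the WBF
# process, L(t) = L0 * exp(-t / tau), where tau is the WBF time scale.
# A multiplicative lognormal perturbation of tau mimics subgrid pockets
# in which liquid and ice are not uniformly mixed; sigma is an assumed
# perturbation amplitude, not a CAM5 parameter.
rng = np.random.default_rng(42)
tau0, t, sigma = 1.0, 3.0, 1.0
tau = tau0 * rng.lognormal(mean=0.0, sigma=sigma, size=100_000)

liquid_uniform = np.exp(-t / tau0)        # homogeneous-mixture assumption
liquid_pockets = np.exp(-t / tau).mean()  # heterogeneous (perturbed) mixture

# Columns with long time scales (liquid-only pockets) dominate the mean,
# so the perturbed ensemble retains more liquid than the uniform case.
print(liquid_uniform, liquid_pockets)
```

    The retained-liquid increase follows directly from averaging the exponential over the perturbed time scales, which is the qualitative mechanism by which the modification raises the modeled liquid water content.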

  11. The numerical modelling of mixing phenomena of nanofluids in passive micromixers

    NASA Astrophysics Data System (ADS)

    Milotin, R.; Lelea, D.

    2018-01-01

    The paper deals with rapid mixing phenomena in micro-mixing devices with four tangential injections and a converging tube, considering nanoparticles in water as the base fluid. Several parameters, such as the Reynolds number (Re = 6-284) and fluid temperature, are considered in order to optimize the process and obtain fundamental insight into the mixing phenomena. The set of partial differential equations is based on conservation of momentum and species. The commercial software package ANSYS Fluent, based on the finite volume method, is used to solve the differential equations. The results reveal that the mixing index and the mixing process are strongly dependent on both the Reynolds number and the heat flux. Moreover, there is a certain Reynolds number at which flow instabilities are generated that intensify the mixing process due to the tangential injections of the fluids.

  12. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts; the model presented is for the independent case. Formulae for the mean and variance are derived for the sums of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts, and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae for the mean and variance of sums of two and three independent mixed-gamma variables are tested using monthly rainfall amounts from stations within the Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the distribution of the observed sums of rainfall amounts is not significantly different, at the 5% significance level, from that of the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
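
    The moments of a mixed-gamma variable follow from standard mixture identities and can be sketched directly; the parameter values below are illustrative, not the fitted Kuantan estimates.

```python
import numpy as np

# Mixed-gamma sketch: a rainfall amount is zero with probability 1 - p and
# Gamma(shape=k, scale=theta) with probability p.
p, k, theta = 0.4, 0.8, 12.0

mean = p * k * theta
var = p * k * theta**2 * (1.0 + k) - mean**2   # E[X^2] - (E[X])^2

# Monte Carlo check; for sums of independent mixed-gamma variables the
# mean and variance are simply the sums of the individual moments.
rng = np.random.default_rng(1)
n = 1_000_000
wet = rng.random(n) < p                        # wet-day indicator
amounts = np.where(wet, rng.gamma(k, theta, size=n), 0.0)
print(mean, amounts.mean(), var, amounts.var())
```

    The closed-form mean and variance agree with the simulated moments to within Monte Carlo error, which is the property the derived sum formulae exploit.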

  13. Toward topology-based characterization of small-scale mixing in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Suman, Sawan; Girimaji, Sharath

    2011-11-01

    Turbulent mixing at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field, which in turn depends on the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing-efficiency metric that depends on the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach using mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with a passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed captures the numerical values reasonably well. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.

  14. Transition mixing study empirical model report

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; White, C.

    1988-01-01

    The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated, although the empirical model shows faster mixing rates than the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer-wall jets or for jets injected into rectangular ducts.

  15. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of such repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distributions. In this research, we establish a Bayesian joint model that accounts for all of these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze data from the Multicenter AIDS Cohort Study (MACS). Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.

  16. Investigation of micromixing by acoustically oscillated sharp-edges

    PubMed Central

    Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco

    2016-01-01

    Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292

  18. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.

  19. An S4 model inspired by self-complementary neutrino mixing

    NASA Astrophysics Data System (ADS)

    Zhang, Xinyi

    2018-03-01

    We build an S4 model for neutrino masses and mixings based on the self-complementary (SC) neutrino mixing pattern. The SC mixing is constructed from the self-complementarity relation plus δ_CP = -π/2. We elaborately construct the model at the percent level of accuracy to reproduce the structure given by the SC mixing. After performing a numerical study of the model's parameter space, we find that in the case of normal ordering, the model gives predictions for the observables that are compatible with their 3σ ranges, and gives predictions for the not-yet-observed quantities such as the lightest neutrino mass m1 ∈ [0.003, 0.010] eV and the Dirac CP-violating phase δ_CP ∈ [256.72°, 283.33°].
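
    The SC construction can be checked numerically with the standard PDG parameterization of the PMNS matrix. The angle inputs below are illustrative values near global fits, not the model's fitted outputs.

```python
import numpy as np

# Self-complementary (SC) pattern sketch: theta12 + theta13 = theta23
# together with delta_CP = -pi/2. Input angles are illustrative.
t12, t13 = np.deg2rad(33.5), np.deg2rad(8.5)
t23 = t12 + t13                      # self-complementarity relation
dcp = -np.pi / 2

s12, c12 = np.sin(t12), np.cos(t12)
s13, c13 = np.sin(t13), np.cos(t13)
s23, c23 = np.sin(t23), np.cos(t23)
e = np.exp(1j * dcp)

# Standard PDG parameterization of the PMNS mixing matrix.
U = np.array([
    [c12 * c13, s12 * c13, s13 * np.conj(e)],
    [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
])

print(np.rad2deg(t23))                       # atmospheric angle implied by SC
print(np.abs(U @ U.conj().T - np.eye(3)).max())  # deviation from unitarity
```

    With these inputs the SC relation fixes the atmospheric angle at 42°, inside the first octant, and the resulting matrix is unitary by construction.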

  20. Case-Mix Adjusting Performance Measures in a Veteran Population: Pharmacy- and Diagnosis-Based Approaches

    PubMed Central

    Liu, Chuan-Fen; Sales, Anne E; Sharp, Nancy D; Fishman, Paul; Sloan, Kevin L; Todd-Stenberg, Jeff; Nichol, W Paul; Rosen, Amy K; Loveland, Susan

    2003-01-01

    Objective To compare the rankings for health care utilization performance measures at the facility level in a Veterans Health Administration (VHA) health care delivery network using pharmacy- and diagnosis-based case-mix adjustment measures. Data Sources/Study Setting The study included veterans who used inpatient or outpatient services in Veterans Integrated Service Network (VISN) 20 during fiscal year 1998 (October 1997 to September 1998; N=126,076). Utilization and pharmacy data were extracted from VHA national databases and the VISN 20 data warehouse. Study Design We estimated concurrent regression models using pharmacy or diagnosis information in the base year (FY1998) to predict health service utilization in the same year. Utilization measures included bed days of care for inpatient care and provider visits for outpatient care. Principal Findings Rankings of predicted utilization measures across facilities vary by case-mix adjustment measure. There is greater consistency within the diagnosis-based models than between the diagnosis- and pharmacy-based models. The eight facilities were ranked differently by the diagnosis- and pharmacy-based models. Conclusions Choice of case-mix adjustment measure affects rankings of facilities on performance measures, raising concerns about the validity of profiling practices. Differences in rankings may reflect differences in comparability of data capture across facilities between pharmacy and diagnosis data sources, and unstable estimates due to small numbers of patients in a facility. PMID:14596393

  1. Modeling of hot-mix asphalt compaction : a thermodynamics-based compressible viscoelastic model

    DOT National Transportation Integrated Search

    2010-12-01

    Compaction is the process of reducing the volume of hot-mix asphalt (HMA) by the application of external forces. As a result of compaction, the volume of air voids decreases, aggregate interlock increases, and interparticle friction increases. The qu...

  2. Enthalpy of Mixing in Al–Tb Liquid

    DOE PAGES

    Zhou, Shihuai; Tackes, Carl; Napolitano, Ralph

    2017-06-21

    The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. The mixing enthalpy is determined relative to the unmixed pure components (Al and Tb). The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Finally, based on our measurements, three semi-empirical solution models are offered for the excess free energy of the liquid: regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of the mixing enthalpy.
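
    The simplest of the three solution models, the regular solution, is linear in its single interaction parameter and can be sketched as a least-squares fit. The data points below are synthetic placeholders, not the measured Al–Tb values.

```python
import numpy as np

# Regular-solution sketch: dH_mix = Omega * x * (1 - x). Because the model
# is linear in Omega, a closed-form least-squares fit suffices.
x = np.array([0.03, 0.05, 0.08, 0.10, 0.20])    # mole fraction of solute
dH = np.array([-1.5, -2.4, -3.7, -4.5, -8.0])   # kJ/mol, synthetic data

phi = x * (1.0 - x)                             # regressor
omega = float(phi @ dH / (phi @ phi))           # interaction parameter, kJ/mol
print(omega, omega * 0.1 * 0.9)                 # Omega; predicted dH at x = 0.1
```

    A negative Omega corresponds to exothermic mixing; the subregular and associate formulations mentioned above add composition dependence to this single-parameter form.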

  3. Physician-owned Surgical Hospitals Outperform Other Hospitals in the Medicare Value-based Purchasing Program

    PubMed Central

    Ramirez, Adriana G; Tracci, Margaret C; Stukenborg, George J; Turrentine, Florence E; Kozower, Benjamin D; Jones, R Scott

    2016-01-01

    Background The Hospital Value-Based Purchasing Program measures the value of care provided by participating Medicare hospitals while creating financial incentives for quality improvement and fostering increased transparency. Limited information is available comparing hospital performance across healthcare business models. Study Design 2015 hospital Value-Based Purchasing Program results were used to examine hospital performance by business model. General linear modeling assessed differences in mean total performance score and hospital case mix index, and differences after adjustment for hospital case mix index. Results Of 3089 hospitals with Total Performance Scores (TPS), representative healthcare business models included 104 Physician-owned Surgical Hospitals (POSH), 111 University HealthSystem Consortium (UHC) members, 14 US News & World Report Honor Roll (USNWR) hospitals, 33 Kaiser Permanente hospitals, and 124 Pioneer Accountable Care Organization affiliated hospitals. Estimated mean TPS for POSH (64.4, 95% CI 61.83, 66.38) and Kaiser (60.79, 95% CI 56.56, 65.03) were significantly higher than for all remaining hospitals, while UHC members (36.8, 95% CI 34.51, 39.17) performed below the mean (p < 0.0001). Significant differences in mean hospital case mix index included POSH (mean = 2.32, p < 0.0001), USNWR honorees (mean = 2.24, p = 0.0140), and UHC members (mean = 1.99, p < 0.0001), while Kaiser Permanente hospitals had a lower case mix value (mean = 1.54, p < 0.0001). Re-estimation of TPS after adjustment for differences in hospital case mix index did not change the original results. Conclusions The Hospital Value-Based Purchasing Program revealed superior hospital performance associated with business model. Closer inspection of high-value hospitals may guide value improvement and policy-making decisions for all Medicare Value-Based Purchasing Program hospitals. PMID:27502368

  4. Adaptive mixed finite element methods for Darcy flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Chen, Huangxin; Salama, Amgad; Sun, Shuyu

    2016-10-01

    In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.

  5. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    PubMed

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.
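
    The model-comparison criteria reported above can be sketched in a few lines; the log-likelihoods and parameter counts are made-up illustration values, not the Washington freeway results.

```python
import math

# BIC sketch: penalize the log-likelihood by the parameter count scaled
# with the log of the sample size; lower BIC indicates a better trade-off
# between fit and complexity.
def bic(log_lik: float, n_params: int, n_obs: int) -> float:
    return n_params * math.log(n_obs) - 2.0 * log_lik

n_obs = 5_000
standard_mnl = bic(log_lik=-2200.0, n_params=15, n_obs=n_obs)
nonlinear_mnl = bic(log_lik=-1980.0, n_params=22, n_obs=n_obs)
print(standard_mnl, nonlinear_mnl)
```

    Here the nonlinear model's likelihood gain outweighs its extra parameters, so its BIC is lower, which is the sense in which the generalized nonlinear mixed MNL model is "slightly superior" above.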

  6. Estimating the numerical diapycnal mixing in the GO5.0 ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex; Nurser, George

    2014-05-01

    Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterized mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006); this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al., 2013). It uses version 3.4 of the NEMO model on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al. (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods are compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A. C., Nurser, A. G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535. Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013. GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications, Geosci. Model Dev. Discuss., 6, 5747-5799.
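
    The idea behind the passive-tracer diagnostic can be sketched with a one-dimensional random-walk stand-in for the model tracer: an effective diffusivity is recovered from the growth rate of the tracer's second moment. The imposed diffusivity and step sizes are illustrative, not NEMO values.

```python
import numpy as np

# Tracer-spreading sketch: for Fickian mixing the tracer variance grows as
# var(t) = 2 * kappa * t, so kappa can be diagnosed as half the slope of
# var(t). A 1D random walk plays the role of the "numerically mixed" tracer.
rng = np.random.default_rng(3)
kappa_true = 1e-5                        # m^2/s, imposed diffusivity
dt, nsteps, nparticles = 100.0, 500, 20_000
z = np.zeros(nparticles)
var = [0.0]
for _ in range(nsteps):
    z += rng.normal(0.0, np.sqrt(2.0 * kappa_true * dt), nparticles)
    var.append(z.var())

times = np.arange(nsteps + 1) * dt
kappa_est = 0.5 * np.polyfit(times, var, 1)[0]   # half the slope of var(t)
print(kappa_true, kappa_est)
```

    In the ocean-model setting the same moment-growth argument is applied in density space, so that any variance growth beyond the explicitly parameterized mixing is attributed to the advection scheme.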

  7. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
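
    A broken-line linear (BLL) ascending fit of the kind described above can be sketched with a grid search over the breakpoint: for a fixed breakpoint the model is linear, so the remaining parameters come from ordinary least squares. The data below are synthetic (`true_bp`, the slope, and the noise level are invented), not the nursery-pig G:F data.

```python
import numpy as np

# BLL sketch: below the breakpoint the response rises linearly; above it
# the response is a plateau. Model: y = plateau - slope * max(bp - x, 0).
rng = np.random.default_rng(7)
x = np.linspace(14.0, 20.0, 40)                  # SID Trp:Lys ratio, %
true_bp, plateau, slope = 16.5, 0.70, 0.03
y = plateau - slope * np.maximum(true_bp - x, 0.0) + rng.normal(0.0, 0.004, x.size)

best_sse, best_bp = np.inf, None
for bp in np.linspace(15.0, 19.0, 401):          # grid search over breakpoints
    z = np.maximum(bp - x, 0.0)                  # broken-line regressor
    A = np.column_stack([np.ones_like(x), z])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]  # plateau and -slope by OLS
    sse = float(((A @ coef - y) ** 2).sum())
    if sse < best_sse:
        best_sse, best_bp = sse, bp
print(best_bp)                                   # estimated breakpoint
```

    Profiling the breakpoint this way is a common alternative to full nonlinear optimization (as done with NLMIXED above) and avoids the sensitivity to starting values that a grid search of initial parameters is meant to address.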

  8. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for the inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). These inverse analyses require forward models, and most turbidity current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the starting point of the optimization deviates from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution.
    The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
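
    The optimization loop itself can be sketched with SciPy's Nelder-Mead simplex and a cheap analytic stand-in for the shallow-water forward model. The proxy `forward` and all numbers below are hypothetical illustrations, not the paper's equations or data.

```python
import numpy as np
from scipy.optimize import minimize

# Simplex inverse-analysis sketch: generate reference output from a known
# "true" initial condition, then recover it by minimizing the misfit.
x = np.linspace(0.0, 5000.0, 50)       # downstream distance, m

def forward(h, U, C):
    # Hypothetical deposit-thickness proxy, NOT the shallow-water model:
    # thickness scales with h * C and decays over a runout length ~ U.
    return h * C * np.exp(-x / (1000.0 * U))

reference = forward(2.0, 5.0, 1e-4)    # known initial condition [h, U, C]

def misfit(p):
    h, U, C = p
    return float(np.sum((forward(h, U, C) - reference) ** 2))

res = minimize(misfit, x0=[1.0, 3.0, 3e-4], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-20, "maxiter": 5000})
print(res.x, res.fun)
```

    Even this toy version shows a typical pitfall of such inversions: in the proxy, h and C enter only through their product, so only the product and U are identifiable, which is one reason multi-point starts are used in the study above.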

  9. Modeling of molecular diffusion and thermal conduction with multi-particle interaction in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Tai, Y.; Watanabe, T.; Nagata, K.

    2018-03-01

    A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.

  10. Traveltime-based descriptions of transport and mixing in heterogeneous domains

    NASA Astrophysics Data System (ADS)

    Luo, Jian; Cirpka, Olaf A.

    2008-09-01

    Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass transfer coefficients. In most applications, breakthrough curves (BTCs) of conservative and reactive compounds are measured at only a few locations and spatially explicit models are calibrated by matching these BTCs. A common difficulty in such applications is that the individual BTCs differ too strongly to justify the assumption of spatial homogeneity, whereas the number of observation points is too small to identify the spatial distribution of the decisive parameters. The key objective of the current study is to characterize physical transport by the analysis of conservative tracer BTCs and predict the macroscopic BTCs of compounds that react upon mixing from the interpretation of conservative tracer BTCs and reactive parameters determined in the laboratory. We do this in the framework of traveltime-based transport models which do not require spatially explicit, costly aquifer characterization. By considering BTCs of a conservative tracer measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the traveltime-based framework, the BTC of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct traveltime value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of traveltimes, which also determines the weights associated with each stream tube. 
Key issues in using the traveltime-based framework include the description of mixing mechanisms and the estimation of the traveltime distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach of determining the traveltime distribution, given a BTC integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases wherein the true traveltime distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom for the combinations of mixing parameters and traveltime distributions to fit conservative BTCs and describe the tailing. A reactive transport case of a dual Michaelis-Menten problem demonstrates that the reactive mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local BTCs.
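
The streamtube picture above can be sketched numerically: the macroscopic BTC is the traveltime-weighted sum of per-tube BTCs. The Gaussian pulse used for each tube is a crude stand-in for advective-dispersive transport within the tube, and all parameters below are hypothetical.

```python
import numpy as np

def macro_btc(t, taus, weights, disp):
    """Macroscopic BTC as the weighted sum of streamtube BTCs.
    Each streamtube with traveltime tau contributes a Gaussian pulse of
    spread disp*tau (mixing within the tube); the weights encode the
    traveltime distribution, i.e. spreading between tubes."""
    c = np.zeros_like(t)
    for tau, w in zip(taus, weights):
        sig = disp * tau
        c += w * np.exp(-0.5 * ((t - tau) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return c

t = np.linspace(0.0, 10.0, 501)
taus = np.array([2.0, 5.0])        # bimodal traveltime distribution
weights = np.array([0.4, 0.6])     # weights sum to one
btc = macro_btc(t, taus, weights, disp=0.1)
mass = btc.sum() * (t[1] - t[0])   # recovered mass, approximately 1
```

A bimodal traveltime distribution produces a double-peaked macroscopic BTC even though each individual tube is only weakly dispersive, illustrating the separation of spreading from mixing.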

  11. Different Trophic Tracers Give Different Answers for the Same Bugs - Comparing a Stable Isotope and Fatty Acid Based Analysis of Resource Utilization in a Marine Isopod

    NASA Astrophysics Data System (ADS)

    Galloway, A. W. E.; Eisenlord, M. E.; Brett, M. T.

    2016-02-01

Stable isotope (SI) based mixing models are the most common approach used to infer resource pathways in consumers. However, SI based analyses are often underdetermined, and consumer SI fractionation is usually unknown. The use of fatty acid (FA) tracers in mixing models offers an alternative approach that can resolve the underdetermined constraint. A limitation of both methods is the considerable uncertainty about consumer 'trophic modification' (TM) of dietary FA or SI, which occurs as consumers transform dietary resources into tissues. We tested the utility of SI and FA approaches for inferring the diets of the marine benthic isopod (Idotea wosnesenskii) fed various marine macroalgae in controlled feeding trials. Our analyses quantified how the accuracy and precision of Bayesian mixing models were influenced by choice of algorithm (SIAR vs MixSIR), fractionation (assumed or known), and whether the model was under- or overdetermined (seven sources and two vs 26 tracers) for cases where isopods were fed an exclusive diet of one of the seven different macroalgae. Using the conventional approach (i.e., 2 SI with assumed TM) resulted in average model outputs, i.e., the contribution from the exclusive resource = 0.20 ± 0.23 (0.00-0.79), mean ± SD (95% credible interval), that only differed slightly from the prior assumption. Using the FA based approach with known TM greatly improved model performance, i.e., the contribution from the exclusive resource = 0.91 ± 0.10 (0.58-0.99). The choice of algorithm only made a difference when fractionation was known and the model was overdetermined (FA approach); in this case SIAR and MixSIR had outputs of 0.86 ± 0.11 (0.48-0.96) and 0.96 ± 0.05 (0.79-1.00), respectively. This analysis shows that the choice of dietary tracers and the assumption of consumer trophic modification greatly influence the performance of mixing model dietary reconstructions, and ultimately our understanding of what resources actually support aquatic consumers.

  12. Converting isotope ratios to diet composition - the use of mixing models - June 2010

    EPA Science Inventory

    One application of stable isotope analysis is to reconstruct diet composition based on isotopic mass balance. The isotopic value of a consumer’s tissue reflects the isotopic values of its food sources proportional to their dietary contributions. Isotopic mixing models are used ...
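
The isotopic mass balance described here is linear: with two tracers plus the sum-to-one constraint, three source contributions are exactly determined. A minimal sketch with hypothetical end-member signatures, assuming fractionation has already been corrected for:

```python
import numpy as np

# Hypothetical end-member signatures (rows: d13C, d15N; columns: 3 sources)
sources = np.array([[-28.0, -20.0, -12.0],
                    [  4.0,  12.0,   6.0]])
mix = np.array([-18.8, 7.6])   # consumer tissue signature

# Mass balance: sources @ f = mix, with the fractions f summing to one
A = np.vstack([sources, np.ones(3)])
b = np.append(mix, 1.0)
f = np.linalg.solve(A, b)      # dietary proportions of the three sources
```

With more sources than tracers-plus-one the system becomes underdetermined, which is exactly the situation Bayesian mixing models such as those in the preceding record are designed to handle.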

  13. Study of the 190Hg Nucleus: Testing the Existence of U(5) Symmetry

    NASA Astrophysics Data System (ADS)

    Jahangiri Tazekand, Z.; Mohseni, M.; Mohammadi, M. A.; Sabri, H.

    2018-06-01

In this paper, we have considered the energy spectra, quadrupole transition probabilities, energy surface, charge radii, and quadrupole moment of the 190Hg nucleus to describe the interplay between phase transitions and configuration mixing of intruder excitations. To this aim, we have used four different formalisms: (i) the interacting boson model including configuration mixing, (ii) the Z(5) critical symmetry, (iii) a U(6)-based transitional Hamiltonian, and (iv) a transitional interacting boson model Hamiltonian in both interacting boson model (IBM)-1 and IBM-2 versions, based on the affine SU(1,1) Lie algebra. Results show the advantages of configuration mixing and transitional Hamiltonians, in particular the IBM-2 formalism, in reproducing the experimental counterparts when the weight of spherical symmetry is increased.

  14. Probe-specific mixed-model approach to detect copy number differences using multiplex ligation-dependent probe amplification (MLPA)

    PubMed Central

    González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier

    2008-01-01

Background The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge, or autism, where it showed the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific to each individual, incorporating experimental variability, resulting in improved sensitivity and specificity as the examples with real data have revealed. PMID:18522760
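
The idea of a sample-specific threshold can be sketched with a simple interval on probe log-ratios. This stand-in uses mean ± k·sd rather than the paper's mixed-model tolerance intervals, and the data are simulated:

```python
import numpy as np

def altered_probes(log_ratios, k=3.0):
    """Flag probes whose test/control log-ratio falls outside a
    sample-specific interval mean +/- k*sd. A simplified stand-in for
    the mixed-model tolerance intervals described in the record."""
    mu, sd = log_ratios.mean(), log_ratios.std(ddof=1)
    return np.abs(log_ratios - mu) > k * sd

rng = np.random.default_rng(0)
ratios = rng.normal(0.0, 0.05, size=40)   # normal-copy probes
ratios[7] = -1.0                          # one-copy deletion, ~log2(1/2)
flags = altered_probes(ratios)
```

Because the interval is recomputed per sample, a noisier assay automatically gets a wider threshold, which is the point of per-sample tolerance intervals.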

  15. Estimation and application of a growth and yield model for uneven-aged mixed conifer stands in California.

    Treesearch

    Jingjing Liang; J. Buongiorno; R.A. Monserud

    2005-01-01

A growth model for uneven-aged mixed-conifer stands in California was developed with data from 205 permanent plots. The model predicts the number of softwood and hardwood trees in nineteen diameter classes, based on equations for diameter growth rates, mortality, and recruitment. The model gave unbiased predictions of the expected number of trees by diameter class and...

  16. Finite element modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Andersen, C. M.

    1983-01-01

    Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Reduction methods for large-scale nonlinear analysis, with particular emphasis on treatment of combined loads, displacement-dependent and nonconservative loadings; development of simple and efficient mixed finite element models for shell analysis, identification of equivalent mixed and purely displacement models, and determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems, based on a total Lagrangian description of the deformation are included.

  17. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
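
The split search at the heart of model-based recursive partitioning can be sketched without the GLMM machinery: scan cutpoints on a covariate and pick the one that maximizes the between-child difference in estimated treatment effect. The data, effect estimator, and guard thresholds here are all illustrative:

```python
import numpy as np

def best_interaction_split(x, treat, y, min_arm=5):
    """Scan cutpoints on covariate x; return the cut maximizing the
    difference in treatment effect (mean(y|T=1) - mean(y|T=0))
    between the two child nodes."""
    def effect(m):
        return y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean()
    best = None
    for c in np.unique(x)[:-1]:
        left, right = x <= c, x > c
        counts = [(m & (treat == a)).sum() for m in (left, right) for a in (0, 1)]
        if min(counts) < min_arm:      # need both treatment arms in both children
            continue
        gap = abs(effect(left) - effect(right))
        if best is None or gap > best[1]:
            best = (c, gap)
    return best

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 1, n)
treat = rng.integers(0, 2, n)
y = treat * (x > 0.5) + rng.normal(0, 0.1, n)   # treatment helps only when x > 0.5
cut, gap = best_interaction_split(x, treat, y)
```

The GLMM tree algorithm additionally re-estimates random effects for the clustered structure at each iteration; this sketch shows only the interaction-detecting split step.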

  18. Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders

    NASA Astrophysics Data System (ADS)

    Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei

    2018-03-01

A novel process combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, the density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters. The friction coefficients in lubricated and unlubricated dies were experimentally determined. The determined material parameters were validated by experiments and by numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders, and can be further used for process optimization simulations.

  19. Systematic analysis of the unique band gap modulation of mixed halide perovskites.

    PubMed

    Kim, Jongseob; Lee, Sung-Hoon; Chung, Choong-Heui; Hong, Ki-Ha

    2016-02-14

Solar cells based on organic-inorganic hybrid metal halide perovskites have been proven to be one of the most promising candidates for the next generation of thin film photovoltaic cells. Mixing Br or Cl into I-based perovskites has been frequently tried to enhance the cell efficiency and stability. One of the advantages of mixed halides is the modulation of the band gap by controlling the composition of the incorporated halides. However, the reported band gap transition behavior has not yet been fully explained. Here a theoretical model is presented to understand the electronic structure variation of metal mixed-halide perovskites through hybrid density functional theory. Comparative calculations in this work suggest that the band gap correction including spin-orbit interaction is essential to describe the band gap changes of mixed halides. In our model, both the lattice variation and the orbital interactions between metal and halides play key roles in determining band gap changes and band alignments of mixed halides. It is also shown that the band gap of mixed halide thin films can be significantly affected by the distribution of halide composition.
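
As background to the band gap modulation discussed here, a common first-order description of a mixed-composition gap is linear (Vegard-like) interpolation between the end members plus a quadratic bowing term. The end-point gaps and the bowing parameter below are illustrative, not values from this paper:

```python
def band_gap(x, eg_a, eg_b, bowing):
    """Vegard-plus-bowing interpolation of a mixed-composition band
    gap: linear between the end members, with a quadratic bowing
    correction that is largest at x = 0.5."""
    return (1 - x) * eg_a + x * eg_b - bowing * x * (1 - x)

# Sweep from an I-rich to a Br-rich composition with hypothetical
# end-point gaps (1.6 and 2.3 eV) and a hypothetical bowing of 0.3 eV
gaps = [band_gap(x / 10, 1.6, 2.3, 0.3) for x in range(11)]
```

A nonzero bowing parameter makes the intermediate compositions fall below the straight line between the end points, which is one way deviations from linear band gap tuning are commonly summarized.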

  20. Growing Chlorella sp. on meat processing wastewater for nutrient removal and biomass production.

    PubMed

    Lu, Qian; Zhou, Wenguang; Min, Min; Ma, Xiaochen; Chandra, Ceria; Doan, Yen T T; Ma, Yiwei; Zheng, Hongli; Cheng, Sibo; Griffith, Richard; Chen, Paul; Chen, Chi; Urriola, Pedro E; Shurson, Gerald C; Gislerød, Hans R; Ruan, Roger

    2015-12-01

In this work, Chlorella sp. (UM6151) was selected to treat meat processing wastewater for nutrient removal and biomass production. To balance the nutrient profile and improve biomass yield at low cost, an innovative algae cultivation model based on wastewater mixing was developed. The results showed that the biomass yield (0.675-1.538 g/L) of algae grown on mixed wastewater was much higher than that on individual wastewaters or artificial medium. Wastewater mixing eased the bottlenecks for algae growth and contributed to the improved biomass yield. Furthermore, in mixed wastewater with sufficient nitrogen, ammonia nitrogen removal efficiencies (68.75-90.38%) and total nitrogen removal efficiencies (30.06-50.94%) were improved. Wastewater mixing also promoted the synthesis of protein in algal cells: the protein content of algae growing on mixed wastewater reached 60.87-68.65%, much higher than that of traditional protein sources. The algae cultivation model based on wastewater mixing is an efficient and economical way to improve biomass yield. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. A Mixed Learning Approach in Mechatronics Education

    ERIC Educational Resources Information Center

    Yilmaz, O.; Tuncalp, K.

    2011-01-01

    This study aims to investigate the effect of a Web-based mixed learning approach model on mechatronics education. The model combines different perception methods such as reading, listening, and speaking and practice methods developed in accordance with the vocational background of students enrolled in the course Electromechanical Systems in…

  2. MixHMM: Inferring Copy Number Variation and Allelic Imbalance Using SNP Arrays and Tumor Samples Mixed with Stromal Cells

    PubMed Central

    Schulz, Vincent; Chen, Min; Tuck, David

    2010-01-01

    Background Genotyping platforms such as single nucleotide polymorphism (SNP) arrays are powerful tools to study genomic aberrations in cancer samples. Allele specific information from SNP arrays provides valuable information for interpreting copy number variation (CNV) and allelic imbalance including loss-of-heterozygosity (LOH) beyond that obtained from the total DNA signal available from array comparative genomic hybridization (aCGH) platforms. Several algorithms based on hidden Markov models (HMMs) have been designed to detect copy number changes and copy-neutral LOH making use of the allele information on SNP arrays. However heterogeneity in clinical samples, due to stromal contamination and somatic alterations, complicates analysis and interpretation of these data. Methods We have developed MixHMM, a novel hidden Markov model using hidden states based on chromosomal structural aberrations. MixHMM allows CNV detection for copy numbers up to 7 and allows more complete and accurate description of other forms of allelic imbalance, such as increased copy number LOH or imbalanced amplifications. MixHMM also incorporates a novel sample mixing model that allows detection of tumor CNV events in heterogeneous tumor samples, where cancer cells are mixed with a proportion of stromal cells. Conclusions We validate MixHMM and demonstrate its advantages with simulated samples, clinical tumor samples and a dilution series of mixed samples. We have shown that the CNVs of cancer cells in a tumor sample contaminated with up to 80% of stromal cells can be detected accurately using Illumina BeadChip and MixHMM. Availability The MixHMM is available as a Python package provided with some other useful tools at http://genecube.med.yale.edu:8080/MixHMM. PMID:20532221
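
The effect of stromal contamination on the observed signal can be sketched with the standard purity mixture: the observed copy number is a purity-weighted average of the tumor copy number and the diploid stromal copy number. This is the generic mixing relation underlying such sample-mixing models, not MixHMM's full HMM:

```python
import numpy as np

def expected_logR(c_tumor, purity):
    """Expected log2 R ratio for a tumor segment of copy number
    c_tumor in a sample with the given tumor-cell fraction; stromal
    cells contribute the diploid copy number 2."""
    c_obs = purity * c_tumor + (1.0 - purity) * 2.0
    return np.log2(c_obs / 2.0)

# A one-copy deletion is clear in a pure tumor but nearly invisible
# at 20% tumor-cell fraction:
lr_pure = expected_logR(1, purity=1.0)    # log2(1/2) = -1.0
lr_mixed = expected_logR(1, purity=0.2)   # log2(0.9), a small dip
```

The compression of the signal toward zero at low purity is why models that explicitly estimate the stromal fraction, as MixHMM does, can recover CNVs that naive thresholds miss.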

  3. The Mixed Instrumental Controller: Using Value of Information to Combine Habitual Choice and Mental Simulation

    PubMed Central

    Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian

    2013-01-01

Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation with neurobiological evidence on the hippocampus – ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation. PMID:23459512
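
The controller's decision rule can be sketched in a few lines: act on cached (model-free) values unless the Value of Information of mentally simulating exceeds its cost. The value numbers are hypothetical and the simulation step is a stub standing in for the paper's sampling method:

```python
def choose(cached_values, voi, sim_cost, simulate):
    """Mixed instrumental controller sketch: act on cached values
    unless the Value of Information of simulating exceeds its cost,
    in which case refine the value estimates first."""
    if voi > sim_cost:
        cached_values = simulate(cached_values)   # model-based refinement
    return max(range(len(cached_values)), key=lambda a: cached_values[a])

# VoI is high when action values are uncertain and close together;
# here the (hypothetical) simulation reverses the cheap ranking.
vals = [0.50, 0.52]
refined = lambda v: [0.9, 0.3]
a_cheap = choose(vals, voi=0.0, sim_cost=0.1, simulate=refined)
a_delib = choose(vals, voi=0.5, sim_cost=0.1, simulate=refined)
```

The two calls illustrate the paper's central claim: the habitual and deliberative answers can disagree, and the VoI-versus-cost comparison decides which one governs behavior.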

  5. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice to build mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models. The discussion also highlights the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
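
The centering-and-scaling advice is easy to make concrete. A minimal sketch, with made-up example predictors on wildly different scales:

```python
import numpy as np

def center_scale(X):
    """Center and scale predictor columns to mean 0, sd 1 -- the
    record's first practical step for improving mixed-model
    convergence, computing speed, and numerical accuracy."""
    mu = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    return (X - mu) / sd

# e.g. calendar year and body weight: very different magnitudes
X = np.array([[1999.0, 12.0],
              [2004.0, 13.5],
              [2009.0, 15.0]])
Z = center_scale(X)
```

Putting predictors on comparable scales conditions the design matrix far better, which is why optimizers for mixed models converge more reliably on the transformed data.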

  6. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    PubMed

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuation under the influence of non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixture model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixture model (GMM). The dissimilarity matrix is built to generate relative coordinates of nodes by a multi-dimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without being provided with prior knowledge. The experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
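
The MDS stage can be sketched with classical (metric) MDS, a stand-in for the non-metric variant used in the record: double-center the squared dissimilarities and take the top eigenpairs as relative coordinates. The anchor layout is illustrative.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Relative coordinates from a dissimilarity matrix via classical
    MDS: double-center the squared dissimilarities, then scale the top
    eigenvectors by the square roots of their eigenvalues."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Four anchors on a unit square; pairwise distances play the role of
# the RSSI-derived dissimilarities
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
X = classical_mds(D)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

The recovered coordinates are only relative (defined up to rotation, reflection, and translation), which is why the algorithm needs the final coordinate transformation against known anchor positions.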

  7. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  8. EuroForMix: An open source software based on a continuous model to evaluate STR DNA profiles from a mixture of contributors with artefacts.

    PubMed

    Bleka, Øyvind; Storvik, Geir; Gill, Peter

    2016-03-01

We have released a software package named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion, and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in, and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian, which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure, and degradation) to be reported in the literature. It therefore serves as an unrestricted platform to compare the different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R-package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Statistical models of global Langmuir mixing

    NASA Astrophysics Data System (ADS)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

The effects of Langmuir mixing on surface ocean mixing may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
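
An enhancement-factor calculation of the kind described can be sketched in terms of the turbulent Langmuir number using one published fit (Van Roekel et al., 2012); the coefficients are from that fit and are not necessarily the exact values used in this study:

```python
import math

def langmuir_enhancement(u_star, u_stokes):
    """Enhancement factor applied to the KPP turbulent velocity scale,
    as a function of the turbulent Langmuir number La_t = sqrt(u*/u_s).
    Coefficients follow the Van Roekel et al. (2012) fit (assumed here,
    not taken from this record)."""
    la_t = math.sqrt(u_star / u_stokes)
    return math.sqrt(1.0 + (3.1 * la_t) ** -2 + (5.4 * la_t) ** -4)

# Strong wave forcing relative to wind stress (La_t ~ 0.32) gives a
# substantial enhancement; weak wave forcing gives essentially none.
eps = langmuir_enhancement(u_star=0.01, u_stokes=0.1)
```

The factor approaches 1 as the Stokes drift vanishes (large La_t), so the parameterization smoothly reduces to standard KPP when Langmuir turbulence is unimportant.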

  10. Development and validation of a turbulent-mix model for variable-density and compressible flows.

    PubMed

    Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J

    2010-10-01

    The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.

  11. CLUSTERING SOUTH AFRICAN HOUSEHOLDS BASED ON THEIR ASSET STATUS USING LATENT VARIABLE MODELS

    PubMed Central

    McParland, Damien; Gormley, Isobel Claire; McCormick, Tyler H.; Clark, Samuel J.; Kabudula, Chodziwadziwa Whiteson; Collinson, Mark A.

    2014-01-01

    The Agincourt Health and Demographic Surveillance System has since 2001 conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. In the case of modeling binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure—this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight into the different socio-economic strata within the Agincourt region. PMID:25485026

  12. Simulation of particle diversity and mixing state over Greater Paris: a model-measurement inter-comparison.

    PubMed

    Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C

    2016-07-18

    Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). 
The spatial distribution of the mixing-state index shows that particles are not well mixed in urban areas, while they are well mixed in rural areas. This indicates that the assumption of internal mixing traditionally used in chemistry-transport models is well suited to rural areas, but less realistic for urban areas close to emission sources.
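
    The per-particle diversity and mixing-state index referenced above are defined in Riemer et al. (2013) in terms of Shannon entropies of per-particle and bulk species mass fractions. A self-contained sketch of that calculation:

```python
import math

def mixing_state_index(particles):
    """Riemer et al. (2013) mixing-state metrics from per-particle species
    masses. `particles` is a list of dicts {species: mass}.
    Returns (D_alpha, D_gamma, chi)."""
    total = sum(sum(p.values()) for p in particles)
    # Average particle diversity D_alpha = exp(mass-weighted mean of the
    # per-particle Shannon entropies H_i of the species mass fractions.
    h_alpha = 0.0
    for p in particles:
        m_i = sum(p.values())
        h_i = -sum((m / m_i) * math.log(m / m_i) for m in p.values() if m > 0)
        h_alpha += (m_i / total) * h_i
    d_alpha = math.exp(h_alpha)
    # Bulk diversity D_gamma from population-wide species mass fractions.
    bulk = {}
    for p in particles:
        for s, m in p.items():
            bulk[s] = bulk.get(s, 0.0) + m
    h_gamma = -sum((m / total) * math.log(m / total) for m in bulk.values() if m > 0)
    d_gamma = math.exp(h_gamma)
    return d_alpha, d_gamma, (d_alpha - 1.0) / (d_gamma - 1.0)

# Fully external mixture: each particle is pure -> D_alpha = 1, chi = 0
ext = [{"BC": 1.0}, {"SO4": 1.0}]
# Fully internal mixture: identical compositions -> chi = 1
int_ = [{"BC": 0.5, "SO4": 0.5}, {"BC": 0.5, "SO4": 0.5}]
print(mixing_state_index(ext)[2], mixing_state_index(int_)[2])
```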

  13. Validation of ACG Case-mix for equitable resource allocation in Swedish primary health care.

    PubMed

    Zielinski, Andrzej; Kronogård, Maria; Lenhoff, Håkan; Halling, Anders

    2009-09-18

    Adequate resource allocation is an important factor to ensure equity in health care. Previous reimbursement models have been based on age, gender and socioeconomic factors. An explanatory model based on individual need of primary health care (PHC) has not yet been used in Sweden to allocate resources. The aim of this study was to examine to what extent the ACG case-mix system could explain concurrent costs in Swedish PHC. Diagnoses were obtained from electronic PHC records of inhabitants in Blekinge County (approx. 150,000) listed with public PHC (approx. 120,000) for three consecutive years, 2004-2006. The inhabitants were then classified into six different resource utilization bands (RUB) using the ACG case-mix system. The mean costs for primary health care were calculated for each RUB and year. Using linear regression models with log-cost as the dependent variable, the adjusted R2 was calculated for the unadjusted model (gender) and for consecutive models in which age, listing with a specific PHC and RUB were added. In an additional model the ACG groups were added. Gender, age and listing with a specific PHC explained 14.48-14.88% of the variance in individual costs for PHC. By also adding information on the level of co-morbidity, as measured by the ACG case-mix system, the adjusted R2 increased to 60.89-63.41%. The ACG case-mix system explains patient costs in primary care to a high degree. Age and gender are important explanatory factors, but most of the variance in concurrent patient costs was explained by the ACG case-mix system.
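
    The incremental explanatory power of a case-mix grouping can be illustrated by comparing adjusted R2 between nested regressions on log-cost. A sketch on synthetic data; the variable names and effect sizes are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def adjusted_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    n, p = X1.shape
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

# Synthetic illustration: log-cost driven mostly by a morbidity grouping
n = 500
age = rng.uniform(0, 90, n)
gender = rng.integers(0, 2, n)
rub = rng.integers(0, 6, n)   # stand-in for a resource utilization band
log_cost = 0.01 * age + 0.05 * gender + 0.8 * rub + rng.normal(0, 0.5, n)

base = adjusted_r2(log_cost, np.column_stack([age, gender]))
full = adjusted_r2(log_cost, np.column_stack([age, gender, rub]))
print(base < full)  # adding co-morbidity information raises adjusted R^2
```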

  14. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. 
In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.

  15. Competitive adsorption from mixed hen egg-white lysozyme/surfactant solutions at the air-water interface studied by tensiometry, ellipsometry, and surface dilational rheology.

    PubMed

    Alahverdjieva, V S; Grigoriev, D O; Fainerman, V B; Aksenenko, E V; Miller, R; Möhwald, H

    2008-02-21

    The competitive adsorption at the air-water interface from mixed adsorption layers of hen egg-white lysozyme with a non-ionic surfactant (C10DMPO) was studied and compared to the mixture with an ionic surfactant (SDS) using bubble and drop shape analysis tensiometry, ellipsometry, and surface dilational rheology. The set of equilibrium and kinetic data of the mixed solutions is described by a thermodynamic model developed recently. The theoretical description of the mixed system is based on the model parameters for the individual components.

  16. Estimation of the contribution of private providers in tuberculosis case notification and treatment outcome in Pakistan.

    PubMed

    Chughtai, A A; Qadeer, E; Khan, W; Hadi, H; Memon, I A

    2013-03-01

    To improve involvement of the private sector in the national tuberculosis (TB) programme in Pakistan various public-private mix projects were set up between 2004 and 2009. A retrospective analysis of data was made to study 6 different public-private mix models for TB control in Pakistan and estimate the contribution of the various private providers to TB case notification and treatment outcome. The number of TB cases notified through the private sector increased significantly from 77 cases in 2004 to 37,656 in 2009. Among the models, the nongovernmental organization model made the greatest contribution to case notification (58.3%), followed by the hospital-based model (18.9%). Treatment success was highest for the district-led model (94.1%) and lowest for the hospital-based model (74.2%). The private sector made an important contribution to the national data through the various public-private mix projects. Issues of sustainability and the lack of treatment supporters are discussed as reasons for lack of success of some projects.

  17. Machine learning to construct reduced-order models and scaling laws for reactive-transport applications

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.

    2017-12-01

    The efficiency of many hydrogeological applications such as reactive-transport and contaminant remediation depends largely on the macroscopic mixing occurring in the aquifer. In remediation activities, it is essential to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction, heterogeneity, and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters to the mixing process is not well studied. This is partially because, to understand and quantify mixing, one needs to perform multiple runs of high-fidelity numerical simulations for various subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need to develop computationally efficient models to accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning. These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; however, the method by which ROMs are constructed is different. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, SVMs are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield. The dependence of the scaling-law parameters on model inputs is evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable to analyses of alternative site remediation scenarios.
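
    The two-step workflow described above — rank input importance with random forests, the F-test, and mutual information, then fit an SVM-based ROM on the retained inputs — can be sketched with scikit-learn. The data-generating function and input meanings here are hypothetical stand-ins for high-fidelity simulation output:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression, mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical stand-in for high-fidelity runs: a mixing QoI as a nonlinear
# function of three model inputs; input 2 is pure noise by construction.
X = rng.uniform(0, 1, size=(300, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.01 * rng.normal(size=300)

# Step 1: rank input importance three ways
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
f_stat, _ = f_regression(X, y)
mi = mutual_info_regression(X, y, random_state=0)
print(np.argsort(rf.feature_importances_)[::-1])  # the noise input ranks last

# Step 2: fit the ROM (an SVM regressor) on the retained inputs only
rom = SVR(C=10.0, epsilon=0.01).fit(X[:, :2], y)
print(round(rom.score(X[:, :2], y), 2))
```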

  18. Modelling and simulation of passive Lab-on-a-Chip (LoC) based micromixer for clinical application

    NASA Astrophysics Data System (ADS)

    Saikat, Chakraborty; Sharath, M.; Srujana, M.; Narayan, K.; Pattnaik, Prasant Kumar

    2016-03-01

    In biomedical applications, the micromixer is an important component because many processes require rapid and efficient mixing. At the micro scale, the flow is laminar due to the small channel size, which enables controlled rapid mixing. Rapid mixing in turn reduces analysis time and enables high throughput. In LoC applications, micromixers are used for mixing of fluids, especially in devices that require efficient mixing. A microfluidic device with rapid mixing is useful in applications such as DNA/RNA synthesis, drug delivery systems, and biological agent detection. In this work, we design and simulate a passive microfluidic rapid micromixer for lab-on-a-chip applications.

  19. Large scale shell model study of the evolution of mixed-symmetry states in chains of nuclei around 132Sn

    NASA Astrophysics Data System (ADS)

    Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.

    2012-10-01

    Large scale shell model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed symmetry states in chains of nuclei in the proximity of N=82. The resulting spectra and transitions are in agreement with the experiments and consistent with the scheme provided by the interacting boson model.

  20. Wave–turbulence interaction-induced vertical mixing and its effects in ocean and climate models

    PubMed Central

    Qiao, Fangli; Yuan, Yeli; Deng, Jia; Dai, Dejun; Song, Zhenya

    2016-01-01

    Heated from above, the oceans are stably stratified. Therefore, the performance of general ocean circulation models and climate studies through coupled atmosphere–ocean models depends critically on vertical mixing of energy and momentum in the water column. Many of the traditional general circulation models are based on turbulent kinetic energy (TKE), in which the roles of waves are averaged out. Although theoretical calculations suggest that waves could greatly enhance coexisting turbulence, no field measurements on turbulence have ever validated this mechanism directly. To address this problem, a specially designed field experiment has been conducted. The experimental results indicate that the wave–turbulence interaction-induced enhancement of the background turbulence is indeed the predominant mechanism for turbulence generation and enhancement. Based on this understanding, we propose a new parametrization for vertical mixing as an additive part to the traditional TKE approach. This new result reconfirmed the past theoretical model that had been tested and validated in numerical model experiments and field observations. It firmly establishes the critical role of wave–turbulence interaction effects in both general ocean circulation models and atmosphere–ocean coupled models, which could greatly improve the simulated distributions of sea surface temperature and water-column properties, and hence model-based climate forecasting capability. PMID:26953182

  1. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    NASA Astrophysics Data System (ADS)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high-temperature reactors (VHTRs), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.

  2. Estimation of oceanic subsurface mixing under a severe cyclonic storm using a coupled atmosphere-ocean-wave model

    NASA Astrophysics Data System (ADS)

    Prakash, Kumar Ravi; Nigam, Tanuja; Pant, Vimlesh

    2018-04-01

    A coupled atmosphere-ocean-wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10-14 October 2013. The coupled model was found to improve the sea surface temperature over the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere-ocean-wave coupled model during a cyclone. A quantitative estimate of kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were explained using the coupled atmosphere-ocean-wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave-current interaction and nonlinear wave-wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.

  3. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  4. Climate-induced changes in lake ecosystem structure inferred from coupled neo- and paleoecological approaches

    USGS Publications Warehouse

    Saros, Jasmine E.; Stone, Jeffery R.; Pederson, Gregory T.; Slemmons, Krista; Spanbauer, Trisha; Schliep, Anna; Cahl, Douglas; Williamson, Craig E.; Engstrom, Daniel R.

    2015-01-01

    Over the 20th century, surface water temperatures have increased in many lake ecosystems around the world, but long-term trends in the vertical thermal structure of lakes remain unclear, despite the strong control that thermal stratification exerts on the biological response of lakes to climate change. Here we used both neo- and paleoecological approaches to develop a fossil-based inference model for lake mixing depths and thereby refine understanding of lake thermal structure change. We focused on three common planktonic diatom taxa, the distributions of which previous research suggests might be affected by mixing depth. Comparative lake surveys and growth rate experiments revealed that these species respond to lake thermal structure when nitrogen is sufficient, with species optima ranging from shallower to deeper mixing depths. The diatom-based mixing depth model was applied to sedimentary diatom profiles extending back to 1750 AD in two lakes with moderate nitrate concentrations but differing climate settings. Thermal reconstructions were consistent with expected changes, with shallower mixing depths inferred for an alpine lake where treeline has advanced, and deeper mixing depths inferred for a boreal lake where wind strength has increased. The inference model developed here provides a new tool to expand and refine understanding of climate-induced changes in lake ecosystems.

  5. Physician-Owned Surgical Hospitals Outperform Other Hospitals in Medicare Value-Based Purchasing Program.

    PubMed

    Ramirez, Adriana G; Tracci, Margaret C; Stukenborg, George J; Turrentine, Florence E; Kozower, Benjamin D; Jones, R Scott

    2016-10-01

    The Hospital Value-Based Purchasing Program measures value of care provided by participating Medicare hospitals and creates financial incentives for quality improvement and fosters increased transparency. Limited information is available comparing hospital performance across health care business models. The 2015 Hospital Value-Based Purchasing Program results were used to examine hospital performance by business model. General linear modeling assessed differences in mean total performance score, hospital case mix index, and differences after adjustment for differences in hospital case mix index. Of 3,089 hospitals with total performance scores, categories of representative health care business models included 104 physician-owned surgical hospitals, 111 University HealthSystem Consortium, 14 US News & World Report Honor Roll hospitals, 33 Kaiser Permanente, and 124 Pioneer accountable care organization affiliated hospitals. Estimated mean total performance scores for physician-owned surgical hospitals (64.4; 95% CI, 61.83-66.38) and Kaiser Permanente (60.79; 95% CI, 56.56-65.03) were significantly higher compared with all remaining hospitals, and University HealthSystem Consortium members (36.8; 95% CI, 34.51-39.17) performed below the mean (p < 0.0001). Significant differences in mean hospital case mix index included physician-owned surgical hospitals (mean 2.32; p < 0.0001), US News & World Report honorees (mean 2.24; p = 0.0140), and University HealthSystem Consortium members (mean 1.99; p < 0.0001), and Kaiser Permanente hospitals had lower case mix value (mean 1.54; p < 0.0001). Re-estimation of total performance scores did not change the original results after adjustment for differences in hospital case mix index. The Hospital Value-Based Purchasing Program revealed superior hospital performance associated with business model. 
Closer inspection of high-value hospitals can guide value improvement and policy-making decisions for all Medicare Value-Based Purchasing Program Hospitals. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  6. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    PubMed

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at a specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate or high power compared with existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Numerical analysis of mixing by sharp-edge-based acoustofluidic micromixer

    NASA Astrophysics Data System (ADS)

    Nama, Nitesh; Huang, Po-Hsun; Jun Huang, Tony; Costanzo, Francesco

    2015-11-01

    Recently, acoustically oscillated sharp-edges have been employed to realize rapid and homogeneous mixing at microscales (Huang, Lab on a Chip, 13, 2013). Here, we present a numerical model, qualitatively validated by experimental results, to analyze the acoustic mixing inside a sharp-edge-based micromixer. We extend our previous numerical model (Nama, Lab on a Chip, 14, 2014) to combine the Generalized Lagrangian Mean (GLM) theory with the convection-diffusion equation, while also allowing for the presence of a background flow as observed in a typical sharp-edge-based micromixer. We employ a perturbation approach to divide the flow variables into zeroth-, first- and second-order fields, which are solved successively to obtain the Lagrangian mean velocity. The Lagrangian mean velocity and the background flow velocity are further employed with the convection-diffusion equation to obtain the concentration profile. We characterize the effects of various operational and geometrical parameters to suggest potential design changes for improving the mixing performance of the sharp-edge-based micromixer. Lastly, we investigate the possibility of generation of a spatio-temporally controllable concentration gradient by placing sharp-edge structures inside the microchannel.

  8. Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.

    PubMed

    Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng

    2014-06-01

    Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
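
    The complete-mixing baseline that the study argues is often incorrectly assumed at junctions is simple to state: every outlet receives the flow-weighted average of the inlet concentrations. A minimal sketch of that baseline (the paper's momentum-ratio-dependent corrections for incomplete mixing are not reproduced here):

```python
def complete_mixing_outlets(q_in, c_in, q_out):
    """Complete-mixing assumption at a junction, the default in network
    water quality models: every outlet carries the flow-weighted average
    of all inlet concentrations, regardless of junction geometry."""
    c_mix = sum(q * c for q, c in zip(q_in, c_in)) / sum(q_in)
    return [c_mix for _ in q_out]

# Cross junction: two equal inlets (clean water + tracer), two equal outlets
print(complete_mixing_outlets([1.0, 1.0], [0.0, 100.0], [1.0, 1.0]))
```

    Experiments and CFD show the real outlet concentrations at a cross or double-Tee deviate from this average as a function of the flow momentum ratio, which is the departure the proposed analytical solutions quantify.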

  9. Kinetics of changes in shelf life parameters during storage of pearl millet based kheer mix and development of a shelf life prediction model.

    PubMed

    Bunkar, Durga Shankar; Jha, Alok; Mahajan, Ankur; Unnikrishnan, V S

    2014-12-01

    Pearl millet, dairy whitener and sugar powder were blended for preparing pearl millet kheer mix. Pearl millet based kheer mix samples were stored at 8, 25, 37 and 45 °C under a nitrogen flushing environment. Changes in HMF and TBA formation in the dry mix and sensory changes in reconstituted kheer were studied up to 180 days. In the fresh dry mix, the average value of HMF recorded was 4.87 μmol/g, which increased to 11.23, 13.67, 18.13, and 21.43 μmol/g at 8, 25, 37 and 45 °C, respectively, after 180 days of storage. From an initial value of 0.067, the TBA value increased to 0.219, 0.311, 0.432 and 0.613 at 532 nm at 8, 25, 37 and 45 °C, respectively, after 180 days of storage. Data generated from the chemical kinetics of HMF and TBA development during storage of pearl millet kheer mix were modeled using Arrhenius equations to predict the shelf life of the product. Changes in HMF and TBA followed first order reaction kinetics. It was found that the potential shelf life of the pearl millet based kheer mix was 396 days at 8 °C and 288 days at 25 °C, respectively.
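
    The Arrhenius shelf-life procedure described above can be reproduced from the HMF values quoted in the abstract: estimate a first-order rate constant at each storage temperature, fit ln k against 1/T, and invert for shelf life at a target temperature. A sketch; the HMF acceptability limit below is an invented illustrative cutoff, so the predicted days will not match the paper's figures:

```python
import math
import numpy as np

# HMF values (umol/g) from the abstract: fresh, and after 180 days of storage
c0 = 4.87
storage = {8: 11.23, 25: 13.67, 37: 18.13, 45: 21.43}  # degC -> C(180 d)

# First-order formation kinetics: C(t) = C0 * exp(k t)  =>  k = ln(C/C0) / t
temps_k = np.array([t + 273.15 for t in storage])
k = np.array([math.log(c / c0) / 180.0 for c in storage.values()])

# Arrhenius: ln k = ln A - Ea / (R T); fit a straight line in (1/T, ln k)
slope, intercept = np.polyfit(1.0 / temps_k, np.log(k), 1)
ea = -slope * 8.314  # activation energy, J/mol
print(round(ea / 1000, 1), "kJ/mol")

# Shelf life at 25 degC for a hypothetical HMF acceptability limit
limit = 15.0  # umol/g -- illustrative cutoff, not from the paper
k25 = math.exp(intercept + slope / 298.15)
print(round(math.log(limit / c0) / k25), "days")
```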

  10. Mixed-order phase transition in a minimal, diffusion-based spin model.

    PubMed

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion-based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  11. Study on stress-strain response of multi-phase TRIP steel under cyclic loading

    NASA Astrophysics Data System (ADS)

    Dan, W. J.; Hu, Z. G.; Zhang, W. G.; Li, S. H.; Lin, Z. Q.

    2013-12-01

    The stress-strain response of multi-phase TRIP590 sheet steel under cyclic loading at room temperature is studied based on a cyclic phase transformation model and a multi-phase mixed kinematic hardening model. The cyclic martensite transformation model is proposed based on shear-band intersection, with the repeat number, strain amplitude and cyclic frequency controlling the phase transformation process. The multi-phase mixed kinematic hardening model is developed from a non-linear kinematic hardening rule applied to each phase. The parameters of the transformation model are identified from the relationship between the austenite volume fraction and the repeat number. The parameters of the kinematic hardening model are confirmed against experimental hysteresis loops at different strain amplitudes. The hysteresis-loop and stress-amplitude responses are evaluated against tension-compression data.

  12. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate their combined toxicity when they occur in a mixture. Effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three UV-filters in a mixture were determined from the fractions of the components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed via the model deviation ratio (MDR), using observed and predicted toxicity values obtained from the mixture-exposure tests and the CA model. The results indicated that observed ECx,mix values (e.g., EC10,mix, EC25,mix, or EC50,mix) obtained from the mixture-exposure tests were higher than the ECx,mix values predicted by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. These findings provide important information for hazard and risk assessment of organic UV-filters when they occur together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies on various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
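    The concentration addition (CA) prediction and the model deviation ratio (MDR) used above follow standard definitions and can be sketched in a few lines; the single-substance ECx values in the usage note are hypothetical, not the paper's measured values.

```python
def ca_predicted_ecx(fractions, ecx):
    """Concentration-addition prediction of a mixture ECx.

    fractions: proportion of each component in the mixture (must sum to 1)
    ecx: single-substance ECx of each component (same order, same units)
    CA: 1 / ECx_mix = sum_i(p_i / ECx_i)
    """
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx))

def model_deviation_ratio(predicted_ecx, observed_ecx):
    # MDR < 1 when the observed ECx exceeds the prediction, i.e. the
    # mixture is less toxic than CA predicts (antagonism, as reported above).
    return predicted_ecx / observed_ecx
```

For example, an equal-fraction binary mixture of two substances that each have an EC25 of 2.0 mg/L has a CA-predicted EC25,mix of 2.0 mg/L; if the observed EC25,mix were 4.0 mg/L, the MDR of 0.5 would signal the antagonistic pattern the abstract describes.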

  13. Investigation of nutrient feeding strategies in a countercurrent mixed-acid multi-staged fermentation: development of segregated-nitrogen model.

    PubMed

    Smith, Aaron D; Holtzapple, Mark T

    2010-12-01

    The MixAlco process is a biorefinery based on the production of carboxylic acids via mixed-culture fermentation. Nitrogen is essential for microbial growth and metabolism, and may exist in soluble (e.g., ammonia) or insoluble (e.g., cells) forms. Understanding the dynamics of nitrogen flow in a countercurrent fermentation is necessary to develop control strategies that maximize performance. To estimate nitrogen concentration profiles in a four-stage fermentation train, a mass-balance-based segregated-nitrogen model was developed, which uses separate balances for solid- and liquid-phase nitrogen, with the nitrogen reaction flux between phases assumed to be zero. Comparison of predictions with measured nitrogen profiles from five trains, each with a different nutrient contacting pattern, shows that the segregated-nitrogen model captures the basic behavior and is a reasonable tool for estimating nitrogen profiles. The segregated-nitrogen model may be used to (1) estimate optimal nitrogen loading patterns, (2) develop a reaction-based model, (3) understand the influence of model inputs (e.g., operating parameters, feedstock properties, nutrient loading pattern) on the steady-state nitrogen profile, and (4) determine the direction of the nitrogen reaction flux between liquid and solid phases. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
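    Under the segregated-nitrogen assumption (zero interphase reaction flux), the liquid-phase nitrogen profile reduces to a running mass balance along the liquid flow path. The following is a minimal steady-state sketch with hypothetical flow and nutrient-addition values; it ignores solids-bound nitrogen transport entirely, which the full model tracks with a separate balance.

```python
def liquid_nitrogen_profile(feed_conc, additions, flow):
    """Steady-state liquid-phase N concentration down a fermentation train,
    assuming zero interphase reaction flux (the segregated-N assumption):
    each stage's outlet N is simply the inlet N plus any nutrient added there.

    feed_conc: liquid-feed N concentration (g/L), additions: N added per
    stage (g/d) in liquid-flow order, flow: liquid flow rate (L/d).
    All values here are hypothetical illustrations.
    """
    profile = []
    mass = feed_conc * flow  # N mass flow entering stage 1 (g/d)
    for add in additions:
        mass += add
        profile.append(mass / flow)  # outlet concentration of this stage
    return profile
```

With nutrients dosed only at selected stages (one of the contacting patterns studied), the profile is a step function that rises at each dosing point, which is the basic behavior the model is said to capture.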

  14. Re-resection rates after breast-conserving surgery as a performance indicator: introduction of a case-mix model to allow comparison between Dutch hospitals.

    PubMed

    Talsma, A K; Reedijk, A M J; Damhuis, R A M; Westenend, P J; Vles, W J

    2011-04-01

    Re-resection rate after breast-conserving surgery (BCS) has been introduced in the international literature as an indicator of the quality of surgical treatment. The present study aims to develop a case-mix model for re-resection rates and to evaluate its performance in comparing results between hospitals. Electronic records of eligible patients diagnosed with in-situ and invasive breast cancer in 2006 and 2007 were derived from 16 hospitals in the Rotterdam Cancer Registry (RCR) (n = 961). A model was built that identified prognostic factors for re-resection after BCS, from which an expected re-resection rate could be assessed for each hospital based on its case mix. To illustrate the opportunities for monitoring re-resections over time, after risk adjustment for patient profile, a VLAD chart was drawn for patients in one hospital. In general, three out of every ten women had re-surgery; in about 50% of cases this meant an additional mastectomy. Independent prognostic factors for re-resection after multivariate analysis were histological type, sublocalisation, tumour size, lymph node involvement and multifocal disease. After correction for case mix, one hospital was performing significantly fewer re-resections than the reference hospital, while two were performing significantly more re-resections than expected based on their patient mix. Our population-based study confirms earlier reports that re-resection is frequently required after an initial breast-conserving operation. Case-mix models such as the one we constructed can be used to correct for variation in performance between hospitals. VLAD charts are valuable tools for monitoring quality of care within individual hospitals. Copyright © 2011 Elsevier Ltd. All rights reserved.
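    A case-mix correction of this kind typically scores each patient with a logistic model fitted on the pooled registry and then compares a hospital's observed re-resection count with its case-mix-expected count. A hedged sketch follows; the risk factors, coefficients and data are hypothetical, not those of the RCR model.

```python
import math

def expected_rate(patients, coefs, intercept):
    """Case-mix-expected re-resection probability for one hospital.

    patients: list of dicts mapping risk-factor name -> value
    coefs/intercept: logistic-regression parameters (hypothetical here),
    assumed fitted on the pooled multi-hospital registry.
    """
    probs = []
    for x in patients:
        z = intercept + sum(coefs[k] * v for k, v in x.items())
        probs.append(1.0 / (1.0 + math.exp(-z)))  # logistic link
    return sum(probs) / len(probs)

def oe_ratio(observed_count, patients, coefs, intercept):
    """Observed/expected ratio: values far from 1 flag hospitals that
    re-resect more (or less) often than their patient mix predicts."""
    return observed_count / (expected_rate(patients, coefs, intercept) * len(patients))
```

An O/E ratio significantly above 1 corresponds to the two hospitals in the study performing more re-resections than expected; a ratio below 1 corresponds to the hospital performing fewer.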

  15. Complex networks generated by the Penna bit-string model: Emergence of small-world and assortative mixing

    NASA Astrophysics Data System (ADS)

    Li, Chunguang; Maini, Philip K.

    2005-10-01

    The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.

  16. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  17. Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.

    PubMed

    Zhang, Yue; Berhane, Kiros

    2016-01-01

    We propose a general Bayesian joint modeling approach for mixed longitudinal outcomes from the exponential family that takes into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMMs). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster-level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children's Health Study (CHS) to jointly model questionnaire-based asthma state and multiple lung function measurements in order to gain better insight into the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.

  18. Evaluation of dielectric mixing models for microwave soil moisture retrieval using data from the Combined Radar/Radiometer (ComRAD) ground-based SMAP simulator

    USDA-ARS?s Scientific Manuscript database

    Soil moisture measurements are required to improve our understanding of hydrological processes, ecosystem functions, and linkages between the Earth’s water, energy, and carbon cycles. The efficient retrieval of soil moisture depends on various factors in which soil dielectric mixing models are consi...

  19. Estimated SAGE II ozone mixing ratios in early 1993 and comparisons with Stratospheric Photochemistry, Aerosols and Dynamics Expedition measurements

    NASA Technical Reports Server (NTRS)

    Yue, G. K.; Veiga, R. E.; Poole, L. R.; Zawodny, J. M.; Proffitt, M. H.

    1994-01-01

    An empirical time-series model for estimating ozone mixing ratios based on Stratospheric Aerosols and Gas Experiment II (SAGE II) monthly mean ozone data for the period October 1984 through June 1991 has been developed. The modeling results for ozone mixing ratios in the 10- to 30- km region in early months of 1993 are presented. In situ ozone profiles obtained by a dual-beam UV-absorption ozone photometer during the Stratospheric Photochemistry, Aerosols and Dynamics Expedition (SPADE) campaign, May 1-14, 1993, are compared with the model results. With the exception of two profiles at altitudes below 16 km, ozone mixing ratios derived by the model and measured by the ozone photometer are in relatively good agreement within their individual uncertainties. The identified discrepancies in the two profiles are discussed.

  20. Modeling the interplay between sea ice formation and the oceanic mixed layer: Limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-02-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  2. Experimental and theoretical characterization of an AC electroosmotic micromixer.

    PubMed

    Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo

    2010-01-01

    We have reported on a novel microfluidic mixer based on AC electroosmosis. To elucidate the mixer characteristics, we performed detailed measurements of mixing under various experimental conditions including applied voltage, frequency and solution viscosity. The results are discussed through comparison with results obtained from a theoretical model of AC electroosmosis. As predicted from the theoretical model, we found that a larger voltage (approximately 20 V(p-p)) led to more rapid mixing, while the dependence of the mixing on frequency (1-5 kHz) was insignificant under the present experimental conditions. Furthermore, the dependence of the mixing on viscosity was successfully explained by the theoretical model, and the applicability of the mixer in viscous solution (2.83 mPa s) was confirmed experimentally. By using these results, it is possible to estimate the mixing performance under given conditions. These estimations can provide guidelines for using the mixer in microfluidic chemical analysis.

  3. Mixed quantum-classical simulation of the hydride transfer reaction catalyzed by dihydrofolate reductase based on a mapped system-harmonic bath model

    NASA Astrophysics Data System (ADS)

    Xu, Yang; Song, Kai; Shi, Qiang

    2018-03-01

    The hydride transfer reaction catalyzed by dihydrofolate reductase is studied using a recently developed mixed quantum-classical method to investigate the nuclear quantum effects on the reaction. Molecular dynamics simulation is first performed based on a two-state empirical valence bond potential to map the atomistic model to an effective double-well potential coupled to a harmonic bath. In the mixed quantum-classical simulation, the hydride degree of freedom is quantized, and the effective harmonic oscillator modes are treated classically. It is shown that the hydride transfer reaction rate using the mapped effective double-well/harmonic-bath model is dominated by the contribution from the ground vibrational state. Further comparison with the adiabatic reaction rate constant based on the Kramers theory confirms that the reaction is primarily vibrationally adiabatic, which agrees well with the high transmission coefficients found in previous theoretical studies. The calculated kinetic isotope effect is also consistent with the experimental and recent theoretical results.

  4. A 1H NMR-based metabolomics approach to evaluate the geographical authenticity of herbal medicine and its application in building a model effectively assessing the mixing proportion of intentional admixtures: A case study of Panax ginseng: Metabolomics for the authenticity of herbal medicine.

    PubMed

    Nguyen, Huy Truong; Lee, Dong-Kyu; Choi, Young-Geun; Min, Jung-Eun; Yoon, Sang Jun; Yu, Yun-Hyun; Lim, Johan; Lee, Jeongmi; Kwon, Sung Won; Park, Jeong Hill

    2016-05-30

    Ginseng, the root of Panax ginseng, has long been the subject of adulteration, especially regarding its origins. Here, 60 ginseng samples from Korea and China initially displayed similar genetic makeup when investigated by a DNA-based technique using 23 chloroplast intergenic space regions. Hence, (1)H NMR-based metabolomics with orthogonal projections to latent structures discriminant analysis (OPLS-DA) was applied and successfully distinguished samples from the two countries using seven primary metabolites as discrimination markers. Furthermore, to recreate adulteration as it occurs in practice, 21 mixed samples at various Korea/China ratios were tested with the newly built OPLS-DA model. The results showed satisfactory separation according to the mixing proportion. Finally, a procedure for assessing the mixing proportion of intentionally blended samples that achieved good predictability (adjusted R(2) = 0.8343) was constructed, thus verifying its promising application to quality control of herbal foods by indicating the likely mixing ratio of falsified samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process, covering solid mixing, combustion, emission formation and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out alongside the development of three-dimensional process modeling, providing a chain of knowledge that feeds back into phenomenon research. Knowledge gathered through model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to the modeling of combustion and the formation of char and volatiles for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and are used together with lateral temperature profiles at the bed and in the upper parts of the furnace to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based NO formation profiles is followed by analysis of the oxidizing and reducing regions that form due to the lower furnace design and the mixing characteristics of fuel and combustion air, which shape the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.

  6. Scheduling Real-Time Mixed-Criticality Jobs

    NASA Astrophysics Data System (ADS)

    Baruah, Sanjoy K.; Bonifaci, Vincenzo; D'Angelo, Gianlorenzo; Li, Haohan; Marchetti-Spaccamela, Alberto; Megow, Nicole; Stougie, Leen

    Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We first demonstrate the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to just two sets of certification requirements. We then quantify, via the metric of processor speedup factor, the effectiveness of two techniques widely used in scheduling mixed-criticality systems, reservation-based scheduling and priority-based scheduling, showing that the latter is superior to the former. We also show that the speedup factors are tight for these two techniques.

  7. A Turbulence model taking into account the longitudinal flow inhomogeneity in mixing layers and jets

    NASA Astrophysics Data System (ADS)

    Troshin, A. I.

    2017-06-01

    The problem of Reynolds-averaged Navier-Stokes (RANS) based turbulence models overestimating the potential core length of subsonic free jets is addressed. It is shown that the issue is due to incorrect modeling of the velocity profile in the jet mixing layers. An additional source term in the ω equation is proposed, which accounts for the effect of longitudinal flow inhomogeneity on turbulence in mixing layers. Computations confirm that the modified Speziale-Sarkar-Gatski/Launder-Reece-Rodi-omega (SSG/LRR-ω) turbulence model correctly predicts the mean velocity profiles in both the initial and far-field regions of a subsonic free plane jet, as well as the centerline velocity decay rate.

  8. 40 CFR Appendix E to Part 63 - Monitoring Procedure for Nonthoroughly Mixed Open Biological Treatment Systems at Kraft Pulp...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... II. Definitions Biological treatment unit = wastewater treatment unit designed and operated to... last zone in the series and ending with the first zone. B. Data Collection Requirements This method is based upon modeling the nonthoroughly mixed open biological treatment unit as a series of well-mixed...

  9. Development of a Mixed Methods Investigation of Process and Outcomes of Community-Based Participatory Research

    ERIC Educational Resources Information Center

    Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R.

    2018-01-01

    This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value…

  10. Renormalisation group corrections to neutrino mixing sum rules

    NASA Astrophysics Data System (ADS)

    Gehrlein, J.; Petcov, S. T.; Spinrath, M.; Titov, A. V.

    2016-11-01

    Neutrino mixing sum rules are common to a large class of models based on the (discrete) symmetry approach to lepton flavour. In this approach the neutrino mixing matrix U is assumed to have an underlying approximate symmetry form Ũν, which is dictated by, or associated with, the employed (discrete) symmetry. In such a setup the cosine of the Dirac CP-violating phase δ can be related to the three neutrino mixing angles in terms of a sum rule which depends on the symmetry form of Ũν. We consider five extensively discussed possible symmetry forms of Ũν: i) bimaximal (BM) and ii) tri-bimaximal (TBM) forms, the forms corresponding to iii) golden ratio type A (GRA) mixing, iv) golden ratio type B (GRB) mixing, and v) hexagonal (HG) mixing. For each of these forms we investigate the renormalisation group corrections to the sum rule predictions for δ in the cases of neutrino Majorana mass term generated by the Weinberg (dimension 5) operator added to i) the Standard Model, and ii) the minimal SUSY extension of the Standard Model.

  11. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new spectral-unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this approach, the nature of the local mixing model is detected using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data for that zone. This variance is compared to a threshold value, and the appropriate linear or linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix the hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The spectral and spatial information thus extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and of literature linear/linear-quadratic approaches applied to the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed-pixel zones. The DSM data are also used to locally detect the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelity for the multi-sharpened data and significantly outperforms the literature methods considered.
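    The linear branch of the unmixing step rests on nonnegative matrix factorization, V ≈ WH with all factors nonnegative. A minimal multiplicative-update NMF (the classic Lee-Seung rules for the Frobenius loss) is sketched below as a generic illustration; the linear-quadratic variant used in the paper adds bilinear terms and is not reproduced here.

```python
import random

def matmul(A, B):
    """Dense matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def nmf(V, r, iters=1000, seed=0, eps=1e-9):
    """Multiplicative-update NMF: V (m x n, nonnegative) ~= W (m x r) H (r x n).

    Updates: H <- H * (W^T V) / (W^T W H),  W <- W * (V H^T) / (W H H^T),
    which keep all entries nonnegative and monotonically reduce the loss.
    """
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        Wt = list(map(list, zip(*W)))
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(r)]
        Ht = list(map(list, zip(*H)))
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)] for i in range(m)]
    return W, H
```

In the sharpening context, the columns of W play the role of endmember spectra and the rows of H the abundance maps; the local DSM-variance test decides whether this linear factorization or its linear-quadratic extension is applied in a given zone.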

  12. Improving the mixing performance of side channel type micromixers using an optimal voltage control model.

    PubMed

    Wu, Chien-Hsien; Yang, Ruey-Jen

    2006-06-01

    Electroosmotic flow in microchannels is restricted to low Reynolds number regimes. Since the inertia forces are extremely weak in such regimes, turbulent conditions do not readily develop, and hence species mixing occurs primarily as a result of diffusion. Consequently, achieving a thorough species mixing generally relies upon the use of extended mixing channels. This paper aims to improve the mixing performance of conventional side channel type micromixers by specifying the optimal driving voltages to be applied to each channel. In the proposed approach, the driving voltages are identified by constructing a simple theoretical scheme based on a 'flow-rate-ratio' model and Kirchhoff's law. The numerical and experimental results confirm that the optimal voltage control approach provides a better mixing performance than the use of a single driving voltage gradient.

  13. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for modeling the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives on Generalized Renewal Processes in general and on Weibull-based Generalized Renewal Processes in particular. In a departure from the literature, we present a mixed Generalized Renewal Processes approach involving the Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
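    The Kijima virtual-age recursions that the mixed model combines are standard and compact, so a sketch is easy to give; how the mixed model assigns Type I or Type II (and the rejuvenation parameter) to each individual intervention is the paper's contribution and is not reproduced here.

```python
def virtual_ages(interarrival_times, q, kijima_type=1):
    """Kijima virtual-age recursion for a Generalized Renewal Process.

    Type I:  V_n = V_{n-1} + q * X_n   (repair rejuvenates only the last sojourn)
    Type II: V_n = q * (V_{n-1} + X_n) (repair rejuvenates the whole history)
    q = 0 -> perfect repair (as good as new); q = 1 -> minimal repair (as bad as old).
    """
    v, ages = 0.0, []
    for x in interarrival_times:
        v = v + q * x if kijima_type == 1 else q * (v + x)
        ages.append(v)
    return ages
```

In the Weibull-based setting, the virtual age V_{n-1} enters the conditional Weibull distribution of the next interarrival time, which is what the maximum-likelihood estimation mentioned above fits.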

  14. Scale-up on basis of structured mixing models: A new concept.

    PubMed

    Mayr, B; Moser, A; Nagy, E; Horvat, P

    1994-02-05

    A new scale-up concept for bioreactors equipped with Rushton turbines, based upon mixing models using the tanks-in-series concept, is presented. The physical mixing model includes four adjustable parameters: radial and axial circulation time, the number of ideally mixed elements in one cascade, and the volume of the ideally mixed turbine region. The model parameters were adjusted using a modified Monte-Carlo optimization method that fitted the simulated response function to the experimental curve. The number of cascade elements turned out to be constant (N = 4). The radial circulation time parameter is in good agreement with the value obtained from the pumping capacity. For the remaining parameters, first- or second-order formal equations were developed involving four operational parameters (stirring and aeration intensity, scale, viscosity). This concept can be extended to several other types of bioreactors as well, and it appears to be a suitable tool for comparing the bioprocess performance of different bioreactor types. (c) 1994 John Wiley & Sons, Inc.
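    The tanks-in-series building block of such mixing models has a closed-form exit-age distribution, E(θ) = Nᴺ θ^(N-1) e^(-Nθ) / (N-1)!, where θ is residence time normalized by the mean. A sketch follows, using the N = 4 cascade size fitted in the study; the distribution itself is textbook reactor theory, not the paper's full four-parameter circulation model.

```python
import math

def tanks_in_series_rtd(theta, n):
    """Exit-age distribution E(theta) of n ideally mixed tanks in series,
    with theta = t / (mean residence time). Integrates to 1 over theta >= 0."""
    return (n ** n) * theta ** (n - 1) * math.exp(-n * theta) / math.factorial(n - 1)
```

For n = 1 this reduces to the single stirred tank's exponential washout; as n grows the distribution narrows toward plug flow, so the fitted N = 4 quantifies how far the cascade sits between the two limits.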

  15. A Fatty Acid Based Bayesian Approach for Inferring Diet in Aquatic Consumers

    PubMed Central

    Holtgrieve, Gordon W.; Ward, Eric J.; Ballantyne, Ashley P.; Burns, Carolyn W.; Kainz, Martin J.; Müller-Navarra, Doerthe C.; Persson, Jonas; Ravet, Joseph L.; Strandberg, Ursula; Taipale, Sami J.; Alhgren, Gunnel

    2015-01-01

    We modified the stable isotope mixing model MixSIR to infer primary producer contributions to consumer diets based on their fatty acid composition. To parameterize the algorithm, we generated a ‘consumer-resource library’ of FA signatures of Daphnia fed different algal diets, using 34 feeding trials representing diverse phytoplankton lineages. This library corresponds to the resource or producer file in classic Bayesian mixing models such as MixSIR or SIAR. Because this library is based on the FA profiles of zooplankton consuming known diets, and not the FA profiles of algae directly, trophic modification of consumer lipids is directly accounted for. To test the model, we simulated hypothetical Daphnia comprised of 80% diatoms, 10% green algae, and 10% cryptophytes and compared the FA signatures of these known pseudo-mixtures to outputs generated by the mixing model. The algorithm inferred these simulated consumers were comprised of 82% (63-92%) [median (2.5th to 97.5th percentile credible interval)] diatoms, 11% (4-22%) green algae, and 6% (0-25%) cryptophytes. We used the same model with published phytoplankton stable isotope (SI) data for δ13C and δ15N to examine how a SI based approach resolved a similar scenario. With SI, the algorithm inferred that the simulated consumer assimilated 52% (4-91%) diatoms, 23% (1-78%) green algae, and 18% (1-73%) cyanobacteria. The accuracy and precision of SI based estimates was extremely sensitive to both resource and consumer uncertainty, as well as the trophic fractionation assumption. These results indicate that when using only two tracers with substantial uncertainty for the putative resources, as is often the case in this class of analyses, the underdetermined constraint in consumer-resource SI analyses may be intractable. 
The FA based approach alleviated the underdetermined constraint because many more FA biomarkers were utilized (n > 20), different primary producers (e.g., diatoms, green algae, and cryptophytes) have very characteristic FA compositions, and the FA profiles of many aquatic primary consumers are strongly influenced by their diets. PMID:26114945
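The sampling-importance idea behind MixSIR-style mixing models can be sketched in a few lines. The four-biomarker "library" below is entirely invented (the real consumer-resource library has many more fatty acids and per-source variances), but it shows how a multi-tracer likelihood concentrates the posterior on a known pseudo-mixture:

```python
import math
import random

random.seed(1)

# Invented stand-in for a consumer-resource library: mean signatures of four
# FA biomarkers per producer group, with a common measurement SD.
library = {
    "diatoms": (0.40, 0.10, 0.30, 0.20),
    "greens":  (0.10, 0.50, 0.20, 0.20),
    "crypto":  (0.25, 0.25, 0.10, 0.40),
}
names = list(library)
sd = 0.02

def mixture_signature(props):
    return [sum(p * library[n][k] for p, n in zip(props, names)) for k in range(4)]

# Pseudo-consumer with known composition, mirroring the paper's simulation test.
true_p = (0.8, 0.1, 0.1)
obs = mixture_signature(true_p)

def log_like(props):
    mu = mixture_signature(props)
    return -sum((o - m) ** 2 for o, m in zip(obs, mu)) / (2.0 * sd ** 2)

# Importance sampling under a flat Dirichlet prior (normalized exponentials).
draws, logw = [], []
for _ in range(20000):
    g = [random.expovariate(1.0) for _ in names]
    s = sum(g)
    draws.append(tuple(x / s for x in g))
    logw.append(log_like(draws[-1]))

m = max(logw)
w = [math.exp(lw - m) for lw in logw]
tot = sum(w)
post_mean = [sum(wi * d[j] for wi, d in zip(w, draws)) / tot for j in range(3)]
```

With four informative tracers the posterior mean lands near the true 80/10/10 mixture; dropping to two overlapping tracers is exactly the underdetermined situation the abstract describes.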

  16. Simulation of mixing in the quick quench region of a rich burn-quick quench mix-lean burn combustor

    NASA Technical Reports Server (NTRS)

    Shih, Tom I.-P.; Nguyen, H. Lee; Howe, Gregory W.; Li, Z.

    1991-01-01

    A computer program was developed to study the mixing process in the quick quench region of a rich burn-quick quench mix-lean burn combustor. The computer program developed was based on the density-weighted, ensemble-averaged conservation equations of mass, momentum (full compressible Navier-Stokes), total energy, and species, closed by a k-epsilon turbulence model with wall functions. The combustion process was modeled by a two-step global reaction mechanism, and NO(x) formation was modeled by the Zeldovich mechanism. The formulation employed in the computer program and the essence of the numerical method of solution are described. Some results obtained for nonreacting and reacting flows with different main-flow to dilution-jet momentum flux ratios are also presented.

  17. Delamination modeling of laminate plate made of sublaminates

    NASA Astrophysics Data System (ADS)

    Kormaníková, Eva; Kotrasová, Kamila

    2017-07-01

    The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element model, the individual damage parameters, i.e., spring reaction forces, relative displacements, and energy release rates, are calculated along the delamination front.
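The spring-based quantities listed above combine into mode-wise energy release rates in the spirit of the virtual crack closure technique, G_i = F_i·d_i / (2·ΔA). A minimal sketch with invented forces, openings, and crack-face area (not the paper's values):

```python
def energy_release_rates(forces, openings, delta_area):
    """Mode-wise energy release rates from interface spring results.

    Virtual-crack-closure idea: G_i = F_i * d_i / (2 * dA), where F_i is the
    spring reaction force and d_i the relative displacement of the matching
    node pair behind the delamination front. All numbers are illustrative.
    """
    return [f * d / (2.0 * delta_area) for f, d in zip(forces, openings)]

# Normal (mode I) and sliding (mode II) components at one front node:
# forces in N, openings in m, crack-face area increment in m^2.
G_I, G_II = energy_release_rates([120.0, 45.0], [2.0e-5, 1.2e-5], 1.0e-6)
total_G = G_I + G_II
mode_mixity = G_II / total_G   # fraction of the release rate in mode II
```

Comparing total_G and the mode mixity against interlaminar toughness data is what turns these spring results into a delamination criterion.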

  18. Convective Overshoot in Stellar Interior

    NASA Astrophysics Data System (ADS)

    Zhang, Q. S.

    2015-07-01

    In stellar interiors, turbulent thermal convection transports matter and energy, and dominates the structure and evolution of stars. Convective overshoot, which results from non-local convective transport from the convection zone into the radiative zone, is one of the most uncertain and difficult factors in stellar physics at present. The classical method for studying convective overshoot is the non-local mixing-length theory (NMLT). However, the NMLT is based on phenomenological assumptions and leads to contradictions, and it has therefore been criticized in the literature. Helioseismic studies have shown that the NMLT cannot satisfy the helioseismic requirements, and have pointed out that only turbulent convection models (TCMs) can be accepted. In the first part of this thesis, the models and derivations of both the NMLT and the TCM are introduced. The second part describes in detail the studies on the TCM (theoretical analysis and applications) and the development of a new model of convective overshoot mixing. In the theoretical analysis of the TCM, approximate and asymptotic solutions were obtained under certain assumptions, and the structure of the overshoot region was discussed. Over a large space of the free parameters, the approximate/asymptotic solutions are in good agreement with the numerical results. An important result is that the scale of the overshoot region in which thermal energy transport is effective is 1 HK (where HK is the scale height of the turbulent kinetic energy), independent of the free parameters of the TCM. We applied the TCM and a simple overshoot mixing model in three cases. In the solar case, the temperature gradient in the overshoot region agrees with the helioseismic requirements, and the profiles of the lithium abundance, sound speed, and density of the solar models are also improved.
In the low-mass stars of the open clusters Hyades, Praesepe, NGC 6633, NGC 752, NGC 3680, and M67, using the same model and parameters as in the solar case to treat the convective envelope overshoot mixing, the surface lithium abundances of the stellar models are consistent with the observations. In the case of the binary HY Vir, the same model and parameters also bring the radii and effective temperatures of its stars, which have convective cores, into agreement with the observations. These results imply that the simple overshoot mixing model may need to be improved significantly. Motivated by this, we established a new model of overshoot mixing based on the fluid dynamic equations and derived the diffusion coefficient of convective mixing. The diffusion coefficient behaves differently in the convection zone and in the overshoot region. In the overshoot region, buoyancy does negative work on the flow, so the fluid oscillates around its equilibrium location, which leads to a small scale and low efficiency of overshoot mixing. These physical properties differ significantly from the classical NMLT and are consistent with helioseismic studies and numerical simulations. The new model was tested in stellar evolution calculations, and its parameter was calibrated.

  19. Effect of Blockage and Location on Mixing of Swirling Coaxial Jets in a Non-expanding Circular Confinement

    NASA Astrophysics Data System (ADS)

    Patel, V. K.; Singh, S. N.; Seshadri, V.

    2013-06-01

    A study is conducted to evolve an effective design concept to improve mixing in a combustion chamber and thereby reduce the amount of intake air. The geometry used is that of a gas turbine combustor model. For simplicity, both jets have been treated as air jets, and the effects of heat release and chemical reaction have not been modeled. Various contraction shapes and blockages have been investigated by placing them at different locations downstream of the inlet to obtain better mixing. A commercial CFD code, Fluent 6.3, which is based on the finite volume method, has been used to solve the flow in the combustor model. Validation is performed against experimental data available in the literature using the standard k-ω turbulence model. The study has shown that a contraction and blockage at the optimum location enhance the mixing process. The effect of swirl in the jets has also been investigated.

  20. Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.

    PubMed

    Gao, Jian; Moran, Eileen; Almenoff, Peter L

    2018-06-01

    Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can explain only up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. The objective was to develop a case-mix algorithm that hospitals and payers can use to measure and compare the cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from the Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R², mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R² reached 0.72 and 0.52 for the transformed and raw scale cost, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purposes.
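The split-sample check described above can be illustrated with a toy cost model. Everything below (the two predictors, coefficients, and noise level) is synthetic; it only demonstrates fitting on one half of the data and measuring R² on the held-out half:

```python
import random

random.seed(7)

# Synthetic stand-in for patient-level data: cost driven by age and sex
# plus noise. Coefficients and noise are invented for illustration only.
def make_patients(n):
    rows = []
    for _ in range(n):
        age = random.uniform(20.0, 90.0)
        sex = float(random.randint(0, 1))
        cost = 100.0 + 12.0 * age + 300.0 * sex + random.gauss(0.0, 50.0)
        rows.append((age, sex, cost))
    return rows

def fit_ols(rows):
    """Least squares for cost ~ 1 + age + sex via the 3x3 normal equations."""
    X = [(1.0, a, s) for a, s, _ in rows]
    y = [c for _, _, c in rows]
    A = [[sum(xi[i] * xi[j] for xi in X) for j in range(3)] for i in range(3)]
    b = [sum(xi[i] * yi for xi, yi in zip(X, y)) for i in range(3)]
    for col in range(3):                      # Gauss-Jordan with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [arj - f * acj for arj, acj in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(3)]

def r_squared(rows, beta):
    y = [c for _, _, c in rows]
    pred = [beta[0] + beta[1] * a + beta[2] * s for a, s, _ in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

data = make_patients(2000)
train, test = data[:1000], data[1000:]
beta = fit_ols(train)
test_r2 = r_squared(test, beta)   # out-of-sample fit guards against overfitting
```

Reporting R² on the held-out half, rather than on the fitting sample, is what makes the comparison between coarse and expanded comorbidity groupings fair.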

  1. Tunable, mixed-resolution modeling using library-based Monte Carlo and graphics processing units

    PubMed Central

    Mamonov, Artem B.; Lettieri, Steven; Ding, Ying; Sarver, Jessica L.; Palli, Rohith; Cunningham, Timothy F.; Saxena, Sunil; Zuckerman, Daniel M.

    2012-01-01

    Building on our recently introduced library-based Monte Carlo (LBMC) approach, we describe a flexible protocol for mixed coarse-grained (CG)/all-atom (AA) simulation of proteins and ligands. In the present implementation of LBMC, protein side chain configurations are pre-calculated and stored in libraries, while bonded interactions along the backbone are treated explicitly. Because the AA side chain coordinates are maintained at minimal run-time cost, arbitrary sites and interaction terms can be turned on to create mixed-resolution models. For example, an AA region of interest such as a binding site can be coupled to a CG model for the rest of the protein. We have additionally developed a hybrid implementation of the generalized Born/surface area (GBSA) implicit solvent model suitable for mixed-resolution models, which in turn was ported to a graphics processing unit (GPU) for faster calculation. The new software was applied to study two systems: (i) the behavior of spin labels on the B1 domain of protein G (GB1) and (ii) docking of randomly initialized estradiol configurations to the ligand binding domain of the estrogen receptor (ERα). The performance of the GPU version of the code was also benchmarked in a number of additional systems. PMID:23162384

  2. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
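For contrast with the likelihood-based approaches discussed above, the moment-based random-effects estimator (DerSimonian-Laird) takes only a few lines. The five effect sizes and within-study variances below are invented log-odds ratios, used only to exercise the estimator:

```python
def dersimonian_laird(effects, variances):
    """Moment-based random-effects pooling (DerSimonian-Laird).

    Returns (pooled_effect, tau2), where tau2 is the between-study
    heterogeneity variance. This is the moment-based baseline that
    likelihood-based mixed-effects models are compared against.
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)             # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Invented log-odds-ratio effects from five hypothetical trials.
effects = [-0.6, -0.2, 0.1, -0.4, -0.9]
variances = [0.05, 0.08, 0.12, 0.06, 0.10]
pooled, tau2 = dersimonian_laird(effects, variances)
```

With rare binary events the within-study variances themselves become unreliable, which is precisely where the abstract finds likelihood-based mixed-effects models preferable to this moment-based recipe.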

  3. Euler-Lagrange CFD modelling of unconfined gas mixing in anaerobic digestion.

    PubMed

    Dapelo, Davide; Alberini, Federico; Bridgeman, John

    2015-11-15

    A novel Euler-Lagrangian (EL) computational fluid dynamics (CFD) finite volume-based model to simulate the gas mixing of sludge for anaerobic digestion is developed and described. Fluid motion is driven by momentum transfer from bubbles to liquid. Model validation is undertaken by assessing the flow field in a lab-scale model with particle image velocimetry (PIV). Conclusions are drawn about the upscaling and applicability of the model to full-scale problems, and recommendations are given for optimum application. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Hydrochemical assessment of freshening saline groundwater using multiple end-members mixing modeling: A study of Red River delta aquifer, Vietnam

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Hyun; Kim, Kyoung-Ho; Thao, Nguyen Thi; Batsaikhan, Bayartungalag; Yun, Seong-Taek

    2017-06-01

    In this study, we evaluated the water quality status (especially salinity problems) and hydrogeochemical processes of an alluvial aquifer in a floodplain of the Red River delta, Vietnam, based on the hydrochemical and isotopic data of groundwater samples (n = 23) from the Kien Xuong district of the Thai Binh province. Following historical inundation by paleo-seawater during coastal progradation, the aquifer has undergone progressive freshening, while land reclamation has enabled settlement and farming. The hydrochemical data of the water samples showed a broad hydrochemical change, from Na-Cl through Na-HCO3 to Ca-HCO3 types, suggesting that the groundwater overall evolved through a freshening process accompanied by cation exchange. Principal component analysis (PCA) of the hydrochemical data indicates three major hydrogeochemical processes occurring in the aquifer, namely: 1) progressive freshening of remaining paleo-seawater, 2) water-rock interaction (i.e., dissolution of silicates), and 3) redox processes including sulfate reduction, as indicated by heavy sulfur and oxygen isotope compositions of sulfate. To quantitatively assess the hydrogeochemical processes, end-member mixing analysis (EMMA) and forward mixing modeling using the PHREEQC code were conducted. The EMMA results show that the hydrochemical model with the two-dimensional mixing space composed of PC 1 and PC 2 best explains the mixing in the study area; therefore, we consider that the groundwater chemistry mainly evolved by mixing among three end-members (i.e., paleo-seawater, infiltrating rain, and K-rich groundwater). The distinct depletion of sulfate in groundwater, likely due to bacterial sulfate reduction, can also be explained by EMMA. The evaluation of mass balances using geochemical modeling supports the explanation that the freshening process, accompanied by direct cation exchange, occurs through mixing among the three end-members involving the K-rich groundwater.
This study shows that the multiple end-members mixing model is useful to more successfully assess complex hydrogeochemical processes occurring in a salinized aquifer under freshening, as compared to the conventional interpretation using the theoretical mixing line based on only two end-members (i.e., seawater and rainwater).
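For conservative tracers, a three-end-member mixing calculation of the kind described above reduces to a small linear system: two tracer mass balances plus the constraint that the fractions sum to one. The end-member compositions and the sample below are invented illustration values, not the study's data:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(3)]

# Hypothetical end-member compositions (mg/L of Cl and Na) for paleo-seawater,
# rain, and a K-rich groundwater; the third row is the mass-balance constraint.
end_members = {
    "paleo-seawater": (19000.0, 10500.0),
    "rain":           (5.0, 3.0),
    "K-rich gw":      (30.0, 80.0),
}
names = list(end_members)
sample = (3811.5, 2125.5)   # observed Cl and Na of one well (invented)

A = [[end_members[n][0] for n in names],
     [end_members[n][1] for n in names],
     [1.0, 1.0, 1.0]]
b = [sample[0], sample[1], 1.0]
fractions = solve3(A, b)    # mixing fractions of the three end-members
```

With only two tracers, three end-members is the most this balance can resolve exactly; EMMA's PCA step serves to verify that two mixing dimensions really do explain the data before such fractions are computed.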

  5. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction within a finite volume (the mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts the mean effects of molecular diffusion well under various numerical and flow parameters. The number of mixing particles should be large enough for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
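The essential behavior a multi-particle mixing model must reproduce can be sketched with an IEM-like relaxation of each particle's scalar toward the in-volume mean. This is a simplified stand-in for the MVM, not the authors' formulation, and the particle count, time step, and mixing timescale are invented; it demonstrates the two required properties: the mean is conserved and the scalar variance decays:

```python
import random

random.seed(3)

def mixing_step(phi, dt, t_mix):
    """One molecular-mixing step for particles sharing a mixing volume.

    IEM-style multi-particle interaction: each particle's scalar relaxes
    toward the in-volume mean, which conserves the mean and decays the
    variance. t_mix is a mixing timescale assumed given by the turbulence model.
    """
    mean = sum(phi) / len(phi)
    factor = 1.0 - 0.5 * dt / t_mix
    return [mean + (p - mean) * factor for p in phi]

# Fifty particles with random initial scalar values in one mixing volume.
phi = [random.random() for _ in range(50)]
mean0 = sum(phi) / len(phi)
var0 = sum((p - mean0) ** 2 for p in phi) / len(phi)
for _ in range(200):
    phi = mixing_step(phi, dt=0.01, t_mix=0.5)
mean1 = sum(phi) / len(phi)
var1 = sum((p - mean1) ** 2 for p in phi) / len(phi)
```

In an actual MVM implementation the particles interacting in one step are those inside a mixing volume of prescribed size, which is exactly the parameter the abstract identifies as critical relative to the Kolmogorov scale.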

  6. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction within a finite volume (the mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts the mean effects of molecular diffusion well under various numerical and flow parameters. The number of mixing particles should be large enough for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.

  7. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.

  8. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced the mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS.
The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  9. Modeling reactive transport with particle tracking and kernel estimators

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate such a system, an infinite number of particles would be required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system that is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in dilute systems should be modeled with alternative conceptual models, not with a limited number of particles.
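The KDE idea can be demonstrated in one dimension: each particle's mass is smeared by a Gaussian kernel, and the estimated concentration at the plume center approaches the known value even with a modest number of particles. The bandwidth rule and particle count below are illustrative choices, not the study's settings:

```python
import math
import random

random.seed(11)

def kde_density(x, particles, h):
    """Gaussian-kernel estimate of particle-number density at x.

    Each particle's mass is spread over a kernel of bandwidth h, widening its
    region of influence; this is the smoothing step advocated in the abstract.
    """
    norm = 1.0 / (len(particles) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in particles)

# A modest number of particles sampled from a known Gaussian plume.
particles = [random.gauss(0.0, 1.0) for _ in range(500)]
h = 1.06 * 500 ** -0.2                   # Silverman's rule for unit variance
est = kde_density(0.0, particles, h)     # concentration at the plume center
true = 1.0 / math.sqrt(2.0 * math.pi)    # exact Gaussian density at x = 0
rel_err = abs(est - true) / true
```

A raw histogram (box kernel of one cell) with the same 500 particles would fluctuate much more strongly; the kernel's wider region of influence is what lets few particles approximate the well-mixed concentration field.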

  10. A Water Model Study on Mixing Behavior of the Two-Layered Bath in Bottom Blown Copper Smelting Furnace

    NASA Astrophysics Data System (ADS)

    Shui, Lang; Cui, Zhixiang; Ma, Xiaodong; Jiang, Xu; Chen, Mao; Xiang, Yong; Zhao, Baojun

    2018-05-01

    The bottom-blown copper smelting furnace is a novel copper smelter developed in recent years. Many advantages of this furnace have been found that relate to bath mixing behavior under its specific gas injection scheme. This study uses a two-phase oil-water laboratory-scale model to investigate the impact of industry-adjustable variables on bath mixing time, including lower layer thickness, gas flow rate, upper layer thickness, and upper layer viscosity. Based on the experimental results, an overall empirical correlation for the mixing time in terms of these variables has been derived, which provides a methodology for industry to optimize mass transfer in the furnace.
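Empirical mixing-time correlations of this kind are typically power laws fitted in log-log space. The sketch below uses invented mixing-time measurements versus gas flow rate (one of the four variables studied) and recovers the exponent by ordinary least squares:

```python
import math

# Hypothetical mixing-time measurements (s) versus gas flow rate (L/min),
# generated to mimic the power-law form tau = C * Q**n commonly used to
# correlate water-model results. Not data from this study.
Q = [5.0, 10.0, 20.0, 40.0, 80.0]
tau = [60.0, 48.0, 38.0, 30.5, 24.2]

# Linear least squares on log(tau) = log(C) + n * log(Q).
x = [math.log(q) for q in Q]
y = [math.log(t) for t in tau]
m = len(x)
xbar, ybar = sum(x) / m, sum(y) / m
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))   # the exponent n
C = math.exp(ybar - slope * xbar)               # the prefactor
```

A negative exponent of roughly one third is typical of gas-stirred baths; a multi-variable correlation simply repeats this fit with one log-term per operating variable.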

  11. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
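A minimal homogeneous-mixing SIS model (no immunity, as for the diseases mentioned) illustrates the ODE approach. This two-variable sketch is not the authors' three-state system with empty space, and all parameter values are invented:

```python
def simulate_sis(beta, gamma, i0, dt=0.01, steps=20000):
    """Euler integration of a minimal SIS model under homogeneous mixing.

    di/dt = beta * s * i - gamma * i with s + i = 1: infection by random
    contact, recovery without immunity. A simplified stand-in for the
    paper's three-variable system (susceptible, infectious, empty space).
    """
    i = i0
    for _ in range(steps):
        s = 1.0 - i
        i += dt * (beta * s * i - gamma * i)
    return i

# Above threshold (beta > gamma) the infection settles at i* = 1 - gamma/beta;
# below threshold it dies out. Parameter values are illustrative.
i_endemic = simulate_sis(beta=0.5, gamma=0.2, i0=0.01)
i_extinct = simulate_sis(beta=0.15, gamma=0.2, i0=0.01)
```

The transcritical bifurcation at beta = gamma is the ODE analogue of the epidemic threshold; comparing such ODE predictions against contact-network data is how the homogeneous mixing assumption is evaluated.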

  12. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    PubMed

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data, our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  13. A refined and dynamic cellular automaton model for pedestrian-vehicle mixed traffic flow

    NASA Astrophysics Data System (ADS)

    Liu, Mianfang; Xiong, Shengwu

    2016-12-01

    Mixed traffic flow sharing the “same lane” and having no lane discipline is a common phenomenon on roads in developing countries. For example, motorized vehicles (m-vehicles) and non-motorized vehicles (nm-vehicles) may share the m-vehicle lane or the nm-vehicle lane, and pedestrians may share the nm-vehicle lane. Simulating pedestrian-vehicle mixed traffic flow consisting of three kinds of traffic objects (m-vehicles, nm-vehicles, and pedestrians) can be a challenge because some erratic drivers or pedestrians fail to follow the lane disciplines. In this paper, we investigate the various moving and interactive behaviors associated with mixed traffic flow, such as lateral drift (including illegal lane changing and transverse crossing of different lanes), overtaking, and forward movement, and propose new moving and interactive rules for pedestrian-vehicle mixed traffic flow based on a refined and dynamic cellular automaton (CA) model. Simulation results indicate that the proposed model can be used to investigate the characteristics of a mixed traffic flow system and the corresponding complicated traffic problems, such as the moving characteristics of different traffic objects, interactions between different traffic objects, traffic jams, and traffic conflicts, which are consistent with the actual mixed traffic system. The proposed model therefore provides a solid foundation for the management, planning, and evacuation of mixed traffic flow.
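The CA mechanics can be illustrated with a deliberately simplified single-lane update in the Nagel-Schreckenberg style, with two vehicle classes of different speed limits sharing the lane. This is a stand-in sketch, not the paper's refined model with lateral drift and pedestrians; all parameters are invented:

```python
import random

random.seed(5)

# Simplified single-lane CA on a ring road with two vehicle classes sharing
# the lane. Cell size, speeds, and the slowdown probability are illustrative.
L = 100                      # ring road of L cells
V_MAX = {"m": 5, "nm": 2}    # motorized vs non-motorized speed limit (cells/step)
P_SLOW = 0.3                 # random slowdown probability

vehicles = sorted(random.sample(range(L), 20))          # distinct start cells
kinds = [random.choice(["m", "nm"]) for _ in vehicles]  # class of each vehicle
speeds = [0] * len(vehicles)

def step(pos, spd):
    """One parallel update: accelerate, avoid the leader, randomize, move."""
    new_pos = []
    for idx, (p, v) in enumerate(zip(pos, spd)):
        gap = (pos[(idx + 1) % len(pos)] - p - 1) % L    # free cells to leader
        v = min(v + 1, V_MAX[kinds[idx]], gap)           # accelerate, no collision
        if v > 0 and random.random() < P_SLOW:           # random slowdown
            v -= 1
        spd[idx] = v
        new_pos.append((p + v) % L)                      # advance on the ring
    return new_pos, spd

for _ in range(100):
    vehicles, speeds = step(vehicles, speeds)
```

Because the faster m-vehicles are capped by the gap to their leader, slow nm-vehicles sharing the lane create the platooning and jam formation that the refined model studies in more detail.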

  14. Nursing home case mix in Wisconsin. Findings and policy implications.

    PubMed

    Arling, G; Zimmerman, D; Updike, L

    1989-02-01

    Along with many other states, Wisconsin is considering a case mix approach to Medicaid nursing home reimbursement. To support this effort, a nursing home case mix model was developed from a representative sample of 410 Medicaid nursing home residents from 56 facilities in Wisconsin. The model classified residents into mutually exclusive groups that were homogeneous in their use of direct care resources, i.e., minutes of direct care time (weighted for nurse skill level) over a 7-day period. Groups were defined initially by Intense, Special, or Routine nursing requirements. Within these nursing requirement categories, subgroups were formed by the presence/absence of behavioral problems and dependency in activities of daily living (ADL). Wisconsin's current Skilled/Intermediate Care (SNF/ICF) classification system was analyzed in light of the case mix model and found to be less effective in distinguishing residents by resource use: the case mix model accounted for 48% of the variance in resource use, whereas the SNF/ICF classification system explained 22%. Comparisons were drawn with nursing home case mix models in New York State (RUG-II) and Minnesota. Despite progress in the study of nursing home case mix and its application to reimbursement reform, methodologic and policy issues remain. These include differing operational definitions for nursing requirements and ADL dependency, inconsistent findings concerning psychobehavioral problems, and the problem of promoting positive health and functional outcomes based on models that may be insensitive to change in resident conditions over time.

  15. Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses

    PubMed Central

    Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah

    2015-01-01

    Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481

  16. Flow analysis for efficient design of wavy structured microchannel mixing devices

    NASA Astrophysics Data System (ADS)

    Kanchan, Mithun; Maniyeri, Ranjith

    2018-04-01

    Microfluidics is a rapidly growing field of applied research which is strongly driven by the demands of biotechnology and medical innovation. Lab-on-chip (LOC) is one such application, which deals with integrating a bio-laboratory on a single micro-channel-based fluidic chip. Since fluid flow in such devices is restricted to the laminar regime, designing an efficient passive modulator to induce chaotic mixing in such diffusion-based flow is a major challenge. In the present work, two-dimensional numerical simulation of viscous incompressible flow is carried out using the immersed boundary method (IBM) to obtain an efficient design for wavy-structured micro-channel mixing devices. The continuity and Navier-Stokes equations governing the flow are solved by a fractional-step-based finite volume method on a staggered Cartesian grid system. IBM uses Eulerian co-ordinates to describe the fluid flow and Lagrangian co-ordinates to describe the solid boundary. A Dirac delta function is used to couple these two co-ordinate variables. A tether forcing term is used to impose the no-slip boundary condition at the interface between the wavy structure and the fluid. Fluid flow analysis with varying Reynolds number is carried out for four wavy-structure models and one straight-line model. By analyzing fluid accumulation zones and flow velocities, it can be concluded that the straight-line structure mixes better at low Reynolds numbers and Model 2 at higher Reynolds numbers. Thus wavy structures can be incorporated in micro-channels to improve mixing efficiency.
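    The Eulerian-Lagrangian coupling that IBM relies on is done through a regularised Dirac delta; a common choice is Peskin's 4-point kernel, shown below as a generic IBM ingredient (not necessarily the exact kernel used in this work).

```python
import math

def peskin_delta(r):
    """One 1D factor of Peskin's 4-point regularised delta function,
    with r measured in grid spacings; the 2D kernel is the product
    peskin_delta(x/h) * peskin_delta(y/h)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0
```

    By construction the kernel sums to one over the grid for any offset of the Lagrangian point, so tether forces spread from the boundary to the Eulerian grid are conserved.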

  17. Mathematical modeling of the kinetics of deposition of particles during their pulse introduction through the free surface of a mixed-medium plane layer

    NASA Astrophysics Data System (ADS)

    Boger, A. A.; Ryazhskikh, V. I.; Slyusarev, M. I.

    2012-01-01

    Based on diffusion concepts of transfer of slightly concentrated polydisperse suspensions in the gravity field, we propose a mathematical model of the kinetics of deposition of such suspensions in a plane layer of a homogeneously mixed medium through the free surface of which Stokesian particles penetrate according to the rectangular pulse law.
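    The simplest limit of such a kinetics model is instructive: in a homogeneously mixed layer, each particle size class washes out exponentially at its Stokes settling rate. The sketch below illustrates only that well-mixed limit, with illustrative material properties, not the paper's full polydisperse diffusion model.

```python
import math

def stokes_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes settling velocity [m/s] of a small sphere of diameter d [m]
    and density rho_p [kg/m3] in a fluid (defaults: water)."""
    return (rho_p - rho_f) * g * d * d / (18.0 * mu)

def suspended_fraction(t, d, h, rho_p):
    """Fraction of particles of diameter d still suspended at time t [s] in a
    well-mixed layer of depth h [m]: settling removes them at rate v_s / h."""
    return math.exp(-stokes_velocity(d, rho_p) * t / h)
```

    A polydisperse suspension is then a weighted sum of such exponentials, one per size class, which is why the deposition kinetics deviate from a single-exponential decay.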

  18. The prediction of sea-surface temperature variations by means of an advective mixed-layer ocean model

    NASA Technical Reports Server (NTRS)

    Atlas, R. M.

    1976-01-01

    An advective mixed layer ocean model was developed by eliminating the assumption of horizontal homogeneity in an already existing mixed layer model, and then superimposing a mean and anomalous wind driven current field. This model is based on the principle of conservation of heat and mechanical energy and utilizes a box grid for the advective part of the calculation. Three phases of experiments were conducted: evaluation of the model's ability to account for climatological sea surface temperature (SST) variations in the cooling and heating seasons, sensitivity tests in which the effect of hypothetical anomalous winds was evaluated, and a thirty-day synoptic calculation using the model. For the case studied, the accuracy of the predictions was improved by the inclusion of advection, although nonadvective effects appear to have dominated.

  19. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is neither thorough nor user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  20. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is neither thorough nor user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
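    The same kind of random-intercept growth model that the paper fits with SPSS MIXED can be reproduced in open-source tools. A minimal sketch with simulated six-wave data using statsmodels' `mixedlm`; the variable names and true parameter values are illustrative, not from the P.A.T.H.S. dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated six-wave longitudinal data: 50 subjects with random intercepts.
rng = np.random.default_rng(42)
n_subj, n_wave = 50, 6
subj = np.repeat(np.arange(n_subj), n_wave)
wave = np.tile(np.arange(n_wave), n_subj)
u = rng.normal(0.0, 1.0, n_subj)                    # subject-level random intercepts
y = 2.0 + 0.5 * wave + u[subj] + rng.normal(0.0, 0.5, subj.size)
df = pd.DataFrame({"subject": subj, "wave": wave, "y": y})

# Random-intercept growth model: fixed effect of wave, grouped by subject.
model = smf.mixedlm("y ~ wave", df, groups=df["subject"])
result = model.fit()
```

    Unlike a GLM fit, the grouping structure here models the within-subject correlation directly, which is exactly the independence-of-observations issue the abstract raises.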

  1. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase-advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
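    The design logic is easiest to see on a much simpler model: brute-force D-optimisation of dose placement under a first-order FIM, FIM = Σ g(d)g(d)ᵀ with g the gradient of the mean model. The Emax mean model below is purely illustrative; the paper's FIM is built from Markov-chain transition probabilities instead.

```python
import numpy as np
from itertools import combinations

def emax_grad(d, emax=10.0, ed50=4.0):
    """Gradient of E(d) = emax*d/(ed50+d) with respect to (emax, ed50)."""
    return np.array([d / (ed50 + d), -emax * d / (ed50 + d) ** 2])

def d_optimal(candidates, n_pick, grad):
    """Pick the dose set maximising det(FIM), FIM = sum_d g(d) g(d)^T,
    by exhaustive search over candidate subsets."""
    best, best_det = None, -np.inf
    for picks in combinations(candidates, n_pick):
        fim = sum(np.outer(grad(d), grad(d)) for d in picks)
        det = np.linalg.det(fim)
        if det > best_det:
            best, best_det = picks, det
    return best
```

    For candidate doses [0.1, 1, 6, 10, 20] mg and two design points, the optimum pairs a dose near ED50 with the highest dose, the classic two-point D-optimal pattern for Emax-type models.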

  2. Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model

    DTIC Science & Technology

    2009-08-01

    has become a practical method for conducting Epidemiological Modelling. In the agent-based approach the whole township can be modelled as a system of...SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was...simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System

  3. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    PubMed

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for lack of detailed information on human contact behaviour within these settings. The recent availability of data on high-resolution face-to-face interactions makes it now possible to assess how well this simplified scheme reproduces relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20% in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of the homogeneous mixing assumption.
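    The recalibration procedure can be sketched in miniature: simulate a "reference" prevalence curve, then grid-search the homogeneous-mixing transmission rate whose curve best fits it. Here the reference is itself a homogeneous SIR run for testability; the paper fits against contact-network simulations instead, and all parameter values are illustrative.

```python
import numpy as np

def sir_curve(beta, gamma, n, i0, days, dt=0.1):
    """Discrete-time homogeneous-mixing SIR; returns the prevalence I(t)."""
    s, i = n - i0, float(i0)
    out = []
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        out.append(i)
    return np.array(out)

def recalibrate_beta(target, gamma, n, i0, days, betas):
    """Grid-search the homogeneous-mixing beta whose prevalence curve
    best fits a target curve (least squares over the whole epidemic)."""
    errs = [float(np.mean((sir_curve(b, gamma, n, i0, days) - target) ** 2))
            for b in betas]
    return betas[int(np.argmin(errs))]
```

    Fitting the entire prevalence curve rather than a single summary is what lets peak time, peak value and final size all be compared afterwards, as the study does.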

  4. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    PubMed

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).
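    Conclusions about bulk mixing like the one above are typically drawn from residence time distributions (RTDs); the first moment of a tracer washout curve gives the mean residence time. A generic post-processing sketch (not the paper's CFD code):

```python
import numpy as np

def mean_residence_time(t, c):
    """Mean residence time from a tracer response c(t):
    E(t) = c(t) / integral(c dt), then t_mean = integral(t * E(t) dt).
    Uses a trapezoidal rule so it works with any NumPy version."""
    def trap(y, x):
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
    return trap(t * c, t) / trap(c, t)
```

    For an ideal completely mixed vessel, c(t) ∝ exp(-t/τ) and the first moment recovers τ; deviations of a CFD-derived RTD from that curve quantify how imperfect the mixing actually is.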

  5. Effects of land use data on dry deposition in a regional photochemical model for eastern Texas.

    PubMed

    McDonald-Buller, E; Wiedinmyer, C; Kimura, Y; Allen, D

    2001-08-01

    Land use data are among the inputs used to determine dry deposition velocities for photochemical grid models such as the Comprehensive Air Quality Model with extensions (CAMx) that is currently used for attainment demonstrations and air quality planning by the state of Texas. The sensitivity of dry deposition and O3 mixing ratios to land use classification was investigated by comparing predictions based on default U.S. Geological Survey (USGS) land use data to predictions based on recently compiled land use data that were collected to improve biogenic emissions estimates. Dry deposition of O3 decreased throughout much of eastern Texas, especially in urban areas, with the new land use data. Predicted 1-hr averaged O3 mixing ratios with the new land use data were as much as 11 ppbv greater and 6 ppbv less than predictions based on USGS land use data during the late afternoon. In addition, the area with peak O3 mixing ratios in excess of 100 ppbv increased significantly in urban areas when deposition velocities were calculated based on the new land use data. Finally, more detailed data on land use within urban areas resulted in peak changes in O3 mixing ratios of approximately 2 ppbv. These results indicate the importance of establishing accurate, internally consistent land use data for photochemical modeling in urban areas in Texas. They also indicate the need for field validation of deposition rates in areas experiencing changing land use patterns, such as during urban reforestation programs or residential and commercial development.

  6. A Multi-wavenumber Theory for Eddy Diffusivities: Applications to the DIMES Region

    NASA Astrophysics Data System (ADS)

    Chen, R.; Gille, S. T.; McClean, J.; Flierl, G.; Griesel, A.

    2014-12-01

    Climate models are sensitive to the representation of ocean mixing processes. This has motivated recent efforts to collect observations aimed at improving mixing estimates and parameterizations. The US/UK field program Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES), begun in 2009, is providing such estimates upstream of and within the Drake Passage. This region is characterized by topography and strong zonal jets. In previous studies, mixing length theories, based on the assumption that eddies are dominated by a single wavenumber and phase speed, were formulated to represent the estimated mixing patterns in jets. However, in spite of the success of the single-wavenumber theory in some other scenarios, it does not effectively predict the vertical structures of observed eddy diffusivities in the DIMES area. Considering that eddy motions encompass a wide range of wavenumbers, which all contribute to mixing, in this study we formulated a multi-wavenumber theory to predict eddy mixing rates. We test our theory for a domain encompassing the entire Southern Ocean. We estimated eddy diffusivities and mixing lengths from one million numerical floats in a global eddying model. These float-based mixing estimates were compared with the predictions from both the single-wavenumber and the multi-wavenumber theories. Our preliminary results in the DIMES area indicate that, compared to the single-wavenumber theory, the multi-wavenumber theory better predicts the vertical mixing structures in the vast areas where the mean flow is weak; however, in the intense jet regions, both theories have similar predictive skill.

  7. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    DOE PAGES

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...

    2014-06-18

    Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments and varied a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to replace the standard perfect-mixing approach in pore bodies with approaches that simulate the observed incomplete mixing, while the LB and CFD models used them to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.

  8. A Methodology for Identifying Cost Effective Strategic Force Mixes.

    DTIC Science & Technology

    1984-12-01

    is not to say that the model could not be used to examine force increases. Given that the strategic force is already a mix of weapons, what is the...rules allow for the determination of what weapon mix to buy based on only the relative prices of the weapons and the parameters of the CES production...A METHODOLOGY FOR IDENTIFYING COST EFFECTIVE STRATEGIC FORCE MIXES. Thesis by Thomas W. Manacapilli (AD-A151 773, AFIT/GOR/OS/84j)

  9. Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations and Biological Condition

    EPA Science Inventory

    Boosted regression tree (BRT) models were developed to quantify the nonlinear relationships between landscape variables and nutrient concentrations in a mesoscale mixed land cover watershed during base-flow conditions. Factors that affect instream biological components, based on ...

  10. Modeling the internal dynamics of energy and mass transfer in an imperfectly mixed ventilated airspace.

    PubMed

    Janssens, K; Van Brecht, A; Zerihun Desta, T; Boonen, C; Berckmans, D

    2004-06-01

    The present paper outlines a modeling approach, which has been developed to model the internal dynamics of heat and moisture transfer in an imperfectly mixed ventilated airspace. The modeling approach, which combines the classical heat and moisture balance differential equations with the use of experimental time-series data, provides a physically meaningful description of the process and is very useful for model-based control purposes. The paper illustrates how the modeling approach has been applied to a ventilated laboratory test room with internal heat and moisture production. The results are evaluated and some valuable suggestions for future research are forwarded. The modeling approach outlined in this study provides an ideal form for advanced model-based control system design. The relatively low number of parameters makes it well suited for model-based control purposes, as a limited number of identification experiments is sufficient to determine these parameters. The model concept provides information about the air quality and airflow pattern in an arbitrary building. By using this model as a simulation tool, the indoor air quality and airflow pattern can be optimized.
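    A single well-mixed control volume version of the heat balance shows why so few parameters are needed in this model class. All names and numbers below are illustrative; the paper identifies its parameters from time-series data rather than assuming perfect mixing.

```python
def simulate_zone_temp(t0, t_in, q_heat, vent_rate, volume, hours, dt=10.0):
    """Forward-Euler integration of a first-order zone heat balance,
    rho*c*V * dT/dt = rho*c*q*(T_in - T) + Q_heat,
    for a ventilated airspace reduced to one well-mixed control volume.
    t0, t_in [degC]; q_heat [W]; vent_rate q [m3/s]; volume V [m3]."""
    rho_c = 1.2 * 1005.0  # air density [kg/m3] * specific heat [J/(kg.K)]
    temp = t0
    for _ in range(int(hours * 3600.0 / dt)):
        dtemp = (rho_c * vent_rate * (t_in - temp) + q_heat) / (rho_c * volume)
        temp += dtemp * dt
    return temp
```

    The steady state is T_in + Q_heat/(rho*c*q), and the single time constant V/q is the kind of low-order parameter that a limited number of identification experiments can pin down, which is the control-oriented appeal the abstract describes.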

  11. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
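    As a baseline for what such mixing closures must do, the classic interaction-by-exchange-with-the-mean (IEM) model relaxes each notional particle's scalar toward the ensemble mean, decaying the scalar variance while preserving the mean. The paper's spectral/EDQNM closure replaces this, but the particle mechanics are similar; this sketch is the standard baseline, not the authors' model.

```python
import numpy as np

def iem_step(phi, omega, dt):
    """One step of the IEM mixing model:
    d(phi_i)/dt = -(omega/2) * (phi_i - <phi>),
    where omega is the scalar mixing frequency."""
    return phi + 0.5 * omega * (phi.mean() - phi) * dt
```

    A known deficiency of IEM is that it only shrinks the PDF toward the mean without changing its shape, so an initial double-delta PDF never relaxes to a Gaussian; reproducing that relaxation is precisely what the spectral closure above is shown to achieve.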

  12. High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software

    PubMed Central

    Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo

    2014-01-01

    To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the ’omics’ context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363

  13. High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software.

    PubMed

    Fabregat-Traver, Diego; Sharapov, Sodbo Zh; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo

    2014-01-01

    To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL.
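    The core computation behind mixed-model GWAS is a generalised least squares test per variant. A sketch with the variance components taken as known; omicABEL's contribution is making this fast for many traits, and in practice the components are estimated from the data rather than assumed.

```python
import numpy as np

def mm_assoc_beta(y, g, kinship, h2):
    """GLS effect estimate for one variant under the mixed model
    y = a + g*b + u + e, with cov(u) = h2*K (kinship) and
    cov(e) = (1 - h2)*I. Returns the estimate of b."""
    n = y.size
    v_inv = np.linalg.inv(h2 * kinship + (1.0 - h2) * np.eye(n))
    x = np.column_stack([np.ones(n), g])
    beta = np.linalg.solve(x.T @ v_inv @ x, x.T @ v_inv @ y)
    return beta[1]
```

    The expensive part is that V depends on the kinship matrix, not on the variant; factorising V once and reusing it across millions of variants and many traits is the kind of redundancy-avoiding optimisation the abstract refers to.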

  14. Scale model performance test investigation of exhaust system mixers for an Energy Efficient Engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1980-01-01

    A scale model performance test was conducted as part of the NASA Energy Efficient Engine (E3) Program, to investigate the geometric variables that influence the aerodynamic design of exhaust system mixers for high-bypass, mixed-flow engines. Mixer configuration variables included lobe number, penetration and perimeter, as well as several cutback mixer geometries. Mixing effectiveness and mixer pressure loss were determined using measured thrust and nozzle exit total pressure and temperature surveys. Results provide a data base to aid the analysis and design development of the E3 mixed-flow exhaust system.

  15. Modeling and simulation of protein elution in linear pH and salt gradients on weak, strong and mixed cation exchange resins applying an extended Donnan ion exchange model.

    PubMed

    Wittkopp, Felix; Peeck, Lars; Hafner, Mathias; Frech, Christian

    2018-04-13

    Process development and characterization based on mathematical modeling provides several advantages and has been applied more frequently over the last few years. In this work, a Donnan equilibrium ion exchange (DIX) model is applied for modelling and simulation of ion exchange chromatography of a monoclonal antibody in linear chromatography. Four different cation exchange resin prototypes consisting of weak, strong and mixed ligands are characterized using pH and salt gradient elution experiments applying the extended DIX model. The modelling results are compared with the results using a classic stoichiometric displacement model. The Donnan equilibrium model is able to describe all four prototype resins while the stoichiometric displacement model fails for the weak and mixed weak/strong ligands. Finally, in silico chromatogram simulations of pH and pH/salt dual gradients are performed to verify the results and to show the consistency of the developed model. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not function well because of the multi-phase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selecting criterion, phase information is extracted during local modeling. Using the Gaussian local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors on samples from a real industrial rubber mixing process.
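    The just-in-time idea is easy to sketch: for each query, select the most similar historical samples and fit a fresh local model on them. Here a ridge-regularised linear local model stands in for the paper's Gaussian process component, and all names are illustrative.

```python
import numpy as np

def jit_predict(x_hist, y_hist, x_query, k=20, lam=1e-6):
    """Just-in-time soft sensor: fit a local linear model on the k
    historical samples nearest to the query and evaluate it there.
    x_hist: (n, p) process variables; y_hist: (n,) lab measurements."""
    dist = np.linalg.norm(x_hist - x_query, axis=1)
    idx = np.argsort(dist)[:k]                       # k nearest neighbours
    xl = np.column_stack([np.ones(k), x_hist[idx]])  # local design matrix
    w = np.linalg.solve(xl.T @ xl + lam * np.eye(xl.shape[1]),
                        xl.T @ y_hist[idx])
    return float(np.concatenate(([1.0], x_query)) @ w)
```

    Because a new local model is built per query, the sensor adapts to phase changes in the database without a single global nonlinear fit; a similarity criterion like the paper's GMMD would replace the plain Euclidean distance used here.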

  17. Using a hybrid model to predict solute transfer from initially saturated soil into surface runoff with controlled drainage water.

    PubMed

    Tong, Juxiu; Hu, Bill X; Yang, Jinzhong; Zhu, Yan

    2016-06-01

    The mixing layer theory is not suitable for predicting solute transfer from initially saturated soil to surface runoff water under controlled drainage conditions. By coupling the mixing layer theory model with the numerical model Hydrus-1D, a hybrid solute transfer model is proposed to predict solute transfer from an initially saturated soil into surface water under controlled drainage water conditions. The model can also account for increasing ponding-water conditions on the soil surface before surface runoff. Solute concentration data for surface runoff and drainage water from a sand experiment are used as the reference experiment. The parameters for the water flow and solute transfer model and the mixing layer depth under the controlled drainage water condition are identified. Based on these identified parameters, the model is applied to another initially saturated sand experiment, with constant and time-increasing mixing layer depths after surface runoff, under the controlled drainage water condition with a lower drainage height at the bottom. The simulation results agree well with the observed data. The results suggest that the hybrid model can accurately simulate solute transfer from initially saturated soil into surface runoff under controlled drainage water conditions. The prediction with an increasing mixing layer depth was found to be better than that with a constant depth in the experiment with the lower drainage condition. A lower drainage condition and a deeper ponded water depth result in a later runoff start time, so more solute sources in the mixing layer are needed for the surface water, and the larger change rate results in the increasing mixing layer depth.

  18. Combined Recirculatory-compartmental Population Pharmacokinetic Modeling of Arterial and Venous Plasma S(+) and R(-) Ketamine Concentrations.

    PubMed

    Henthorn, Thomas K; Avram, Michael J; Dahan, Albert; Gustafsson, Lars L; Persson, Jan; Krejcie, Tom C; Olofsen, Erik

    2018-05-16

The pharmacokinetics of infused drugs have been modeled without regard for recirculatory or mixing kinetics. We used a unique ketamine dataset with simultaneous arterial and venous blood sampling, during and after separate S(+) and R(-) ketamine infusions, to develop a simplified recirculatory model of arterial and venous plasma drug concentrations. S(+) or R(-) ketamine was infused over 30 min on two occasions to 10 healthy male volunteers. Frequent, simultaneous arterial and forearm venous blood samples were obtained for up to 11 h. A multicompartmental pharmacokinetic model with front-end arterial mixing and venous blood components was developed using nonlinear mixed effects analyses. A three-compartment base pharmacokinetic model with additional arterial mixing and arm venous compartments and with shared S(+)/R(-) distribution kinetics proved superior to standard compartmental modeling approaches. Total pharmacokinetic flow was estimated to be 7.59 ± 0.36 l/min (mean ± standard error of the estimate), and S(+) and R(-) elimination clearances were 1.23 ± 0.04 and 1.06 ± 0.03 l/min, respectively. The arm-tissue link rate constant was 0.18 ± 0.01 min-1 and the fraction of arm blood flow estimated to exchange with arm tissue was 0.04 ± 0.01. Arterial drug concentrations measured during drug infusion have two kinetically distinct components: partially or lung-mixed drug and fully mixed-recirculated drug. Front-end kinetics suggest the partially mixed concentration is proportional to the ratio of the infusion rate to total pharmacokinetic flow. This simplified modeling approach could lead to more generalizable models for target-controlled infusions and improved methods for analyzing pharmacokinetic-pharmacodynamic data.
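The compartmental backbone of such a model can be sketched as a standard three-compartment mammillary system driven by a 30-min infusion. This deliberately omits the paper's arterial-mixing and arm-venous extensions, and every parameter value below is an illustrative placeholder, not a fitted ketamine estimate.

```python
import numpy as np

def three_compartment_infusion(dose_rate, t_inf, CL, V1, Q2, V2, Q3, V3,
                               t_end, dt=0.01):
    """Forward-Euler simulation of a standard three-compartment mammillary
    model with a constant-rate infusion into the central compartment.
    Times in min, volumes in l, clearances in l/min."""
    n = int(t_end / dt)
    c1 = c2 = c3 = 0.0                  # compartment concentrations
    out = np.empty(n)
    for i in range(n):
        rin = dose_rate / V1 if i * dt < t_inf else 0.0
        dc1 = rin - (CL / V1) * c1 - (Q2 / V1) * (c1 - c2) - (Q3 / V1) * (c1 - c3)
        dc2 = (Q2 / V2) * (c1 - c2)
        dc3 = (Q3 / V3) * (c1 - c3)
        c1, c2, c3 = c1 + dc1 * dt, c2 + dc2 * dt, c3 + dc3 * dt
        out[i] = c1                     # central (arterial-side) concentration
    return out

# 30-min infusion; parameter values are illustrative only
c = three_compartment_infusion(dose_rate=10.0, t_inf=30.0, CL=1.2, V1=10.0,
                               Q2=2.0, V2=30.0, Q3=1.0, V3=100.0, t_end=120.0)
print(round(float(c.max()), 3))
```

The recirculatory refinement in the paper adds a front-end mixing delay before this backbone and an arm compartment after it, so that arterial and forearm venous samples can be fitted simultaneously.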

  19. Impact of Lateral Mixing in the Ocean on El Nino in Fully Coupled Climate Models

    NASA Astrophysics Data System (ADS)

    Gnanadesikan, A.; Russell, A.; Pradal, M. A. S.; Abernathey, R. P.

    2016-02-01

Given the large number of processes that can affect El Nino, it is difficult to understand why different climate models simulate El Nino differently. This paper focuses on the role of lateral mixing by mesoscale eddies. There is significant disagreement about the value of the mixing coefficient ARedi, which parameterizes the lateral mixing of tracers. Coupled climate models usually prescribe small values of this coefficient, ranging between a few hundred and a few thousand m2/s. Observations, however, suggest values that are much larger. We present a sensitivity study with a suite of Earth System Models that examines the impact of varying ARedi on the amplitude of El Nino. We examine the effect of varying a spatially constant ARedi over a range of values similar to that seen in the IPCC AR5 models, as well as two spatially varying distributions based on altimetric velocity estimates. While the expectation that higher values of ARedi should damp anomalies is borne out in the model, it is more than compensated for by weaker damping due to vertical mixing and a stronger response of atmospheric winds to SST anomalies. Under higher mixing, a weaker zonal SST gradient causes the center of convection over the Warm Pool to shift eastward and to become more sensitive to changes in cold tongue SSTs. Changes in the SST gradient also explain interdecadal ENSO variability within individual model runs.

  20. Modeling the purging of dense fluid from a street canyon driven by an interfacial mixing flow and skimming flow

    NASA Astrophysics Data System (ADS)

    Baratian-Ghorghi, Z.; Kaye, N. B.

    2013-07-01

An experimental study is presented to investigate the mechanism of flushing a trapped dense contaminant from a canyon by a turbulent boundary layer flow. The results of a series of steady-state experiments are used to parameterize the flushing mechanisms. The steady-state experimental results for a canyon with aspect ratio one indicate that dense fluid is removed from the canyon by two different processes: skimming of dense fluid from the top of the dense layer, and an interfacial mixing flow that mixes fresh fluid down into the dense lower layer (entrainment) while mixing dense fluid into the flow above the canyon (detrainment). A model is developed for the time-varying buoyancy profile within the canyon as a function of the Richardson number, which parameterizes both the interfacial mixing and skimming processes observed. The continuous release steady-state experiments allowed for the direct measurement of the skimming and interfacial mixing flow rates for any layer depth and Richardson number. Both the skimming rate and the interfacial mixing rate were found to be power-law functions of the Richardson number of the layer. The model results were compared to the results of previously published finite release experiments [Z. Baratian-Ghorghi and N. B. Kaye, Atmos. Environ. 60, 392-402 (2012)], 10.1016/j.atmosenv.2012.06.077. A high degree of consistency was found between the finite release data and the continuous release data. This agreement acts as an excellent check on the measurement techniques used, as the finite release data were based on curve fitting through buoyancy-versus-time data, while the continuous release data were calculated directly by measuring the rate of addition of volume and buoyancy once a steady state was established. 
Finally, a system of ordinary differential equations is presented to model the removal of dense fluid from the canyon based on empirical correlations of the skimming and interfacial mixing taken from the steady-state experiments. The ODE model accurately predicts the time taken for a finite volume of dense fluid to be flushed from a canyon.
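The structure of such an ODE model can be sketched by letting both removal processes be power laws of the layer Richardson number. The coefficients and exponents below are invented placeholders, not the paper's fitted correlations, and the single-variable buoyancy budget is a simplification of the full buoyancy-profile model.

```python
import numpy as np

def flush_canyon(b0=1.0, H=0.1, U=0.3, cs=0.05, cm=0.02, ps=1.0, pm=1.5,
                 dt=0.1, t_end=60.0):
    """Layer buoyancy b decays through two removal processes, skimming and
    interfacial mixing, each modelled as a power law of the layer
    Richardson number Ri = b * H / U**2. cs, cm, ps, pm are placeholder
    values, not the experimentally fitted ones."""
    n = int(t_end / dt)
    hist = np.empty(n)
    b = b0
    for i in range(n):
        Ri = max(b * H / U**2, 1e-9)             # avoid division by zero
        db = -(cs * Ri**-ps + cm * Ri**-pm) * b  # skimming + mixing losses
        b = max(b + db * dt, 0.0)                # buoyancy cannot go negative
        hist[i] = b
    return hist

b = flush_canyon()
print(b[-1])   # residual buoyancy once the canyon is flushed
```

Because the power-law rates grow as the layer weakens (Ri falls), the flushing accelerates toward the end, giving a finite flushing time rather than an exponential tail.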

  1. Internal friction and vulnerability of mixed alkali glasses.

    PubMed

    Peibst, Robby; Schott, Stephan; Maass, Philipp

    2005-09-09

    Based on a hopping model we show how the mixed alkali effect in glasses can be understood if only a small fraction c(V) of the available sites for the mobile ions is vacant. In particular, we reproduce the peculiar behavior of the internal friction and the steep fall ("vulnerability") of the mobility of the majority ion upon small replacements by the minority ion. The single and mixed alkali internal friction peaks are caused by ion-vacancy and ion-ion exchange processes. If c(V) is small, they can become comparable in height even at small mixing ratios. The large vulnerability is explained by a trapping of vacancies induced by the minority ions. Reasonable choices of model parameters yield typical behaviors found in experiments.

  2. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for the model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed models and method, we apply them to an AIDS clinical study, and some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  3. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  4. Modelling the vertical distribution of Prochlorococcus and Synechococcus in the North Pacific Subtropical Ocean.

    PubMed

    Rabouille, Sophie; Edwards, Christopher A; Zehr, Jonathan P

    2007-10-01

    A simple model was developed to examine the vertical distribution of Prochlorococcus and Synechococcus ecotypes in the water column, based on their adaptation to light intensity. Model simulations were compared with a 14-year time series of Prochlorococcus and Synechococcus cell abundances at Station ALOHA in the North Pacific Subtropical Gyre. Data were analysed to examine spatial and temporal patterns in abundances and their ranges of variability in the euphotic zone, the surface mixed layer and the layer in the euphotic zone but below the base of the mixed layer. Model simulations show that the apparent occupation of the whole euphotic zone by a genus can be the result of a co-occurrence of different ecotypes that segregate vertically. The segregation of ecotypes can result simply from differences in light response. A sensitivity analysis of the model, performed on the parameter alpha (initial slope of the light-response curve) and the DIN concentration in the upper water column, demonstrates that the model successfully reproduces the observed range of vertical distributions. Results support the idea that intermittent mixing events may have important ecological and geochemical impacts on the phytoplankton community at Station ALOHA.
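The light-based segregation mechanism can be illustrated with a saturating light-response curve, mu(I) = mu_max * (1 - exp(-alpha * I / mu_max)), evaluated for two hypothetical ecotypes under exponentially attenuated light. All parameter values are illustrative, not the study's calibrated ones.

```python
import numpy as np

def growth(I, mu_max, alpha):
    """Saturating light-response curve; alpha is its initial slope."""
    return mu_max * (1.0 - np.exp(-alpha * I / mu_max))

z = np.linspace(0.0, 200.0, 401)             # depth [m]
I = 1500.0 * np.exp(-0.04 * z)               # exponential light attenuation
mu_hl = growth(I, mu_max=0.6, alpha=0.002)   # "high-light" ecotype
mu_ll = growth(I, mu_max=0.4, alpha=0.02)    # "low-light" ecotype
crossover = z[np.argmax(mu_ll > mu_hl)]      # depth where the LL ecotype wins
print(crossover)
```

The high-light ecotype, with a larger mu_max, dominates near the surface, while the steeper initial slope of the low-light ecotype lets it win at depth; the two together occupy the whole euphotic zone, which is the co-occurrence pattern the abstract describes.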

  5. A Ground-Based Doppler Radar and Micropulse Lidar Forward Simulator for GCM Evaluation of Arctic Mixed-Phase Clouds: Moving Forward Towards an Apples-to-apples Comparison of Hydrometeor Phase

    NASA Astrophysics Data System (ADS)

    Lamer, K.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.; Clothiaux, E. E.

    2017-12-01

    An important aspect of evaluating Artic cloud representation in a general circulation model (GCM) consists of using observational benchmarks which are as equivalent as possible to model output in order to avoid methodological bias and focus on correctly diagnosing model dynamical and microphysical misrepresentations. However, current cloud observing systems are known to suffer from biases such as limited sensitivity, and stronger response to large or small hydrometeors. Fortunately, while these observational biases cannot be corrected, they are often well understood and can be reproduced in forward simulations. Here a ground-based millimeter wavelength Doppler radar and micropulse lidar forward simulator able to interface with output from the Goddard Institute for Space Studies (GISS) ModelE GCM is presented. ModelE stratiform hydrometeor fraction, mixing ratio, mass-weighted fall speed and effective radius are forward simulated to vertically-resolved profiles of radar reflectivity, Doppler velocity and spectrum width as well as lidar backscatter and depolarization ratio. These forward simulated fields are then compared to Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) ground-based observations to assess cloud vertical structure (CVS). Model evalution of Arctic mixed-phase cloud would also benefit from hydrometeor phase evaluation. While phase retrieval from synergetic observations often generates large uncertainties, the same retrieval algorithm can be applied to observed and forward-simulated radar-lidar fields, thereby producing retrieved hydrometeor properties with potentially the same uncertainties. Comparing hydrometeor properties retrieved in exactly the same way aims to produce the best apples-to-apples comparisons between GCM ouputs and observations. 
The use of a comprenhensive ground-based forward simulator coupled with a hydrometeor classification retrieval algorithm provides a new perspective for GCM evaluation of Arctic mixed-phase clouds from the ground where low-level supercooled liquid layer are more easily observed and where additional environmental properties such as cloud condensation nuclei are quantified. This should help assist in choosing between several possible diagnostic ice nucleation schemes for ModelE stratiform cloud.

  6. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  7. A Model of High-Frequency Self-Mixing in Double-Barrier Rectifier

    NASA Astrophysics Data System (ADS)

    Palma, Fabrizio; Rao, R.

    2018-03-01

In this paper, a new model of the frequency dependence of the double-barrier THz rectifier is presented. The structure is of interest because it can be realized with CMOS image sensor technology. Its application in a complex field such as that of THz receivers requires the availability of an analytical model that is reliable and able to highlight the dependence on the parameters of the physical structure. The model is based on the hydrodynamic semiconductor equations, solved in the small-signal approximation. The model depicts the mechanisms of THz modulation of the charge in the depleted regions of the double-barrier device and explains the self-mixing process, the frequency dependence, and the detection capability of the structure. The model thus substantially improves on the analytical models of THz rectification available in the literature, which are mainly based on lumped equivalent circuits.

  8. Making a mixed-model line more efficient and flexible by introducing a bypass line

    NASA Astrophysics Data System (ADS)

    Matsuura, Sho; Matsuura, Haruki; Asada, Akiko

    2017-04-01

    This paper provides a design procedure for the bypass subline in a mixed-model assembly line. The bypass subline is installed to reduce the effect of the large difference in operation times among products assembled together in a mixed-model line. The importance of the bypass subline has been increasing in association with the rising necessity for efficiency and flexibility in modern manufacturing. The main topics of this paper are as follows: 1) the conditions in which the bypass subline effectively functions, and 2) how the load should be distributed between the main line and the bypass subline, depending on production conditions such as degree of difference in operation times among products and the mixing ratio of products. To address these issues, we analyzed the lower and the upper bounds of the line length. Based on the results, a design procedure and a numerical example are demonstrated.

  9. The effects of model composition design choices on high-fidelity simulations of motoneuron recruitment and firing behaviors

    NASA Astrophysics Data System (ADS)

    Allen, John M.; Elbasiouny, Sherif M.

    2018-06-01

    Objective. Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. Approach. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Main results. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Significance. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. 
The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.

  10. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ

Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
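The nonlinear dependence of the dispersion coefficient on the Peclet number reported above is typically summarised as a power law, D/D_m proportional to Pe^b, which can be fitted by linear least squares in log-log space. The data points below are synthetic stand-ins, not the published micromodel measurements.

```python
import numpy as np

# Synthetic (Pe, D/Dm) pairs standing in for the micromodel data;
# the true prefactor 0.7 and exponent 1.2 are arbitrary choices.
Pe = np.array([1.0, 5.0, 20.0, 100.0, 500.0])
D_ratio = 0.7 * Pe ** 1.2 * np.exp(np.array([0.02, -0.03, 0.01, 0.0, -0.01]))

# Linear least squares on log-transformed data: log D = log a + b log Pe
A = np.vstack([np.ones_like(Pe), np.log(Pe)]).T
coef, *_ = np.linalg.lstsq(A, np.log(D_ratio), rcond=None)
a, b = np.exp(coef[0]), coef[1]
print(a, b)   # recovered prefactor and Peclet exponent
```

A continuum simulator would invert such a fitted relation to obtain the dispersivity input for a given flow velocity, which is where an inaccurate correlation propagates into under- or over-predicted dispersion.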

  11. The effects of model composition design choices on high-fidelity simulations of motoneuron recruitment and firing behaviors.

    PubMed

    Allen, John M; Elbasiouny, Sherif M

    2018-06-01

    Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. 
The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.

  12. A geometric nonlinear degenerated shell element using a mixed formulation with independently assumed strain fields. Final Report; Ph.D. Thesis, 1989

    NASA Technical Reports Server (NTRS)

    Graf, Wiley E.

    1991-01-01

    A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.

  13. Strength and deformation characteristics of pavements

    NASA Astrophysics Data System (ADS)

    Shook, J. F.; Kallas, B. F.; McCullough, B. F.; Taute, A.; Rada, G.; Witczak, M. W.; Heisey, J. S.; Stokoe, K. H.; Meyer, A. H.; Huffman, M. S.

The Colorado experimental base project was a full-scale field experiment constructed with various thicknesses of two full-depth hot mix sand asphalt bases, one full-depth asphalt concrete base, and one thickness of a standard design with untreated base and subbase layers. The relative thicknesses of one asphalt concrete base, two hot mix sand asphalt bases, and one standard design with untreated base and subbase required to give an equal level of pavement performance were determined. Certain measured properties of the pavement and the pavement components were related to observed levels of performance by using both empirical and theoretical models of pavement behavior.

  14. Foundations of chaotic mixing.

    PubMed

    Wiggins, Stephen; Ottino, Julio M

    2004-05-15

The simplest mixing problem corresponds to the mixing of a fluid with itself; this case provides a foundation on which the subject rests. The objective here is to study mixing independently of the mechanisms used to create the motion and to review elements of theory, focusing mostly on mathematical foundations and minimal models. The flows under consideration will be of two types: two-dimensional (2D) 'blinking flows', or three-dimensional (3D) duct flows. Given that mixing in continuous 3D duct flows depends critically on cross-sectional mixing, and that many microfluidic applications involve continuous flows, we focus on the essential aspects of mixing in 2D flows, as they provide a foundation on which to base our understanding of more complex cases. The baker's transformation is taken as the centrepiece for describing the dynamical systems framework. In particular, a hierarchy of characterizations of mixing exists, Bernoulli --> mixing --> ergodic, ordered according to the quality of mixing (the strongest first). Most importantly for the design process, we show how the so-called linked twist maps function as a minimal picture of mixing, provide a mathematical structure for understanding the type of 2D flows that arise in many micromixers already built, and give conditions guaranteeing the best quality mixing. Extensions of these concepts lead to first-principle-based designs without resorting to lengthy computations.
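The baker's transformation named above is simple to iterate numerically: stretch the unit square horizontally by two, cut at x = 1/2, and stack the halves. A short sketch (the initial blob position and particle count are arbitrary choices):

```python
import numpy as np

def bakers_map(x, y):
    """One iteration of the baker's transformation on the unit square:
    stretch horizontally by two, cut at x = 1/2, and stack the halves."""
    xn = np.where(x < 0.5, 2.0 * x, 2.0 * x - 1.0)
    yn = np.where(x < 0.5, 0.5 * y, 0.5 * y + 0.5)
    return xn, yn

# Track a blob of tracer particles: repeated stretching and folding
# spreads an initially confined blob over the whole square.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 0.1, 5000)
y = rng.uniform(0.0, 0.1, 5000)
for _ in range(10):
    x, y = bakers_map(x, y)
print(x.mean(), y.mean())   # both drift toward 0.5 as the blob mixes
```

After only ten iterations the blob's centroid sits near the centre of the square, reflecting the exponential stretching that makes this map the canonical example of Bernoulli (strongest) mixing.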

  15. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan.

    PubMed

    Chang, Hsien-Yen; Weiner, Jonathan P

    2010-01-18

Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those enrolled in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were the adjusted R2 and mean absolute prediction error for five types of expenditure at the individual level, and the predictive ratio of total expenditure at the group level. The more comprehensive models performed better when used for explaining resource utilization. The adjusted R2 for total expenditure in the concurrent/prospective analyses was 4.2%/4.4% in the demographic model, 15%/10% in the ACG or ADG (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Clusters). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACG model had the best performance overall. 
Given the widespread availability of claims data and the superior explanatory power of claims-based risk adjustment models over demographics-only models, Taiwan's government should consider using claims-based models for policy-relevant applications. The performance of the ACG case-mix system in Taiwan was comparable to that found in other countries. This suggested that the ACG system could be applied to Taiwan's NHI even though it was originally developed in the USA. Many of the findings in this paper are likely to be relevant to other diagnosis-based risk adjustment methodologies.
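The two performance measures named above, the individual-level adjusted R2 and the group-level predictive ratio, can be computed directly. The expenditure data below are synthetic, and the shrunken predictor is a made-up stand-in for a fitted risk-adjustment model; with it, the underprediction of the top quintile and overprediction of the rest emerge naturally.

```python
import numpy as np

def adjusted_r2(y, y_hat, n_params):
    """Adjusted R-squared of predicted vs. actual expenditure."""
    n = len(y)
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

def predictive_ratio(y, y_hat, groups):
    """Predicted / actual total expenditure per group
    (1.0 = perfect; <1 under-prediction, >1 over-prediction)."""
    return {int(g): y_hat[groups == g].sum() / y[groups == g].sum()
            for g in np.unique(groups)}

rng = np.random.default_rng(2)
y = rng.gamma(2.0, 500.0, 1000)              # skewed synthetic expenditures
y_hat = 0.8 * y + 0.2 * y.mean()             # shrunken stand-in predictions
q = np.digitize(y, np.quantile(y, [0.2, 0.4, 0.6, 0.8]))   # quintile groups
print(adjusted_r2(y, y_hat, n_params=5))
print(predictive_ratio(y, y_hat, q))
```

Any regression-to-the-mean in the predictions pulls high spenders down and low spenders up, which is why quintile-based predictive ratios show exactly the pattern reported in the abstract.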

  16. Parameterization of large-scale turbulent diffusion in the presence of both well-mixed and weakly mixed patchy layers

    NASA Astrophysics Data System (ADS)

    Osman, M. K.; Hocking, W. K.; Tarasick, D. W.

    2016-06-01

    Vertical diffusion and mixing of tracers in the upper troposphere and lower stratosphere (UTLS) are not uniform, but primarily occur due to patches of turbulence that are intermittent in time and space. The effective diffusivity of regions of patchy turbulence is related to statistical parameters describing the morphology of turbulent events, such as lifetime, number, width, depth and local diffusivity (i.e., diffusivity within the turbulent patch) of the patches. While this has been recognized in the literature, the primary focus has been on well-mixed layers, with few exceptions. In such cases the local diffusivity is irrelevant, but this is not true for weakly and partially mixed layers. Here, we use both theory and numerical simulations to consider the impact of intermediate and weakly mixed layers, in addition to well-mixed layers. Previous approaches have considered only one dimension (vertical), and only a small number of layers (often one at each time step), and have examined mixing of constituents. We consider a two-dimensional case, with multiple layers (10 and more, up to hundreds and even thousands), having well-defined, non-infinite, lengths and depths. We then provide new formulas to describe cases involving well-mixed layers which supersede earlier expressions. In addition, we look in detail at layers that are not well mixed, and, as an interesting variation on previous models, our procedure is based on tracking the dispersion of individual particles, which is quite different to the earlier approaches which looked at mixing of constituents. We develop an expression which allows determination of the degree of mixing, and show that layers used in some previous models were in fact not well mixed and so produced erroneous results. 
We then develop a generalized model based on two-dimensional random-walk theory employing Rayleigh distributions, which allows us to derive a universal formula for diffusion rates for multiple two-dimensional layers with general degrees of mixing. We show that it is the largest, most vigorous and less common turbulent layers that make the major contribution to global diffusion. Finally, we make estimates of global-scale diffusion coefficients in the lower stratosphere and upper troposphere. For the lower stratosphere, κ_eff ≈ 2×10⁻² m² s⁻¹, assuming no other processes contribute to large-scale diffusion.
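The particle-tracking idea behind this dispersion model can be illustrated with a minimal Monte Carlo sketch: particles take diffusive steps only while inside intermittent turbulent patches, and the effective large-scale diffusivity follows from the growth of the ensemble variance. All numerical values below (patch fraction, local diffusivity, step counts) are invented for illustration and are not the paper's parameters.

```python
import math
import random

def effective_diffusivity(n_particles=500, n_steps=2000, dt=1.0,
                          patch_fraction=0.05, kappa_local=0.1, seed=1):
    """Toy 1-D Monte Carlo estimate of large-scale diffusivity when
    turbulence is confined to intermittent patches: a particle takes a
    diffusive step only while it happens to sit inside a patch."""
    rng = random.Random(seed)
    z = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            # diffusive kick only inside a turbulent patch
            if rng.random() < patch_fraction:
                z[i] += rng.gauss(0.0, math.sqrt(2.0 * kappa_local * dt))
    # Fickian dispersion: <z^2> = 2 * kappa_eff * t
    t_total = n_steps * dt
    var = sum(v * v for v in z) / n_particles
    return var / (2.0 * t_total)

kappa_eff = effective_diffusivity()
# for rare patches, kappa_eff scales as patch_fraction * kappa_local
```

For rare, memoryless patches the ensemble-mean result reduces to the intermittency-weighted local diffusivity; the paper's contribution is precisely the corrections needed when patches have finite size, lifetime, and incomplete mixing.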

  17. Development of a Reduced-Order Three-Dimensional Flow Model for Thermal Mixing and Stratification Simulation during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    2017-09-03

Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles for the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model performs very well for a wide range of flow problems.

  18. Scalar entrainment in the mixing layer

    NASA Technical Reports Server (NTRS)

    Sandham, N. D.; Mungal, M. G.; Broadwell, J. E.; Reynolds, W. C.

    1988-01-01

New definitions of entrainment and mixing based on the passive scalar field in the plane mixing layer are proposed. The definitions distinguish clearly between three fluid states: (1) unmixed fluid, (2) fluid engulfed in the mixing layer, trapped between two scalar contours, and (3) mixed fluid. The difference between (2) and (3) is the amount of fluid which has been engulfed during the pairing process, but has not yet mixed. Trends are identified from direct numerical simulations and extensions to high Reynolds number mixing layers are made in terms of the Broadwell-Breidenthal mixing model. In the limit of high Peclet number (Pe = ReSc) it is speculated that engulfed fluid rises in steps associated with pairings, introducing unmixed fluid into the large scale structures, where it is eventually mixed at the Kolmogorov scale. From this viewpoint, pairing is a prerequisite for mixing in the turbulent plane mixing layer.

  19. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
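The "exploding" step described above, splitting each subject's follow-up time at the piecewise-constant-hazard cut points, can be sketched as follows. The helper name and data layout are illustrative only; the %PCFrailty SAS macro performs the analogous expansion internally.

```python
def explode(time, event, cuts):
    """Split one subject's follow-up into piecewise-constant-hazard
    intervals. Returns (piece index, exposure time, event indicator)
    rows suitable for a Poisson model with offset log(exposure)."""
    rows = []
    lower = 0.0
    for j, upper in enumerate(cuts):
        if time <= lower:
            break  # subject already exited before this piece
        exposure = min(time, upper) - lower
        died_here = 1 if (event == 1 and time <= upper) else 0
        rows.append((j, exposure, died_here))
        lower = upper
    return rows

# subject observed for t = 3.5 with an event; pieces cut at 2, 4, 6
rows = explode(3.5, 1, [2.0, 4.0, 6.0])
# rows -> [(0, 2.0, 0), (1, 1.5, 1)]
```

Each row then enters the Poisson GLMM as one pseudo-observation, with the event indicator as the response and log(exposure) as the offset.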

  20. A stable isotope model for combined source apportionment and degradation quantification of environmental pollutants

    NASA Astrophysics Data System (ADS)

    Lutz, Stefanie; Van Breukelen, Boris

    2014-05-01

    Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. 
Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation processes at the field site, as it revealed the prevailing contribution of one emission source and a low overall ED. The model can be extended to a larger number of sources and sinks. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
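The two-source, one-sink balance that the SISS model solves can be sketched numerically: for each measured element, the observed isotope signature is a linear mix of the two sources plus a Rayleigh degradation term, and dual-element data pin down both the source fraction and the fraction remaining. All isotope signatures and enrichment factors below are invented, and the simple grid search stands in for whatever solver the authors actually use.

```python
import math

def siss_fit(d_meas, src1, src2, eps, grid=501):
    """Grid-search solution of, for each element e,
        d_meas[e] = x*src1[e] + (1-x)*src2[e] + eps[e]*ln(F),
    with x = fraction from source 1 and F = fraction remaining."""
    best = None
    for i in range(grid):
        x = i / (grid - 1)
        for j in range(1, grid):
            F = j / (grid - 1)
            r = sum((d_meas[e] - (x * src1[e] + (1 - x) * src2[e]
                     + eps[e] * math.log(F))) ** 2 for e in d_meas)
            if best is None or r < best[0]:
                best = (r, x, F)
    return best[1], best[2]

# synthetic benzene-like example: carbon and hydrogen signatures (permil)
src1 = {"C": -28.0, "H": -60.0}
src2 = {"C": -24.0, "H": -110.0}
eps  = {"C": -2.0,  "H": -30.0}   # enrichment factors
true_x, true_F = 0.6, 0.5
d_meas = {e: true_x * src1[e] + (1 - true_x) * src2[e]
          + eps[e] * math.log(true_F) for e in src1}
x_hat, F_hat = siss_fit(d_meas, src1, src2, eps)
```

With one element the system is underdetermined (mixing and degradation trade off); the second element is what makes simultaneous source apportionment and degradation quantification possible, as the abstract emphasizes.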

  1. Mixed convection flow of sodium alginate (SA-NaAlg) based molybdenum disulphide (MoS2) nanofluids: Maxwell Garnetts and Brinkman models

    NASA Astrophysics Data System (ADS)

    Ahmed, Tarek Nabil; Khan, Ilyas

    2018-03-01

This article aims to study the mixed convection heat transfer in non-Newtonian nanofluids over an infinite vertical plate. Mixed convection is driven by the buoyancy force and sudden plate motion. Sodium alginate (SA-NaAlg) is considered as the non-Newtonian base fluid, and molybdenum disulphide (MoS2) nanoparticles are suspended in it. The effective thermal conductivity and viscosity of the nanofluid are calculated using the Maxwell-Garnetts (MG) and Brinkman models, respectively. The flow is modeled in the form of partial differential equations with imposed physical conditions. Exact solutions for the velocity and temperature fields are developed by means of the Laplace transform technique. Numerical computations are performed for different governing parameters such as the non-Newtonian parameter, Grashof number and nanoparticle volume fraction, and the results are plotted in various graphs. Results for skin friction and Nusselt number are presented in tabular form, which show that increasing the nanoparticle volume fraction enhances heat transfer and increases skin friction.

  2. Performance analysis of a microfluidic mixer based on high gradient magnetic separation principles

    NASA Astrophysics Data System (ADS)

    Liu, Mengyu; Han, Xiaotao; Cao, Quanliang; Li, Liang

    2017-09-01

To achieve rapid mixing between a water-based ferrofluid and DI water in a microfluidic environment, a magnetically actuated mixing system based on high gradient magnetic separation principles is proposed in this work. The microfluidic system consists of a T-shaped microchannel and an array of integrated soft-magnetic elements at the sidewall of the channel. With the aid of an external magnetic bias field, these elements are magnetized to produce a magnetic volume force acting on the fluids containing magnetic nanoparticles, and then to induce additional flows for improving the mixing performance. The mixing process is numerically investigated through analyzing the concentration distribution of magnetic nanoparticles using a coupled particle-fluid transport model, and mixing performances under different parametrical conditions are investigated in detail. Numerical results show that a high mixing efficiency of around 97.5% can be achieved within 2 s under an inlet flow rate of 1 mm s⁻¹ and a relatively low magnetic bias field of 50 mT. Meanwhile, it has been found that there is an optimum number of magnetic elements for obtaining the best mixing performance. These results show the potential of the proposed mixing method in lab-on-a-chip systems and could be helpful in designing and optimizing system performance.
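Mixing efficiencies like the 97.5% quoted above are typically computed from the standard deviation of the normalized concentration field across the outlet. A minimal version of that post-processing metric is sketched below; the exact sampling plane and weighting used in the paper may differ.

```python
def mixing_efficiency(c):
    """Mixing index eta = 1 - sigma/sigma_max for normalized
    concentration samples c in [0, 1]; sigma_max = 0.5 is the standard
    deviation of completely segregated 50/50 streams."""
    n = len(c)
    mean = sum(c) / n
    sigma = (sum((x - mean) ** 2 for x in c) / n) ** 0.5
    return 1.0 - sigma / 0.5

eta_mixed = mixing_efficiency([0.5] * 8)                # perfectly mixed
eta_unmixed = mixing_efficiency([0.0] * 4 + [1.0] * 4)  # fully segregated
```

By this definition eta runs from 0 (segregated streams) to 1 (uniform concentration), so an efficiency of 97.5% corresponds to a nearly uniform outlet profile.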

  3. Vocational-Technical Education Reforms in Germany, Netherlands, France and U.K. and Their Implications to Taiwan.

    ERIC Educational Resources Information Center

    Lee, Lung-Sheng

    Three major models of vocational education and training provision for the 16- to 19-year-old age group have been identified: schooling model, which emphasizes full-time schooling until age 18; dual model, which involves mainly work-based apprenticeship training with some school-based general education; and mixed model. Germany is an exemplar of…

  4. Modeling Cloud Phase Fraction Based on In-situ Observations in Stratiform Clouds

    NASA Astrophysics Data System (ADS)

    Boudala, F. S.; Isaac, G. A.

    2005-12-01

Mixed-phase clouds influence weather and climate in several ways. Due to the fact that they exhibit very different optical properties as compared to ice-only or liquid-only clouds, they play an important role in the earth's radiation balance by modifying the optical properties of clouds. Precipitation development in clouds is also enhanced under mixed-phase conditions, and these clouds may contain large supercooled drops that freeze quickly on contact with aircraft surfaces, posing a hazard to aviation. The existence of ice and liquid phase clouds together in the same environment is thermodynamically unstable, and thus they are expected to disappear quickly. However, several observations show that mixed-phase clouds are relatively stable in the natural environment and last for several hours. Although some efforts have been made in the past to study the microphysical properties of mixed-phase clouds, there are still a number of uncertainties in modeling these clouds, particularly in large-scale numerical models. In most models, very simple temperature-dependent parameterizations of cloud phase fraction are used to estimate the fraction of ice or liquid phase in a given mixed-phase cloud. In this talk, two different parameterizations of ice fraction using in-situ aircraft measurements of cloud microphysical properties collected in extratropical stratiform clouds during several field programs will be presented. One of the parameterizations has been tested using a single prognostic equation developed by Tremblay et al. (1996) for application in the Canadian regional weather prediction model.
The addition of small ice particles significantly increased the vapor deposition rate when the natural atmosphere is assumed to be water saturated, and thus enhanced the glaciation of the simulated mixed-phase cloud via the Bergeron-Findeisen process without significantly affecting the other cloud microphysical processes such as riming and particle sedimentation rates. After the water vapor pressure in the mixed-phase cloud was modified based on the Lord et al. (1984) scheme by weighting the saturation water vapor pressure with ice fraction, it was possible to simulate a more stable mixed-phase cloud. It was also noted that the ice particle concentration (L>100 μm) in mixed-phase cloud is lower on average by a factor of 3, and as a result the parameterization should be corrected for this effect. After accounting for this effect, the parameterized ice fraction agreed well with the observed mean ice fraction.
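The "very simple temperature dependent parameterizations" the abstract criticizes usually take a form like the generic linear ramp below: all liquid at the melting point, all ice at the homogeneous-freezing threshold. This is a placeholder form only; the in-situ fits derived in this work are not reproduced here.

```python
def ice_fraction(t_celsius, t_all_ice=-40.0, t_all_liquid=0.0):
    """Generic temperature-dependent ice-fraction parameterization:
    all liquid at 0 C, all ice at -40 C (homogeneous freezing),
    linear in between."""
    if t_celsius >= t_all_liquid:
        return 0.0
    if t_celsius <= t_all_ice:
        return 1.0
    return (t_all_liquid - t_celsius) / (t_all_liquid - t_all_ice)

f = ice_fraction(-20.0)   # midway between the two thresholds
```

Observation-based parameterizations like those presented in this talk replace the linear ramp with fits to measured phase partitioning, which is generally far from linear in temperature.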

  5. Target space pseudoduality in supersymmetric sigma models on symmetric spaces

    NASA Astrophysics Data System (ADS)

    Sarisaman, Mustafa

We discuss the target space pseudoduality in supersymmetric sigma models on symmetric spaces. We first consider the case of sigma models based on real compact connected Lie groups of the same dimensionality and give examples using three-dimensional models on target spaces. We show explicit construction of nonlocal conserved currents on the pseudodual manifold. We then switch the Lie group valued pseudoduality equations to Lie algebra valued ones, which leads to an infinite number of pseudoduality equations. We obtain an infinite number of conserved currents on the tangent bundle of the pseudodual manifold. Since pseudoduality imposes the condition that sigma models pseudodual to each other are based on symmetric spaces with opposite curvatures (i.e. dual symmetric spaces), we investigate the pseudoduality transformation on symmetric space sigma models in the third chapter. We see that there can be mixing of decomposed spaces with each other, which leads to mixings of the following expressions. We obtain the pseudodual conserved currents, which are viewed as the orthonormal frame on the pullback bundle of the tangent space of G˜, the Lie group on which the pseudodual model is based. Hence we obtain the mixing forms of curvature relations and the one-loop renormalization group beta function by means of these currents. In chapter four, we generalize the classical construction of the pseudoduality transformation to the supersymmetric case. We perform this both by the component expansion method on manifold M and by the orthonormal coframe method on manifold SO(M). The component method produces the result that the pseudoduality transformation is not invertible at all points and occurs from all points on one manifold to only one point, where Riemann normal coordinates are valid on the second manifold. Torsion of the sigma model on M must vanish while it is nonvanishing on M˜, and the curvatures of the manifolds must be constant and the same because of anticommuting Grassmann numbers.
We obtain similar results to the classical case with the orthonormal coframe method. In the case of super WZW sigma models, the pseudoduality equations yield three different pseudoduality conditions: flat space, chiral and antichiral pseudoduality. Finally we study the pseudoduality transformations on symmetric spaces using two different methods again. These two methods yield results similar to the classical cases, with the exception that commuting bracket relations in the classical case turn out to be anticommuting ones because of the appearance of Grassmann numbers. It is understood that the constraint relations in the case of non-mixing pseudoduality are the remnants of mixing pseudoduality. Once mixing terms are included in the pseudoduality, the constraint relations disappear.

  6. Response Surface Methodology for the Optimization of Preparation of Biocomposites Based on Poly(lactic acid) and Durian Peel Cellulose

    PubMed Central

    Penjumras, Patpen; Abdul Rahman, Russly; Talib, Rosnita A.; Abdan, Khalina

    2015-01-01

Response surface methodology was used to optimize preparation of biocomposites based on poly(lactic acid) and durian peel cellulose. The effects of cellulose loading, mixing temperature, and mixing time on tensile strength and impact strength were investigated. A central composite design was employed to determine the optimum preparation condition of the biocomposites to obtain the highest tensile strength and impact strength. A second-order polynomial model was developed for predicting the tensile strength and impact strength based on the composite design. It was found that composites were best fit by a quadratic regression model with high coefficient of determination (R²) value. The selected optimum condition was 35 wt.% cellulose loading at 165°C and 15 min of mixing, leading to a desirability of 94.6%. Under the optimum condition, the tensile strength and impact strength of the biocomposites were 46.207 MPa and 2.931 kJ/m², respectively. PMID:26167523
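The 94.6% figure above is a composite desirability, the geometric mean of per-response desirability scores. The sketch below shows how such a figure arises; the acceptable ranges for the two responses are invented purely for illustration and are not taken from the paper.

```python
def desirability_larger_is_better(y, low, high):
    """Derringer-style 'larger-the-better' desirability: 0 below `low`,
    1 above `high`, linear ramp in between (weight = 1)."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def overall_desirability(ds):
    """Composite desirability = geometric mean of the individual d_i."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# hypothetical acceptable ranges for the two responses in the abstract
d_tensile = desirability_larger_is_better(46.207, 30.0, 48.0)  # MPa
d_impact = desirability_larger_is_better(2.931, 2.0, 3.0)      # kJ/m^2
D = overall_desirability([d_tensile, d_impact])
```

The optimizer then searches the fitted quadratic response surfaces for the factor settings (cellulose loading, temperature, time) that maximize D.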

  7. Decohesion Elements using Two and Three-Parameter Mixed-Mode Criteria

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2001-01-01

    An eight-node decohesion element implementing different criteria to predict delamination growth under mixed-mode loading is proposed. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a softening law to track the damage state of the interface. The power law criterion and a three-parameter mixed-mode criterion are used to predict delamination growth. The accuracy of the predictions is evaluated in single mode delamination and in the mixed-mode bending tests.
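The power law criterion mentioned above combines the mode I and mode II energy release rates into a single failure index that reaches 1 at delamination propagation. A minimal sketch follows; the exponent and toughness values are illustrative, and the three-parameter criterion of the paper is not reproduced.

```python
def power_law_failure_index(GI, GII, GIc, GIIc, alpha=1.0):
    """Power-law mixed-mode delamination criterion:
    (GI/GIc)^alpha + (GII/GIIc)^alpha >= 1 at propagation.
    alpha = 1 recovers the linear interaction criterion."""
    return (GI / GIc) ** alpha + (GII / GIIc) ** alpha

# pure mode I loaded exactly at its toughness is critical
idx_pure = power_law_failure_index(0.3, 0.0, GIc=0.3, GIIc=1.0)
# a mixed-mode state below both toughnesses is subcritical
idx_mixed = power_law_failure_index(0.1, 0.3, GIc=0.3, GIIc=1.0)
```

In a decohesion element this index, evaluated from the current energy release rates, drives the softening law that degrades the interface stiffness.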

  8. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

This paper presents a novel offline modeling for product quality prediction of mineral processing which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role and the establishment of its predictive model is a key issue for the plantwide optimization. For this purpose, a hybrid modeling approach of the mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both the PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using the real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
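The minimum-entropy idea, tuning model parameters so the modeling-error PDF becomes as narrow as possible, can be sketched with a simple histogram entropy estimate. The paper's estimator and tuning procedure are more elaborate; this only illustrates why a tightly peaked error distribution scores lower.

```python
import math

def error_entropy(errors, bins=10):
    """Histogram estimate of the differential entropy of a sample of
    modeling errors; minimizing this over model parameters concentrates
    the error PDF into a narrow peak."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / bins or 1.0   # guard against a degenerate sample
    counts = [0] * bins
    for e in errors:
        k = min(int((e - lo) / width), bins - 1)
        counts[k] += 1
    n = len(errors)
    h = 0.0
    for c in counts:
        if c:
            p = c / n
            h -= p * math.log(p / width)
    return h

peaked = error_entropy([0.0] * 50 + [0.01] * 50)   # tightly concentrated errors
spread = error_entropy([i / 10 for i in range(100)])  # widely spread errors
```

An entropy-minimizing parameter search would evaluate this quantity on the residuals of each candidate model and prefer the peaked case.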

  9. A minimal model of neutrino flavor

    NASA Astrophysics Data System (ADS)

    Luhn, Christoph; Parattu, Krishna Mohan; Wingerter, Akın

    2012-12-01

Models of neutrino mass which attempt to describe the observed lepton mixing pattern are typically based on discrete family symmetries with a non-Abelian factor and one or more Abelian factors. The latter so-called shaping symmetries are imposed in order to yield a realistic phenomenology by forbidding unwanted operators. Here we propose a supersymmetric model of neutrino flavor which is based on the group T_7 and does not require extra Z_N or U(1) factors in the Yukawa sector, which makes it the smallest realistic family symmetry that has been considered so far. At leading order, the model predicts tribimaximal mixing which arises completely accidentally from a combination of the T_7 Clebsch-Gordan coefficients and suitable flavon alignments. Next-to-leading order (NLO) operators break the simple tribimaximal structure and render the model compatible with the recent results of the Daya Bay and RENO collaborations which have measured a reactor angle of around 9°. Problematic NLO deviations of the other two mixing angles can be controlled in an ultraviolet completion of the model. The vacuum alignment mechanism that we use necessitates the introduction of a hidden flavon sector that transforms under a Z_6 symmetry, thereby spoiling the minimality of our model whose flavor symmetry is then T_7 × Z_6.

  10. Advanced scatter search approach and its application in a sequencing problem of mixed-model assembly lines in a case company

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing

    2014-11-01

    Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.

  11. Analysis of the type II robotic mixed-model assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.

  12. Effect of exercise on patient specific abdominal aortic aneurysm flow topology and mixing

    PubMed Central

    Arzani, Amirhossein; Les, Andrea S.; Dalman, Ronald L.; Shadden, Shawn C.

    2014-01-01

Computational fluid dynamics modeling was used to investigate changes in blood transport topology between rest and exercise conditions in five patient-specific abdominal aortic aneurysm models. Magnetic resonance imaging was used to provide the vascular anatomy and necessary boundary conditions for simulating blood velocity and pressure fields inside each model. Finite-time Lyapunov exponent fields, and associated Lagrangian coherent structures, were computed from blood velocity data, and used to compare features of the transport topology between rest and exercise both mechanistically and qualitatively. A mix-norm and mix-variance measure based on fresh blood distribution throughout the aneurysm over time were implemented to quantitatively compare mixing between rest and exercise. Exercise conditions resulted in higher and more uniform mixing, and reduced the overall residence time in all aneurysms. Separated regions of recirculating flow were commonly observed in rest, and these regions were either reduced or removed by attached and unidirectional flow during exercise, or replaced with regional chaotic and transiently turbulent mixing, or persisted and even extended during exercise. The main factor that dictated the change in flow topology from rest to exercise was the behavior of the jet of blood penetrating into the aneurysm during systole. PMID:24493404

  13. Problem-Based Learning--Buginese Cultural Knowledge Model--Case Study: Teaching Mathematics at Junior High School

    ERIC Educational Resources Information Center

    Cheriani, Cheriani; Mahmud, Alimuddin; Tahmir, Suradi; Manda, Darman; Dirawan, Gufran Darma

    2015-01-01

This study aims to determine the differences in learning output by using a Problem Based Model combined with the "Buginese" Local Cultural Knowledge (PBL-Culture). It also explores the students' activities in learning the mathematics subject by using PBL-Culture Models. This research uses a Mixed Methods approach that combined quantitative…

  14. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  15. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
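For intuition, GLLA with the smallest embedding dimension reduces to a centered finite difference over the time-delay embedding; higher embedding dimensions fit a local polynomial over a wider window. A minimal sketch of that base case:

```python
import math

def glla_first_derivative(x, dt):
    """Local linear approximation of dx/dt with embedding dimension 3,
    which reduces to a centered difference; interior points only."""
    return [(x[i + 1] - x[i - 1]) / (2.0 * dt) for i in range(1, len(x) - 1)]

t = [i * 0.01 for i in range(200)]
x = [math.sin(v) for v in t]
dx = glla_first_derivative(x, 0.01)   # should track cos(t) away from the ends
```

In the two-stage approaches compared above, derivative estimates like these become the "data" against which the ODE's right-hand side is regressed in the second stage.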

  16. Quasi-Geostrophic Diagnosis of Mixed-Layer Dynamics Embedded in a Mesoscale Turbulent Field

    NASA Astrophysics Data System (ADS)

    Chavanne, C. P.; Klein, P.

    2016-02-01

A new quasi-geostrophic model has been developed to diagnose the three-dimensional circulation, including the vertical velocity, in the upper ocean from high-resolution observations of sea surface height and buoyancy. The formulation for the adiabatic component departs from the classical surface quasi-geostrophic framework considered before since it takes into account the stratification within the surface mixed-layer that is usually much weaker than that in the ocean interior. To achieve this, the model approximates the ocean with two constant-stratification layers: a finite-thickness surface layer (the mixed-layer) and an infinitely-deep interior layer. It is shown that the leading-order adiabatic circulation is entirely determined if both the surface streamfunction and buoyancy anomalies are considered. The surface layer further includes a diabatic dynamical contribution. The parameterization of diabatic vertical velocities is based on their restoring impact on the thermal-wind balance, which is perturbed by turbulent vertical mixing of momentum and buoyancy. The model skill in reproducing the three-dimensional circulation in the upper ocean from surface data is checked against the output of a high-resolution primitive-equation numerical simulation. Correlations between simulated and diagnosed vertical velocities are significantly improved in the mixed-layer for the new model compared to the classical surface quasi-geostrophic model, reaching 0.9 near the surface.

  17. Development and numerical analysis of low specific speed mixed-flow pump

    NASA Astrophysics Data System (ADS)

    Li, H. F.; Huo, Y. W.; Pan, Z. B.; Zhou, W. C.; He, M. H.

    2012-11-01

With the development of the city, the market for mixed-flow pumps with large flux and high head is promising. The KSB Shanghai Pump Co., LTD decided to develop a low specific speed mixed-flow pump to meet the market requirements. Based on centrifugal pump and axial flow pump models, and aiming at the characteristics of large flux and high head, a new type of guide vane mixed-flow pump was designed. The computational fluid dynamics method was adopted to analyze the internal flow of the new model and predict its performance. The time-averaged Navier-Stokes equations were closed by the SST k-ω turbulence model to adapt to the internal flow of guide vanes with larger curvatures. The multi-reference frame (MRF) method was used to deal with the coupling of the rotating impeller and static guide vane, and the SIMPLEC method was adopted to achieve the coupled solution of velocity and pressure. The computational results show that there is a great flow impact on the head of the vanes and great flow separation at the trailing edge of the guide vanes at different working conditions, both of which affect the performance of the pump. Based on the computational results, optimizations were carried out to decrease the impact on the head of the vanes and the flow separation at the trailing edge of the guide vanes. The optimized model was simulated and its performance was predicted. The computational results show that the impact on the head of the vanes and the separation at the trailing edge of the guide vanes disappeared. The high-efficiency region of the optimized pump is wide, and it fits the original design objective. The newly designed mixed-flow pump is now being built as a physical model, and its experimental performance will be available soon.

  18. A Modified Mixing Length Turbulence Model for Zero and Adverse Pressure Gradients. M.S. Thesis - Akron Univ., 1993

    NASA Technical Reports Server (NTRS)

    Conley, Julianne M.; Leonard, B. P.

    1994-01-01

    The modified mixing length (MML) turbulence model was installed in the Proteus Navier-Stokes code, then modified to make it applicable to a wider range of flows typical of aerospace propulsion applications. The modifications are based on experimental data for three flat-plate flows having zero, mild adverse, and strong adverse pressure gradients. Three transonic diffuser test cases were run with the new version of the model in order to evaluate its performance. All results are compared with experimental data and show improvements over calculations made using the Baldwin-Lomax turbulence model, the standard algebraic model in Proteus.
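
    As context for the mixing-length closure discussed above, here is a minimal sketch of the classic Prandtl mixing-length eddy viscosity with van Driest near-wall damping. The MML modifications themselves are not reproduced; the function name and the default constants (kappa = 0.41, A+ = 26) are the conventional textbook values, not values taken from the thesis.

    ```python
    import math

    def eddy_viscosity_mixing_length(y_plus, dudy, nu, u_tau,
                                     kappa=0.41, a_plus=26.0):
        """Classic Prandtl mixing-length eddy viscosity with van Driest damping.

        nu_t = l^2 * |du/dy|, with l = kappa * y * (1 - exp(-y+/A+)),
        where y_plus = y * u_tau / nu is the wall coordinate.
        """
        y = y_plus * nu / u_tau                               # distance from wall
        l = kappa * y * (1.0 - math.exp(-y_plus / a_plus))    # damped mixing length
        return l * l * abs(dudy)
    ```

    The damping factor drives the mixing length, and hence the eddy viscosity, to zero at the wall; pressure-gradient-sensitive models such as the MML adjust this behaviour in the outer layer.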

  19. Model of Values-Based Management Process in Schools: A Mixed Design Study

    ERIC Educational Resources Information Center

    Dogan, Soner

    2016-01-01

    The aim of this paper is to evaluate school administrators' values-based management behaviours according to teachers' perceptions and opinions and, accordingly, to build a model of the values-based management process in schools. The study was conducted using an explanatory design that includes both quantitative and qualitative methods.…

  20. Mix Models Applied to the Pushered Single Shell Capsules Fired on NIF1

    NASA Astrophysics Data System (ADS)

    Tipton, Robert; Dewald, Eduard; Pino, Jesse; Ralph, Joe; Sacks, Ryan; Salmonson, Jay

    2017-10-01

    The goal of the Pushered Single Shell (PSS) experimental campaign is to study the mix of partially ionized ablator material into the hotspot. To accomplish this goal, we used a uniformly Si-doped plastic capsule based on the successful Two-Shock campaign. The inner few microns of the capsule can be doped with a few percent Ge. To diagnose mix, we used the method of separated reactants: deuterating the inner Ge-doped layer (CD/Ge) while using a gas fill of tritium and hydrogen. Mix is inferred by measuring the neutron yields from DD, DT, and TT reactions. The PSS implosion is fast (~400 km/s), hot (~3 keV), and round (P2 ≈ 0). This paper will present calculations of RANS-type mix models such as KL, along with LES models such as multicomponent Navier-Stokes, on several PSS shots. The calculations will be compared to each other and to the measured data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  1. Tools for quantifying isotopic niche space and dietary variation at the individual and population level.

    USGS Publications Warehouse

    Newsome, Seth D.; Yeakel, Justin D.; Wheatley, Patrick V.; Tinker, M. Tim

    2012-01-01

    Ecologists are increasingly using stable isotope analysis to inform questions about variation in resource and habitat use from the individual to the community level. In this study we investigate data sets from 2 California sea otter (Enhydra lutris nereis) populations to illustrate the advantages and potential pitfalls of applying various statistical and quantitative approaches to isotopic data. We have subdivided these tools, or metrics, into 3 categories: IsoSpace metrics, stable isotope mixing models, and DietSpace metrics. IsoSpace metrics are used to quantify the spatial attributes of isotopic data that are typically presented in bivariate (e.g., δ13C versus δ15N) 2-dimensional space. We review IsoSpace metrics currently in use and present a technique by which uncertainty can be included to calculate the convex hull area of consumers or prey, or both. We then apply a Bayesian-based mixing model to quantify the proportional contribution of potential dietary sources to the diet of each sea otter population and compare this to observational foraging data. Finally, we assess individual dietary specialization by comparing a previously published technique, variance components analysis, to 2 novel DietSpace metrics that are based on mixing model output. As the use of stable isotope analysis in ecology continues to grow, the field will need a set of quantitative tools for assessing isotopic variance at the individual to community level. Along with recent advances in Bayesian-based mixing models, we hope that the IsoSpace and DietSpace metrics described here will provide another set of interpretive tools for ecologists.
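
    The convex-hull IsoSpace metric mentioned above can be sketched in a few lines. The following is a self-contained illustration (hypothetical function name; a pure-stdlib monotone-chain hull plus the shoelace formula, without the uncertainty propagation the authors add):

    ```python
    def convex_hull_area(points):
        """Area of the convex hull of 2-D (d13C, d15N) points.

        Uses Andrew's monotone chain to build the hull, then the shoelace
        formula for its area. Fewer than 3 distinct points span zero area.
        """
        pts = sorted(set(points))
        if len(pts) < 3:
            return 0.0

        def cross(o, a, b):  # z-component of (a-o) x (b-o)
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        hull = lower[:-1] + upper[:-1]

        area = 0.0  # shoelace formula over the hull vertices
        for i in range(len(hull)):
            x1, y1 = hull[i]
            x2, y2 = hull[(i + 1) % len(hull)]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0
    ```

    For example, four corner points of a unit square (plus an interior point, which the hull ignores) give an area of 1.0.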

  2. Use of a macroinvertebrate based biotic index to estimate critical metal concentrations for good ecological water quality.

    PubMed

    Van Ael, Evy; De Cooman, Ward; Blust, Ronny; Bervoets, Lieven

    2015-01-01

    Large datasets of total and dissolved metal concentrations in Flemish (Belgium) fresh water systems and the associated macroinvertebrate-based biotic index MMIF (Multimetric Macroinvertebrate Index Flanders) were used to estimate critical metal concentrations for good ecological water quality, as imposed by the European Water Framework Directive (2000). The contribution of different stressors (metals and water characteristics) to the MMIF was studied by constructing generalized linear mixed effect models. Comparison between estimated critical concentrations and the European and Flemish EQS shows that the EQS for As, Cd, Cu and Zn seem to be sufficient to reach a good ecological quality status as expressed by the invertebrate-based biotic index. In contrast, the EQS for Cr, Hg and Pb are higher than the estimated critical concentrations, which suggests that when environmental concentrations are at the same level as the EQS, a good quality status might not be reached. The construction of mixed models that included metal concentrations in their structure did not lead to a significant outcome. However, mixed models showed the primary importance of water characteristics (oxygen level, temperature, ammonium concentration and conductivity) for the MMIF. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Modeling and experimental verification of laser self-mixing interference phenomenon with the structure of two-external-cavity feedback

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Liu, Yuwei; Gao, Bingkun; Jiang, Chunlei

    2018-03-01

    A semiconductor laser with a two-external-cavity feedback structure for laser self-mixing interference (SMI) is investigated and analyzed. An SMI model with two feedback directions, based on the Fabry-Perot cavity, is deduced, and numerical simulation and experimental verification were conducted. Experimental results show that SMI with the two-external-cavity feedback structure under weak optical feedback is similar to the sum of two single-cavity SMI signals.
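
    A minimal numerical sketch of the weak-feedback behaviour described above, assuming the standard single-cavity excess-phase equation and the abstract's observation that the two-cavity waveform approximates the sum of two single-cavity signals. Function names and parameter values are illustrative only, not taken from the paper.

    ```python
    import math

    def smi_signal(phi0, C=0.3, alpha=3.0, iters=50):
        """Self-mixing power modulation for one external cavity (weak feedback, C < 1).

        Solves the excess-phase equation phi_F = phi0 - C*sin(phi_F + atan(alpha))
        by fixed-point iteration (convergent for C < 1), then returns cos(phi_F).
        """
        phi = phi0
        for _ in range(iters):
            phi = phi0 - C * math.sin(phi + math.atan(alpha))
        return math.cos(phi)

    def two_cavity_smi(phi1, phi2, C1=0.2, C2=0.2):
        """Under weak feedback, approximate the two-cavity SMI waveform as the
        sum of the two single-cavity SMI signals (as reported in the abstract)."""
        return smi_signal(phi1, C1) + smi_signal(phi2, C2)
    ```

    Sweeping `phi1` and `phi2` with the phases of two vibrating targets reproduces the superposed fringe pattern the experiment observes.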

  4. A preliminary case-mix classification system for Medicare home health clients.

    PubMed

    Branch, L G; Goldberg, H B

    1993-04-01

    In this study, a hierarchical case-mix model was developed for grouping Medicare home health beneficiaries homogeneously, based on the allowed charges for their home care. Based on information from a two-page form for 2,830 clients from ten states and using the classification and regression trees method, a four-component model was developed that yielded 11 case-mix groups and explained 22% of the variance for the test sample of 1,929 clients. The four components are rehabilitation, special care, skilled-nurse monitoring, and paralysis; each is categorized as present or absent. The range of mean allowed charges for the 11 groups in the total sample was $473 to $2,562, with a mean of $847. Of the six groups with mean charges above $1,000, none exceeded 5.2% of clients; thus, the high-cost groups are relatively rare.
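
    The four-component grouping can be illustrated with a toy version (hypothetical field names and charge values; the actual CART-derived 11-group hierarchy is not reproduced):

    ```python
    from collections import defaultdict
    from statistics import mean

    def case_mix_groups(clients):
        """Group clients by presence/absence of the four components
        (rehabilitation, special care, skilled-nurse monitoring, paralysis)
        and return the mean allowed charges per group."""
        groups = defaultdict(list)
        for c in clients:
            key = (c["rehab"], c["special"], c["monitoring"], c["paralysis"])
            groups[key].append(c["charges"])
        return {k: mean(v) for k, v in groups.items()}
    ```

    In the study itself, the tree method merges the 16 possible component combinations into 11 groups that are homogeneous in mean charges.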

  5. Turbulent Mixing of Primary and Secondary Flow Streams in a Rocket-Based Combined Cycle Engine

    NASA Technical Reports Server (NTRS)

    Cramer, J. M.; Greene, M. U.; Pal, S.; Santoro, R. J.; Turner, Jim (Technical Monitor)

    2002-01-01

    This viewgraph presentation gives an overview of the turbulent mixing of primary and secondary flow streams in a rocket-based combined cycle (RBCC) engine. A significant RBCC ejector mode database has been generated, detailing single and twin thruster configurations and global and local measurements. On-going analysis and correlation efforts include Marshall Space Flight Center computational fluid dynamics modeling and turbulent shear layer analysis. Potential follow-on activities include detailed measurements of air flow static pressure and velocity profiles, investigations into other thruster spacing configurations, performing a fundamental shear layer mixing study, and demonstrating single-shot Raman measurements.

  6. Evaluation of modeling NO2 concentrations driven by satellite-derived and bottom-up emission inventories using in situ measurements over China

    NASA Astrophysics Data System (ADS)

    Liu, Fei; van der A, Ronald J.; Eskes, Henk; Ding, Jieying; Mijling, Bas

    2018-03-01

    Chemical transport models together with emission inventories are widely used to simulate NO2 concentrations over China, but validation of the simulations with in situ measurements has been extremely limited. Here we use ground measurements obtained from the air quality monitoring network recently developed by the Ministry of Environmental Protection of China to validate modeling surface NO2 concentrations from the CHIMERE regional chemical transport model driven by the satellite-derived DECSO and the bottom-up MIX emission inventories. We applied a correction factor to the observations to account for the interferences of other oxidized nitrogen compounds (NOz), based on the modeled ratio of NO2 to NOz. The model accurately reproduces the spatial variability in NO2 from in situ measurements, with a spatial correlation coefficient of over 0.7 for simulations based on both inventories. A negative and positive bias is found for the simulation with the DECSO (slope = 0.74 and 0.64 for the daily mean and daytime only) and the MIX (slope = 1.3 and 1.1) inventories, respectively, suggesting an underestimation and overestimation of NOx emissions from corresponding inventories. The bias between observed and modeled concentrations is reduced, with the slope dropping from 1.3 to 1.0 when the spatial distribution of NOx emissions in the DECSO inventory is applied as the spatial proxy for the MIX inventory, which suggests an improvement of the distribution of emissions between urban and suburban or rural areas in the DECSO inventory compared to that used in the bottom-up inventory. A rough estimate indicates that the observed concentrations, from sites predominantly placed in the populated urban areas, may be 10-40 % higher than the corresponding model grid cell mean. This reduces the estimate of the negative bias of the DECSO-based simulation to the range of -30 to 0 % on average and more firmly establishes that the MIX inventory is biased high over major cities. 
The performance of the model is comparable across seasons, with a slightly worse spatial correlation in summer due to the model's difficulty in resolving the more active NOx photochemistry and larger concentration gradients of that season. In addition, the model captures the daytime diurnal cycle well but shows larger disagreement between simulations and measurements during nighttime, which likely produces a positive model bias of about 15 % in the daily mean concentrations. This is most likely related to the uncertainty in vertical mixing in the model at night.
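
    The bias slopes and NOz correction described above can be sketched as follows (hypothetical function names; a zero-intercept least-squares slope and a simple ratio correction, as a stand-in for the authors' exact procedure):

    ```python
    def slope_through_origin(model, obs):
        """Least-squares slope of modeled vs. observed values with zero intercept.
        A slope below 1 suggests the model (hence the emission inventory) is
        biased low; above 1, biased high."""
        num = sum(m * o for m, o in zip(model, obs))
        den = sum(o * o for o in obs)
        return num / den

    def correct_for_noz(obs_no2, modeled_no2, modeled_noz):
        """Scale observed values by the modeled NO2/NOz ratio to remove the
        interference of other oxidized nitrogen compounds in the monitors."""
        return [o * n2 / nz for o, n2, nz in zip(obs_no2, modeled_no2, modeled_noz)]
    ```

    With station time series in place of the toy lists, the two slopes (daily mean and daytime only) reported for the DECSO and MIX inventories follow directly.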

  7. Evaluation of Modeling NO2 Concentrations Driven by Satellite-Derived and Bottom-Up Emission Inventories Using In-Situ Measurements Over China

    NASA Technical Reports Server (NTRS)

    Liu, Fei; van der A, Ronald J.; Eskes, Henk; Ding, Jieying; Mijling, Bas

    2018-01-01

    Chemical transport models together with emission inventories are widely used to simulate NO2 concentrations over China, but validation of the simulations with in situ measurements has been extremely limited. Here we use ground measurements obtained from the air quality monitoring network recently developed by the Ministry of Environmental Protection of China to validate modeling surface NO2 concentrations from the CHIMERE regional chemical transport model driven by the satellite-derived DECSO and the bottom-up MIX emission inventories. We applied a correction factor to the observations to account for the interferences of other oxidized nitrogen compounds (NOz), based on the modeled ratio of NO2 to NOz. The model accurately reproduces the spatial variability in NO2 from in situ measurements, with a spatial correlation coefficient of over 0.7 for simulations based on both inventories. A negative and positive bias is found for the simulation with the DECSO (slope = 0.74 and 0.64 for the daily mean and daytime only) and the MIX (slope = 1.3 and 1.1) inventories, respectively, suggesting an underestimation and overestimation of NOx emissions from corresponding inventories. The bias between observed and modeled concentrations is reduced, with the slope dropping from 1.3 to 1.0 when the spatial distribution of NOx emissions in the DECSO inventory is applied as the spatial proxy for the MIX inventory, which suggests an improvement of the distribution of emissions between urban and suburban or rural areas in the DECSO inventory compared to that used in the bottom-up inventory. A rough estimate indicates that the observed concentrations, from sites predominantly placed in the populated urban areas, may be 10-40% higher than the corresponding model grid cell mean. This reduces the estimate of the negative bias of the DECSO-based simulation to the range of -30 to 0% on average and more firmly establishes that the MIX inventory is biased high over major cities.
The performance of the model is comparable across seasons, with a slightly worse spatial correlation in summer due to the model's difficulty in resolving the more active NOx photochemistry and larger concentration gradients of that season. In addition, the model captures the daytime diurnal cycle well but shows larger disagreement between simulations and measurements during nighttime, which likely produces a positive model bias of about 15% in the daily mean concentrations. This is most likely related to the uncertainty in vertical mixing in the model at night.

  8. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
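
    The generalized partial credit model underlying the adaptive test can be written out directly. This is the standard GPCM category-probability formula, not code from the study:

    ```python
    import math

    def gpcm_probs(theta, a, b):
        """Category probabilities under the generalized partial credit model.

        theta: examinee ability; a: item discrimination;
        b: list of step difficulties b_1..b_m.
        Returns [P(X = 0), ..., P(X = m)], where the cumulative sum for
        category 0 is defined as 0.
        """
        cum = [0.0]
        for bk in b:
            cum.append(cum[-1] + a * (theta - bk))
        expcum = [math.exp(c) for c in cum]
        z = sum(expcum)
        return [e / z for e in expcum]
    ```

    Item selection procedures such as maximum expected information evaluate functions of these probabilities at the current ability estimate.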

  9. Turbulent Mixing Chemistry in Disks

    NASA Astrophysics Data System (ADS)

    Semenov, D.; Wiebe, D.

    2006-11-01

    A gas-grain chemical model with surface reactions and 1D/2D turbulent mixing is available for protoplanetary disks and molecular clouds. The current version is based on the updated UMIST'95 database with gas-grain interactions (accretion, desorption, photoevaporation, etc.) and a modified rate equation approach to surface chemistry (see also the abstract for the static chemistry code).
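
    At its simplest, the gas-grain interaction reduces to coupled accretion/desorption exchange terms. A minimal explicit-Euler sketch (illustrative rate coefficients and names, not the model's reaction network):

    ```python
    def grain_surface_step(n_gas, n_ice, k_acc, k_des, dt):
        """One explicit time step of the simplest gas-grain rate equations:
        dn_gas/dt = -k_acc * n_gas + k_des * n_ice, with the opposite sign
        for the ice abundance, so total abundance is conserved."""
        dn = (-k_acc * n_gas + k_des * n_ice) * dt
        return n_gas + dn, n_ice - dn
    ```

    Turbulent mixing adds a transport term coupling abundances between neighbouring cells on top of this local chemistry.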

  10. Southern Ocean vertical iron fluxes; the ocean model effect

    NASA Astrophysics Data System (ADS)

    Schourup-Kristensen, V.; Haucke, J.; Losch, M. J.; Wolf-Gladrow, D.; Voelker, C. D.

    2016-02-01

    The Southern Ocean plays a key role in the climate system, but commonly used large-scale ocean general circulation biogeochemical models give different estimates of current and future Southern Ocean net primary and export production. The representation of the Southern Ocean iron sources plays an important role in the modeled biogeochemistry. Studies of the iron supply to the surface mixed layer have traditionally focused on the aeolian and sediment contributions, but recent work has highlighted the importance of the vertical supply from below. We have performed a model study in which the biogeochemical model REcoM2 was coupled to two different ocean models, the Finite Element Sea-ice Ocean Model (FESOM) and the MIT general circulation model (MITgcm), and analyzed the magnitude of the iron sources to the surface mixed layer from below in the two models. Our results revealed a remarkable difference in terms of mechanism and magnitude of transport. The mean iron supply from below in the Southern Ocean was on average four times higher in MITgcm than in FESOM, and the dominant pathway was entrainment in MITgcm, whereas diffusion dominated in FESOM. Differences in the depth and seasonal amplitude of the mixed layer between the models affect the vertical iron profile and the relative position of the base of the mixed layer and the ferricline, and thereby the iron fluxes. These differences contribute to differences in the phytoplankton composition in the two models, as well as in the timing of the onset of the spring bloom. The study shows that the choice of ocean model has a significant impact on the iron supply to the Southern Ocean mixed layer and thus on the modeled carbon cycle, with possible implications for model runs predicting the future carbon uptake in the region.
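
    The two vertical supply pathways compared above, diffusion and entrainment, can be sketched with their standard flux formulas (illustrative function names and values; not the REcoM2, FESOM, or MITgcm implementations):

    ```python
    def diffusive_iron_flux(kv, fe_below, fe_ml, dz):
        """Turbulent diffusive flux across the mixed-layer base (positive upward):
        F = Kv * (Fe_below - Fe_ml) / dz, with Kv a vertical diffusivity."""
        return kv * (fe_below - fe_ml) / dz

    def entrainment_iron_flux(dhdt, fe_below, fe_ml):
        """Entrainment flux when the mixed layer deepens (dh/dt > 0):
        F = dh/dt * (Fe_below - Fe_ml); zero when the layer shoals."""
        return max(dhdt, 0.0) * (fe_below - fe_ml)
    ```

    The relative size of the two terms depends on the mixed-layer depth evolution and on where the ferricline sits relative to the layer base, which is exactly where the two ocean models differ.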

  11. Risk adjustment models for short-term outcomes after surgical resection for oesophagogastric cancer.

    PubMed

    Fischer, C; Lingsma, H; Hardwick, R; Cromwell, D A; Steyerberg, E; Groene, O

    2016-01-01

    Outcomes for oesophagogastric cancer surgery are compared with the aim of benchmarking quality of care. Adjusting for patient characteristics is crucial to avoid biased comparisons between providers. The study objective was to develop a case-mix adjustment model for comparing 30- and 90-day mortality and anastomotic leakage rates after oesophagogastric cancer resections. The study reviewed existing models, considered expert opinion and examined audit data in order to select predictors that were subsequently used to develop a case-mix adjustment model for the National Oesophago-Gastric Cancer Audit, covering England and Wales. Models were developed on patients undergoing surgical resection between April 2011 and March 2013 using logistic regression. Model calibration and discrimination were quantified using a bootstrap procedure. Most existing risk models for oesophagogastric resections were methodologically weak, outdated or based on detailed laboratory data that are not generally available. In 4882 patients with oesophagogastric cancer used for model development, 30- and 90-day mortality rates were 2·3 and 4·4 per cent respectively, and 6·2 per cent of patients developed an anastomotic leak. The internally validated models, based on predictors selected from the literature, showed moderate discrimination (area under the receiver operating characteristic (ROC) curve 0·646 for 30-day mortality, 0·664 for 90-day mortality and 0·587 for anastomotic leakage) and good calibration. Based on available data, three case-mix adjustment models for postoperative outcomes in patients undergoing curative surgery for oesophagogastric cancer were developed. These models should be used for risk adjustment when assessing hospital performance in the National Health Service, and tested in other large health systems. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
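
    The discrimination statistic reported above, the area under the ROC curve, can be computed directly from its Mann-Whitney formulation. A minimal sketch, independent of the study's data:

    ```python
    def auc(labels, scores):
        """Area under the ROC curve via the Mann-Whitney statistic: the
        probability that a randomly chosen event (label 1) receives a higher
        predicted risk than a randomly chosen non-event (label 0); ties
        count one half."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = 0.0
        for p in pos:
            for n in neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos) * len(neg))
    ```

    An AUC of 0.5 is chance-level discrimination; the values around 0.6 reported above indicate the moderate discrimination typical of case-mix models built from routinely collected predictors.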

  12. Influences of Ocean Thermohaline Stratification on Arctic Sea Ice

    NASA Astrophysics Data System (ADS)

    Toole, J. M.; Timmermans, M.-L.; Perovich, D. K.; Krishfield, R. A.; Proshutinsky, A.; Richter-Menge, J. A.

    2009-04-01

    The Arctic Ocean's surface mixed layer constitutes the dynamical and thermodynamical link between the sea ice and the underlying waters. Wind stress, acting directly on the surface mixed layer or via wind-forced ice motion, produces surface currents that can in turn drive deep ocean flow. Mixed layer temperature is intimately related to basal sea ice growth and melting. Heat fluxes into or out of the surface mixed layer can occur at both its upper and lower interfaces: the former via air-sea exchange at leads and conduction through the ice, the latter via turbulent mixing and entrainment at the layer base. Variations in Arctic Ocean mixed layer properties are documented based on more than 16,000 temperature and salinity profiles acquired by Ice-Tethered Profilers since summer 2004 and analyzed in conjunction with sea ice observations from Ice Mass Balance Buoys and atmospheric heat flux estimates. Guidance interpreting the observations is provided by a one-dimensional ocean mixed layer model. The study focuses attention on the very strong density stratification about the mixed layer base in the Arctic that, in regions of sea ice melting, is increasing with time. The intense stratification greatly impedes mixed layer deepening by vertical convection and shear mixing, and thus limits the flux of deep ocean heat to the surface that could influence sea ice growth/decay. Consistent with previous work, this study demonstrates that the Arctic sea ice is most sensitive to changes in ocean mixed layer heat resulting from fluxes across its upper (air-sea and/or ice-water) interface.
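
    The one-dimensional mixed-layer heat budget that guides the interpretation can be sketched as a single slab-layer update (illustrative constants for seawater density and heat capacity; not the authors' model):

    ```python
    def mixed_layer_temp_step(T, q_surface, q_base, h, dt,
                              rho=1025.0, cp=3985.0):
        """One time step of a slab mixed-layer heat budget:
        dT/dt = (Q_surface - Q_base) / (rho * cp * h),
        with fluxes in W/m^2 positive into the layer, h the layer depth in m,
        and dt in seconds."""
        return T + dt * (q_surface - q_base) / (rho * cp * h)
    ```

    With the strong stratification described above suppressing the flux across the layer base (`q_base` small), the temperature evolution, and hence basal ice growth or melt, is controlled by the upper-interface flux, which is the study's central conclusion.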

  13. Ab Initio Modeling of Structure and Properties of Single and Mixed Alkali Silicate Glasses.

    PubMed

    Baral, Khagendra; Li, Aize; Ching, Wai-Yim

    2017-10-12

    A density functional theory (DFT)-based ab initio molecular dynamics (AIMD) approach has been applied to simulate models of single and mixed alkali silicate glasses with two different molar concentrations of alkali oxides. The structural environments and spatial distributions of alkali ions in the 10 simulated models, with 20% and 30% of Li, Na, K and equal proportions of Li-Na and Na-K, are studied in detail for subtle variations among the models. Quantum mechanical calculations of electronic structures, interatomic bonding, and mechanical and optical properties are carried out for each of the models, and the results are compared with available experimental observations and other simulations. The calculated results are in good agreement with the experimental data. We have used the novel concept of the total bond order density (TBOD), a quantum mechanical metric, to characterize internal cohesion in these glass models. The mixed alkali effect (MAE) is visible in the bulk mechanical properties but not obvious in the other physical properties studied in this paper. We show that Li doping deviates from the expected trend due to much stronger Li-O bonding than the Na-O and K-O bonding. The approach used in this study is in contrast with current studies of alkali-doped silicate glasses based only on geometric characterization.
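
    The TBOD metric mentioned above is, in essence, the sum of all bond-order values in the supercell normalized by the cell volume. A trivial sketch (illustrative values, hypothetical function name):

    ```python
    def total_bond_order_density(bond_orders, cell_volume):
        """Total bond order density (TBOD): the sum of all pairwise bond-order
        values in the supercell divided by the cell volume, used as a single
        quantum mechanical metric of internal cohesion."""
        return sum(bond_orders) / cell_volume
    ```

    A higher TBOD at fixed composition indicates a more cohesive network, which is how the stronger Li-O bonding shows up in the Li-doped models.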

  14. Alternative scenarios: harnessing mid-level providers and evidence-based practice in primary dental care in England through operational research.

    PubMed

    Wanyonyi, Kristina L; Radford, David R; Harper, Paul R; Gallagher, Jennifer E

    2015-09-15

    In primary care dentistry, strategies to reconfigure the traditional boundaries of various dental professional groups by task sharing and role substitution have been encouraged in order to meet changing oral health needs. The aim of this research was to investigate the potential for skill mix use in primary dental care in England based on the undergraduate training experience in a primary care team training centre for dentists and mid-level dental providers. An operational research model and four alternative scenarios to test the potential for skill mix use in primary care in England were developed, informed by the model of care at a primary dental care training centre in the south of England, professional policy including scope of practice, and contemporary evidence-based preventative practice. The model was developed in Excel and drew on published national timings and salary costs. The scenarios included the following: "No Skill Mix", "Minimal Direct Access", "More Prevention" and "Maximum Delegation". The scenario outputs comprised the clinical time, workforce numbers and salary costs required for state-funded primary dental care in England. The operational research model suggested that 73% of clinical time in England's state-funded primary dental care in 2011/12 was spent on tasks that may be delegated to dental care professionals (DCPs), and 45- to 54-year-old patients received the most clinical time overall. Using estimated National Health Service (NHS) clinical working patterns, the model suggested alternative NHS workforce numbers and salary costs to meet the dental demand based on each developed scenario. For scenario 1: "No Skill Mix", the dentist-only scenario, 81% of the dentists currently registered in England would be required to participate.
In scenario 2: "Minimal Direct Access", where 70% of examinations were delegated and the primary care training centre delegation patterns for other treatments were practised, 40% of registered dentists and eight times the number of dental therapists currently registered would be required; this would save 38% of current salary costs cf. "No Skill Mix". Scenario 3: "More Prevention", that is, the current model with no direct access and increasing fluoride varnish from 13.1% to 50% and maintaining the same model of delegation as scenario 2 for other care, would require 57% of registered dentists and 4.7 times the number of dental therapists. It would achieve a 1% salary cost saving cf. "No Skill Mix". Scenario 4 "Maximum Delegation" where all care within dental therapists' jurisdiction is delegated at 100%, together with 50% of restorations and radiographs, suggested that only 30% of registered dentists would be required and 10 times the number of dental therapists registered; this scenario would achieve a 52% salary cost saving cf. "No Skill Mix". Alternative scenarios based on wider expressed treatment need in national primary dental care in England, changing regulations on the scope of practice and increased evidence-based preventive practice suggest that the majority of care in primary dental practice may be delegated to dental therapists, and there is potential time and salary cost saving if the majority of diagnostic tasks and prevention are delegated. However, this would require an increase in trained DCPs, including role enhancement, as part of rebalancing the dental workforce.
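
    The scenario arithmetic of the operational research model can be illustrated with a toy calculation (all rates and hours hypothetical; the actual model draws on published national timings and salary costs):

    ```python
    def scenario_cost(total_hours, delegated_fraction,
                      dentist_rate, therapist_rate):
        """Salary cost of delivering `total_hours` of clinical time when a
        fraction of it is delegated to dental therapists. Rates are per hour;
        all figures are hypothetical, for illustration only."""
        dentist_hours = total_hours * (1.0 - delegated_fraction)
        therapist_hours = total_hours * delegated_fraction
        cost = dentist_hours * dentist_rate + therapist_hours * therapist_rate
        return {"dentist_hours": dentist_hours,
                "therapist_hours": therapist_hours,
                "salary_cost": cost}
    ```

    Because therapist rates are lower than dentist rates, increasing the delegated fraction reduces total salary cost while shifting workforce demand from dentists to therapists, which is the pattern the four scenarios trace out.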

  15. Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.

    PubMed

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel

    2016-02-20

    Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation, as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.
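
    The structure of the recovery-plus-transition prediction can be sketched as follows, with an exponential recovery curve standing in for the flexible function and illustrative parameter values rather than the fitted Bayesian estimates:

    ```python
    import math

    def predict_hb(baseline, prev_hb, days_since_donation,
                   drop=0.6, recovery_days=56.0, rho=0.3):
        """Predicted Hb as a recovery curve plus a transition term: the
        post-donation deficit decays exponentially toward the donor's
        baseline, while the previously observed value pulls the prediction
        toward it (state dependence). All parameter values are illustrative."""
        deficit = drop * math.exp(-days_since_donation / recovery_days)
        recovery_mean = baseline - deficit
        return (1.0 - rho) * recovery_mean + rho * prev_hb
    ```

    Long after a donation the prediction returns to baseline; shortly after one it sits below baseline, which is what makes short inter-donation intervals risky for deferral.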

  16. Air quality simulation over South Asia using Hemispheric Transport of Air Pollution version-2 (HTAP-v2) emission inventory and Model for Ozone and Related chemical Tracers (MOZART-4)

    NASA Astrophysics Data System (ADS)

    Surendran, Divya E.; Ghude, Sachin D.; Beig, G.; Emmons, L. K.; Jena, Chinmay; Kumar, Rajesh; Pfister, G. G.; Chate, D. M.

    2015-12-01

    This study presents the distribution of tropospheric ozone and related species for South Asia using the Model for Ozone and Related chemical Tracers (MOZART-4) and the Hemispheric Transport of Air Pollution version-2 (HTAP-v2) emission inventory. The model's present-day simulated ozone (O3), carbon monoxide (CO) and nitrogen dioxide (NO2) are evaluated against surface-based, balloon-borne and satellite-based (MOPITT and OMI) observations. The model systematically overestimates surface O3 mixing ratios (range of mean bias about 1-30 ppbv) at different ground-based measurement sites in India. Comparison between simulated and observed vertical profiles of ozone shows a positive bias from the surface up to 600 hPa and a negative bias above 600 hPa. The simulated seasonal variation in surface CO mixing ratio is consistent with the surface observations, but has a negative bias of about 50-200 ppb which can be attributed in large part to the coarse model resolution. In contrast to the surface evaluation, the model shows a positive bias of about 15-20 × 1017 molecules/cm2 over South Asia when compared to satellite-derived CO columns from the MOPITT instrument. The model also overestimates the OMI-retrieved tropospheric column NO2 abundance by about 100-250 × 1013 molecules/cm2. The response to a 20% reduction in all anthropogenic emissions over South Asia shows a decrease in the annual mean O3 mixing ratios of about 3-12 ppb, CO of about 10-80 ppb and NOx of about 3-6 ppb at the surface level. During the summer monsoon, O3 mixing ratios at 200 hPa show a decrease of about 6-12 ppb over South Asia and about 1-4 ppb over the remote northern hemispheric western Pacific region.

  17. Modelling individual tree height to crown base of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.)

    PubMed Central

    Jansa, Václav

    2017-01-01

    Height to crown base (HCB) of a tree is an important variable often included as a predictor in various forest models that serve as fundamental tools for decision-making in forestry. We developed spatially explicit and spatially inexplicit mixed-effects HCB models using measurements from a total of 19,404 trees of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.) on permanent sample plots located across the Czech Republic. Variables describing site quality, stand density or competition, and species mixing effects were included in the HCB model through dominant height (HDOM), basal area of trees larger in diameter than the subject tree (BAL, a spatially inexplicit measure) or Hegyi's competition index (HCI, a spatially explicit measure), and basal area proportion of the species of interest (BAPOR), respectively. Parameters describing sample plot-level random effects were included in the HCB model by applying the mixed-effects modelling approach. Among several functional forms evaluated, the logistic function was found most suited to our data. The HCB model for Norway spruce was tested against data originating from different inventory designs, but the model for European beech was tested using a partitioned dataset (a part of the main dataset). The variance heteroscedasticity in the residuals was substantially reduced through inclusion of a power variance function in the HCB model. The results showed that the spatially explicit model described a significantly larger part of the HCB variation [R2adj = 0.86 (spruce), 0.85 (beech)] than its spatially inexplicit counterpart [R2adj = 0.84 (spruce), 0.83 (beech)]. HCB increased with increasing competitive interactions described by the tree-centered competition measures BAL or HCI, and with the species mixing effects described by BAPOR.
    A test of the mixed-effects HCB model, with random effects estimated from at least four trees per sample plot in the validation data, confirmed that the model was precise enough to predict HCB across a range of site qualities, tree sizes, stand densities, and stand structures. We therefore recommend measuring HCB on four randomly selected trees of the species of interest on each sample plot for localizing the mixed-effects model and predicting HCB of the remaining trees on the plot. The HCB models also enable growth simulations from data that lack values for either crown ratio or HCB. PMID:29049391
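    The localization procedure recommended above (estimating a plot-level random effect from four calibration trees) can be sketched for the simplest case: a random-intercept model with known variance components, where the predicted random effect is a shrunken mean of the calibration trees' fixed-effect residuals. This is an illustrative shrinkage estimator, not the authors' fitted model; all numbers below are hypothetical.

```python
def localize_random_intercept(residuals, var_plot, var_resid):
    """Empirical BLUP of a plot-level random intercept from a few
    calibration trees. `residuals` are observed minus fixed-effect
    predicted HCB; `var_plot` and `var_resid` are the (assumed known)
    plot-level and residual variance components."""
    n = len(residuals)
    shrinkage = var_plot / (var_plot + var_resid / n)  # in (0, 1)
    return shrinkage * (sum(residuals) / n)

# four hypothetical calibration trees on one plot
b_hat = localize_random_intercept([1.2, 0.8, 1.5, 0.9],
                                  var_plot=1.0, var_resid=2.0)
```

    With more calibration trees the shrinkage factor approaches 1 and the prediction approaches the plot's mean residual, which is why a handful of trees (here four) already localizes the model usefully.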

  18. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within a water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple LPMs can often match the measured time-varying age tracer concentrations well and are therefore a good representation of the groundwater mixing at those sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages inferred with a simple LPM were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters of different ages at such wells than simple LPMs can represent. Binary (or compound) mixing models can represent such mixing by combining water from two different age distributions. The difficulty with these models is that they usually have five parameters, which makes them data-hungry and hard to constrain fully. Two or more age tracers with different input functions, measured repeatedly over time, can provide the information required to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
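    The single-LPM convolution and its binary-mixture extension can be illustrated with a minimal discrete sketch (assumed unit time steps; not the authors' code). A binary model mixes the outputs of two age distributions, which is where the roughly five parameters come from: two per component plus the mixing fraction.

```python
import math

def exp_model_output(c_in, mean_rt, lam=0.0, dt=1.0):
    """Discretely convolve a tracer input history `c_in` with an
    exponential age distribution of mean residence time `mean_rt`
    (the 'exponential mixing model' LPM), with optional radioactive
    decay at rate `lam`."""
    out = []
    for t in range(len(c_in)):
        total = 0.0
        for a in range(t + 1):  # age, in time steps
            g = math.exp(-a * dt / mean_rt) / mean_rt  # age distribution
            total += c_in[t - a] * g * math.exp(-lam * a * dt) * dt
        out.append(total)
    return out

def binary_mixture(c_young, c_old, f_young):
    """Binary LPM: mix the outputs of two age distributions."""
    return [f_young * cy + (1.0 - f_young) * co
            for cy, co in zip(c_young, c_old)]

# constant input; the young component slowly approaches the input value
young = exp_model_output([1.0] * 3, mean_rt=5.0)
mixed = binary_mixture(young, [0.0] * 3, f_young=0.7)
```

    Fitting a binary model then amounts to adjusting both mean residence times, any mixing parameters, and the fraction `f_young` until the convolved outputs match all tracers simultaneously.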

  19. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    NASA Astrophysics Data System (ADS)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentrations in cloud-resolving, numerical weather prediction, and climate models is a formidable challenge. The Bergeron-Findeisen process, in which ice crystals grow by vapor deposition at the expense of super-cooled droplets, is expected to be inhomogeneous in nature (some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]) and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence that the inhomogeneous mixing process affects cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low-probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size, and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on the Damköhler number (Da), the ratio of turbulent to evaporative time-scales.
    (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation S goes as S^-1 at high Da. This behavior results from a Gaussian mixing closure and requires observational validation.

  20. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
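    The closed-form likelihood referred to above is that of the beta-binomial distribution, whose pmf marginalizes a binomial over a Beta-distributed event probability. A stdlib-only sketch of this building block (the composite-likelihood machinery and the correlation parameter of the paper's model are not shown):

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the Beta function via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """P(X = k) for X ~ BetaBinomial(n, a, b):
    C(n, k) * B(k + a, n - k + b) / B(a, b)."""
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# sanity check: a = b = 1 makes every outcome equally likely, 1/(n + 1)
p_uniform = betabinom_pmf(3, 10, 1.0, 1.0)
```

    Because this pmf is available in closed form, the marginal likelihood can be evaluated and maximized directly, with no numerical integration over random effects.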

  1. Theoretical and experimental investigation of turbulent mixing on ejector configuration and performance in a solar-driven organic-vapor ejector cycle chiller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucha, E.I.

    1984-01-01

    A general method was developed to calculate two-dimensional (axisymmetric) mixing of a compressible jet in a variable cross-sectional-area mixing channel of the ejector. The analysis considers mixing of the primary and secondary fluids at constant pressure and incorporates finite-difference approximations to the conservation equations. The flow model is based on mixing-length approximations. A detailed study and modeling of the flow phenomenon determines the best (optimum) mixing channel geometry of the ejector. The detailed ejector performance characteristics are predicted by incorporating the flow model into a solar-powered ejector cycle cooling system computer model. Freon-11 is used as both the primary and secondary fluid. Performance of the cooling system is evaluated in terms of its coefficient of performance (COP) under a variety of operating conditions. A study is also conducted on a modified ejector cycle in which a secondary pump is introduced at the exit of the evaporator. Results show a significant improvement in overall performance over that of the conventional ejector cycle (without a secondary pump). Comparison between one- and two-dimensional analyses indicates that the two-dimensional ejector fluid flow analysis predicts better overall system performance. This is true for both the conventional and modified ejector cycles.

  2. Medicare and Medicaid Programs; CY 2018 Home Health Prospective Payment System Rate Update and CY 2019 Case-Mix Adjustment Methodology Refinements; Home Health Value-Based Purchasing Model; and Home Health Quality Reporting Requirements. Final rule.

    PubMed

    2017-11-07

    This final rule updates the home health prospective payment system (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2018. This rule also: Updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the third year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between calendar year (CY) 2012 and CY 2014; and discusses our efforts to monitor the potential impacts of the rebasing adjustments that were implemented in CY 2014 through CY 2017. In addition, this rule finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model and to the Home Health Quality Reporting Program (HH QRP). We are not finalizing the implementation of the Home Health Groupings Model (HHGM) in this final rule.

  3. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.

    PubMed

    Smith, Everett V; Ying, Yuping; Brown, Scott W

    2012-01-01

    In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model expected responses and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of three BAMS subscales, a two latent class solution was identified as fitting the mixed Rasch rating scale model the best. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed as well as the implications of this study.

  5. Ancestral haplotype-based association mapping with generalized linear mixed models accounting for stratification.

    PubMed

    Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T

    2012-10-01

    In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models, which properly model data under various distributions. In addition, we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500,000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software (contact: francois.guillaume@jouy.inra.fr). Supplementary data are available at Bioinformatics online.

  6. Hydrogeology, ground-water quality, and source of ground water causing water-quality changes in the Davis well field at Memphis, Tennessee

    USGS Publications Warehouse

    Parks, William S.; Mirecki, June E.; Kingsbury, James A.

    1995-01-01

    The NETPATH geochemical model code was used to mix waters from the alluvial aquifer with water from the Memphis aquifer using chloride as a conservative tracer. The resulting models indicated that a mixture of 3 percent alluvial aquifer water and 97 percent unaffected Memphis aquifer water would produce the chloride concentration measured in water from the Memphis aquifer well most affected by water-quality changes. NETPATH was also used to calculate mixing percentages of alluvial and Memphis aquifer waters based on changes in the concentrations of selected dissolved major inorganic and trace element constituents that define the dominant reactions occurring during mixing. These models indicated that a mixture of 18 percent alluvial aquifer water and 82 percent unaffected Memphis aquifer water would produce the major constituent and trace element concentrations measured in water from the Memphis aquifer well most affected by water-quality changes. However, these model simulations predicted higher dissolved methane concentrations than were measured in water samples from the Memphis aquifer wells.
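    The two-endmember mass balance behind such percentages follows directly from treating chloride as conservative. A sketch with hypothetical concentrations (the report's measured values are not reproduced here):

```python
def mixing_fraction(c_mix, c_end1, c_end2):
    """Fraction f of endmember 1 in a binary mixture, from a
    conservative tracer balance: c_mix = f*c_end1 + (1 - f)*c_end2."""
    return (c_mix - c_end2) / (c_end1 - c_end2)

# hypothetical chloride concentrations (mg/L): alluvial = 100, Memphis = 5
f_alluvial = mixing_fraction(7.85, 100.0, 5.0)  # ~0.03, i.e. 3 percent
```

    The reactive constituents are then handled separately (as NETPATH does) because, unlike chloride, their concentrations change through reactions during mixing.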

  7. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signatures of biotracers as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing of the CSSI signatures of the sources to the sediment without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of source FA concentration on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were combined to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated with and without concentration dependence using MixSIAR, a Bayesian isotopic mixing model. The concentration-dependent MixSIAR provided the closest estimates to the known source contributions of the artificial mixtures (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on the aggregated FA concentrations of the sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentration on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture.
    The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Therefore, inclusion of the FA concentrations of the sources in the IMM formulation should be standard procedure for accurate estimation of source contributions. The post-model correction approach that dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
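    The concentration dependence evaluated above enters the mixing equation by weighting each source's δ13C by both its mass fraction and its FA concentration. A minimal sketch with illustrative values (not the study's data):

```python
def mixture_delta(fractions, concentrations, deltas):
    """Concentration-weighted isotopic mixing: the δ value of a tracer
    in the mixture, given each source's mass fraction, tracer
    concentration, and δ value."""
    num = sum(f * c * d for f, c, d in zip(fractions, concentrations, deltas))
    den = sum(f * c for f, c in zip(fractions, concentrations))
    return num / den

# with equal tracer concentrations this reduces to linear mixing
d_equal = mixture_delta([0.5, 0.5], [1.0, 1.0], [-30.0, -20.0])
# a tracer-rich source pulls the mixture signature toward its own δ
d_weighted = mixture_delta([0.5, 0.5], [3.0, 1.0], [-30.0, -20.0])
```

    Ignoring the concentration weights and then correcting source proportions after the fact is the post-model correction the abstract describes as biased.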

  8. Oxygen diffusion model of the mixed (U,Pu)O2 ± x: Assessment and application

    NASA Astrophysics Data System (ADS)

    Moore, Emily; Guéneau, Christine; Crocombette, Jean-Paul

    2017-03-01

    The uranium-plutonium (U,Pu)O2 ± x mixed oxide (MOX) is used as a nuclear fuel in some light water reactors and is considered for future reactor generations. To gain insight into fuel restructuring, which occurs during the fuel lifetime as well as in possible accident scenarios, an understanding of the thermodynamic and kinetic behavior is crucial. A comprehensive evaluation of thermo-kinetic properties is incorporated in a computational CALPHAD-type model. The present DICTRA-based model describes oxygen diffusion across the whole range of plutonium, uranium, and oxygen compositions and temperatures by incorporating vacancy and interstitial migration pathways for oxygen. The self- and chemical-diffusion coefficients are assessed for the binary UO2 ± x and PuO2 - x systems, and the description is extended to the ternary mixed oxide (U,Pu)O2 ± x by extrapolation. A simulation to validate the applicability of this model is considered.

  9. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE PAGES

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...

    2017-02-06

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed.
    Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.

  11. Analysis of mixed traffic flow with human-driving and autonomous cars based on car-following model

    NASA Astrophysics Data System (ADS)

    Zhu, Wen-Xing; Zhang, H. M.

    2018-04-01

    We investigated mixed traffic flow with human-driven and autonomous cars. A new mathematical model with an adjustable sensitivity and a smooth factor was proposed to describe the autonomous car's driving behavior, in which the smooth factor balances the front and back headways in a flow. A lemma and a theorem were proved to support the stability criteria for the traffic flow. A series of simulations was carried out to analyze the mixed traffic flow. Fundamental diagrams were obtained from the numerical simulation results. Varying the sensitivity and smooth factor of the autonomous cars affects the traffic flux, which exhibits opposite trends with increasing parameter values before and after the critical density. Moreover, the sensor sensitivity and the smooth factor play an important role in stabilizing the mixed traffic flow and suppressing traffic jams.
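    The abstract does not give the model's equations, but the role of a smooth factor can be illustrated with a hypothetical optimal-velocity style rule in which a factor weights the front headway against the back headway, and a sensitivity scales the relaxation toward the optimal speed. All functional forms and parameter values here are assumptions for illustration only, not the paper's model.

```python
import math

def optimal_velocity(headway, v_max=2.0, h_c=4.0):
    """A common optimal-velocity function (Bando-type); the parameter
    values are illustrative, not the paper's calibration."""
    return 0.5 * v_max * (math.tanh(headway - h_c) + math.tanh(h_c))

def autonomous_accel(v, dx_front, dx_back, sensitivity, smooth):
    """Hypothetical acceleration rule: the smooth factor blends the
    front and back headways into an effective headway."""
    dx_eff = smooth * dx_front + (1.0 - smooth) * dx_back
    return sensitivity * (optimal_velocity(dx_eff) - v)

# smooth = 1 ignores the rear vehicle entirely
a_front_only = autonomous_accel(0.0, 4.0, 2.0, sensitivity=1.0, smooth=1.0)
a_balanced = autonomous_accel(0.0, 4.0, 2.0, sensitivity=1.0, smooth=0.5)
```

    In a rule of this shape, larger sensitivity speeds up relaxation to the optimal velocity, while the smooth factor shifts how strongly the rear gap influences the effective headway.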

  12. Neutrino masses and mixing from S4 flavor twisting

    NASA Astrophysics Data System (ADS)

    Ishimori, Hajime; Shimizu, Yusuke; Tanimoto, Morimitsu; Watanabe, Atsushi

    2011-02-01

    We discuss a neutrino mass model based on the S4 discrete symmetry where the symmetry breaking is triggered by the boundary conditions of the bulk right-handed neutrino in the fifth spatial dimension. The three generations of the left-handed lepton doublets and the right-handed neutrinos are assigned to the triplets of S4. The magnitudes of the lepton mixing angles, especially the reactor angle, are related to the neutrino mass patterns, and the model will be tested in future neutrino experiments; e.g., an early discovery of the reactor angle favors the normal hierarchy. For the inverted hierarchy, the lepton mixing is predicted to be almost tribimaximal. The size of the extra dimension has a connection to the possible mass spectrum: a small (large) volume corresponds to the normal (inverted) mass hierarchy.

  13. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592

  15. Numerical simulation of life cycles of advection warm fog

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Vaughan, O. H.

    1977-01-01

    The formation, development, and dissipation of advection warm fog are investigated. The equations employed in the model include the continuity, momentum, and energy equations for the descriptions of density, wind components, and potential temperature, respectively, together with two diffusion equations for the evolution of the water-vapor and liquid-water mixing ratios. The vertical turbulent transfer of heat, moisture, and momentum is taken into consideration. The turbulent exchange coefficients adopted in the model are based on empirical flux-gradient relations.

  16. Uniqueness of Nash equilibrium in vaccination games.

    PubMed

    Bai, Fan

    2016-12-01

    One crucial condition for the uniqueness of the Nash equilibrium set in vaccination games is that the attack ratio decreases monotonically as the vaccine coverage level increases. We consider several deterministic vaccination models in homogeneously mixing and heterogeneously mixing populations. Based on the final size relations obtained from the deterministic epidemic models, we prove that the attack ratios can be expressed in terms of the vaccine coverage levels and that they are decreasing functions of the vaccine coverage levels. Some thresholds are presented, which depend on the vaccine efficacy. It is proved that for vaccination games in a homogeneously mixing population, there is a unique Nash equilibrium for each game.
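    The monotone dependence of the attack ratio on coverage can be illustrated with the standard SIR final size relation in a homogeneously mixing population (a sketch under assumed parameter values and all-or-nothing vaccination; the paper's models and thresholds are more general):

```python
import math

def attack_ratio(r0, coverage, efficacy):
    """Solve the final size relation z = s0 * (1 - exp(-r0 * z)) by
    fixed-point iteration, where s0 = 1 - efficacy * coverage is the
    initially susceptible fraction under all-or-nothing vaccination."""
    s0 = 1.0 - efficacy * coverage
    z = s0  # start at the upper bound and iterate toward the fixed point
    for _ in range(100000):
        z_new = s0 * (1.0 - math.exp(-r0 * z))
        if abs(z_new - z) < 1e-12:
            break
        z = z_new
    return z

# attack ratio decreases monotonically as the coverage level increases
ratios = [attack_ratio(2.5, p, efficacy=0.9) for p in (0.0, 0.2, 0.4, 0.6)]
```

    This monotonic decrease is exactly the condition the abstract identifies as crucial for the uniqueness of the Nash equilibrium set.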

  17. Enhanced model-based design of a high-throughput three dimensional micromixer driven by alternating-current electrothermal flow.

    PubMed

    Wu, Yupan; Ren, Yukun; Jiang, Hongyuan

    2017-01-01

    We propose a 3D microfluidic mixer based on alternating-current electrothermal (ACET) flow. The ACET vortex is produced by 3D electrodes embedded in the sidewall of the microchannel and is used to stir the fluidic sample throughout the entire channel depth. An optimized geometry of the proposed 3D micromixer is obtained from an enhanced theoretical model of ACET flow and natural convection. We quantitatively analyze the ACET-driven flow field, and the electrothermal microvortex pattern is visualized by micro-particle image velocimetry. The mixing experiment is then conducted using dye solutions of varying conductivity. Mixing efficiency can exceed 90% for electrolytes of 0.2 S/m (1 S/m) at a flow rate of 0.364 μL/min (0.728 μL/min) and an imposed peak-to-peak voltage of 52.5 V (35 V). A critical analysis of our micromixer in comparison with different mixer designs, using a comparative mixing index, is also performed. The ACET micromixer with sidewall-embedded 3D electrodes achieves highly effective mixing and high throughput under continuous-flow conditions. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Multi-objective shape optimization of plate structure under stress criteria based on sub-structured mixed FEM and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis

    2015-07-01

    This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) method based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibration analysis. Such a model gives privileged access to the stress within the plate structure compared to a primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also applied in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. Combined, these methods enable a fast and stress-wise efficient structural analysis, and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum von Mises stress within a plate structure under a dynamic load demonstrate the relevance of our method, with promising results: it satisfies multiple damage criteria with different thickness distributions while using a smaller FEM.
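
A Pareto-based GA of the kind mentioned here keeps the non-dominated designs between generations. As a hedged illustration (not the authors' implementation), the Pareto front for the two stated objectives, mass and maximum von Mises stress, can be extracted like this:

```python
# Hypothetical helper, not the authors' code: keep the non-dominated
# (Pareto-optimal) designs for two objectives that are both minimized,
# here (mass, max_von_mises_stress) pairs.
def pareto_front(designs):
    front = []
    for d in designs:
        dominated = any(o != d and o[0] <= d[0] and o[1] <= d[1]
                        for o in designs)
        if not dominated:
            front.append(d)
    return front

designs = [(1.0, 9.0), (2.0, 5.0), (3.0, 4.0), (2.5, 6.0), (4.0, 4.5)]
print(pareto_front(designs))  # drops (2.5, 6.0) and (4.0, 4.5)
```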

  19. Neutrino CP violation and sign of baryon asymmetry in the minimal seesaw model

    NASA Astrophysics Data System (ADS)

    Shimizu, Yusuke; Takagi, Kenta; Tanimoto, Morimitsu

    2018-03-01

    We discuss the correlation between the CP violating Dirac phase of the lepton mixing matrix and the cosmological baryon asymmetry based on the leptogenesis in the minimal seesaw model with two right-handed Majorana neutrinos and the trimaximal mixing for neutrino flavors. The sign of the CP violating Dirac phase at low energy is fixed by the observed cosmological baryon asymmetry since there is only one phase parameter in the model. According to the recent T2K and NOνA data of the CP violation, the Dirac neutrino mass matrix of our model is fixed only for the normal hierarchy of neutrino masses.

  20. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  1. An approach for accurate simulation of liquid mixing in a T-shaped micromixer.

    PubMed

    Matsunaga, Takuya; Lee, Ho-Joon; Nishino, Koichi

    2013-04-21

    In this paper, we propose a new computational method for efficient evaluation of the fluid mixing behaviour in a T-shaped micromixer with a rectangular cross section at high Schmidt number under steady state conditions. Our approach enables a low-cost high-quality simulation based on tracking of fluid particles for convective fluid mixing and posterior solving of a model of the species equation for molecular diffusion. The examined parameter range is Re = 1.33 × 10⁻² to 240 at Sc = 3600. The proposed method is shown to simulate well the mixing quality even in the engulfment regime, where the ordinary grid-based simulation is not able to obtain accurate solutions with affordable mesh sizes due to the numerical diffusion at high Sc. The obtained results agree well with a backward random-walk Monte Carlo simulation, by which the accuracy of the proposed method is verified. For further investigation of the characteristics of the proposed method, the Sc dependency is examined in a wide range of Sc from 10 to 3600 at Re = 200. The study reveals that the model discrepancy error emerges more significantly in the concentration distribution at lower Sc, while the resulting mixing quality is accurate over the entire range.

  2. Computational Analyses of Pressurization in Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry

    2010-01-01

    A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally, CFD modeling of pressurization and mixing in cryogenic tanks has been difficult, primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant, including heat transfer and phase change effects, and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first is based on the HBMS model developed by Hirschfelder, Buehler, McGee and Sutton, and (b) the second is based on a cubic equation of state developed by Soave, Redlich and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as the supercritical states. Both models were rigorously tested, and properties for common fluids such as oxygen, nitrogen and hydrogen were compared against NIST data in both the sub-critical and supercritical regimes.

  3. Comparing Bayesian stable isotope mixing models: Which tools are best for sediments?

    NASA Astrophysics Data System (ADS)

    Morris, David; Macko, Stephen

    2016-04-01

    Bayesian stable isotope mixing models have received much attention as a means of coping with multiple sources and uncertainty in isotope ecology (e.g. Phillips et al., 2014), enabling the probabilistic determination of the contributions made by each food source to the total diet of the organism in question. We have applied these techniques to marine sediments for the first time. The sediments of the Chukchi Sea and Beaufort Sea offer an opportunity to utilize these models for organic geochemistry, as there are three likely sources of organic carbon; pelagic phytoplankton, sea ice algae and terrestrial material from rivers and coastal erosion, as well as considerable variation in the marine δ13C values. Bayesian mixing models using bulk δ13C and δ15N data from Shelf Basin Interaction samples allow for the probabilistic determination of the contributions made by each of the sources to the organic carbon budget, and can be compared with existing source contribution estimates based upon biomarker models (e.g. Belicka & Harvey, 2009, Faux, Belicka, & Rodger Harvey, 2011). The δ13C of this preserved material varied from -22.1 to -16.7‰ (mean -19.4±1.3‰), while δ15N varied from 4.1 to 7.6‰ (mean 5.7±1.1‰). Using the SIAR model, we found that water column productivity was the source of between 50 and 70% of the organic carbon buried in this portion of the western Arctic with the remainder mainly supplied by sea ice algal productivity (25-35%) and terrestrial inputs (15%). With many mixing models now available, this study will compare SIAR with MixSIAR and the new FRUITS model. Monte Carlo modeling of the mixing polygon will be used to validate the models, and hierarchical models will be utilised to glean more information from the data set.
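
Underneath Bayesian tools such as SIAR, MixSIAR, and FRUITS sits a linear mass balance that the priors and error terms wrap. A deterministic sketch with three sources and two tracers (δ13C, δ15N); the source signatures below are invented round numbers, not the Shelf Basin Interaction data:

```python
# Deterministic core of a three-source, two-tracer mixing model; Bayesian
# tools such as SIAR/MixSIAR add priors and error terms around this balance.
# Source signatures are invented (d13C, d15N) pairs, not the paper's data.
def three_source_mix(signatures, mixture):
    """Solve f1*s1 + f2*s2 + f3*s3 = mixture with f1 + f2 + f3 = 1
    by Cramer's rule on the resulting 3x3 linear system."""
    names = list(signatures)
    a = [[signatures[n][0] for n in names],
         [signatures[n][1] for n in names],
         [1.0, 1.0, 1.0]]
    b = [mixture[0], mixture[1], 1.0]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(a)
    fractions = []
    for col in range(3):
        m = [row[:] for row in a]
        for row in range(3):
            m[row][col] = b[row]
        fractions.append(det3(m) / d)
    return dict(zip(names, fractions))

sig = {"pelagic": (-21.0, 8.0), "ice_algae": (-18.0, 6.0),
       "terrestrial": (-27.0, 2.0)}
# The mixture below was constructed from fractions 0.60 / 0.25 / 0.15,
# so the solver recovers exactly those contributions.
print({k: round(v, 2) for k, v in three_source_mix(sig, (-21.15, 6.6)).items()})
```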

  4. Determining the impact of cell mixing on signaling during development.

    PubMed

    Uriu, Koichiro; Morelli, Luis G

    2017-06-01

    Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local, such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors, and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling, drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference-frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frame. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.
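
One reference-frame-independent observable of the kind advocated here can be built from inter-cell distances, which are unchanged by global translations and rotations of a moving, deforming embryo. A hypothetical minimal example (synthetic 2D positions, not the authors' statistic):

```python
# Hypothetical observable, not the authors' statistic: changes in labeled
# inter-cell distances are invariant under rigid translations and rotations
# of the whole tissue, so no lab frame needs to be chosen.
def pairwise_distances(positions):
    n = len(positions)
    return [((positions[i][0] - positions[j][0]) ** 2 +
             (positions[i][1] - positions[j][1]) ** 2) ** 0.5
            for i in range(n) for j in range(i + 1, n)]

def mixing_observable(frame0, frame1):
    """Mean squared change of all labeled pairwise distances."""
    d0, d1 = pairwise_distances(frame0), pairwise_distances(frame1)
    return sum((a - b) ** 2 for a, b in zip(d0, d1)) / len(d0)

cells_t0 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
# Rigid translation of the whole embryo: no neighbor exchange, observable 0.
translated = [(x + 5.0, y - 2.0) for x, y in cells_t0]
# Two cells exchange positions: genuine rearrangement, observable > 0.
swapped = [(1.0, 0.0), (0.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(mixing_observable(cells_t0, translated))      # 0.0
print(mixing_observable(cells_t0, swapped) > 0.0)   # True
```

Tracking when this quantity saturates against the mean cell spacing gives one way to extract a mixing timescale for comparison with the signaling timescale.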

  5. Quantitative assessment of the flow pattern in the southern Arava Valley (Israel) by environmental tracers and a mixing cell model

    NASA Astrophysics Data System (ADS)

    Adar, E. M.; Rosenthal, E.; Issar, A. S.; Batelaan, O.

    1992-08-01

    This paper demonstrates the implementation of a novel mathematical model to quantify subsurface inflows from various sources into the arid alluvial basin of the southern Arava Valley, divided between Israel and Jordan. The model is based on the spatial distribution of environmental tracers and is intended for basins with a complex hydrogeological structure and/or scarce physical hydrologic information. However, a sufficient number of wells and springs is required to allow water sampling for chemical and isotopic analyses. Environmental tracers are used in a multivariable cluster analysis to define potential sources of recharge, and also to delimit homogeneous mixing compartments within the modeled aquifer. Six mixing cells were identified based on 13 constituents. A quantitative assessment of 11 significant subsurface inflows was obtained. Results revealed that the total recharge into the southern Arava basin is around 12.52 × 10⁶ m³ year⁻¹. The major source of inflow into the alluvial aquifer is the Nubian sandstone aquifer, which supplies 65-75% of the total recharge. Only 19-24% of the recharge, but the most important source of fresh water, originates over the eastern Jordanian mountains and alluvial fans.

  6. Mixed Convective Peristaltic Flow of Water Based Nanofluids with Joule Heating and Convective Boundary Conditions

    PubMed Central

    Hayat, Tasawar; Nawaz, Sadaf; Alsaedi, Ahmed; Rafiq, Maimona

    2016-01-01

    The main objective of the present study is to analyze the mixed convective peristaltic transport of water-based nanofluids using five different nanoparticles (Al2O3, CuO, Cu, Ag and TiO2). Two thermal conductivity models, namely Maxwell's and Hamilton-Crosser's, are used in this study. Hall and Joule heating effects are also given consideration. Convective boundary conditions are employed. Furthermore, viscous dissipation and heat generation/absorption are used to model the energy equation. The problem is simplified by employing the lubrication approach. The system of equations is solved numerically. The influence of pertinent parameters on the velocity and temperature is discussed. The heat transfer rate at the wall is also examined graphically for the five nanofluids considered using the two models. PMID:27104596

  7. Mixed Transportation Network Design under a Sustainable Development Perspective

    PubMed Central

    Qin, Jin; Ni, Ling-lin; Shi, Feng

    2013-01-01

    A mixed transportation network design problem considering sustainable development was studied in this paper. Based on the discretization of continuous link-grade decision variables, a bilevel programming model was proposed to describe the problem, in which sustainability factors, including vehicle exhaust emissions, land-use scale, link load, and financial budget, are considered. The objective of the model is to minimize the total amount of resources exploited under the premise of meeting all the construction goals. A heuristic algorithm, which combined the simulated annealing and path-based gradient projection algorithm, was developed to solve the model. The numerical example shows that the transportation network optimized with the method above not only significantly alleviates the congestion on the link, but also reduces vehicle exhaust emissions within the network by up to 41.56%. PMID:23476142

  8. Mixed transportation network design under a sustainable development perspective.

    PubMed

    Qin, Jin; Ni, Ling-lin; Shi, Feng

    2013-01-01

    A mixed transportation network design problem considering sustainable development was studied in this paper. Based on the discretization of continuous link-grade decision variables, a bilevel programming model was proposed to describe the problem, in which sustainability factors, including vehicle exhaust emissions, land-use scale, link load, and financial budget, are considered. The objective of the model is to minimize the total amount of resources exploited under the premise of meeting all the construction goals. A heuristic algorithm, which combined the simulated annealing and path-based gradient projection algorithm, was developed to solve the model. The numerical example shows that the transportation network optimized with the method above not only significantly alleviates the congestion on the link, but also reduces vehicle exhaust emissions within the network by up to 41.56%.

  9. Using nitrate to quantify quick flow in a karst aquifer

    USGS Publications Warehouse

    Mahler, B.J.; Garner, B.D.

    2009-01-01

    In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. © 2008 National Ground Water Association.
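
The two-endmember mixing model reduces to a single line of algebra: with conservative tracer concentrations C_aq (aquifer endmember), C_q (quick-flow endmember), and C_s (spring discharge), the quick-flow fraction is f_q = (C_aq - C_s) / (C_aq - C_q). A sketch using the nonstormflow nitrate endmembers quoted above (1.5 and 0.17 mg/L):

```python
# f_q = (C_aq - C_s) / (C_aq - C_q): fraction of spring flow that is quick
# flow, given a conservative tracer. Defaults use the nonstormflow nitrate
# endmembers from the abstract (mg/L).
def quick_flow_fraction(c_spring, c_aquifer=1.5, c_quick=0.17):
    return (c_aquifer - c_spring) / (c_aquifer - c_quick)

print(round(quick_flow_fraction(1.5), 2))   # spring is pure aquifer water -> 0.0
print(round(quick_flow_fraction(0.17), 2))  # spring is pure quick flow -> 1.0
print(round(quick_flow_fraction(0.9), 2))   # intermediate spring nitrate level
```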

  10. A new mixed subgrid-scale model for large eddy simulation of turbulent drag-reducing flows of viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua

    2015-07-01

    A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of the turbulent flow of a viscoelastic fluid. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and of turbulent channel flow with surfactant additives based on the MCT SGS model shows excellent agreement with direct numerical simulation (DNS) results. Compared with the LES results using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, the mixed MCT SGS model behaves better as calculation parameters such as the Reynolds number are increased. For scientific and engineering research, turbulent flows at high Reynolds numbers are expected, so the MCT model can be a more suitable model for the LES of turbulent drag-reducing flows of viscoelastic fluids with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).

  11. Model for toroidal velocity in H-mode plasmas in the presence of internal transport barriers

    NASA Astrophysics Data System (ADS)

    Chatthong, B.; Onjun, T.; Singhsomroje, W.

    2010-06-01

    A model for predicting toroidal velocity in H-mode plasmas in the presence of internal transport barriers (ITBs) is developed using an empirical approach. In this model, it is assumed that the toroidal velocity is directly proportional to the local ion temperature. This model is implemented in the BALDUR integrated predictive modelling code so that simulations of ITB plasmas can be carried out self-consistently. In these simulations, a combination of a semi-empirical mixed Bohm/gyro-Bohm (mixed B/gB) core transport model that includes ITB effects and NCLASS neoclassical transport is used to compute the core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model based on a combination of magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The combination of the mixed B/gB core transport model with ITB effects, together with the pedestal and the toroidal velocity models, is used to simulate the time evolution of plasma current, temperature and density profiles of 10 JET optimized shear discharges. It is found that the simulations can reproduce an ITB formation in these discharges. Statistical analyses including the root mean square error (RMSE) and offset are used to quantify the agreement. It is found that the averaged RMSE and offset among these discharges are about 24.59% and -0.14%, respectively.
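
The RMSE and offset statistics quoted at the end can be computed in a few lines. Exact definitions vary between papers, so the forms below (percent RMSE and mean offset of the relative deviation between simulated and measured profiles) are an assumption, not necessarily the ones used for the JET discharges:

```python
# Assumed forms (not defined in the record): percent RMSE and mean offset
# of the relative deviation between simulated and experimental profiles.
def rmse_percent(sim, exp):
    n = len(sim)
    return 100.0 * (sum(((s - e) / e) ** 2 for s, e in zip(sim, exp)) / n) ** 0.5

def offset_percent(sim, exp):
    n = len(sim)
    return 100.0 * sum((s - e) / e for s, e in zip(sim, exp)) / n

sim, exp = [1.1, 0.9], [1.0, 1.0]
print(round(rmse_percent(sim, exp), 3))    # 10.0: symmetric +/-10% errors
print(round(offset_percent(sim, exp), 3))  # 0.0: over- and under-prediction cancel
```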

  12. The Development and Evaluation of Speaking Learning Model by Cooperative Approach

    ERIC Educational Resources Information Center

    Darmuki, Agus; Andayani; Nurkamto, Joko; Saddhono, Kundharu

    2018-01-01

    A cooperative approach-based Speaking Learning Model (SLM) has been developed to improve speaking skill of Higher Education students. This research aimed at evaluating the effectiveness of cooperative-based SLM viewed from the development of student's speaking ability and its effectiveness on speaking activity. This mixed method study combined…

  13. Dig into Learning: A Program Evaluation of an Agricultural Literacy Innovation

    ERIC Educational Resources Information Center

    Edwards, Erica Brown

    2016-01-01

    This study is a mixed-methods program evaluation of an agricultural literacy innovation in a local school district in rural eastern North Carolina. This evaluation describes the use of a theory-based framework, the Concerns-Based Adoption Model (CBAM), in accordance with Stufflebeam's Context, Input, Process, Product (CIPP) model by evaluating the…

  14. Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model

    ERIC Educational Resources Information Center

    Duman, Serap Nur; Akbas, Oktay

    2017-01-01

    This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…

  15. Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model

    PubMed Central

    Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge

    2017-01-01

    Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027

  16. An analysis of the adoption of managerial innovation: cost accounting systems in hospitals.

    PubMed

    Glandon, G L; Counte, M A

    1995-11-01

    The adoption of new medical technologies has received significant attention in the hospital industry, in part, because of its observed relation to hospital cost increases. However, few comprehensive studies exist regarding the adoption of non-medical technologies in the hospital setting. This paper develops and tests a model of the adoption of a managerial innovation, new to the hospital industry, that of cost accounting systems based upon standard costs. The conceptual model hypothesizes that four organizational context factors (size, complexity, ownership and slack resources) and two environmental factors (payor mix and interorganizational dependency) influence hospital adoption of cost accounting systems. Based on responses to a mail survey of hospitals in the Chicago area and AHA annual survey information for 1986, a sample of 92 hospitals was analyzed. Greater hospital size, complexity, slack resources, and interorganizational dependency all were associated with adoption. Payor mix had no significant influence and the hospital ownership variables had a mixed influence. The logistic regression model was significant overall and explained over 15% of the variance in the adoption decision.

  17. A Comparative Analysis of Reynolds-Averaged Navier-Stokes Model Predictions for Rayleigh-Taylor Instability and Mixing with Constant and Complex Accelerations

    NASA Astrophysics Data System (ADS)

    Schilling, Oleg

    2016-11-01

    Two-, three- and four-equation, single-velocity, multicomponent Reynolds-averaged Navier-Stokes (RANS) models, based on the turbulent kinetic energy dissipation rate or lengthscale, are used to simulate At = 0.5 Rayleigh-Taylor turbulent mixing with constant and complex accelerations. The constant acceleration case is inspired by the Cabot and Cook (2006) DNS, and the complex acceleration cases are inspired by the unstable/stable and unstable/neutral cases simulated using DNS (Livescu, Wei & Petersen 2011) and the unstable/stable/unstable case simulated using ILES (Ramaprabhu, Karkhanis & Lawrie 2013). The four-equation models couple equations for the mass flux a and negative density-specific volume correlation b to the K-ɛ or K-L equations, while the three-equation models use a two-fluid algebraic closure for b. The lengthscale-based models are also applied with no buoyancy production in the L equation to explore the consequences of neglecting this term. Predicted mixing widths, turbulence statistics, fields, and turbulent transport equation budgets are compared among these models to identify similarities and differences in the turbulence production, dissipation and diffusion physics represented by the closures used in these models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  18. Effect of exercise on patient specific abdominal aortic aneurysm flow topology and mixing.

    PubMed

    Arzani, Amirhossein; Les, Andrea S; Dalman, Ronald L; Shadden, Shawn C

    2014-02-01

    Computational fluid dynamics modeling was used to investigate changes in blood transport topology between rest and exercise conditions in five patient-specific abdominal aortic aneurysm models. MRI was used to provide the vascular anatomy and necessary boundary conditions for simulating blood velocity and pressure fields inside each model. Finite-time Lyapunov exponent fields and associated Lagrangian coherent structures were computed from blood velocity data and were used to compare features of the transport topology between rest and exercise both mechanistically and qualitatively. Mix-norm and mix-variance measures based on the distribution of fresh blood throughout the aneurysm over time were implemented to quantitatively compare mixing between rest and exercise. Exercise conditions resulted in higher and more uniform mixing and reduced the overall residence time in all aneurysms. Separated regions of recirculating flow were commonly observed at rest; during exercise these regions were either reduced or removed by attached, unidirectional flow, replaced with regional chaotic and transiently turbulent mixing, or persisted and even extended. The main factor that dictated the change in flow topology from rest to exercise was the behavior of the jet of blood penetrating into the aneurysm during systole. Copyright © 2013 John Wiley & Sons, Ltd.

  19. An ocean large-eddy simulation of Langmuir circulations and convection in the surface mixed layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skyllingstad, E.D.; Denbo, D.W.

    Numerical experiments were performed using a three-dimensional large-eddy simulation model of the ocean surface mixed layer that includes the Craik-Leibovich vortex force to parameterize the interaction of surface waves with mean currents. Results from the experiments show that the vortex force generates Langmuir circulations that can dominate vertical mixing. The simulated vertical velocity fields show linear, small-scale, coherent structures near the surface that extend downwind across the model domain. In the interior of the mixed layer, scales of motion increase to eddy sizes that are roughly equivalent to the mixed-layer depth. Cases with the vortex force have stronger circulations near the surface in contrast to cases with only heat flux and wind stress, particularly when the heat flux is positive. Calculations of the velocity variance and turbulence dissipation rates for cases with and without the vortex force, surface cooling, and wind stress indicate that wave-current interactions are a dominant mixing process in the upper mixed layer. Heat flux calculations show that the entrainment rate at the mixed-layer base can be up to two times greater when the vortex force is included. In a case with reduced wind stress, turbulence dissipation rates remained high near the surface because of the vortex force interaction with preexisting inertial currents. In deep mixed layers (~250 m) the simulations show that Langmuir circulations can vertically transport water 145 m during conditions of surface heating. Observations of turbulence dissipation rates and the vertical temperature structure support the model results. 42 refs., 20 figs., 21 tabs.

  20. The effect of different methods to compute N on estimates of mixing in stratified flows

    NASA Astrophysics Data System (ADS)

    Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey

    2017-11-01

    The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing, which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates that depends on the method by which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and on the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
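
The two ways of computing N can be contrasted directly. A hedged sketch with synthetic density values: sorting the profile into a statically stable arrangement (as a 3D resort does) removes the negative N² that a local, unsorted profile reports inside an overturn:

```python
# Synthetic illustration: N^2 = -(g / rho0) * d(rho_background)/dz computed
# with centered differences on a profile ordered bottom-up, so a stable
# stratification has density decreasing with height.
G, RHO0 = 9.81, 1000.0

def n_squared(rho, dz):
    return [-(G / RHO0) * (rho[k + 1] - rho[k - 1]) / (2.0 * dz)
            for k in range(1, len(rho) - 1)]

# Locally overturned profile: heavy fluid sits above light fluid mid-column...
local_profile = [1026.0, 1027.0, 1026.5, 1025.0]
# ...versus its statically stable resorting, as a 3D resort would produce.
resorted = sorted(local_profile, reverse=True)
print(n_squared(local_profile, 1.0))  # first interior value is negative
print(n_squared(resorted, 1.0))       # all interior values positive
```

Any mixing estimate built on N² (Rf, Gi) then inherits this choice of background.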

  1. Numerical Study of Mixing Thermal Conductivity Models for Nanofluid Heat Transfer Enhancement

    NASA Astrophysics Data System (ADS)

    Pramuanjaroenkij, A.; Tongkratoke, A.; Kakaç, S.

    2018-01-01

    Researchers have paid attention to nanofluid applications, since nanofluids have revealed their potential as working fluids in many thermal systems. Numerical studies of convective heat transfer in nanofluids can be based on considering them as single- or two-phase fluids. This work is focused on improving the performance of the single-phase nanofluid model, since this model requires less calculation time and is less complicated, owing to the mixing thermal conductivity model, which combines static and dynamic parts used in the simulation domain alternately. The in-house numerical program has been developed to analyze the effects of the grid nodes, effective viscosity model, boundary-layer thickness, and mixing thermal conductivity model on the nanofluid heat transfer enhancement. CuO-water, Al2O3-water, and Cu-water nanofluids are chosen, and their laminar, fully developed flows through a rectangular channel are considered. The influence of the effective viscosity model on the nanofluid heat transfer enhancement is estimated through the average differences between the numerical and experimental results for the nanofluids mentioned. The nanofluid heat transfer enhancement results show that the mixing thermal conductivity model consisting of the Maxwell model as the static part and the Yu and Choi model as the dynamic part, applied to all three nanofluids, brings the numerical results closer to the experimental ones. The average differences between those results for CuO-water, Al2O3-water, and Cu-water nanofluid flows are 3.25, 2.74, and 3.02%, respectively. The mixing thermal conductivity model has thus been shown to increase the accuracy of the single-phase nanofluid simulation and to reveal its potential in single-phase nanofluid numerical studies.
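
The static part of the mixing model named here is the classical Maxwell effective-conductivity formula for a dilute suspension. A sketch of that part alone (the dynamic Yu-Choi contribution is omitted, and the property values below are illustrative, not the paper's):

```python
# Maxwell (Maxwell-Garnett) effective thermal conductivity of a dilute
# suspension: k_f is the base-fluid and k_p the particle conductivity
# (W/(m*K)), phi the particle volume fraction.
def maxwell_keff(k_f, k_p, phi):
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_water, k_cu = 0.613, 400.0  # illustrative conductivities for water and Cu
print(round(maxwell_keff(k_water, k_cu, 0.01), 4))  # roughly 3% above pure water
```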

  2. Molecular Dynamics Evaluation of Dielectric-Constant Mixing Rules for H2O-CO2 at Geologic Conditions

    PubMed Central

    Mountain, Raymond D.; Harvey, Allan H.

    2015-01-01

    Modeling of mineral reaction equilibria and aqueous-phase speciation of C-O-H fluids requires the dielectric constant of the fluid mixture, which is not known from experiment and is typically estimated by some rule for mixing pure-component values. In order to evaluate different proposed mixing rules, we use molecular dynamics simulation to calculate the dielectric constant of a model H2O–CO2 mixture at temperatures of 700 K and 1000 K at pressures up to 3 GPa. We find that theoretically based mixing rules that depend on combining the molar polarizations of the pure fluids systematically overestimate the dielectric constant of the mixture, as would be expected for mixtures of nonpolar and strongly polar components. The commonly used semiempirical mixing rule due to Looyenga works well for this system at the lower pressures studied, but somewhat underestimates the dielectric constant at higher pressures and densities, especially at the water-rich end of the composition range. PMID:26664009
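The Looyenga rule mentioned above has a compact closed form; a minimal sketch, assuming volume fractions as the weights (which is how the rule is usually stated):

```python
def looyenga_eps(volume_fractions, eps_components):
    """Looyenga semiempirical mixing rule: the cube root of the mixture
    dielectric constant is the volume-fraction-weighted mean of the
    component cube roots."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    root = sum(f * eps ** (1.0 / 3.0)
               for f, eps in zip(volume_fractions, eps_components))
    return root ** 3
```

For a water-rich H2O-CO2 mixture this interpolates smoothly between the strongly polar and nonpolar end members, which is where the abstract reports the rule starts to underestimate at high pressure.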

  3. Molecular Dynamics Evaluation of Dielectric-Constant Mixing Rules for H2O-CO2 at Geologic Conditions.

    PubMed

    Mountain, Raymond D; Harvey, Allan H

    2015-10-01

    Modeling of mineral reaction equilibria and aqueous-phase speciation of C-O-H fluids requires the dielectric constant of the fluid mixture, which is not known from experiment and is typically estimated by some rule for mixing pure-component values. In order to evaluate different proposed mixing rules, we use molecular dynamics simulation to calculate the dielectric constant of a model H2O-CO2 mixture at temperatures of 700 K and 1000 K at pressures up to 3 GPa. We find that theoretically based mixing rules that depend on combining the molar polarizations of the pure fluids systematically overestimate the dielectric constant of the mixture, as would be expected for mixtures of nonpolar and strongly polar components. The commonly used semiempirical mixing rule due to Looyenga works well for this system at the lower pressures studied, but somewhat underestimates the dielectric constant at higher pressures and densities, especially at the water-rich end of the composition range.

  4. Improvement on vibration measurement performance of laser self-mixing interference by using a pre-feedback mirror

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Chen, Qianghua; Wang, Yanghong; Luo, Huifu; Wu, Huan; Ma, Binwu

    2018-06-01

    In a laser self-mixing interference vibration measurement system, the self-mixing interference signal is usually so weak that it can hardly be distinguished from environmental noise. To solve this problem, we present a self-mixing interference optical path with a pre-feedback mirror: a pre-feedback mirror is added between the object and the collimator lens, so that the corresponding feedback light enters the inner cavity of the laser and interference induced by the pre-feedback mirror occurs. The pre-feedback system is thereby established. The self-mixing interference theoretical model with pre-feedback, based on the F-P model, is derived. The theoretical analysis shows that the amplitude of the intensity of the interference signal can be improved by 2-4 times. The influencing factors of the system are also discussed. The experimental results show that the amplitude of the signal is greatly improved, which agrees with the theoretical analysis.

  5. Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators.

    PubMed

    Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian

    2015-06-01

    We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, which is a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM) in which finite dimensional (fixed effects and variance components) and infinite dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can make the assumption that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In this latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application on real data are performed. © 2015, The International Biometric Society.
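In the simplest (orthogonal-design) case, the lasso step inside a penalized M-step reduces to coordinate-wise soft-thresholding; a minimal generic sketch, not the authors' algorithm:

```python
def soft_threshold(z, lam):
    """Lasso soft-thresholding operator:
    S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0


def lasso_update(coefficients, lam):
    """Apply the operator coordinate-wise, shrinking small coefficients
    of the nonparametric component exactly to zero."""
    return [soft_threshold(c, lam) for c in coefficients]
```

The exact-zero behavior is what gives the estimated unknown function a sparse basis representation inside the EM iterations.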

  6. Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes

    PubMed Central

    Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.

    2018-01-01

    Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass—namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-Acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetyl kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production. PMID:29381705
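A toy illustration of the knockout idea, using a hypothetical three-reaction branch network with made-up capacities (this is not the paper's stoichiometric model or flux balance analysis): flux that cannot leave through by-product branches is routed to succinate, so deleting a by-product reaction raises succinate yield.

```python
# Hypothetical capacities (flux units) for by-product branches at the
# branch point; PTA and ACK act in series on the acetate route.
CAPACITY = {"PTA": 6.0, "ACK": 6.0, "ALT": 2.0}
UPTAKE = 10.0  # total carbon flux entering the branch point


def succinate_flux(knockouts=()):
    """Flux not captured by by-product branches is routed to succinate.
    The acetate route requires PTA then ACK in series, so it is limited
    by the smaller of the two capacities; ALT lumps other by-products."""
    cap = dict(CAPACITY)
    for reaction in knockouts:
        cap[reaction] = 0.0
    acetate = min(cap["PTA"], cap["ACK"])
    byproduct = min(UPTAKE, acetate + cap["ALT"])
    return UPTAKE - byproduct


def best_single_knockout():
    """Exhaustively score single knockouts, a toy stand-in for the
    paper's FBA-based knockout screen."""
    return max(CAPACITY, key=lambda r: succinate_flux((r,)))
```

Because PTA and ACK sit in series, knocking out either one alone shuts the whole acetate route, mirroring the abstract's finding that either single knockout is equally effective.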

  7. Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes.

    PubMed

    Nag, Ambarish; St John, Peter C; Crowley, Michael F; Bomble, Yannick J

    2018-01-01

    Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass-namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-Acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetyl kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production.

  8. Case mix-adjusted cost of colectomy at low-, middle-, and high-volume academic centers.

    PubMed

    Chang, Alex L; Kim, Young; Ertel, Audrey E; Hoehn, Richard S; Wima, Koffi; Abbott, Daniel E; Shah, Shimul A

    2017-05-01

    Efforts to regionalize surgery based on thresholds in procedure volume may have consequences for the cost of health care delivery. This study aims to delineate the relationship between hospital volume, case mix, and variability in the cost of operative intervention using colectomy as the model. All patients undergoing colectomy (n = 90,583) at 183 academic hospitals from 2009-2012 in The University HealthSystems Consortium Database were studied. Patient and procedure details were used to generate a case mix-adjusted predictive model of total direct costs. Observed-to-expected costs were then compared across centers grouped by overall procedure volume. Patient and procedure characteristics were significantly different between volume tertiles. Observed costs at high-volume centers were less than at middle- and low-volume centers. According to our predictive model, high-volume centers cared for a less expensive case mix than middle- and low-volume centers ($12,786 vs $13,236 and $14,497, P < .01). Our predictive model accounted for 44% of the variation in costs. Overall efficiency (standardized observed to expected costs) was greatest at high-volume centers compared to middle- and low-volume tertiles (z score -0.16 vs 0.02 and -0.07, P < .01). Hospital costs and cost efficiency after an elective colectomy vary significantly between centers and may be attributed partially to the patient differences at those centers. These data demonstrate that a significant proportion of the cost variation is due to a distinct case mix at low-volume centers, which may lead to perceived poor performance at these centers. Copyright © 2016 Elsevier Inc. All rights reserved.
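The observed-to-expected comparison described above can be sketched as follows; this is a generic O/E computation, while the paper's actual expected costs come from a case mix-adjusted regression not reproduced here:

```python
def oe_ratio(observed_costs, expected_costs):
    """Observed-to-expected cost ratio for one center; values below 1
    mean the center is cheaper than its case mix predicts."""
    return sum(observed_costs) / sum(expected_costs)


def standardize(ratios):
    """Z-scores of the O/E ratios across centers, analogous to the
    abstract's 'standardized observed to expected costs'."""
    n = len(ratios)
    mean = sum(ratios) / n
    var = sum((r - mean) ** 2 for r in ratios) / (n - 1)
    sd = var ** 0.5
    return [(r - mean) / sd for r in ratios]
```

Comparing the z-scores across volume tertiles is then a direct comparison of case mix-adjusted efficiency.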

  9. Case mix management education in a Canadian hospital.

    PubMed

    Moffat, M; Prociw, M

    1992-01-01

    The Sunnybrook Health Science Centre's matrix organization model includes a traditional departmental structure, a strategic program-based structure and a case management-based structure--the Clinical Unit structure. The Clinical Unit structure allows the centre to give responsibility for the management of case mix and volume to decentralized Clinical Unit teams, each of which manages its own budget. To train physicians and nurses in their respective roles of Medical Unit directors and Nursing Unit directors, Sunnybrook designed unique short courses on financial management and budgeting, and case-costing and case mix management. This paper discusses how these courses were organized, details their contents and explains how they fit into Sunnybrook's program of decentralized management.

  10. Statistical classification of drug incidents due to look-alike sound-alike mix-ups.

    PubMed

    Wong, Zoie Shui Yee

    2016-06-01

    It has been recognised that medication names that look or sound similar are a cause of medication errors. This study builds statistical classifiers for identifying medication incidents due to look-alike sound-alike mix-ups. A total of 227 patient safety incident advisories related to medication were obtained from the Canadian Patient Safety Institute's Global Patient Safety Alerts system. Eight feature selection strategies based on frequent terms, frequent drug terms and constituent terms were performed. Statistical text classifiers based on logistic regression, support vector machines with linear, polynomial, radial-basis and sigmoid kernels and decision tree were trained and tested. The models developed achieved an average accuracy of above 0.8 across all the model settings. The receiver operating characteristic curves indicated the classifiers performed reasonably well. The results obtained in this study suggest that statistical text classification can be a feasible method for identifying medication incidents due to look-alike sound-alike mix-ups based on a database of advisories from Global Patient Safety Alerts. © The Author(s) 2014.
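A minimal sketch of the text-classification pipeline described above, using bag-of-words features and logistic regression trained by stochastic gradient descent. The vocabulary, toy advisories, and labels below are invented for illustration; the study used richer feature-selection strategies and several classifier families.

```python
import math

VOCAB = ["look-alike", "sound-alike", "mix-up", "dose", "delay"]

# Invented toy advisories: label 1 = look-alike/sound-alike mix-up, 0 = other.
TRAIN = [
    ("hydralazine hydroxyzine mix-up look-alike packaging", 1),
    ("sound-alike mix-up between two drug names", 1),
    ("dose delay due to pump failure", 0),
    ("wrong dose administered after transcription delay", 0),
]


def featurize(text, vocab):
    """Bag-of-words term counts restricted to a fixed vocabulary."""
    tokens = text.lower().split()
    return [tokens.count(word) for word in vocab]


def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain stochastic-gradient-descent logistic regression."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log loss at this example
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b


def predict(text, vocab, w, b):
    """1 if the advisory is classified as a look-alike sound-alike mix-up."""
    x = featurize(text, vocab)
    return 1 if b + sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0


W, B = train_logreg([featurize(t, VOCAB) for t, _ in TRAIN],
                    [label for _, label in TRAIN])
```

On a real advisory corpus one would of course use held-out evaluation and ROC analysis, as the study does.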

  11. Reciprocal Peer Assessment as a Learning Tool for Secondary School Students in Modeling-Based Learning

    ERIC Educational Resources Information Center

    Tsivitanidou, Olia E.; Constantinou, Costas P.; Labudde, Peter; Rönnebeck, Silke; Ropohl, Mathias

    2018-01-01

    The aim of this study was to investigate how reciprocal peer assessment in modeling-based learning can serve as a learning tool for secondary school learners in a physics course. The participants were 22 upper secondary school students from a gymnasium in Switzerland. They were asked to model additive and subtractive color mixing in groups of two,…
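The modeling task the students worked on, additive and subtractive color mixing, can itself be sketched in a few lines under a common simple idealization (additive mixing as clipped channel-wise addition of light, subtractive mixing as channel-wise filtering); this is our illustration, not the study's materials.

```python
def mix_additive(c1, c2):
    """Additive (light) mixing: channel-wise sum of RGB, clipped at 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))


def mix_subtractive(c1, c2):
    """Subtractive (pigment/filter) mixing: channel-wise multiplicative
    filtering of the transmitted light."""
    return tuple(a * b // 255 for a, b in zip(c1, c2))
```

Under this idealization, red plus green light gives yellow additively, while yellow and cyan filters combine subtractively to green.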

  12. Groundwater contamination from an inactive uranium mill tailings pile: 2. Application of a dynamic mixing model

    NASA Astrophysics Data System (ADS)

    Narasimhan, T. N.; White, A. F.; Tokunaga, T.

    1986-12-01

    At Riverton, Wyoming, low pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, as well as transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series [White et al., 1984] we presented field data as well as an interpretation based on a static mixing model. As an upper bound, we estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work we present the results of numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNAmic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three subproblems that were solved in sequential stages. These were the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as to enhance its computational efficiency.
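The static-model upper bound of 1.7% quoted above is the kind of number a two-end-member conservative-tracer mass balance yields; a minimal sketch (the tracer concentrations in the usage note are invented, whereas the actual estimate in White et al. used the field data):

```python
def mixing_fraction(c_obs, c_native, c_source):
    """Fraction f of source (tailings) water in a groundwater sample,
    from a two-end-member conservative-tracer mass balance:
    c_obs = f * c_source + (1 - f) * c_native."""
    return (c_obs - c_native) / (c_source - c_native)
```

For example, with an invented native concentration of 10 mg/L and a tailings concentration of 1000 mg/L, an observed 26.83 mg/L corresponds to f = 0.017, i.e. 1.7% tailings water.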

  13. Groundwater contamination from an inactive uranium mill tailings pile. 2. Application of a dynamic mixing model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.; White, A.F.; Tokunaga, T.

    1986-12-01

    At Riverton, Wyoming, low pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, as well as transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series the authors presented field data as well as an interpretation based on a static mixing model. As an upper bound, the authors estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work they present the results of numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNAmic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three subproblems that were solved in sequential stages. These were the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as to enhance its computational efficiency.

  14. Development of fuel oil management system software: Phase 1, Tank management module. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lange, H.B.; Baker, J.P.; Allen, D.

    1992-01-01

    The Fuel Oil Management System (FOMS) is a micro-computer based software system being developed to assist electric utilities that use residual fuel oils with oil purchase and end-use decisions. The Tank Management Module (TMM) is the first FOMS module to be produced. TMM enables the user to follow the mixing status of oils contained in a number of oil storage tanks. The software contains a computational model of residual fuel oil mixing which addresses mixing that occurs as one oil is added to another in a storage tank and also purposeful mixing of the tank by propellers, recirculation or convection. The model also addresses the potential for sludge formation due to incompatibility of oils being mixed. Part 1 of the report presents a technical description of the mixing model and a description of its development. Steps followed in developing the mixing model included: (1) definition of ranges of oil properties and tank design factors used by utilities; (2) review and adaptation of prior applicable work; (3) laboratory development; and (4) field verification. Also, a brief laboratory program was devoted to exploring the suitability of suggested methods for predicting viscosities, flash points and pour points of oil mixtures. Part 2 of the report presents a functional description of the TMM software and a description of its development. The software development program consisted of the following steps: (1) on-site interviews at utilities to prioritize needs and characterize user environments; (2) construction of the user interface; and (3) field testing the software.
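The report does not state which blending correlation the software uses; one common industry approach for the viscosity of residual-oil blends is the Refutas viscosity blending index, sketched here purely as an illustration:

```python
import math


def vbi(nu_cst):
    """Refutas viscosity blending index for a kinematic viscosity in cSt
    (valid for nu > 0.2 cSt, where the double logarithm is defined)."""
    return 14.534 * math.log(math.log(nu_cst + 0.8)) + 10.975


def blend_viscosity(mass_fractions, viscosities):
    """Viscosity of a blend: mass-fraction-weighted average of the
    component VBIs, then inverted back to cSt."""
    v = sum(w * vbi(nu) for w, nu in zip(mass_fractions, viscosities))
    return math.exp(math.exp((v - 10.975) / 14.534)) - 0.8
```

Blending an oil with itself returns its own viscosity, and a 50/50 blend of 10 cSt and 100 cSt oils lands between the two, closer to the lighter component, as the logarithmic index implies.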

  15. Development of fuel oil management system software: Phase 1, Tank management module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lange, H.B.; Baker, J.P.; Allen, D.

    1992-01-01

    The Fuel Oil Management System (FOMS) is a micro-computer based software system being developed to assist electric utilities that use residual fuel oils with oil purchase and end-use decisions. The Tank Management Module (TMM) is the first FOMS module to be produced. TMM enables the user to follow the mixing status of oils contained in a number of oil storage tanks. The software contains a computational model of residual fuel oil mixing which addresses mixing that occurs as one oil is added to another in a storage tank and also purposeful mixing of the tank by propellers, recirculation or convection. The model also addresses the potential for sludge formation due to incompatibility of oils being mixed. Part 1 of the report presents a technical description of the mixing model and a description of its development. Steps followed in developing the mixing model included: (1) definition of ranges of oil properties and tank design factors used by utilities; (2) review and adaptation of prior applicable work; (3) laboratory development; and (4) field verification. Also, a brief laboratory program was devoted to exploring the suitability of suggested methods for predicting viscosities, flash points and pour points of oil mixtures. Part 2 of the report presents a functional description of the TMM software and a description of its development. The software development program consisted of the following steps: (1) on-site interviews at utilities to prioritize needs and characterize user environments; (2) construction of the user interface; and (3) field testing the software.

  16. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
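The solver described above is built on preconditioned conjugate gradient iteration; a minimal dense-matrix sketch with a Jacobi (diagonal) preconditioner follows. The paper's implementation iterates on data and never forms the mixed model equations explicitly, which this small example does not attempt.

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for A x = b, where A is a
    dense symmetric positive definite matrix given as nested lists."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual b - A x for x = 0
    minv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner M^-1
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x
```

The matrix-vector product `Ap` is the step the abstract's three-step iteration-on-data technique reorganizes; here it is just a direct double loop.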

  17. Competing regression models for longitudinal data.

    PubMed

    Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M

    2012-03-01

    The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
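The windows-from-MCMC idea can be sketched on a one-dimensional caricature: run a Metropolis sampler over sampling times with an invented design-utility function as the target, then report a central quantile interval of the chain as the sampling window. The utility shape, peak location, and all tuning constants below are illustrative assumptions, not the authors' method.

```python
import math
import random


def utility(t):
    """Invented design utility, peaked at the 'optimal' time t* = 2 h
    and supported on the study interval [0, 12] h."""
    return math.exp(-((t - 2.0) ** 2) / 0.5) if 0.0 <= t <= 12.0 else 0.0


def metropolis_window(n=20000, step=0.5, seed=1, coverage=0.9):
    """Metropolis random walk over sampling times; the window is the
    central `coverage` interval of the retained chain."""
    random.seed(seed)
    t, samples = 2.0, []
    for _ in range(n):
        proposal = t + random.uniform(-step, step)
        u_now, u_prop = utility(t), utility(proposal)
        if u_prop > 0.0 and random.random() < min(1.0, u_prop / u_now):
            t = proposal
        samples.append(t)
    ordered = sorted(samples)
    lo = ordered[int((1.0 - coverage) / 2.0 * n)]
    hi = ordered[int((1.0 + coverage) / 2.0 * n) - 1]
    return lo, hi
```

The resulting interval brackets the optimal design point while remaining time-sensitive: times where the utility decays fast get a correspondingly narrow window.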

  19. Interactions between a fractal tree-like object and hydrodynamic turbulence: flow structure and characteristic mixing length

    NASA Astrophysics Data System (ADS)

    Meneveau, C. V.; Bai, K.; Katz, J.

    2011-12-01

    The vegetation canopy has a significant impact on various physical and biological processes such as forest microclimate, rainfall evaporation distribution and climate change. Most scaled laboratory experimental studies have used canopy element models that consist of rigid vertical strips or cylindrical rods that can be typically represented through only one or a few characteristic length scales, for example the diameter and height for cylindrical rods. However, most natural canopies and vegetation are highly multi-scale with branches and sub-branches, covering a wide range of length scales. Fractals provide a convenient idealization of multi-scale objects, since their multi-scale properties can be described in simple ways (Mandelbrot 1982). While fractal aspects of turbulence have been studied in several works in the past decades, research on turbulence generated by fractal objects started more recently. We present an experimental study of boundary layer flow over fractal tree-like objects. Detailed Particle-Image-Velocimetry (PIV) measurements are carried out in the near-wake of a fractal-like tree. The tree is a pre-fractal with five generations, with three branches and a scale reduction factor 1/2 at each generation. Its similarity fractal dimension (Mandelbrot 1982) is D ~ 1.58. Detailed mean velocity and turbulence stress profiles are documented, as well as their downstream development. We then turn attention to the turbulence mixing properties of the flow, specifically to the question whether a mixing length-scale can be identified in this flow, and if so, how it relates to the geometric length-scales in the pre-fractal object. Scatter plots of mean velocity gradient (shear) and Reynolds shear stress exhibit good linear relation at all locations in the flow. Therefore, in the transverse direction of the wake evolution, the Boussinesq eddy viscosity concept is appropriate to describe the mixing. 
We find that the measured mixing length increases with increasing streamwise locations. Conversely, the measured eddy viscosity and mixing length decrease with increasing elevation, which differs from eddy viscosity and mixing length behaviors of traditional boundary layers or canopies studied before. In order to find an appropriate length for the flow, several models based on the notion of superposition of scales are proposed and examined. One approach is based on spectral distributions. Another more practical approach is based on length-scale distributions evaluated using fractal geometry tools. These proposed models agree well with the measured mixing length. The results indicate that information about multi-scale clustering of branches as it occurs in fractals has to be incorporated into models of the mixing length for flows through canopies with multiple scales. The research is supported by National Science Foundation grant ATM-0621396 and AGS-1047550.
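The Boussinesq fit described above amounts to regressing -⟨u'w'⟩ on the mean shear; a minimal sketch using through-the-origin least squares, with the Prandtl relation nu_t = l^2 |dU/dz| used to back out a mixing length (the numbers in the test are invented):

```python
def eddy_viscosity(shear, reynolds_stress):
    """Least-squares slope through the origin of -<u'w'> against dU/dz,
    i.e. the Boussinesq eddy viscosity nu_t."""
    num = sum(s * (-rs) for s, rs in zip(shear, reynolds_stress))
    den = sum(s * s for s in shear)
    return num / den


def mixing_length(nu_t, local_shear):
    """Prandtl mixing length l from nu_t = l**2 * |dU/dz|."""
    return (nu_t / abs(local_shear)) ** 0.5
```

A good linear scatter of stress against shear, as reported, is exactly what makes the single slope nu_t (and hence a single mixing length per location) meaningful.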

  20. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
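A minimal sketch of the mixout computation as the abstract describes it: softmax ("exponential") probabilities over the k candidate feature-map values, an expected value under those probabilities, and a switch between that expectation and the plain maxout maximum. The deterministic `take_max` flag below stands in for the paper's Bernoulli draw, and the exact formulation is our assumption.

```python
import math


def softmax(values):
    """Exponential probabilities of the candidate feature-map values."""
    m = max(values)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]


def mixout(features, take_max):
    """One mixout unit over k candidate values at a spatial position:
    either the maxout maximum or the softmax-weighted expectation."""
    if take_max:
        return max(features)
    return sum(p * f for p, f in zip(softmax(features), features))
```

The expectation lies strictly between the minimum and maximum candidate values, which is how non-maximal features still contribute to the pooled output.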

  1. Interannual variability of primary production and air-sea CO2 flux in the Atlantic and Indian sectors of the Southern Ocean.

    NASA Astrophysics Data System (ADS)

    Dufour, Carolina; Merlivat, Liliane; Le Sommer, Julien; Boutin, Jacqueline; Antoine, David

    2013-04-01

    As one of the major oceanic sinks of anthropogenic CO2, the Southern Ocean plays a critical role in the climate system. However, due to the scarcity of observations, little is known about physical and biological processes that control air-sea CO2 fluxes and how these processes might respond to climate change. It is well established that primary production is one of the major drivers of air-sea CO2 fluxes, consuming surface Dissolved Inorganic Carbon (DIC) during summer. Southern Ocean primary production is, however, constrained by several limiting factors, such as iron and light availability, both of which are sensitive to mixed layer depth. Mixed layer depth is known to be affected by current changes in wind stress or freshwater fluxes over the Southern Ocean. But we still do not know how primary production may respond to anomalous mixed layer depths, nor how physical processes may balance this response to set the seasonal cycle of air-sea CO2 fluxes. In this study, we investigate the impact of anomalous mixed layer depth on surface DIC in the Atlantic and Indian sectors of the Subantarctic zone of the Southern Ocean (60W-60E, 38S-55S) with a combination of in situ data, satellite data and model experiment. We use both a regional eddy permitting ocean biogeochemical model simulation based on NEMO-PISCES and data-based reconstruction of biogeochemical fields based on CARIOCA buoys and SeaWiFS data. A decomposition of the physical and biological processes driving the seasonal variability of surface DIC is performed with both the model data and observations. A good agreement is found between the model and the data for the amplitude of biological and air-sea flux contributions. The model data are further used to investigate the impact of winter and summer anomalies in mixed layer depth on surface DIC over the period 1990-2004. The relative changes of each physical and biological process contribution are quantified and discussed.

  2. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data

    PubMed Central

    Su, Li; Farewell, Vernon T

    2013-01-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470

  3. Impact of Antarctic mixed-phase clouds on climate.

    PubMed

    Lawson, R Paul; Gettelman, Andrew

    2014-12-23

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm(-2), and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than -20 °C.

  4. Impact of Antarctic mixed-phase clouds on climate

    PubMed Central

    Lawson, R. Paul; Gettelman, Andrew

    2014-01-01

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm−2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than −20 °C. PMID:25489069

  5. Robotic and open radical prostatectomy in the public health sector: cost comparison.

    PubMed

    Hall, Rohan Matthew; Linklater, Nicholas; Coughlin, Geoff

    2014-06-01

    During 2008, the Royal Brisbane and Women's Hospital became the first public hospital in Australia to have a da Vinci Surgical Robot purchased with government funding. The cost of performing robotic surgery in the public sector is a contentious issue. This study is a single-centre cost analysis comparing open radical prostatectomy (RRP) and robotic-assisted radical prostatectomy (RALP) based on the newly introduced pure case-mix funding model. A retrospective chart review was performed for the first 100 RALPs and the previous 100 RRPs. Estimates of tangible costing and funding were generated for each admission and readmission, using the Royal Brisbane Hospital Transition II database, based on pure case-mix funding. The average cost of admission for RRP was A$13 605, compared to A$17 582 for RALP. The average funding received for an RRP was A$11 781, compared to A$5496 for a RALP based on the newly introduced case-mix model. The average length of stay for RRP was 4.4 days (2-14) and for RALP, 1.2 days (1-4). The total cost of readmissions for RRP patients was A$70 487, compared to A$7160 for the RALP patients. These were funded at A$55 639 and A$7624, respectively. RALP has shown a significant advantage with respect to length of stay and readmission rate. Based on the case-mix funding model, RALP is poorly funded compared to its open equivalent. Queensland Health needs to plan how robotic surgery is implemented and assess whether this technology is truly affordable in the public sector. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  6. Doing Interdisciplinary Mixed Methods Health Care Research: Working the Boundaries, Tensions, and Synergistic Potential of Team-Based Research.

    PubMed

    Hesse-Biber, Sharlene

    2016-04-01

    Current trends in health care research point to a shift from disciplinary models to interdisciplinary team-based mixed methods inquiry designs. This keynote address discusses the problems and prospects of creating vibrant mixed methods health care interdisciplinary research teams that can harness their synergistic potential to address complex health care issues. We examine the range of factors and issues these types of research teams need to consider to facilitate efficient interdisciplinary mixed methods team-based research. It is argued that concepts such as disciplinary comfort zones, a lack of attention to team dynamics, and low levels of reflexivity among interdisciplinary team members can inhibit the effectiveness of a research team. This keynote suggests a set of effective strategies to address the issues that emanate from the new field of research inquiry known as team science, as well as lessons learned from tapping into research on organizational dynamics. © The Author(s) 2016.

  7. The Contribution of Emotional Intelligence to Decisional Styles among Italian High School Students

    ERIC Educational Resources Information Center

    Di Fabio, Annamaria; Kenny, Maureen E.

    2012-01-01

    This study examined the relationship between emotional intelligence (EI) and styles of decision making. Two hundred and six Italian high school students completed two measures of EI, the Bar-On EI Inventory, based on a mixed model of EI, and the Mayer Salovey Caruso EI Test, based on an ability-based model of EI, in addition to the General…

  8. Optimum use of air tankers in initial attack: selection, basing, and transfer rules

    Treesearch

    Francis E. Greulich; William G. O' Regan

    1982-01-01

    Fire managers face two interrelated problems in deciding the most efficient use of air tankers: where best to base them, and how best to reallocate them each day in anticipation of fire occurrence. A computerized model based on a mixed integer linear program can help in assigning air tankers throughout the fire season. The model was tested using information from...

  9. Using Poisson mixed-effects model to quantify transcript-level gene expression in RNA-Seq.

    PubMed

    Hu, Ming; Zhu, Yu; Taylor, Jeremy M G; Liu, Jun S; Qin, Zhaohui S

    2012-01-01

    RNA sequencing (RNA-Seq) is a powerful new technology for mapping and quantifying transcriptomes using ultra high-throughput next-generation sequencing technologies. Using deep sequencing, gene expression levels of all transcripts, including novel ones, can be quantified digitally. Although extremely promising, the massive amounts of data generated by RNA-Seq, together with substantial biases and uncertainty in short-read alignment, pose challenges for data analysis. In particular, large base-specific variation and between-base dependence make simple approaches, such as those that use averaging to normalize RNA-Seq data and quantify gene expression, ineffective. In this study, we propose a Poisson mixed-effects (POME) model to characterize base-level read coverage within each transcript. The underlying expression level is included as a key parameter in this model. Since the proposed model is capable of incorporating base-specific variation as well as between-base dependence that affect the read coverage profile throughout the transcript, it can lead to improved quantification of the true underlying expression level. POME can be freely downloaded at http://www.stat.purdue.edu/~yuzhu/pome.html. Contact: yuzhu@purdue.edu; zhaohui.qin@emory.edu. Supplementary data are available at Bioinformatics online.
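The core estimation idea can be illustrated with a deliberately simplified version of the model: if base-level counts follow counts[i] ~ Poisson(lambda * bias[i]) with known base-specific bias weights and no between-base dependence, the maximum-likelihood estimate of the expression level lambda has a closed form. This is only an illustrative sketch; POME itself estimates base effects jointly and models between-base dependence, which this toy model omits.

```python
import numpy as np

def expression_mle(counts, bias):
    """MLE of a transcript's expression level lambda under the simplified
    model counts[i] ~ Poisson(lambda * bias[i]) with known base-specific
    bias weights and independent bases: lambda_hat = sum(counts)/sum(bias)."""
    counts = np.asarray(counts, dtype=float)
    bias = np.asarray(bias, dtype=float)
    return counts.sum() / bias.sum()

# Simulated example (invented numbers, seeded for reproducibility)
rng = np.random.default_rng(0)
true_lambda = 50.0
bias = rng.uniform(0.5, 1.5, size=200)       # per-base sequencing preference
counts = rng.poisson(true_lambda * bias)     # observed base-level coverage
est = expression_mle(counts, bias)
```

With 200 bases the estimator recovers the simulated expression level closely; the mixed-effects formulation in the paper additionally shrinks noisy base effects toward a common profile.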

  10. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
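The flavour of the composite-likelihood approach can be sketched as follows: under an independence working likelihood (ignoring the correlation parameter the paper also models), each margin is beta-binomial and the two marginal log-likelihoods are simply summed and maximized. The study counts below are invented for illustration and are not from the paper's data analyses.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

# Invented counts for two binary outcomes per study (e.g. events x out of n)
x1 = np.array([18, 25, 30]); n1 = np.array([20, 30, 40])
x2 = np.array([45, 50, 70]); n2 = np.array([50, 60, 80])

def neg_composite_loglik(theta):
    # theta = log(alpha1), log(beta1), log(alpha2), log(beta2);
    # the log transform keeps the beta-binomial parameters positive
    a1, b1, a2, b2 = np.exp(theta)
    ll = betabinom.logpmf(x1, n1, a1, b1).sum()
    ll += betabinom.logpmf(x2, n2, a2, b2).sum()
    return -ll

res = minimize(neg_composite_loglik, x0=np.zeros(4), method="Nelder-Mead")
a1, b1, a2, b2 = np.exp(res.x)
p1_hat = a1 / (a1 + b1)   # marginal mean probability, outcome 1
p2_hat = a2 / (a2 + b2)   # marginal mean probability, outcome 2
```

The marginal means stay on the probability scale with no link function, which is the feature the abstract highlights; the paper's full model additionally ties the two margins together through a correlation parameter.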

  11. Experimental and computational fluid dynamics studies of mixing of complex oral health products

    NASA Astrophysics Data System (ADS)

    Cortada-Garcia, Marti; Migliozzi, Simona; Weheliye, Weheliye Hashi; Dore, Valentina; Mazzei, Luca; Angeli, Panagiota; ThAMes Multiphase Team

    2017-11-01

    Highly viscous non-Newtonian fluids are largely used in the manufacturing of specialized oral care products. Mixing often takes place in mechanically stirred vessels where the flow fields and mixing times depend on the geometric configuration and the fluid physical properties. In this research, we study the mixing performance of complex non-Newtonian fluids using Computational Fluid Dynamics models and validate them against experimental laser-based optical techniques. To this aim, we developed a scaled-down version of an industrial mixer. As test fluids, we used mixtures of glycerol and a Carbomer gel. The viscosities of the mixtures against shear rate at different temperatures and phase ratios were measured and found to be well described by the Carreau model. The numerical results were compared against experimental measurements of velocity fields from Particle Image Velocimetry (PIV) and concentration profiles from Planar Laser Induced Fluorescence (PLIF).
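The Carreau model mentioned above relates apparent viscosity to shear rate through four parameters: the zero-shear viscosity eta0, the infinite-shear viscosity eta_inf, a relaxation time lam, and the power-law index n. A minimal sketch with made-up parameter values (the paper's fitted values for the glycerol/Carbomer mixtures are not reproduced here):

```python
import numpy as np

def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
    """Carreau model:
    eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*gamma_dot)**2)**((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

# Illustrative parameters for a shear-thinning gel (n < 1)
gdot = np.logspace(-2, 3, 6)   # shear rates, 1/s
eta = carreau_viscosity(gdot, eta0=50.0, eta_inf=0.1, lam=2.0, n=0.4)
```

For n < 1 the curve plateaus at eta0 at low shear rates and thins toward eta_inf at high shear rates, which is the behaviour such mixer models need to capture.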

  12. Modeling polychlorinated biphenyl mass transfer after amendment of contaminated sediment with activated carbon.

    PubMed

    Werner, David; Ghosh, Upal; Luthy, Richard G

    2006-07-01

    The sorption kinetics and concentrations of polychlorinated biphenyls (PCBs) in historically polluted sediment are modeled to assess a remediation strategy based on in situ PCB sequestration by mixing with activated carbon (AC). We extend our evaluation of a model based on intraparticle diffusion by including a biomimetic semipermeable membrane device (SPMD) and a first-order degradation rate for the aqueous phase. The model predictions are compared with the previously reported experimental PCB concentrations in the bulk water phase and in SPMDs. The simulated scenarios comprise a marine and a freshwater sediment, four PCB congeners, two AC grain sizes, four doses of AC, and comparison with laboratory experiments for up to 540 days of AC amendment slowly mixed with sediment. The model qualitatively reproduces the observed shifts in the PCB distribution during repartitioning after AC amendment but systematically overestimates the overall effect of the treatment in reducing aqueous and SPMD concentrations of PCBs by a factor of 2-6. For our AC application in sediment, competitive sorption of the various solutes apparently requires a reduction by a factor of 16 of the literature values for the AC-water partitioning coefficient measured in pure aqueous systems. With this correction, model results and measurements agree within a factor of 3. We also discuss the impact of the nonlinearity of the AC sorption isotherm and first-order degradation in the aqueous phase. Regular mixing of the sediment accelerates the benefit of the proposed amendment substantially. However, according to our scenario, after AC amendment is homogeneously mixed into the sediment and then left undisturbed, aqueous PCB concentrations tend toward the same reduction after approximately 5 or more years.
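The direction of the reported correction can be seen from a simple equilibrium partitioning balance: the aqueous concentration of a contaminant distributed among water, sediment, and AC falls as the AC-water partitioning coefficient rises, so dividing K_AC by 16 (the competitive-sorption correction described above) raises the predicted aqueous concentration. The sketch below uses invented, order-of-magnitude numbers, assumes linear sorption and full equilibrium, and ignores the diffusion kinetics that are the paper's actual focus.

```python
def aqueous_conc(m_total, v_water, m_sed, kd_sed, m_ac, k_ac):
    """Equilibrium aqueous concentration (mass per L) when a total
    contaminant mass partitions linearly among water, sediment, and AC."""
    return m_total / (v_water + m_sed * kd_sed + m_ac * k_ac)

# Invented, order-of-magnitude inputs (not the paper's measured values)
m_total = 1e-3   # g PCB in the system
v_water = 1.0    # L water
m_sed = 0.1      # kg sediment
kd_sed = 1e4     # L/kg sediment-water partitioning
m_ac = 0.01      # kg AC amendment
k_ac_lit = 1e7   # L/kg, literature value from pure aqueous systems

c_no_ac = aqueous_conc(m_total, v_water, m_sed, kd_sed, 0.0, k_ac_lit)
c_lit = aqueous_conc(m_total, v_water, m_sed, kd_sed, m_ac, k_ac_lit)
c_corrected = aqueous_conc(m_total, v_water, m_sed, kd_sed, m_ac, k_ac_lit / 16)
```

Even with the reduced partitioning coefficient, the AC-amended aqueous concentration remains far below the unamended case, consistent with the treatment still being effective after the correction.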

  13. Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.

    PubMed

    Covarrubias-Pazaran, Giovanny

    2016-01-01

    Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to other software, but the analysis was faster than Bayesian counterparts, on the order of hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a user-friendly environment such as R.
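As an illustration of the kind of mixed model such software fits, here is a minimal GBLUP sketch: genomic values are predicted as u_hat = K (K + lam*I)^{-1} (y - mu), with lam the error-to-genetic variance ratio. Unlike sommer, which estimates the variance components by REML (AI, EM, or EMMA), this sketch treats the heritability as known; the marker data are simulated, not real crop data.

```python
import numpy as np

def gblup(y, K, h2):
    """BLUP of genomic values for y = mu + u + e with u ~ N(0, s2u*K):
    u_hat = K (K + lam*I)^{-1} (y - mu), where lam = s2e/s2u = (1-h2)/h2.
    Variance components are assumed known here (sommer estimates them)."""
    n = len(y)
    lam = (1.0 - h2) / h2
    return K @ np.linalg.solve(K + lam * np.eye(n), y - y.mean())

rng = np.random.default_rng(1)
n, p = 100, 500
M = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 marker matrix
M -= M.mean(axis=0)                                # column-centre markers
K = M @ M.T / p                                    # additive relationship matrix
u_true = M @ rng.normal(0.0, 0.1, size=p)          # simulated genomic values
y = 5.0 + u_true + rng.normal(0.0, u_true.std(), size=n)  # h2 ~ 0.5
u_hat = gblup(y, K, h2=0.5)
```

The predicted values correlate strongly with the simulated true genomic values; multi-kernel extensions (dominance, epistasis) add further K-like terms, which is exactly the multiple-variance-component capability the abstract describes.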

  14. Scaling laws and reduced-order models for mixing and reactive-transport in heterogeneous anisotropic porous media

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Nakshatrala, K. B.

    2016-12-01

    Fundamental to enhancement and control of the macroscopic spreading, mixing, and dilution of solute plumes in porous media structures are the topology of the flow field and the heterogeneity and anisotropy contrast of the underlying porous media. Traditionally, the literature has focused on the shearing effects of the flow field (i.e., flow with zero helical density, meaning that flow is always perpendicular to the vorticity vector) on scalar mixing [2]. However, the combined effect of the anisotropy of the porous media and the helical (or chaotic) structure of the flow field on species reactive-transport and mixing has rarely been studied. Recently, it has been shown experimentally that chaotic advection and helical flows are inherent in porous media flows [1,2]. In this poster presentation, we present a non-intrusive physics-based model-order reduction framework to quantify the effects of species mixing in terms of reduced-order models (ROMs) and scaling laws. The ROM framework is constructed based on recent advancements in non-negative formulations for reactive-transport in heterogeneous anisotropic porous media [3] and non-intrusive ROM methods [4]. The objective is to generate computationally efficient and accurate ROMs for species mixing for different values of input data and reactive-transport model parameters. This is achieved by using multiple ROMs, which provides a way to assess the robustness of the proposed framework. Sensitivity analysis is performed to identify the important parameters. Representative numerical examples from reactive-transport are presented to illustrate the ability of the proposed ROMs to accurately describe mixing processes in porous media. [1] Lester, Metcalfe, and Trefry, "Is chaotic advection inherent to porous media flow?," PRL, 2013. [2] Ye, Chiogna, Cirpka, Grathwohl, and Rolle, "Experimental evidence of helical flow in porous media," PRL, 2015.
[3] Mudunuru, and Nakshatrala, "On enforcing maximum principles and achieving element-wise species balance for advection-diffusion-reaction equations under the finite element method," JCP, 2016. [4] Quarteroni, Manzoni, and Negri. "Reduced Basis Methods for Partial Differential Equations: An Introduction," Springer, 2016.

  15. Description of the Process Model for the Technoeconomic Evaluation of MEA versus Mixed Amines for Carbon Dioxide Removal from Stack Gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Dale A.

    This model description is supplemental to the Lawrence Livermore National Laboratory (LLNL) report LLNL-TR-642494, Technoeconomic Evaluation of MEA versus Mixed Amines for CO2 Removal at Near-Commercial Scale at Duke Energy Gibson 3 Plant. We describe the assumptions and methodology used in the Laboratory’s simulation of its understanding of Huaneng’s novel amine solvent for CO2 capture with 35% mixed amine. The results of that simulation have been described in LLNL-TR-642494. The simulation was performed using ASPEN 7.0. The composition of Huaneng’s novel amine solvent was estimated based on information gleaned from Huaneng patents. The chemistry of the process was described using nine equations, representing reactions within the absorber and stripper columns using the ELECTNRTL property method. As a rate-based ASPEN simulation model was not available to Lawrence Livermore at the time of writing, the height of a theoretical plate was estimated using open literature for similar processes. The composition of the flue gas was estimated based on information supplied by Duke Energy for Unit 3 of the Gibson plant. The simulation was scaled at one million short tons of CO2 absorbed per year. To aid stability of the model, convergence of the main solvent recycle loop was implemented manually, as described in the Blocks section below. Automatic convergence of this loop led to instability during the model iterations. Manual convergence of the loop enabled accurate representation and maintenance of model stability.

  16. Uncertainty in mixing models: a blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Delsman, J. R.; Oude Essink, G. H. P.

    2012-04-01

    Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few studies have addressed the associated uncertainty in much detail. This uncertainty stems from analytical error, from spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km2 agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice aims to improve water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for more sustainable water management and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. Applying the end-member mixing analysis within a GLUE-like framework not only quantified the uncertainty associated with the analysis; analysis of the posterior parameter set also identified catchment processes that would otherwise have been overlooked.
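A Monte Carlo end-member mixing analysis of this kind can be sketched as follows: end-member compositions are repeatedly perturbed within their assumed uncertainty, the mass-balance system is solved for mixing fractions, and only "behavioural" solutions (all fractions between 0 and 1) are retained, GLUE-style. The tracer values and uncertainties below are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed end-member means (rows) for two tracers (e.g. chloride, d18O);
# values and standard deviations are invented.
em_mean = np.array([[200.0, -7.0],   # brackish groundwater seepage
                    [ 10.0, -9.5],   # local precipitation
                    [ 60.0, -8.0]])  # river flushing water
em_sd = np.array([[30.0, 0.5],
                  [ 3.0, 0.5],
                  [10.0, 0.5]])
sample = np.array([90.0, -8.2])      # observed catchment-outlet composition

accepted = []
for _ in range(20000):
    em = rng.normal(em_mean, em_sd)  # perturb end-members within uncertainty
    # Mass balance: fractions sum to one and reproduce both tracers
    A = np.vstack([np.ones(3), em.T])
    b = np.concatenate([[1.0], sample])
    f = np.linalg.solve(A, b)
    if np.all(f >= 0.0) and np.all(f <= 1.0):  # keep behavioural models only
        accepted.append(f)

accepted = np.array(accepted)
posterior_mean = accepted.mean(axis=0)
```

The spread of the accepted fractions quantifies the uncertainty; a low acceptance rate or an empty behavioural set is itself diagnostic, flagging end-member sets that cannot explain the sample, which mirrors how the analysis can reveal overlooked processes.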

  17. Community-LINE Source Model (C-LINE) to estimate roadway emissions

    EPA Pesticide Factsheets

    C-LINE is a web-based model that estimates emissions and dispersion of toxic air pollutants for roadways in the U.S. This reduced-form air quality model examines what-if scenarios for changes in emissions, such as traffic volume, fleet mix, and vehicle speed.

  18. Estimates of lake trout (Salvelinus namaycush) diet in Lake Ontario using two and three isotope mixing models

    USGS Publications Warehouse

    Colborne, Scott F.; Rush, Scott A.; Paterson, Gordon; Johnson, Timothy B.; Lantry, Brian F.; Fisk, Aaron T.

    2016-01-01

    Recent development of multi-dimensional stable isotope models for estimating both foraging patterns and niches has provided the analytical tools to further assess the food webs of freshwater populations. One approach to refining predictions from these analyses is to add a third isotope to the more common two-isotope carbon and nitrogen mixing models, increasing the power to resolve different prey sources. We compared predictions made with two-isotope carbon and nitrogen mixing models and three-isotope models that also included sulphur (δ34S) for the diets of Lake Ontario lake trout (Salvelinus namaycush). We determined the isotopic compositions of lake trout and potential prey fishes sampled from Lake Ontario and then used quantitative estimates of resource use generated by two- and three-isotope Bayesian mixing models (SIAR) to infer feeding patterns of lake trout. Both two- and three-isotope models indicated that alewife (Alosa pseudoharengus) and round goby (Neogobius melanostomus) were the primary prey items, but the three-isotope models were more consistent with recent measures of prey fish abundances and lake trout diets. The lake trout sampled directly from the hatcheries had isotopic compositions derived from the hatchery food, which were distinctly different from those derived from the natural prey sources. Those hatchery signals were retained for months after release, raising the possibility of distinguishing hatchery-reared yearlings from similarly sized naturally reproduced lake trout based on isotopic compositions. Addition of a third isotope produced mixing model results confirming that round goby has become an important component of lake trout diet and may be overtaking alewife as a prey resource.
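Why a third isotope helps can be seen from the mass-balance system itself. In the sketch below the three assumed prey signatures are deliberately collinear in δ13C-δ15N space, so two isotopes cannot resolve their proportions uniquely; appending a δ34S axis breaks the collinearity. All signature values are invented, and SIAR fits this as a full Bayesian model rather than the point estimate shown here.

```python
import numpy as np

# Assumed prey signatures (rows: three prey types); columns are d13C, d15N.
# Invented values, chosen collinear to illustrate the resolution problem.
src2 = np.array([[-22.0, 14.0],
                 [-18.0, 12.0],
                 [-20.0, 13.0]])
src3 = np.column_stack([src2, [6.0, 1.0, 4.0]])  # append an invented d34S axis
mix2 = np.array([-20.1, 13.1])                   # consumer tissue, 2 isotopes
mix3 = np.append(mix2, 3.9)                      # consumer tissue, 3 isotopes

def solve_fractions(src, mix):
    """Least-squares diet fractions under the mass balance sum(f) = 1.
    (A point-estimate sketch of the mixing equations, not the SIAR model.)"""
    A = np.vstack([np.ones(len(src)), src.T])
    b = np.concatenate([[1.0], mix])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

f2 = solve_fractions(src2, mix2)  # rank-deficient: minimum-norm answer only
f3 = solve_fractions(src3, mix3)  # d34S breaks the collinearity
```

With two isotopes the design matrix here has rank 2, so infinitely many fraction vectors fit equally well; the third isotope restores full rank and a unique solution, which is the mechanism behind the improved source resolution reported above.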

  19. Stratification established by peeling detrainment from gravity currents: laboratory experiments and models

    NASA Astrophysics Data System (ADS)

    Hogg, Charlie; Dalziel, Stuart; Huppert, Herbert; Imberger, Jorg; Department of Applied Mathematics; Theoretical Physics Team; CentreWater Research Team

    2014-11-01

    Dense gravity currents feed fluid into confined basins in lakes, the oceans and many industrial applications. Existing models of the circulation and mixing in such basins are often based on the currents entraining ambient fluid. However, recent observations have suggested that uni-directional entrainment into a gravity current does not fully describe the mixing in such currents. Laboratory experiments were carried out which visualised peeling detrainment from the gravity current occurring when the ambient fluid was stratified. A theoretical model of the observed peeling detrainment was developed to predict the stratification in the basin. This new model gives a better approximation of the stratification observed in the experiments than the pre-existing entraining model. The model can now be developed such that it integrates into operational models of lakes.

  20. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
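An AIC comparison of the two candidate intensity distributions can be sketched on synthetic data: a two-component exponential mixture is fitted by EM and compared against a Weibull fit. The data and parameter values are invented, and the study's actual fitting procedure inside the Neyman-Scott model is more involved than this standalone comparison.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
# Synthetic positive "rain-cell intensity" data: a light and a heavy regime
x = np.concatenate([rng.exponential(1.0, 700), rng.exponential(8.0, 300)])

def fit_mixed_exponential(x, n_iter=200):
    """EM for the mixture f(x) = w/m1 * exp(-x/m1) + (1-w)/m2 * exp(-x/m2)."""
    w, m1, m2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
    for _ in range(n_iter):
        d1 = w * np.exp(-x / m1) / m1
        d2 = (1.0 - w) * np.exp(-x / m2) / m2
        r = d1 / (d1 + d2)                    # responsibility of component 1
        w = r.mean()
        m1 = np.sum(r * x) / np.sum(r)        # weighted-mean updates
        m2 = np.sum((1.0 - r) * x) / np.sum(1.0 - r)
    ll = np.sum(np.log(w * np.exp(-x / m1) / m1
                       + (1.0 - w) * np.exp(-x / m2) / m2))
    return (w, m1, m2), ll

params, ll_mix = fit_mixed_exponential(x)
aic_mix = 2 * 3 - 2 * ll_mix                  # 3 free parameters

c, loc, scale = weibull_min.fit(x, floc=0)    # 2 free parameters (loc fixed)
aic_wei = 2 * 2 - 2 * np.sum(weibull_min.logpdf(x, c, loc, scale))
```

On data actually generated from a two-regime mixture, the mixed exponential wins the AIC comparison despite its extra parameter, the same direction of result the study reports for tropical rainfall.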

  1. Crown structure and growth efficiency of red spruce in uneven-aged, mixed-species stands in Maine

    Treesearch

    Douglas A. Maguire; John C. Brissette; Lianhong. Gu

    1998-01-01

    Several hypotheses about the relationships among individual tree growth, tree leaf area, and relative tree size or position were tested with red spruce (Picea rubens Sarg.) growing in uneven-aged, mixed-species forests of south-central Maine, U.S.A. Based on data from 65 sample trees, predictive models were developed to (i)...

  2. Financial modeling/case-mix analysis.

    PubMed

    Heck, S; Esmond, T

    1983-06-01

    The authors describe a case mix system developed by users which goes beyond DRG requirements to respond to management's clinical/financial data needs for marketing, planning, budgeting and financial analysis as well as reimbursement. Lessons learned in development of the system and the clinical/financial base will be helpful to those currently contemplating the implementation of such a system or evaluating available software.

  3. Application of the urban mixing-depth concept to air pollution problems

    Treesearch

    Peter W. Summers

    1977-01-01

    A simple urban mixing-depth model is used to develop an indicator of downtown pollution concentrations based on emission strength, rural temperature lapse rate, wind speed, city heat input, and city size. It is shown that the mean annual downtown suspended particulate levels in Canadian cities are proportional to the fifth root of the population. The implications of...

  4. Optimization of an electrokinetic mixer for microfluidic applications.

    PubMed

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P J

    2012-06-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly.

  5. Optimization of an electrokinetic mixer for microfluidic applications

    PubMed Central

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P. J.

    2012-01-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly. PMID:22712034

  6. Liquid Water Oceans in Ice Giants

    NASA Technical Reports Server (NTRS)

    Wiktorowicz, Sloane J.; Ingersoll, Andrew P.

    2007-01-01

    Aptly named, ice giants such as Uranus and Neptune contain significant amounts of water. While this water cannot be present near the cloud tops, it must be abundant in the deep interior. We investigate the likelihood of a liquid water ocean existing in the hydrogen-rich region between the cloud tops and deep interior. Starting from an assumed temperature at a given upper tropospheric pressure (the photosphere), we follow a moist adiabat downward. The mixing ratio of water to hydrogen in the gas phase is small in the photosphere and increases with depth. The mixing ratio in the condensed phase is near unity in the photosphere and decreases with depth; this gives two possible outcomes. If at some pressure level the mixing ratio of water in the gas phase is equal to that in the deep interior, then that level is the cloud base. The gas below the cloud base has constant mixing ratio. Alternately, if the mixing ratio of water in the condensed phase reaches that in the deep interior, then the surface of a liquid ocean will occur. Below this ocean surface, the mixing ratio of water will be constant. A cloud base occurs when the photospheric temperature is high. For a family of ice giants with different photospheric temperatures, the cooler ice giants will have warmer cloud bases. For an ice giant with a cool enough photospheric temperature, the cloud base will exist at the critical temperature. For still cooler ice giants, ocean surfaces will result. A high mixing ratio of water in the deep interior favors a liquid ocean. We find that Neptune is both too warm (photospheric temperature too high) and too dry (mixing ratio of water in the deep interior too low) for liquid oceans to exist at present. To have a liquid ocean, Neptune s deep interior water to gas ratio would have to be higher than current models allow, and the density at 19 kbar would have to be approx. equal to 0.8 g/cu cm. 
Such a high density is inconsistent with gravitational data obtained during the Voyager flyby. In our model, Neptune s water cloud base occurs around 660 K and 11 kbar, and the density there is consistent with Voyager gravitational data. As Neptune cools, the probability of a liquid ocean increases. Extrasolar "hot Neptunes," which presumably migrate inward toward their parent stars, cannot harbor liquid water oceans unless they have lost almost all of the hydrogen and helium from their deep interiors.

  7. Empirical Behavioral Models to Support Alternative Tools for the Analysis of Mixed-Priority Pedestrian-Vehicle Interaction in a Highway Capacity Context

    PubMed Central

    Rouphail, Nagui M.

    2011-01-01

    This paper presents behavior-based models for describing pedestrian gap acceptance at unsignalized crosswalks in a mixed-priority environment, where some drivers yield and some pedestrians cross in gaps. Logistic regression models are developed to predict the probability of pedestrian crossings as a function of vehicle dynamics, pedestrian assertiveness, and other factors. In combination with prior work on probabilistic yielding models, the results can be incorporated in a simulation environment, where they can more fully describe the interaction of these two modes. The approach is intended to supplement the HCM analytical procedure for locations where significant interaction occurs between drivers and pedestrians, including modern roundabouts. PMID:21643488
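
    The gap-acceptance logic described above can be sketched as a logistic model. The coefficients below are illustrative placeholders, not the paper's fitted values.

```python
import math

def crossing_probability(gap_s, assertive, beta0=-4.0, beta_gap=0.9, beta_assert=1.2):
    """Logistic model of gap acceptance:
    P = 1 / (1 + exp(-(b0 + b_gap*gap + b_assert*assertive))).
    Coefficients are invented for illustration."""
    z = beta0 + beta_gap * gap_s + beta_assert * (1.0 if assertive else 0.0)
    return 1.0 / (1.0 + math.exp(-z))

# Longer gaps and more assertive pedestrians yield higher crossing probabilities.
print(round(crossing_probability(3.0, False), 3))  # → 0.214
print(round(crossing_probability(8.0, True), 3))   # → 0.988
```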

  8. Studying mixing in Non-Newtonian blue maize flour suspensions using color analysis.

    PubMed

    Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Alvarez, Mario Moisés

    2014-01-01

    Non-Newtonian fluids occur in many relevant flow and mixing scenarios at the lab and industrial scale. The addition of acid or basic solutions to a non-Newtonian fluid is not an infrequent operation, particularly in biotechnology applications, where the pH of non-Newtonian culture broths is usually regulated using this strategy. We conducted mixing experiments in agitated vessels using non-Newtonian blue maize flour suspensions. Acid or basic pulses were injected to reveal mixing patterns and flow structures and to follow their time evolution. No foreign pH indicator was used, as blue maize flours naturally contain anthocyanins that act as a native, wide-spectrum pH indicator. We describe a novel method to quantify mixedness and mixing evolution through Dynamic Color Analysis (DCA) in this system. Color readings corresponding to different times and locations within the mixing vessel were taken with a digital camera (or a colorimeter) and translated to the CIELab color scale. We use distances in the Lab space, a 3D color space, between a particular mixing state and the final mixing point to characterize segregation/mixing in the system. Blue maize suspensions represent an adequate and flexible model to study mixing (and fluid mechanics in general) in non-Newtonian suspensions using acid/base tracer injections. Simple strategies based on the evaluation of color distances in the CIELab space (or other scales such as HSB) can be adapted to characterize mixedness and mixing evolution in experiments using blue maize suspensions.
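
    A minimal sketch of the color-distance idea: CIE76 Delta-E distances in Lab space, normalized so the index runs from 1 (unmixed) to 0 (fully mixed). The Lab values are invented for illustration; the paper's DCA pipeline is more involved.

```python
def delta_e(lab1, lab2):
    """Euclidean distance between two CIELab colors (CIE76 Delta-E)."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

def mixedness(state_lab, final_lab, initial_lab):
    """Normalized segregation index: 1 at the initial (unmixed) state,
    0 once the color matches the fully mixed endpoint."""
    return delta_e(state_lab, final_lab) / delta_e(initial_lab, final_lab)

initial = (30.0, 10.0, -40.0)   # deep blue, tracer pulse just injected (invented)
final = (55.0, 2.0, -12.0)      # homogeneous mixed color (invented)
partway = (45.0, 5.0, -23.0)
print(round(mixedness(partway, final, initial), 3))  # → 0.395
```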

  9. A Household-Based Study of Contact Networks Relevant for the Spread of Infectious Diseases in the Highlands of Peru

    PubMed Central

    Grijalva, Carlos G.; Goeyvaerts, Nele; Verastegui, Hector; Edwards, Kathryn M.; Gil, Ana I.; Lanata, Claudio F.; Hens, Niel

    2015-01-01

    Background Few studies have quantified social mixing in remote rural areas of developing countries, where the burden of infectious diseases is usually the highest. Understanding social mixing patterns in those settings is crucial to inform the implementation of strategies for disease prevention and control. We characterized contact and social mixing patterns in rural communities of the Peruvian highlands. Methods and Findings This cross-sectional study was nested in a large prospective household-based study of respiratory infections conducted in the province of San Marcos, Cajamarca-Peru. Members of study households were interviewed using a structured questionnaire of social contacts (conversation or physical interaction) experienced during the last 24 hours. We identified 9015 reported contacts from 588 study household members. The median age of respondents was 17 years (interquartile range [IQR] 4–34 years). The median number of reported contacts was 12 (IQR 8–20) whereas the median number of physical (i.e. skin-to-skin) contacts was 8.5 (IQR 5–14). Study participants had contacts mostly with people of similar age, and with their offspring or parents. The number of reported contacts was mainly determined by the participants’ age, household size and occupation. School-aged children had more contacts than other age groups. Within-household reciprocity of contact reporting declined with household size (range 70%-100%). Ninety percent of household contact networks were complete, and furthermore, household members' contacts with non-household members showed significant overlap (range 33%-86%), indicating a high degree of contact clustering. A two-level mixing epidemic model was simulated to compare within-household mixing based on observed contact networks and within-household random mixing. No differences in the size or duration of the simulated epidemics were revealed. Conclusion This study of rural low-density communities in the highlands of Peru suggests that contact patterns are highly assortative. Study findings support the use of within-household homogenous mixing assumptions for epidemic modeling in this setting. PMID:25734772

  10. 3D mapping, hydrodynamics and modelling of the freshwater-brine mixing zone in salt flats similar to the Salar de Atacama (Chile)

    NASA Astrophysics Data System (ADS)

    Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.; Ayora, C.

    2018-06-01

    Salt flat brines are a major source of minerals and especially lithium. Moreover, valuable wetlands with delicate ecologies are also commonly present at the margins of salt flats. Therefore, the efficient and sustainable exploitation of the brines they contain requires detailed knowledge about the hydrogeology of the system. A critical issue is the freshwater-brine mixing zone, which develops as a result of the mass balance between the recharged freshwater and the evaporating brine. The complex processes occurring in salt flats require a three-dimensional (3D) approach to assess the mixing zone geometry. In this study, a 3D map of the mixing zone in a salt flat is presented, using the Salar de Atacama as an example. This mapping procedure is proposed as the basis of computationally efficient three-dimensional numerical models, provided that the hydraulic heads of freshwater and mixed waters are corrected based on their density variations to convert them into brine heads. After this correction, the locations of lagoons and wetlands that are characteristic of the marginal zones of the salt flats coincide with the regional minimum water (brine) heads. The different morphologies of the mixing zone resulting from this 3D mapping have been interpreted using a two-dimensional (2D) flow and transport numerical model of an idealized cross-section of the mixing zone. The result of the model shows a slope of the mixing zone that is similar to that obtained by 3D mapping and lower than in previous models. To explain this geometry, the 2D model was used to evaluate the effects of heterogeneity in the mixing zone geometry. The higher the permeability of the upper aquifer is, the lower the slope and the shallower the mixing zone become. This occurs because most of the freshwater lateral recharge flows through the upper aquifer due to its much higher transmissivity, thus reducing the freshwater head. 
The presence of a few meters of highly permeable materials in the upper part of these hydrogeological systems, such as alluvial fans or karstified evaporites that are frequently associated with the salt flats, is enough to greatly modify the geometry of the saline interface.
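
    The density correction mentioned above, converting a measured point-water head to an equivalent brine head so that freshwater and brine heads become comparable, is commonly written as h_b = z + (ρ_p/ρ_b)(h_p − z). The density and head values below are illustrative, not site data.

```python
def equivalent_brine_head(h_point, z, rho_point, rho_brine=1230.0):
    """Equivalent brine head from a point-water head h_point measured at
    elevation z (all heads and elevations in meters):
    h_b = z + (rho_point / rho_brine) * (h_point - z).
    rho_brine = 1230 kg/m^3 is an illustrative brine density."""
    return z + (rho_point / rho_brine) * (h_point - z)

# A 2300 m freshwater head measured at z = 2290 m maps to a lower brine head,
# which is why raw freshwater and brine heads cannot be compared directly.
print(round(equivalent_brine_head(2300.0, 2290.0, 1000.0), 2))  # → 2298.13
```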

  11. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

    The traditional methods of mechanical gear-drive simulation are the gear pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower result accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Currently, most research focuses on the description of geometric models and the definition of boundary conditions, but neither addresses these problems fundamentally. To improve simulation efficiency while ensuring highly accurate results, a mixed model method is presented that uses gear tooth profiles in place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API, and fit curves are created in Adams from these coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact, and obtains mass and inertia data from the solid gear models. The simulation combines the two models to complete the gear driving analysis. To verify the validity of the presented method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.

  12. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
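
    The Dk statistic defined above, the average self-relationship minus the average relationship, is simple to compute from any relationship matrix. The matrix below is an invented pedigree-like example, not data from the paper.

```python
def dk_statistic(K):
    """Dk = average self-relationship (diagonal mean) minus the average
    over all (self- and across-) relationships in the matrix K."""
    n = len(K)
    avg_self = sum(K[i][i] for i in range(n)) / n
    avg_all = sum(sum(row) for row in K) / (n * n)
    return avg_self - avg_all

# Pedigree-like relationship matrix for four individuals (illustrative values).
K_ped = [
    [1.0, 0.5, 0.25, 0.0],
    [0.5, 1.0, 0.25, 0.0],
    [0.25, 0.25, 1.0, 0.5],
    [0.0, 0.0, 0.5, 1.0],
]
dk = dk_statistic(K_ped)
sigma2_hat = 2.0  # variance component estimated by the mixed model (invented)
# Expected genetic variance in the reference population is sigma2_hat * Dk.
print(dk, sigma2_hat * dk)  # → 0.5625 1.125
```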

  13. Mixing of Supersonic Streams

    NASA Technical Reports Server (NTRS)

    Hawk, C. W.; Landrum, D. B.; Muller, S.; Turner, M.; Parkinson, D.

    1998-01-01

    The Strutjet approach to Rocket Based Combined Cycle (RBCC) propulsion depends upon fuel-rich flows from the rocket nozzles and turbine exhaust products mixing with the ingested air for successful operation in the ramjet and scramjet modes. It is desirable to delay this mixing process in the air-augmented mode of operation present during low speed flight. A model of the Strutjet device has been built and is undergoing testing to investigate the mixing of the streams as a function of distance from the Strutjet exit plane during simulated low speed flight conditions. Cold flow testing of a 1/6 scale Strutjet model is underway and nearing completion. Planar Laser Induced Fluorescence (PLIF) diagnostic methods are being employed to observe the mixing of the turbine exhaust gas with the gases from both the primary rockets and the ingested air simulating low speed, air-augmented operation of the RBCC. The ratio of the pressure in the turbine exhaust duct to that in the rocket nozzle wall at the point of their intersection is the independent variable in these experiments. Tests were accomplished at values of 1.0, 1.5 and 2.0 for this parameter. Qualitative results illustrate the development of the mixing zone from the exit plane of the model to a distance of about 10 rocket nozzle exit diameters downstream. These data show the mixing to be confined in the vertical plane for all cases. The lateral expansion is more pronounced at a pressure ratio of 1.0 and suggests that mixing with the ingested flow would likely begin at a distance of 7 nozzle exit diameters downstream of the nozzle exit plane.

  14. Future of endemic flora of biodiversity hotspots in India.

    PubMed

    Chitale, Vishwas Sudhir; Behera, Mukund Dev; Roy, Partha Sarthi

    2014-01-01

    India is one of the 12 mega-biodiversity countries of the world, representing 11% of the world's flora on about 2.4% of the global land mass. Approximately 28% of the total Indian flora and 33% of angiosperms occurring in India are endemic. High human population density in the biodiversity hotspots of India puts undue pressure on these sensitive eco-regions. In the present study, we predict the future distribution of 637 endemic plant species from three biodiversity hotspots in India (Himalaya, Western Ghats and Indo-Burma), based on the A1B scenario for the years 2050 and 2080. We develop individual-variable-based models as well as mixed models in MaxEnt by combining the ten least correlated bioclimatic variables, two disturbance variables and one physiography variable as predictor variables. The projected changes suggest that the endemic flora will be adversely impacted, even under such a moderate climate scenario. The future distribution is predicted to shift in a northern and north-eastern direction in Himalaya and Indo-Burma, and in a southern and south-western direction in the Western Ghats, owing to the cooler climatic conditions in these regions. In the future distribution of endemic plants, we observe a significant shift and a reduction in distribution range compared to the present distribution. The model predicts a 23.99% range reduction and a 7.70% range expansion by 2050, and a 41.34% range reduction and a 24.10% range expansion by 2080. Integration of disturbance and physiography variables along with bioclimatic variables in the models improved the prediction accuracy. Mixed models provide the most accurate results for most combinations of climatic and non-climatic variables, as compared to individual-variable-based models. We conclude that (a) regions with cooler climates and higher moisture availability could serve as refugia for endemic plants under future climatic conditions; and (b) mixed models provide more accurate results than single-variable-based models.
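
    Range reduction and expansion percentages of the kind quoted above can be computed from present and projected occupancy maps. The sketch below expresses both relative to the present range size, one common convention; the paper's exact definition may differ, and the toy landscape is invented.

```python
def range_change(present, future):
    """Percent range reduction (presently occupied cells lost) and
    percent range expansion (newly occupied cells gained), both relative
    to the present range size. Inputs are parallel 0/1 occupancy lists."""
    lost = sum(1 for p, f in zip(present, future) if p and not f)
    gained = sum(1 for p, f in zip(present, future) if f and not p)
    n_present = sum(present)
    return 100.0 * lost / n_present, 100.0 * gained / n_present

# Toy 10-cell landscape: 5 cells occupied now; 2 are lost and 1 is gained.
present = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
future  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
print(range_change(present, future))  # → (40.0, 20.0)
```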

  15. Future of Endemic Flora of Biodiversity Hotspots in India

    PubMed Central

    Chitale, Vishwas Sudhir; Behera, Mukund Dev; Roy, Partha Sarthi

    2014-01-01

    India is one of the 12 mega-biodiversity countries of the world, representing 11% of the world's flora on about 2.4% of the global land mass. Approximately 28% of the total Indian flora and 33% of angiosperms occurring in India are endemic. High human population density in the biodiversity hotspots of India puts undue pressure on these sensitive eco-regions. In the present study, we predict the future distribution of 637 endemic plant species from three biodiversity hotspots in India (Himalaya, Western Ghats and Indo-Burma), based on the A1B scenario for the years 2050 and 2080. We develop individual-variable-based models as well as mixed models in MaxEnt by combining the ten least correlated bioclimatic variables, two disturbance variables and one physiography variable as predictor variables. The projected changes suggest that the endemic flora will be adversely impacted, even under such a moderate climate scenario. The future distribution is predicted to shift in a northern and north-eastern direction in Himalaya and Indo-Burma, and in a southern and south-western direction in the Western Ghats, owing to the cooler climatic conditions in these regions. In the future distribution of endemic plants, we observe a significant shift and a reduction in distribution range compared to the present distribution. The model predicts a 23.99% range reduction and a 7.70% range expansion by 2050, and a 41.34% range reduction and a 24.10% range expansion by 2080. Integration of disturbance and physiography variables along with bioclimatic variables in the models improved the prediction accuracy. Mixed models provide the most accurate results for most combinations of climatic and non-climatic variables, as compared to individual-variable-based models. We conclude that (a) regions with cooler climates and higher moisture availability could serve as refugia for endemic plants under future climatic conditions; and (b) mixed models provide more accurate results than single-variable-based models. PMID:25501852

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfram, Phillip J.; Ringler, Todd D.; Maltrud, Mathew E.

    Isopycnal diffusivity due to stirring by mesoscale eddies in an idealized, wind-forced, eddying, midlatitude ocean basin is computed using Lagrangian, In Situ, Global, High-Performance Particle Tracking (LIGHT). Simulation is performed via LIGHT within the Model for Prediction Across Scales Ocean (MPAS-O). Simulations are performed at 4-, 8-, 16-, and 32-km resolution, where the first Rossby radius of deformation (RRD) is approximately 30 km. Scalar and tensor diffusivities are estimated at each resolution based on 30 ensemble members using particle cluster statistics. Each ensemble member is composed of 303,665 particles distributed across five potential density surfaces. Diffusivity dependence upon model resolution, velocity spatial scale, and buoyancy surface is quantified and compared with mixing length theory. The spatial structure of diffusivity ranges over approximately two orders of magnitude, with values of O(10⁵) m² s⁻¹ in the region of western boundary current separation to O(10³) m² s⁻¹ in the eastern region of the basin. Dominant mixing occurs at scales twice the size of the first RRD. Model resolution at scales finer than the RRD is necessary to obtain sufficient model fidelity at scales between one and four RRD to accurately represent mixing. Mixing length scaling with eddy kinetic energy and the Lagrangian time scale yields mixing efficiencies that typically range between 0.4 and 0.8. In conclusion, a reduced mixing length in the eastern region of the domain relative to the west suggests there are different mixing regimes outside the baroclinic jet region.
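
    The particle-cluster idea behind such diffusivity estimates can be illustrated with a one-dimensional dispersion calculation: diffusivity is half the growth rate of the cluster variance. This is a toy sketch under invented parameters, not the LIGHT algorithm.

```python
import random

def estimate_diffusivity(x_t0, x_t1, dt):
    """Single-component diffusivity from the growth of particle-cluster
    spread: K ~ (var(x(t1)) - var(x(t0))) / (2 * dt)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return (var(x_t1) - var(x_t0)) / (2.0 * dt)

random.seed(0)
K_true = 1.0e3   # m^2/s, the order of the basin-interior values quoted above
dt = 86400.0     # one day, in seconds
x0 = [0.0] * 5000
# Each particle takes one Gaussian step whose variance is 2*K*dt.
x1 = [x + random.gauss(0.0, (2.0 * K_true * dt) ** 0.5) for x in x0]
K_est = estimate_diffusivity(x0, x1, dt)
print(K_est)  # close to K_true, up to sampling noise
```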

  17. MixSIAR: advanced stable isotope mixing models in R

    EPA Science Inventory

    Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...
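
    Underlying all such stable isotope mixing models is a simple mass balance; for two sources and a single isotope it has a closed form. The δ13C values below are invented for illustration and this is not MixSIAR's Bayesian machinery.

```python
def two_source_fractions(d_mix, d_source1, d_source2):
    """Closed-form two-source, single-isotope mixing model:
    d_mix = f1*d1 + (1 - f1)*d2  =>  f1 = (d_mix - d2) / (d1 - d2)."""
    f1 = (d_mix - d_source2) / (d_source1 - d_source2)
    return f1, 1.0 - f1

# delta13C (per mil) of a consumer and two diet sources -- invented values.
f_corn, f_legume = two_source_fractions(-18.0, -12.0, -26.0)
print(round(f_corn, 3), round(f_legume, 3))  # → 0.571 0.429
```

With more sources than isotopes the system is underdetermined, which is exactly why tools like IsoSource, MixSIR, SIAR and MixSIAR turn to probabilistic solutions.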

  18. Examining the Variability of Sleep Patterns during Treatment for Chronic Insomnia: Application of a Location-Scale Mixed Model

    PubMed Central

    Ong, Jason C.; Hedeker, Donald; Wyatt, James K.; Manber, Rachel

    2016-01-01

    Study Objectives: The purpose of this study was to introduce a novel statistical technique called the location-scale mixed model that can be used to analyze the mean level and intra-individual variability (IIV) using longitudinal sleep data. Methods: We applied the location-scale mixed model to examine changes from baseline in sleep efficiency on data collected from 54 participants with chronic insomnia who were randomized to an 8-week Mindfulness-Based Stress Reduction (MBSR; n = 19), an 8-week Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1–7), early treatment (days 8–21), late treatment (days 22–63), and post week (days 64–70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. Results: For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency were significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. Conclusions: The location-scale mixed model provides a two-dimensional analysis on the mean and IIV using longitudinal sleep diary data with the potential to reveal insights into treatment mechanisms and outcomes. Citation: Ong JC, Hedeker D, Wyatt JK, Manber R. Examining the variability of sleep patterns during treatment for chronic insomnia: application of a location-scale mixed model. J Clin Sleep Med 2016;12(6):797–804. PMID:26951414
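
    The two quantities the location-scale mixed model targets, the mean level and the intra-individual variability (IIV), can be illustrated per participant from diary data. The sleep-efficiency values below are hypothetical, and this summary is descriptive, not the mixed-model fit itself.

```python
def location_scale_summary(series):
    """Per-subject mean level (location) and intra-individual variability
    (scale), here the sample standard deviation across diary nights."""
    n = len(series)
    mean = sum(series) / n
    iiv = (sum((x - mean) ** 2 for x in series) / (n - 1)) ** 0.5
    return mean, iiv

# Nightly sleep-efficiency (%) entries for one hypothetical participant.
baseline = [72, 80, 65, 90, 70, 85, 68]
post = [84, 86, 83, 87, 85, 84, 86]
for label, week in (("baseline", baseline), ("post", post)):
    m, s = location_scale_summary(week)
    print(label, round(m, 1), round(s, 1))
# → baseline 75.7 9.4  /  post 85.0 1.4 : mean level rose and IIV shrank.
```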

  19. Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas

    NASA Astrophysics Data System (ADS)

    Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.

    2003-04-01

    Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain, which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy-of-mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of the phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.
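
    For reference, the generic two-parameter Van Laar form for a binary mixture is shown below; the asymmetry between A12 and A21 plays the role of the end-member (pseudo)volume ratio mentioned above. This is the textbook expression, not the NKASH calibration.

```python
def van_laar_ln_gamma(x1, A12, A21):
    """Two-parameter Van Laar activity model for a binary mixture:
    ln(gamma1) = A12 * (A21*x2 / (A12*x1 + A21*x2))**2, and symmetrically
    for component 2; A12 != A21 gives asymmetric non-ideal mixing."""
    x2 = 1.0 - x1
    denom = A12 * x1 + A21 * x2
    ln_g1 = A12 * (A21 * x2 / denom) ** 2
    ln_g2 = A21 * (A12 * x1 / denom) ** 2
    return ln_g1, ln_g2

# Symmetric case reduces to the regular-solution result ln(gamma1) = A*x2**2.
print(van_laar_ln_gamma(0.5, 1.0, 1.0))  # → (0.25, 0.25)
# At infinite dilution of component 1, ln(gamma1) -> A12.
print(round(van_laar_ln_gamma(0.0, 1.5, 0.7)[0], 3))  # → 1.5
```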

  20. Mixed formulation for seismic analysis of composite steel-concrete frame structures

    NASA Astrophysics Data System (ADS)

    Ayoub, Ashraf Salah Eldin

    This study presents a new finite element model for the nonlinear analysis of structures made up of steel and concrete under monotonic and cyclic loads. The new formulation is based on a two-field mixed formulation in which both forces and deformations are simultaneously approximated within the element through independent interpolation functions. The main advantages of the model are its accuracy in global and local response with very few elements, while maintaining rapid numerical convergence and robustness even under severe cyclic loading. Overall, four elements were developed based on the new formulation: an element that describes the behavior of anchored reinforcing bars, an element that describes the behavior of composite steel-concrete beams with deformable shear connectors, an element that describes the behavior of reinforced concrete beam-columns with bond-slip, and an element that describes the behavior of pretensioned or posttensioned, bonded or unbonded prestressed concrete structures. The models use fiber discretization of beam sections to describe nonlinear material response. The transfer of forces between steel and concrete is described with bond elements, modeled as distributed spring elements. The nonlinear behavior of the composite element derives entirely from the constitutive laws of the steel, concrete and bond elements. Two additional elements are used for the prestressed concrete models: a friction element that models the effect of friction between the tendon and the duct during the posttensioning operation, and an anchorage element that describes the behavior of the prestressing tendon anchorage in posttensioned structures. Two algorithms for the numerical implementation of the proposed model are presented: an algorithm that enforces stress continuity at element boundaries, and an algorithm in which stress continuity is relaxed locally inside the element. The stability of both algorithms is discussed. Comparisons with standard displacement-based models and earlier flexibility-based models are presented through numerical studies, which demonstrate the superiority of the mixed model over both displacement and flexibility models. Correlation studies of the proposed model with experimental results of structural specimens show the accuracy of the model and its numerical robustness even under severe cyclic loading conditions.

  1. Models for Temperature and Composition in Uranus from Spitzer, Herschel and Ground-Based Infrared through Millimeter Observations

    NASA Astrophysics Data System (ADS)

    Orton, G. S.; Fletcher, L. N.; Feuchtgruber, H.; Lellouch, E.; Moreno, R.; Encrenaz, T.; Hartogh, P.; Jarchow, C.; Swinyard, B.; Moses, J. I.; Burgdorf, M. J.; Hammel, H. B.; Line, M. R.; Sandell, G.; Dowell, C. D.

    2013-12-01

    Photometric and spectroscopic observations of Uranus were combined to create self-consistent models of its global-mean temperature profile, bulk composition, and vertical distribution of gases. These were derived from a suite of spacecraft and ground-based observations that includes the Spitzer IRS, and the Herschel HIFI, PACS and SPIRE instruments, together with ground-based observations from UKIRT and CSO. Observations of the collision-induced absorption of H2 have constrained the temperature structure in the troposphere; this was possible up to atmospheric pressures of ~2 bars. Temperatures in the stratosphere were constrained by H2 quadrupole line emission. We coupled the vertical distribution of CH4 in the stratosphere of Uranus with models for the vertical mixing in a way that is consistent with the mixing ratios of hydrocarbons whose abundances are influenced primarily by mixing rather than chemistry. Spitzer and Herschel data constrain the abundances of CH3, CH4, C2H2, C2H6, C3H4, C4H2, H2O and CO2. At millimeter wavelengths, there is evidence that an additional opacity source is required besides the H2 collision-induced absorption and the NH3 absorption needed to match the microwave spectrum; this can reasonably (but not uniquely) be attributed to H2S. These models will be made more mature by consideration of spatial variability from Voyager IRIS and more recent spatially resolved imaging and mapping from ground-based observatories. The model is of 'programmatic' interest because it serves as a calibration source for Herschel instruments, and it provides a starting point for planning future spacecraft investigations of the atmosphere of Uranus.

  2. Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Xiaorun; Zhao, Liaoying

    2016-01-01

    Hyperspectral unmixing aims at extracting pure material spectra, accompanied by their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to perform better than linear mixing models (LMMs) in complicated scenarios. In the past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs consider only the sum-to-one or positivity constraints, while the sparsity that is widespread in real material mixtures is a factor that cannot be ignored: a pixel is usually composed of only a few of the spectral signatures in the pure-pixel set. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit the sparsity feature in the nonlinear model and use it to enhance unmixing performance. This sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was evaluated on synthetic and real hyperspectral data and outperformed competing algorithms in the experiments.
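
    The NMF-with-sparsity machinery can be sketched with multiplicative updates and an L1 penalty on the abundances. This is a generic sparse-NMF sketch on an invented synthetic mixture, not the paper's Fan-model solver.

```python
import numpy as np

def sparse_nmf(Y, n_end, n_iter=500, lam=0.01, seed=0):
    """Multiplicative-update NMF with an L1 (sparsity) penalty on the
    abundance matrix A: minimize ||Y - E A||_F^2 + lam * sum(A).
    Positive initialization keeps E and A non-negative throughout."""
    rng = np.random.default_rng(seed)
    bands, pixels = Y.shape
    E = rng.random((bands, n_end)) + 1e-3   # endmember spectra (columns)
    A = rng.random((n_end, pixels)) + 1e-3  # per-pixel abundances
    for _ in range(n_iter):
        E *= (Y @ A.T) / (E @ A @ A.T + 1e-9)
        A *= (E.T @ Y) / (E.T @ E @ A + lam + 1e-9)
    return E, A

# Synthetic linear mixture of two "spectra" to sanity-check the updates.
E_true = np.array([[1.0, 0.1], [0.2, 1.0], [0.6, 0.5]])
A_true = np.array([[0.8, 0.3, 0.5], [0.2, 0.7, 0.5]])
Y = E_true @ A_true
E, A = sparse_nmf(Y, 2)
print(float(np.linalg.norm(Y - E @ A)))  # reconstruction error, near zero
```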

  3. Potential-based and non-potential-based cohesive zone formulations under mixed-mode separation and over-closure-Part II: Finite element applications

    NASA Astrophysics Data System (ADS)

    Máirtín, Éamonn Ó.; Parry, Guillaume; Beltz, Glenn E.; McGarry, J. Patrick

    2014-02-01

    This paper, the second of two parts, presents three novel finite element case studies to demonstrate the importance of normal-tangential coupling in cohesive zone models (CZMs) for the prediction of mixed-mode interface debonding. Specifically, four new CZMs proposed in Part I of this study are implemented, namely the potential-based MP model and the non-potential-based NP1, NP2 and SMC models. For comparison, simulations are also performed for the well established potential-based Xu-Needleman (XN) model and the non-potential-based model of van den Bosch, Schreurs and Geers (BSG model). Case study 1: Debonding and rebonding of a biological cell from a cyclically deforming silicone substrate is simulated when the mode II work of separation is higher than the mode I work of separation at the cell-substrate interface. An active formulation for the contractility and remodelling of the cell cytoskeleton is implemented. It is demonstrated that when the XN potential function is used at the cell-substrate interface repulsive normal tractions are computed, preventing rebonding of significant regions of the cell to the substrate. In contrast, the proposed MP potential function at the cell-substrate interface results in negligible repulsive normal tractions, allowing for the prediction of experimentally observed patterns of cell cytoskeletal remodelling. Case study 2: Buckling of a coating from the compressive surface of a stent is simulated. It is demonstrated that during expansion of the stent the coating is initially compressed into the stent surface, while simultaneously undergoing tangential (shear) tractions at the coating-stent interface. It is demonstrated that when either the proposed NP1 or NP2 model is implemented at the stent-coating interface mixed-mode over-closure is correctly penalised. Further expansion of the stent results in the prediction of significant buckling of the coating from the stent surface, as observed experimentally. 
In contrast, the BSG model does not correctly penalise mixed-mode over-closure at the stent-coating interface, significantly altering the stress state in the coating and preventing the prediction of buckling. Case study 3: Application of a displacement to the base of a bi-layered composite arch results in a symmetric sinusoidal distribution of normal and tangential traction at the arch interface. The traction defined mode mixity at the interface ranges from pure mode II at the base of the arch to pure mode I at the top of the arch. It is demonstrated that predicted debonding patterns are highly sensitive to normal-tangential coupling terms in a CZM. The NP2, XN, and BSG models exhibit a strong bias towards mode I separation at the top of the arch, while the NP1 model exhibits a bias towards mode II debonding at the base of the arch. Only the SMC model provides mode-independent behaviour in the early stages of debonding. This case study provides a practical example of the importance of the behaviour of CZMs under conditions of traction controlled mode mixity, following from the theoretical analysis presented in Part I of this study.
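As a concrete illustration of the normal-tangential coupling discussed above, the following sketch evaluates the Xu-Needleman (XN) tractions in the common q = 1 simplification (equal mode I and mode II works of separation); it is not one of the paper's four new CZMs, and the parameter names are ours. Note the shared exponential factor that couples the two modes, and the compressive (negative) normal traction under over-closure (dn < 0):

```python
import numpy as np

def xn_tractions(dn, dt, phi_n=1.0, delta_n=1.0, delta_t=1.0):
    """Mixed-mode tractions of the Xu-Needleman exponential cohesive
    law for q = 1 (equal works of separation).

    dn, dt           : normal and tangential separations
    phi_n            : mode I work of separation
    delta_n, delta_t : normal and tangential characteristic lengths
    Returns (T_n, T_t).
    """
    en = dn / delta_n
    et = dt / delta_t
    common = np.exp(-en) * np.exp(-et * et)   # couples the two modes
    t_n = (phi_n / delta_n) * en * common
    t_t = 2.0 * (phi_n / delta_t) * et * (1.0 + en) * common
    return t_n, t_t
```

Evaluating T_n at increasing tangential separation shows how shear weakens the normal traction, the coupling behavior the case studies probe.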

  4. Prediction of heat release effects on a mixing layer

    NASA Technical Reports Server (NTRS)

    Farshchi, M.

    1986-01-01

    A fully second-order closure model for turbulent reacting flows is proposed, based on Favre statistics. For diffusion flames, the local thermodynamic state is related to a single conserved scalar. The properties of pressure fluctuations are analyzed for turbulent flows with fluctuating density. Closure models for pressure correlations are discussed, and modeled transport equations for Reynolds stresses, turbulent kinetic energy dissipation, density-velocity correlations, scalar moments and dissipation are presented and solved, together with the mean equations for momentum and mixture fraction. Solutions of these equations are compared with experimental data for high-heat-release free mixing layers of fluorine and hydrogen in a nitrogen diluent.

  5. Six-degree-of-freedom aircraft simulation with mixed-data structure using the applied dynamics simulation language, ADSIM

    NASA Technical Reports Server (NTRS)

    Savaglio, Clare

    1989-01-01

    A realistic simulation of an aircraft in flight using the AD 100 digital computer is presented. The implementation of three model features is specifically discussed: (1) a large aerodynamic database (130,000 function values) which is evaluated using function interpolation to obtain the aerodynamic coefficients; (2) an option to trim the aircraft in longitudinal flight; and (3) a flight control system which includes a digital controller. Since the model includes a digital controller, the simulation implements not only continuous-time equations but also discrete-time equations; thus the model has a mixed-data structure.
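The mixed-data structure can be illustrated with a minimal sketch: a continuous plant integrated with a small time step, and a digital controller whose output is updated only at the sampling instants and held (zero-order hold) in between. The plant, gain, and sample period below are invented for illustration and are not taken from the ADSIM model:

```python
def simulate_mixed(t_end=5.0, dt=0.001, ts=0.05, K=2.0, r=1.0):
    """Mixed-data simulation: continuous plant + discrete controller.

    Plant (continuous):     dx/dt = -x + u, advanced by Euler steps dt
    Controller (discrete):  u = K * (r - x), recomputed every ts seconds
                            and held constant between samples (ZOH)
    Returns the plant state at t_end.
    """
    x, u, t = 0.0, 0.0, 0.0
    next_sample = 0.0
    while t < t_end:
        if t >= next_sample:        # discrete-time controller update
            u = K * (r - x)
            next_sample += ts
        x += dt * (-x + u)          # continuous-time plant integration
        t += dt
    return x
```

For this plant the closed loop settles near x = K·r/(1 + K); the two time bases (dt for integration, ts for the controller) are exactly the mixed structure the abstract describes.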

  6. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
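The simulation-based power idea can be illustrated without NONMEM. The sketch below substitutes a simple linear model fitted by ordinary least squares for the population pharmacokinetic model, but keeps the structure of the approach: simulate data with a binary covariate effect, fit the model with and without the covariate, apply the likelihood-ratio test, and count rejections. Effect size and design values are invented for illustration:

```python
import numpy as np

def mc_power(n_subjects, effect=0.5, sd=1.0, n_sim=500, seed=0):
    """Simulation-based power to detect a binary covariate effect with a
    likelihood-ratio test (chi-square, 1 df, alpha = 0.05).

    A linear model fitted by ordinary least squares stands in for the
    population model; the LRT logic mirrors the Monte Carlo approach."""
    rng = np.random.default_rng(seed)
    crit = 3.841  # 95th percentile of chi-square with 1 df
    hits = 0
    for _ in range(n_sim):
        cov = rng.integers(0, 2, n_subjects)           # binary covariate
        y = 1.0 + effect * cov + rng.normal(0.0, sd, n_subjects)
        X = np.column_stack([np.ones(n_subjects), cov])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss_full = float(res[0]) if len(res) else float(np.sum((y - X @ beta) ** 2))
        rss_null = float(np.sum((y - y.mean()) ** 2))  # intercept-only fit
        lrt = n_subjects * np.log(rss_null / rss_full) # Gaussian LR statistic
        hits += lrt > crit
    return hits / n_sim
```

Scanning n_subjects until the returned power exceeds 0.8 reproduces the sample-size selection step described above.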

  7. Line mixing calculation in the ν6 Q-branches of N2-broadened CH3Br at low temperatures

    NASA Astrophysics Data System (ADS)

    Gomez, L.; Tran, H.; Jacquemart, D.

    2009-07-01

    In an earlier study [H. Tran, D. Jacquemart, J.Y. Mandin, N. Lacome, JQSRT 109 (2008) 119-131], line mixing effects in the ν6 band of methyl bromide were observed and modeled at room temperature. In the present work, line mixing effects have been considered at low temperatures using state-to-state collisional rates modeled by a fitting law based on the energy gap and a few fitting parameters. To validate the model, several spectra of methyl bromide perturbed by nitrogen have been recorded at various temperatures (205-299 K) and pressures (230-825 hPa). Comparisons between measured spectra and calculations, using both direct calculation from the relaxation operator and the Rosenkranz profile, show an improvement over the usual Lorentz profile. Note that the temperature dependence of the spectroscopic parameters has been taken into account using results of previous studies.
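The first-order (Rosenkranz) line-mixing profile mentioned above has a simple closed form: each line acquires an asymmetric term proportional to a mixing parameter y, and reduces to a Lorentz profile when y = 0. A minimal sketch, with line parameters invented rather than the CH3Br values:

```python
import numpy as np

def rosenkranz_profile(nu, lines):
    """First-order line-mixing (Rosenkranz) absorption profile.

    nu    : wavenumber grid
    lines : iterable of (nu0, S, gamma, y) giving position, intensity,
            Lorentz half-width and first-order mixing parameter y.
    Each line contributes (S/pi) * (gamma + y*(nu-nu0)) /
    ((nu-nu0)^2 + gamma^2); with y = 0 this is a pure Lorentz line.
    """
    nu = np.asarray(nu, dtype=float)
    alpha = np.zeros_like(nu)
    for nu0, S, gamma, y in lines:
        d = nu - nu0
        alpha += (S / np.pi) * (gamma + y * d) / (d * d + gamma * gamma)
    return alpha
```

The y term skews each line toward its mixing partners, which is what narrows a dense Q-branch relative to a sum of isolated Lorentz lines.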

  8. Genetic mixed linear models for twin survival data.

    PubMed

    Ha, Il Do; Lee, Youngjo; Pawitan, Yudi

    2007-07-01

    Twin studies are useful for separating the relative importance of the genetic (heritable) component from that of the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated with survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.

  9. Physisorption and desorption of H2, HD and D2 on amorphous solid water ice. Effect of isotopologue mixing on the statistical population of adsorption sites.

    PubMed

    Amiaud, Lionel; Fillion, Jean-Hugues; Dulieu, François; Momeni, Anouchah; Lemaire, Jean-Louis

    2015-11-28

    We study the adsorption and desorption of three isotopologues of molecular hydrogen mixed on 10 ML of porous amorphous solid water (ASW) ice deposited at 10 K. Temperature-programmed desorption (TPD) of H2, D2 and HD adsorbed at 10 K has been performed with different mixings. Various coverages of H2, HD and D2 have been explored, and a model taking into account all species adsorbed on the surface is presented in detail. The model we propose allows us to extract the parameters required to fully reproduce the desorption of H2, HD and D2 for various coverages and mixtures in the sub-monolayer regime. The model is based on a statistical description of the process in a grand-canonical ensemble where adsorbed molecules are described by a Fermi-Dirac distribution.
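The grand-canonical, Fermi-Dirac picture of site filling can be sketched as follows: given a distribution of site binding energies, solve for the chemical potential that reproduces the total coverage, then read off the per-site occupancies. This is a schematic of the statistical idea only; the energies, units, and bisection bounds are illustrative, not the paper's fitted parameters:

```python
import numpy as np

def site_populations(energies, weights, coverage, temp):
    """Fermi-Dirac filling of a distribution of adsorption sites.

    energies : site binding energies in K (larger = more strongly bound)
    weights  : relative abundance of each site energy (sums to 1)
    coverage : total fractional coverage to distribute (0..1)
    temp     : temperature in K
    Solves by bisection for the chemical potential mu such that the
    weighted occupancy equals `coverage`; returns per-site occupancies.
    """
    energies = np.asarray(energies, dtype=float)
    weights = np.asarray(weights, dtype=float)

    def occ(mu):
        # energy level of a bound molecule is -E; clip avoids overflow
        x = np.clip((-energies - mu) / temp, -500.0, 500.0)
        return 1.0 / (np.exp(x) + 1.0)

    lo, hi = -5000.0, 5000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if float(np.sum(weights * occ(mid))) < coverage:
            lo = mid    # too empty: raise the chemical potential
        else:
            hi = mid
    return occ(0.5 * (lo + hi))
```

At low temperature the deepest sites fill first, which is why mixing isotopologues (which compete for the same deep sites) reshuffles the populations.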

  10. Heterosis and outbreeding depression: A multi-locus model and an application to salmon production

    USGS Publications Warehouse

    Emlen, John M.

    1991-01-01

    Both artificial propagation and efforts to preserve or augment natural populations sometimes involve, wittingly or unwittingly, the mixing of different gene pools. The advantages of such mixing vis-à-vis the alleviation of inbreeding depression are well known. Acknowledged, but less well understood, are the complications posed by outbreeding depression. This paper derives a simple model of outbreeding depression and demonstrates that it is reasonably possible to predict the generation-to-generation fitness course of hybrids derived from parents from different origins. Genetic difference, or distance between parental types, is defined by the drop in fitness experienced by one type reared at the site to which the other is locally adapted. For situations where decisions involving stock mixing must be made in the absence of complete information, a sensitivity analysis-based conflict resolution method (the Good-Bad-Ugly model) is described.

  11. DISCRETE VOLUME-ELEMENT METHOD FOR NETWORK WATER- QUALITY MODELS

    EPA Science Inventory

    An explicit dynamic water-quality modeling algorithm is developed for tracking dissolved substances in water-distribution networks. The algorithm is based on a mass-balance relation within pipes that considers both advective transport and reaction kinetics. Complete mixing of m...
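A single-pipe version of the discrete volume-element idea is easy to sketch: the pipe is divided into equal volume elements, advection shifts the contents of each element downstream once per time step, and first-order reaction kinetics act within every element. Parameter values are illustrative, not from the EPA algorithm:

```python
import numpy as np

def pipe_transport(c_in, n_elems=10, n_steps=50, k=0.1, dt=1.0):
    """Discrete volume-element transport in one pipe.

    c_in : constant inlet concentration
    k    : first-order decay rate (dc/dt = -k c)
    Advection moves each element one position downstream per step;
    decay is applied exactly per step. Returns the concentration
    profile along the pipe after n_steps.
    """
    c = np.zeros(n_elems)
    decay = np.exp(-k * dt)      # exact single-step first-order decay
    for _ in range(n_steps):
        c = np.roll(c, 1)        # advect one element downstream
        c[0] = c_in              # fresh water enters at the inlet
        c *= decay               # reaction kinetics in every element
    return c
```

At steady state element i has been decayed i + 1 times, giving the familiar exponential profile along the pipe.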

  12. A PC-based inverse design method for radial and mixed flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Skoe, Ivar Helge

    1991-01-01

    An inverse design method suitable for radial and mixed flow turbomachinery is presented. The codes are based on the streamline curvature concept and are therefore usable on current personal computers of the 286/287 class. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production considerations and that structural considerations are taken into account. Through the use of Bezier curves in the geometric modeling, the same subroutine is used to prepare input for both the aerodynamic and structural files, since it is important to ensure that the geometric data are identical for both structural analysis and production. To illustrate the method, a mixed flow turbine design is shown.

  13. Numerical Investigation of a Cavitating Mixing Layer of Liquefied Natural Gas (LNG) Behind a Flat Plate Splitter

    NASA Astrophysics Data System (ADS)

    Rahbarimanesh, Saeed; Brinkerhoff, Joshua

    2017-11-01

    The mutual interaction of shear layer instabilities and phase change in a two-dimensional cryogenic cavitating mixing layer is investigated using a numerical model. The developed model employs the homogeneous equilibrium mixture (HEM) approach in a density-based framework to compute the temperature-dependent cavitation field for liquefied natural gas (LNG). Thermal and baroclinic effects are captured via iterative coupled solution of the governing equations with dynamic thermophysical models that accurately capture the properties of LNG. The mixing layer is simulated for vorticity-thickness Reynolds numbers of 44 to 215 and cavitation numbers of 0.1 to 1.1. Attached cavity structures develop on the splitter plate followed by roll-up of the separated shear layer via the well-known Kelvin-Helmholtz mode, leading to streamwise accumulation of vorticity and eventual shedding of discrete vortices. Cavitation occurs as vapor cavities nucleate and grow from the low-pressure cores in the rolled-up vortices. Thermal effects and baroclinic vorticity production are found to have significant impacts on the mixing layer instability and cavitation processes.

  14. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series.

    PubMed

    Martinez, Josue G; Bohn, Kirsten M; Carroll, Raymond J; Morris, Jeffrey S

    2013-06-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random-effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustic signals, relating aspects of the signals to various predictors while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and on spectrograms defined on a relative time scale for signals of variable length, in settings where defining correspondence across signals by relative position is sensible.

  15. A pressure relaxation closure model for one-dimensional, two-material Lagrangian hydrodynamics based on the Riemann problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R; Shashkov, Mikhail J

    2009-01-01

    Despite decades of development, Lagrangian hydrodynamics of strength-free materials presents numerous open issues, even in one dimension. We focus on the problem of closing a system of equations for a two-material cell under the assumption of a single velocity model. There are several existing models and approaches, each possessing different levels of fidelity to the underlying physics and each exhibiting unique features in the computed solutions. We consider the case in which the change in heat in the constituent materials in the mixed cell is assumed equal. An instantaneous pressure equilibration model for a mixed cell can be cast as four equations in four unknowns, comprised of the updated values of the specific internal energy and the specific volume for each of the two materials in the mixed cell. The unique contribution of our approach is a physics-inspired, geometry-based model in which the updated values of the sub-cell, relaxing-toward-equilibrium constituent pressures are related to a local Riemann problem through an optimization principle. This approach couples the modeling problem of assigning sub-cell pressures to the physics associated with the local, dynamic evolution. We package our approach in the framework of a standard predictor-corrector time integration scheme. We evaluate our model using idealized, two-material problems using either ideal-gas or stiffened-gas equations of state and compare these results to those computed with the method of Tipton and with corresponding pure-material calculations.
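The instantaneous pressure-equilibration closure can be illustrated in its simplest setting: two ideal gases sharing a cell of fixed total volume, with sub-cell volumes chosen so the pressures match. This sketch omits the paper's key contribution (the Riemann-problem-based relaxation and the equal-heat-change constraint) and shows only the textbook equilibrium limit that the model relaxes toward:

```python
def equilibrate_pressure(E1, E2, gamma1, gamma2, V):
    """Instantaneous pressure equilibration for a two-material mixed
    cell with ideal-gas EOS: p_i = (gamma_i - 1) * E_i / V_i, where
    E_i is the total internal energy of material i.

    Solves p1 = p2 subject to V1 + V2 = V (closed form for ideal
    gases). Returns (V1, V2, p).
    """
    a1 = (gamma1 - 1.0) * E1
    a2 = (gamma2 - 1.0) * E2
    V1 = V * a1 / (a1 + a2)   # volume splits in proportion to a_i
    V2 = V - V1
    p = a1 / V1               # equals a2 / V2 by construction
    return V1, V2, p
```

Real mixed-cell closures replace this one-shot solve with a relaxation toward equilibrium so that sub-cell pressures track the local dynamics, which is exactly the gap the paper's Riemann-based model addresses.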

  16. Development of a numerical model for calculating exposure to toxic and nontoxic stressors in the water column and sediment from drilling discharges.

    PubMed

    Rye, Henrik; Reed, Mark; Frost, Tone Karin; Smit, Mathijs G D; Durgut, Ismail; Johansen, Øistein; Ditlevsen, May Kristin

    2008-04-01

    Drilling discharges are complex mixtures of chemical components and particles which might lead to toxic and nontoxic stress in the environment. In order to be able to evaluate the potential environmental consequences of such discharges in the water column and in sediments, a numerical model was developed. The model includes water column stratification, ocean currents and turbulence, natural burial, bioturbation, and biodegradation of organic matter in the sediment. Accounting for these processes, the fate of the discharge is modeled for the water column, including near-field mixing and plume motion, far-field mixing, and transport. The fate of the discharge is also modeled for the sediment, including sea floor deposition, and mixing due to bioturbation. Formulas are provided for the calculation of suspended matter and chemical concentrations in the water column, and burial, change in grain size, oxygen depletion, and chemical concentrations in the sediment. The model is fully 3-dimensional and time dependent. It uses a Lagrangian approach for the water column based on moving particles that represent the properties of the release and an Eulerian approach for the sediment based on calculation of the properties of matter in a grid. The model will be used to calculate the environmental risk, both in the water column and in sediments, from drilling discharges. It can serve as a tool to define risk mitigating measures, and as such it provides guidance towards the "zero harm" goal.

  17. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data.

    PubMed

    Tom, Brian DM; Su, Li; Farewell, Vernon T

    2016-10-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. © The Author(s) 2013.
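For intuition, the overall marginal mean of a two-part model factorizes as P(Y > 0) · E[Y | Y > 0]. The sketch below uses a logistic part and a log-normal positive part without random effects; the paper's mixed-model setting adds variance components that change the marginal expressions, and this parameterization is ours, not the authors':

```python
import math

def marginal_mean(x, beta, gamma, sigma):
    """Overall marginal mean of a simple two-part model for
    semi-continuous data.

    Part 1 (logistic):   P(Y > 0) = expit(x'beta)
    Part 2 (log-normal): log Y | Y > 0 ~ N(x'gamma, sigma^2), so
                         E[Y | Y > 0] = exp(x'gamma + sigma^2 / 2)
    Returns E[Y] = P(Y > 0) * E[Y | Y > 0].
    """
    lin_b = sum(xi * bi for xi, bi in zip(x, beta))
    lin_g = sum(xi * gi for xi, gi in zip(x, gamma))
    p_pos = 1.0 / (1.0 + math.exp(-lin_b))            # P(Y > 0)
    mean_pos = math.exp(lin_g + 0.5 * sigma * sigma)  # E[Y | Y > 0]
    return p_pos * mean_pos
```

The sigma^2/2 back-transformation term is easy to drop by accident, and omissions of exactly this kind of term are what make marginal formulations for the continuous part error-prone.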

  18. Structure Elucidation of Mixed-Linker Zeolitic Imidazolate Frameworks by Solid-State (1)H CRAMPS NMR Spectroscopy and Computational Modeling.

    PubMed

    Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar

    2016-06-15

    Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of (1)H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF 8-90 series. All of the compositions studied have structures that have linkers mixed at a unit-cell-level as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to equimolar composition of the two linkers show a greater tendency for linker clustering than what would be predicted based on random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition. 
The structural information thus obtained can be further used for predicting, screening, or understanding the tunable adsorption and diffusion behavior of mixed-linker ZIFs, for which the knowledge of linker distributions in the framework is expected to be important.

  19. Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G.

    2000-01-01

    The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy: the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.

  20. Low-illumination image denoising method for wide-area search of nighttime sea surface

    NASA Astrophysics Data System (ADS)

    Song, Ming-zhu; Qu, Hong-song; Zhang, Gui-xiang; Tao, Shu-ping; Jin, Guang

    2018-05-01

    In order to suppress the complex mixed noise in low-illumination images for wide-area search of the nighttime sea surface, a model based on total variation (TV) and split Bregman is proposed in this paper. A fidelity term based on the L1 norm and a fidelity term based on the L2 norm are designed considering the differences between noise types, and a regularization term mixing first-order and second-order TV is designed to balance the influence of detail information such as texture and edges in the sea-surface image. The final result is obtained by combining, through the wavelet transform, the high-frequency component solved from the L1 norm and the low-frequency component solved from the L2 norm. The experimental results show that the proposed model performs well on artificially degraded and real low-illumination images, and the image quality assessment indices for the denoised images are superior to those of the contrastive models.
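The TV building block can be sketched with a smoothed total-variation energy minimized by gradient descent. This shows only the basic ROF-style TV idea; the paper's model combines L1 and L2 fidelity terms with mixed first/second-order TV and solves via split Bregman, none of which is reproduced here:

```python
import numpy as np

def tv_denoise(img, lam=0.1, n_iter=200, tau=0.125, eps=1e-6):
    """Smoothed total-variation denoising by gradient descent on
    E(u) = 0.5 * ||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps),
    with periodic boundaries.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences for the gradient of u
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux * ux + uy * uy + eps)
        px, py = ux / mag, uy / mag
        # backward-difference divergence (adjoint of the gradient)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # gradient step: fidelity pulls toward img, TV smooths
        u -= tau * ((u - img) - lam * div)
    return u
```

TV shrinks noise while preserving edges better than quadratic smoothing, which is why the paper builds its mixed regularizer on it.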

  1. A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O. S.; Cerjan, C. J.; Marinak, M. M.

    A detailed simulation-based model of the June 2011 National Ignition Campaign cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. Although by design the model is able to reproduce the 1D in-flight implosion parameters and low-mode asymmetries, it is not able to accurately predict the measured and inferred stagnation properties and levels of mix. In particular, the measured yields were 15%-40% of the calculated yields, and the inferred stagnation pressure is about 3 times lower than simulated.

  2. Using existing case-mix methods to fund trauma cases.

    PubMed

    Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni

    2010-01-01

    Policymakers frequently face the need to increase funding for isolated and often heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in scenarios where a case-mix approach is preferable to a flat-rate payment system. Using as an example the trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that the Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.

  3. Agents of Change: Mixed-Race Households and the Dynamics of Neighborhood Segregation in the United States

    PubMed Central

    Ellis, Mark; Holloway, Steven R.; Wright, Richard; Fowler, Christopher S.

    2014-01-01

    This article explores the effects of mixed-race household formation on trends in neighborhood-scale racial segregation. Census data show that these effects are nontrivial in relation to the magnitude of decadal changes in residential segregation. An agent-based model illustrates the potential long-run impacts of rising numbers of mixed-race households on measures of neighborhood-scale segregation. It reveals that high rates of mixed-race household formation will reduce residential segregation considerably. This occurs even when preferences for own-group neighbors are high enough to maintain racial separation in residential space in a Schelling-type model. We uncover a disturbing trend, however; levels of neighborhood-scale segregation of single-race households can remain persistently high even while a growing number of mixed-race households drives down the overall rate of residential segregation. Thus, the article’s main conclusion is that parsing neighborhood segregation levels by household type—single versus mixed race—is essential to interpret correctly trends in the spatial separation of racial groups, especially when the fraction of households that are mixed race is dynamic. More broadly, the article illustrates the importance of household-scale processes for urban outcomes and joins debates in geography about interscalar relationships. PMID:25082984
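A Schelling-type simulation with a third, mixed household type can be sketched in a few lines. The rules below (mixed households count as own-group for everyone, unhappy agents jump to a random empty cell, and the segregation index is the mean own-group neighbor fraction among single-type households) are our simplifications for illustration, not the authors' model:

```python
import numpy as np

def run_schelling(frac_mixed=0.0, size=20, occupancy=0.8,
                  threshold=0.5, n_steps=20000, seed=0):
    """Schelling-type model with single-type (1, 2) and mixed (3)
    households on a torus. A mixed household satisfies everyone.
    Returns the mean own-group neighbor fraction among single-type
    households (a simple segregation index in [0, 1])."""
    rng = np.random.default_rng(seed)
    n_agents = int(size * size * occupancy)
    n_mixed = int(n_agents * frac_mixed)
    types = [3] * n_mixed + [1, 2] * ((n_agents - n_mixed) // 2)
    grid = np.zeros((size, size), dtype=int)
    cells = rng.permutation(size * size)[: len(types)]
    grid[np.unravel_index(cells, grid.shape)] = types

    def like_frac(r, c):
        t = grid[r, c]
        same = total = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                v = grid[(r + dr) % size, (c + dc) % size]
                if v:
                    total += 1
                    same += (v == t) or v == 3 or t == 3
        return same / total if total else 1.0

    for _ in range(n_steps):
        r, c = rng.integers(0, size, 2)
        if grid[r, c] and like_frac(r, c) < threshold:
            # unhappy agent jumps to a random empty cell
            er, ec = np.argwhere(grid == 0)[rng.integers((grid == 0).sum())]
            grid[er, ec], grid[r, c] = grid[r, c], 0

    fracs = [like_frac(r, c) for r, c in np.argwhere((grid == 1) | (grid == 2))]
    return float(np.mean(fracs))
```

Sweeping frac_mixed and tracking the index separately for single-type and mixed households is the kind of parsing by household type the article argues is necessary.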

  4. A framework for quantification and physical modeling of cell mixing applied to oscillator synchronization in vertebrate somitogenesis.

    PubMed

    Uriu, Koichiro; Bhavna, Rajasekaran; Oates, Andrew C; Morelli, Luis G

    2017-08-15

    In development and disease, cells move as they exchange signals. One example is found in vertebrate development, during which the timing of segment formation is set by a 'segmentation clock', in which oscillating gene expression is synchronized across a population of cells by Delta-Notch signaling. Delta-Notch signaling requires local cell-cell contact, but in the zebrafish embryonic tailbud, oscillating cells move rapidly, exchanging neighbors. Previous theoretical studies proposed that this relative movement or cell mixing might alter signaling and thereby enhance synchronization. However, it remains unclear whether the mixing timescale in the tissue is in the right range for this effect, because a framework to reliably measure the mixing timescale and compare it with signaling timescale is lacking. Here, we develop such a framework using a quantitative description of cell mixing without the need for an external reference frame and constructing a physical model of cell movement based on the data. Numerical simulations show that mixing with experimentally observed statistics enhances synchronization of coupled phase oscillators, suggesting that mixing in the tailbud is fast enough to affect the coherence of rhythmic gene expression. Our approach will find general application in analyzing the relative movements of communicating cells during development and disease. © 2017. Published by The Company of Biologists Ltd.
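The synchronization-with-mixing hypothesis can be illustrated with coupled phase oscillators: nearest-neighbor Kuramoto coupling on a ring, with random position swaps standing in for cell movement. The parameters and the swap rule are illustrative and are not fitted to the zebrafish data:

```python
import numpy as np

def ring_sync(mixing_rate, n=30, coupling=0.4, dt=0.1, n_steps=400, seed=0):
    """Kuramoto phase oscillators on a ring with neighbor exchange.

    mixing_rate : mean number of random pair swaps per time step
                  (swapping positions emulates cell mixing)
    Returns the Kuramoto order parameter r in [0, 1] after n_steps.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    omega = rng.normal(0.0, 0.1, n)          # heterogeneous frequencies
    for _ in range(n_steps):
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        dtheta = omega + coupling * (np.sin(left - theta) + np.sin(right - theta))
        theta = theta + dt * dtheta
        for _ in range(rng.poisson(mixing_rate)):
            i, j = rng.integers(0, n, 2)     # random pair exchange
            theta[[i, j]] = theta[[j, i]]
            omega[[i, j]] = omega[[j, i]]
    return float(np.abs(np.mean(np.exp(1j * theta))))
```

Rapid swapping makes the locally coupled ring behave like a globally coupled population, which is the mechanism by which mobility is proposed to enhance synchrony.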

  5. A framework for quantification and physical modeling of cell mixing applied to oscillator synchronization in vertebrate somitogenesis

    PubMed Central

    Bhavna, Rajasekaran; Oates, Andrew C.; Morelli, Luis G.

    2017-01-01

    ABSTRACT In development and disease, cells move as they exchange signals. One example is found in vertebrate development, during which the timing of segment formation is set by a ‘segmentation clock’, in which oscillating gene expression is synchronized across a population of cells by Delta-Notch signaling. Delta-Notch signaling requires local cell-cell contact, but in the zebrafish embryonic tailbud, oscillating cells move rapidly, exchanging neighbors. Previous theoretical studies proposed that this relative movement or cell mixing might alter signaling and thereby enhance synchronization. However, it remains unclear whether the mixing timescale in the tissue is in the right range for this effect, because a framework to reliably measure the mixing timescale and compare it with signaling timescale is lacking. Here, we develop such a framework using a quantitative description of cell mixing without the need for an external reference frame and constructing a physical model of cell movement based on the data. Numerical simulations show that mixing with experimentally observed statistics enhances synchronization of coupled phase oscillators, suggesting that mixing in the tailbud is fast enough to affect the coherence of rhythmic gene expression. Our approach will find general application in analyzing the relative movements of communicating cells during development and disease. PMID:28652318

  6. Flow and active mixing have a strong impact on bacterial growth dynamics in the proximal large intestine

    NASA Astrophysics Data System (ADS)

    Cremer, Jonas; Segota, Igor; Yang, Chih-Yu; Arnoldini, Markus; Groisman, Alex; Hwa, Terence

    2016-11-01

    More than half of fecal dry weight is bacterial mass, with bacterial densities reaching up to 10^12 cells per gram. Mostly, these bacteria grow in the proximal large intestine, where lateral flow along the intestine is strong: flow can in principle lead to a washout of bacteria from the proximal large intestine. Active mixing by contractions of the intestinal wall, together with bacterial growth, might counteract such a washout and allow high bacterial densities to occur. As a step towards understanding bacterial growth in the presence of mixing and flow, we constructed an in-vitro setup where controlled wall-deformations of a channel emulate contractions. We investigate growth along the channel under a steady nutrient inflow. Depending on mixing and flow, we observe varying spatial gradients in bacterial density along the channel. Active mixing by deformations of the channel wall is shown to be crucial in maintaining a steady-state bacterial population in the presence of flow. The growth dynamics are quantitatively captured by a simple mathematical model, with the effect of mixing described by an effective diffusion term. Based on this model, we discuss bacterial growth dynamics in the human large intestine using flow and mixing behavior observed in humans.
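
    The washout argument can be made concrete with a minimal one-dimensional model, d(rho)/dt = D rho_xx - v rho_x + r rho (1 - rho/K), in which an effective diffusivity D stands in for active mixing. This is a hedged sketch with invented parameters, not the authors' model; the population persists roughly when the Fisher invasion speed 2*sqrt(r*D) exceeds the flow speed v.

```python
import numpy as np

def steady_density(D, v=0.3, r=1.0, K=1.0, L=1.0, nx=50, T=60.0):
    """Integrate d(rho)/dt = D*rho_xx - v*rho_x + r*rho*(1 - rho/K) on [0, L]
    with sterile inflow upstream and free outflow downstream; returns the
    mean bacterial density at time T.  All parameter values are illustrative."""
    dx = L / nx
    # explicit-scheme stability: diffusion and upwind-advection limits
    dt = min(0.2 * dx * dx / max(D, 1e-9), 0.2 * dx / v)
    rho = np.full(nx, 0.5)                           # initial seeding
    for _ in range(int(T / dt)):
        up = np.concatenate(([0.0], rho[:-1]))       # ghost cell: sterile inflow
        down = np.concatenate((rho[1:], [rho[-1]]))  # ghost cell: free outflow
        lap = (up - 2.0 * rho + down) / dx**2
        adv = (rho - up) / dx                        # first-order upwind
        rho = rho + dt * (D * lap - v * adv + r * rho * (1.0 - rho / K))
    return float(rho.mean())
```

    Under these assumed values, strong mixing (D = 0.09, so 2*sqrt(r*D) = 0.6 > v) sustains a population against the same flow that washes out the weakly mixed case (D = 0.0025).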

  7. Linear mixed model for heritability estimation that explicitly addresses environmental variation.

    PubMed

    Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S

    2016-07-05

    The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
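
    The environmental random effect described above rests on a covariance matrix built from pairwise spatial distances with a Gaussian radial basis function. The sketch below shows that construction with hypothetical coordinates and length scale; the actual FaST-LMM software fits such parameters to data.

```python
import numpy as np

def rbf_covariance(coords, length_scale):
    """Environmental covariance from spatial locations:
    K[i, j] = exp(-||x_i - x_j||^2 / (2 * length_scale^2)).
    Nearby individuals are modeled as sharing more of their environment."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(200, 2))  # hypothetical sampling sites
K_env = rbf_covariance(coords, length_scale=2.0)
eigvals = np.linalg.eigvalsh(K_env)  # a valid covariance matrix must be PSD
```

    In the LMM, the phenotypic covariance is then modeled as a weighted sum of the genomic kernel, this environmental kernel, and an identity (noise) term, with the weights (variance components) estimated by, e.g., REML.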

  8. Mixing Study in a Multi-dimensional Motion Mixer

    NASA Astrophysics Data System (ADS)

    Shah, R.; Manickam, S. S.; Tomei, J.; Bergman, T. L.; Chaudhuri, B.

    2009-06-01

    Mixing is an important but poorly understood aspect of petrochemical, food, ceramics, fertilizer and pharmaceutical processing and manufacturing. Deliberate mixing of granular solids is an essential operation in the production of industrial powder products usually constituted from different ingredients. Knowledge of particle flow and mixing in a blender is critical to optimize design and operation. Since performance of the product depends on blend homogeneity, the consequence of variability can be detrimental. A common approach to powder mixing is to use a tumbling blender, which is essentially a hollow vessel horizontally attached to a rotating shaft. This single-axis rotary blender is one of the most common batch mixers in industry, and also finds use in a myriad of applications as dryers, kilns, coaters, mills and granulators. In most rotary mixers the radial convection is faster than axial dispersion transport. This slow dispersive process hinders mixing performance in many blending, drying and coating applications. A double-cone mixer that rotates around two axes is designed and fabricated, making axial mixing competitive with its radial counterpart. A Discrete Element Method (DEM)-based numerical model is developed to simulate the granular flow within the mixer. Digitally recorded mixing states from experiments are used to fine-tune the numerical model. Discrete pocket samplers are also used in the experiments to quantify the characteristics of mixing. A parametric study of the effect of vessel speeds and relative rotational speed (between the two axes of rotation) on granular mixing is carried out by experiments and numerical simulation. Incorporation of dual-axis rotation enhances axial mixing by 60 to 85% in comparison to single-axis rotation.
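
    Blend homogeneity of the kind the pocket samplers quantify is commonly summarized by the relative standard deviation (RSD) of tracer concentration across samples. The sketch below assumes that convention (the study may use a different mixing index); the sample values are invented.

```python
import numpy as np

def rsd(tracer_fractions):
    """Relative standard deviation of tracer fraction across pocket samples;
    lower RSD means a more homogeneous blend."""
    s = np.asarray(tracer_fractions, dtype=float)
    return float(s.std(ddof=1) / s.mean())

# Hypothetical tracer fractions from six pocket samples:
well_mixed = rsd([0.50, 0.50, 0.50, 0.50, 0.50, 0.50])  # perfectly homogeneous
segregated = rsd([0.90, 0.85, 0.10, 0.15, 0.95, 0.05])  # strongly segregated
```

    Tracking RSD against revolutions (or time) gives a mixing curve, and comparing the curves for single- versus dual-axis rotation is one way to express the axial-mixing enhancement reported above.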

  9. Changes in Teaching Efficacy during a Professional Development School-Based Science Methods Course

    ERIC Educational Resources Information Center

    Swars, Susan L.; Dooley, Caitlin McMunn

    2010-01-01

    This mixed methods study offers a theoretically grounded description of a field-based science methods course within a Professional Development School (PDS) model (i.e., PDS-based course). The preservice teachers' (n = 21) experiences within the PDS-based course prompted significant changes in their personal teaching efficacy, with the…

  10. Simulations of arctic mixed-phase clouds in forecasts with CAM3 and AM2 for M-PACE

    DOE PAGES

    Xie, Shaocheng; Boyle, James; Klein, Stephen A.; ...

    2008-02-27

    Simulations of mixed-phase clouds in forecasts with the NCAR Atmosphere Model version 3 (CAM3) and the GFDL Atmospheric Model version 2 (AM2) for the Mixed-Phase Arctic Cloud Experiment (M-PACE) are performed using analysis data from numerical weather prediction centers. CAM3 significantly underestimates the observed boundary layer mixed-phase cloud fraction and cannot realistically simulate the variations of liquid water fraction with temperature and cloud height due to its oversimplified cloud microphysical scheme. In contrast, AM2 reasonably reproduces the observed boundary layer cloud fraction while its clouds contain much less cloud condensate than CAM3 and the observations. The simulation of the boundary layer mixed-phase clouds and their microphysical properties is considerably improved in CAM3 when a new physically based cloud microphysical scheme is used (CAM3LIU). The new scheme also leads to an improved simulation of the surface and top of the atmosphere longwave radiative fluxes. Sensitivity tests show that these results are not sensitive to the analysis data used for model initialization. Increasing model horizontal resolution helps capture the subgrid-scale features in Arctic frontal clouds but does not help improve the simulation of the single-layer boundary layer clouds. AM2 simulated cloud fraction and LWP are sensitive to the change in cloud ice number concentrations used in the Wegener-Bergeron-Findeisen process while CAM3LIU only shows moderate sensitivity in its cloud fields to this change. Furthermore, this paper shows that the Wegener-Bergeron-Findeisen process is important for these models to correctly simulate the observed features of mixed-phase clouds.

  11. Simulations of Arctic mixed-phase clouds in forecasts with CAM3 and AM2 for M-PACE

    NASA Astrophysics Data System (ADS)

    Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Liu, Xiaohong; Ghan, Steven

    2008-02-01

    Simulations of mixed-phase clouds in forecasts with the NCAR Atmosphere Model version 3 (CAM3) and the GFDL Atmospheric Model version 2 (AM2) for the Mixed-Phase Arctic Cloud Experiment (M-PACE) are performed using analysis data from numerical weather prediction centers. CAM3 significantly underestimates the observed boundary layer mixed-phase cloud fraction and cannot realistically simulate the variations of liquid water fraction with temperature and cloud height due to its oversimplified cloud microphysical scheme. In contrast, AM2 reasonably reproduces the observed boundary layer cloud fraction while its clouds contain much less cloud condensate than CAM3 and the observations. The simulation of the boundary layer mixed-phase clouds and their microphysical properties is considerably improved in CAM3 when a new physically based cloud microphysical scheme is used (CAM3LIU). The new scheme also leads to an improved simulation of the surface and top of the atmosphere longwave radiative fluxes. Sensitivity tests show that these results are not sensitive to the analysis data used for model initialization. Increasing model horizontal resolution helps capture the subgrid-scale features in Arctic frontal clouds but does not help improve the simulation of the single-layer boundary layer clouds. AM2 simulated cloud fraction and LWP are sensitive to the change in cloud ice number concentrations used in the Wegener-Bergeron-Findeisen process while CAM3LIU only shows moderate sensitivity in its cloud fields to this change. This paper shows that the Wegener-Bergeron-Findeisen process is important for these models to correctly simulate the observed features of mixed-phase clouds.

  12. Microfluidic Injector Models Based on Artificial Neural Networks

    DTIC Science & Technology

    2005-06-15

    medicine, and chemistry [1], [2]. They generally perform chemical analysis involving sample preparation, mixing, reaction, injection, separation analysis...algorithms have been validated against many experiments found in the literature demonstrating microfluidic mixing, joule heating, injection, and...385 [7] K. Seiler, Z. H. Fan, K. Fluri, and D. J. Harrison, "Electroosmotic pumping and valveless control of fluid flow within a manifold of

  13. A New Mixing Diagnostic and Gulf Oil Spill Movement

    DTIC Science & Technology

    2010-10-01

    could be used with new estimates of the suppression parameter to yield appreciably larger estimates of the hydrogen content in the shallow lunar...paradigm for mixing in fluid flows with simple time dependence. Its skeletal structure is based on analysis of invariant attracting and repelling...continues to the present day. Model analysis and forecasts are compared to independent (nonassimilated) infrared frontal positions and drifter trajectories

  14. Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.

    Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass, namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetate kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production.

  15. Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes

    DOE PAGES

    Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.; ...

    2018-01-30

    Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass, namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetate kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production.

  16. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
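
    TRaMM builds on the classic two-component mass balance, in which the fastflow share of total streamflow is Q_f/Q = (C_stream - C_slow)/(C_fast - C_slow). The sketch below shows only that underlying balance with made-up specific-conductance values; TRaMM's contribution, estimating a time-variable fastflow end-member from the ratio of two tracers, is not reproduced here.

```python
def fastflow_fraction(c_stream, c_slow, c_fast):
    """Two-component mass balance: fraction of streamflow from the fastflow
    end-member, given tracer concentrations in the stream and both end-members."""
    f = (c_stream - c_slow) / (c_fast - c_slow)
    return min(max(f, 0.0), 1.0)  # clip to the physically meaningful range

# Hypothetical specific conductance (uS/cm): slowflow water is solute-rich,
# fastflow (event) water is dilute.
f = fastflow_fraction(c_stream=175.0, c_slow=300.0, c_fast=50.0)
q_fast = f * 2.4  # fastflow discharge for an assumed total streamflow of 2.4 m^3/s
```

    Applied at high frequency, the same balance yields the fastflow hydrograph; the subjective step it leaves open, choosing the end-member concentrations, is exactly what TRaMM's two-tracer ratio approach aims to objectify.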

  17. Discrete bivariate population balance modelling of heteroaggregation processes.

    PubMed

    Rollié, Sascha; Briesen, Heiko; Sundmacher, Kai

    2009-08-15

    Heteroaggregation in binary particle mixtures was simulated with a discrete population balance model in terms of two internal coordinates describing the particle properties. The considered particle species are of different size and zeta-potential. Property space is reduced with a semi-heuristic approach to enable an efficient solution. Aggregation rates are based on deterministic models for Brownian motion and stability, under consideration of DLVO interaction potentials. A charge-balance kernel is presented, relating the electrostatic surface potential to the property space by a simple charge balance. Parameter sensitivity with respect to the fractal dimension, aggregate size, hydrodynamic correction, ionic strength and absolute particle concentration was assessed. Results were compared to simulations with the literature kernel based on geometric coverage effects for clusters with heterogeneous surface properties. In both cases electrostatic phenomena, which dominate the aggregation process, show identical trends: impeded cluster-cluster aggregation at low particle mixing ratio (1:1), restabilisation at high mixing ratios (100:1) and formation of complex clusters for intermediate ratios (10:1). The particle mixing ratio controls the surface coverage extent of the larger particle species. Simulation results are compared to experimental flow cytometric data and show very satisfactory agreement.

  18. Two-year concurrent observation of isoprene at 20 sites over China: comparison with MEGAN-REAM model simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Yang, W.; Zhang, R.; Zhang, Z.; Lyu, S.; Yu, J.; Wang, Y.; Wang, G.; Wang, X.

    2017-12-01

    Isoprene, the most abundant non-methane hydrocarbon emitted from plants, directly and indirectly affects atmospheric photochemistry and radiative forcing, yet narrowing its emission uncertainties is a continuous challenge. Comparison of observed and modelled isoprene on large spatiotemporal scales would help recognize factors that control isoprene variability; however, systematic field observation data are quite lacking. Here we collected ambient air samples with 1 L silonite-treated stainless steel canisters simultaneously at 20 sites over China every Wednesday at approximately 14:00 Beijing time from 2012 to 2014, and analyzed isoprene mixing ratios by preconcentrator-GC-MSD/FID. Observed isoprene mixing ratios were also compared with those simulated by coupling MEGAN 2.0 (Guenther et al., 2006) with a 3-D Regional chEmical trAnsport Model (REAM) (Zhang et al., 2017). Similar seasonal variations between observation and model simulation were obtained for most sampling sites, but overall the average isoprene mixing ratio during growing months (May to October) was 0.37 ± 0.08 ppbv from model simulation, about 32% lower than the 0.54 ± 0.20 ppbv based on ground-based observation, and this discrepancy was particularly significant in north China during wintertime. Further investigation demonstrated that emission of biogenic isoprene in northwest China might be underestimated and that non-biogenic emissions, such as biomass/biofuel burning, might contribute to the elevated levels of isoprene during wintertime. The observation-based empirical formulas for changing isoprene emission with solar radiation and temperature were also derived for different regions of China.

  19. Rain Impact Model Assessment of Near-Surface Salinity Stratification Following Rainfall

    NASA Astrophysics Data System (ADS)

    Drushka, K.; Jones, L.; Jacob, M. M.; Asher, W.; Santos-Garcia, A.

    2016-12-01

    Rainfall over oceans produces a layer of fresher surface water, which can have a significant effect on the exchanges between the surface and the bulk mixed layer and also on satellite/in-situ comparisons. For satellite sea surface salinity (SSS) measurements, the standard reference is the Hybrid Coordinate Ocean Model (HYCOM), but there is a significant difference between the remote sensing sampling depth of 0.01 m and the typical range of 5-10 m of in-situ instruments. Under normal conditions the upper layer of the ocean is well mixed and salinity is uniform; however, under rainy conditions, there is a dilution of the near-surface salinity that mixes downward by diffusion and by mechanical mixing (gravity waves/wind speed). This significantly modifies the salinity gradient in the upper 1-2 m of the ocean, but these transient salinity stratifications dissipate in a few hours, and the upper layer becomes well mixed at a slightly fresher salinity. Based upon research conducted within the NASA/CONAE Aquarius/SAC-D mission, a rain impact model (RIM) was developed to estimate the change in SSS due to rainfall near the time of the satellite observation, with the objective to identify the probability of salinity stratification. RIM uses HYCOM (which does not include the short-term rain effects) and a NOAA global rainfall product, CMORPH, to model changes in the near-surface salinity profile in 0.5 h increments. Based upon SPURS-2 experimental near-surface salinity measurements with rain, this paper introduces a term in the RIM model that accounts for the effect of wind speed on the mechanical mixing, which translates into a dynamic vertical diffusivity, whereby the General Ocean Turbulence Model (GOTM) is used to investigate the response of the upper few meters of the ocean to rain events. The objective is to determine how rain and wind forcing control the thickness, stratification strength, and lifetime of fresh lenses and to quantify the impacts of rain-formed fresh lenses on the fresh bias in satellite retrievals of salinity. Results will be presented comparing RIM salinity estimates at depths of a few meters with measurements from in-situ salinity instruments. Also, analytical results will be shown, which assess the accuracy of RIM salinity profiles under a variety of rain-rate and wind/wave conditions.

  20. Effects of Transition-Metal Mixing on Na Ordering and Kinetics in Layered P2 Oxides

    NASA Astrophysics Data System (ADS)

    Zheng, Chen; Radhakrishnan, Balachandran; Chu, Iek-Heng; Wang, Zhenbin; Ong, Shyue Ping

    2017-06-01

    Layered P2 oxides are promising cathode materials for rechargeable sodium-ion batteries. In this work, we systematically investigate the effects of transition-metal (TM) mixing on Na ordering and kinetics in the NaxCo1-yMnyO2 model system using density-functional-theory (DFT) calculations. The DFT-predicted 0-K stability diagrams indicate that Co-Mn mixing reduces the energetic differences between Na orderings, which may account for the reduction of the number of phase transformations observed during the cycling of mixed-TM P2 layered oxides compared to a single TM. Using ab initio molecular-dynamics simulations and nudged elastic-band calculations, we show that the TM composition at the Na(1) (face-sharing) site has a strong influence on the Na site energies, which in turn impacts the kinetics of Na diffusion towards the end of the charge. By employing a site-percolation model, we establish theoretical upper and lower bounds for TM concentrations based on their effect on Na(1) site energies, providing a framework to rationally tune mixed-TM compositions for optimal Na diffusion.

  1. Modeling Multiple Human-Automation Distributed Systems using Network-form Games

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume

    2012-01-01

    The paper describes, at a high level, the network-form game framework (based on Bayes nets and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.

  2. Virtual Universities: Current Models and Future Trends.

    ERIC Educational Resources Information Center

    Guri-Rosenblit, Sarah

    2001-01-01

    Describes current models of distance education (single-mode distance teaching universities, dual- and mixed-mode universities, extension services, consortia-type ventures, and new technology-based universities), including their merits and problems. Discusses future trends in potential student constituencies, faculty roles, forms of knowledge…

  3. An Efficient Alternative Mixed Randomized Response Procedure

    ERIC Educational Resources Information Center

    Singh, Housila P.; Tarray, Tanveer A.

    2015-01-01

    In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than the Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…

  4. Impact of Antarctic mixed-phase clouds on climate

    DOE PAGES

    Lawson, R. Paul; Gettelman, Andrew

    2014-12-08

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. In this paper, we modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 W m^-2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. Finally, these sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than -20 °C.

  5. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
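
    The decomposition used above is additive: age of air equals the residual circulation transit time plus the additional aging by mixing, and the mixing efficiency is described as the relative increase in AoA caused by mixing. A sketch of those diagnostics follows; the paper's formal definition is derived within a transit-time framework and may differ in detail, and the numerical values here are invented for illustration.

```python
def aging_by_mixing(aoa, rctt):
    """Additional age accumulated through two-way mixing (resolved + subgrid),
    diagnosed as the difference between AoA and the residual circulation
    transit time (RCTT).  Inputs in years."""
    return aoa - rctt

def mixing_efficiency(aoa, rctt):
    """Relative increase of AoA over the pure residual-circulation transit
    time; the abstract reports a spread of about 0.24 to 1.02 across models."""
    return (aoa - rctt) / rctt

# Illustrative (invented) values for one model at one latitude/height:
aoa, rctt = 4.5, 2.5  # years
extra_age = aging_by_mixing(aoa, rctt)
efficiency = mixing_efficiency(aoa, rctt)
```

    Comparing these two numbers across models separates a weak residual circulation (large RCTT) from strong two-way mixing (large efficiency) as the cause of old air.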

  6. Developing approaches for linear mixed modeling in landscape genetics through landscape-directed dispersal simulations

    USGS Publications Warehouse

    Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.

    2017-01-01

    Dispersal can impact population dynamics and geographic variation, and thus, genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria metrics or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., nontrue) typically overlapped zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.

  7. Semi-empirical correlation for binary interaction parameters of the Peng-Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor-liquid equilibrium.

    PubMed

    Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O

    2013-03-01

    The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation were slightly better than those of the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
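
    The classical van der Waals one-fluid mixing rules referenced above combine pure-component Peng-Robinson parameters through the binary interaction parameter kij. A minimal sketch of just the mixing rules (not the full EOS or the paper's kij correlation), with illustrative, unfitted numbers:

```python
import math

def pr_mixture_params(x, a, b, k):
    """Classical van der Waals one-fluid mixing rules for the Peng-Robinson
    EOS:  a_mix = sum_ij x_i x_j sqrt(a_i a_j) (1 - k_ij),
          b_mix = sum_i x_i b_i,
    where k is the symmetric matrix of binary interaction parameters."""
    n = len(x)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - k[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(xi * bi for xi, bi in zip(x, b))
    return a_mix, b_mix

# Illustrative (not fitted) parameters for a binary system:
x = [0.4, 0.6]                    # mole fractions
a = [2.0, 1.0]                    # attraction parameters a_i
b = [3.0e-5, 4.0e-5]              # co-volumes b_i (m^3/mol)
k = [[0.0, 0.05], [0.05, 0.0]]    # binary interaction parameters k_ij
a_mix, b_mix = pr_mixture_params(x, a, b, k)
```

    The constant-kij model the paper compares against corresponds to holding the off-diagonal entries of k fixed rather than correlating them with composition or temperature.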

  8. Assessing ocean vertical mixing schemes for the study of climate change

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Lindo, F.; Fells, J.; Tulsee, V.; Cheng, Y.; Canuto, V.

    2014-12-01

    Climate change is a burning issue of our time. It is critical to know the consequences of choosing "business as usual" versus mitigating our emissions for impacts such as ecosystem disruption, sea-level rise, floods, and droughts. To make predictions we must model each component of the climate system realistically. The ocean must be modeled carefully, as it plays a critical role that includes transporting heat and storing heat and dissolved carbon dioxide. Modeling the ocean realistically in turn requires physically based parameterizations of key processes that cannot be explicitly represented in a global climate model. One such process is vertical mixing. The turbulence group at NASA-GISS has developed a comprehensive new vertical mixing scheme (GISSVM) based on turbulence theory, including surface convection and wind shear, interior waves and double-diffusion, and bottom tides. The GISSVM is tested in stand-alone ocean simulations before being used in coupled climate models, and is being upgraded to represent the physical processes more faithfully. To help assess mixing schemes, students use data from NASA-GISS to create visualizations and calculate statistics, including mean bias, rms differences, and correlations of fields, all programmed in MATLAB. Results with the commonly used KPP mixing scheme, the present GISSVM, and candidate improved variants of GISSVM will be compared between stand-alone ocean models, coupled models, and observations. This project introduces students to the modeling of a complex system, an important theme in contemporary science, and helps them gain a better appreciation of climate science and a new perspective on it. They also gain familiarity with MATLAB, a widely used tool, and develop skills in writing and understanding programs. Moreover, they contribute to the advancement of science by providing information that will help guide the improvement of the GISSVM and hence of ocean and climate models, and ultimately our understanding and prediction of climate. The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, a minority-serving institution in an urban setting in central Brooklyn. This project is supported by NSF award AGS-1359293, REU site: CUNY/GISS Center for Global Climate Research.

  9. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring statistical features of noise power, an improved standard Kalman filtering algorithm was proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established; the mixed noise is then handled by incorporating it into the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm were derived from the established model. An improved central difference Kalman filter was proposed for alternating current (AC) signal processing, which improved the sampling strategy and the processing of colored noise. Real-time estimation and correction of noise were achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms had a good filtering effect on AC and DC signals with the mixed noise of OCTs. Furthermore, the proposed algorithm was able to correct noise in real time during the OCT filtering process.
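
    The standard scalar Kalman recursion that the improved algorithm builds on can be sketched for a constant (DC) level in noise. This is the baseline predict/update cycle, not the paper's OCT-specific improved filter, and all noise parameters and measurement values are illustrative.

```python
def kalman_dc(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    """Standard scalar Kalman filter for a constant (DC) level in noise.
    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: constant-level model, so the state carries over and
        # the estimate variance grows by the process noise q.
        p = p + q
        # Update: the Kalman gain weighs the prediction against the
        # new measurement according to their variances.
        gain = p / (p + r)
        x = x + gain * (z - x)
        p = (1.0 - gain) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a 1.0 A DC level (illustrative values):
est = kalman_dc([1.2, 0.9, 1.1, 0.95, 1.05])
```

    The paper's contribution lies in how the mixed-noise statistics enter the state and measurement models; the recursion structure itself is unchanged.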

  10. An Assessment of Southern Ocean Water Masses and Sea Ice During 1988-2007 in a Suite of Interannual CORE-II Simulations

    NASA Technical Reports Server (NTRS)

    Downes, Stephanie M.; Farneti, Riccardo; Uotila, Petteri; Griffies, Stephen M.; Marsland, Simon J.; Bailey, David; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne; hide

    2015-01-01

    We characterise the representation of the Southern Ocean water mass structure and sea ice within a suite of 15 global ocean-ice models run with the Coordinated Ocean-ice Reference Experiment Phase II (CORE-II) protocol. The main focus is the representation of the present (1988-2007) mode and intermediate waters, thus framing an analysis of winter and summer mixed layer depths; temperature, salinity, and potential vorticity structure; and temporal variability of sea ice distributions. We also consider the interannual variability over the same 20 year period. Comparisons are made between models as well as to observation-based analyses where available. The CORE-II models exhibit several biases relative to Southern Ocean observations, including an underestimation of the model mean mixed layer depths of mode and intermediate water masses in March (associated with greater ocean surface heat gain), and an overestimation in September (associated with greater high latitude ocean heat loss and a more northward winter sea-ice extent). In addition, the models have cold and fresh/warm and salty water column biases centred near 50 deg S. Over the 1988-2007 period, the CORE-II models consistently simulate spatially variable trends in sea-ice concentration, surface freshwater fluxes, mixed layer depths, and 200-700 m ocean heat content. In particular, sea-ice coverage around most of the Antarctic continental shelf is reduced, leading to a cooling and freshening of the near surface waters. The shoaling of the mixed layer is associated with increased surface buoyancy gain, except in the Pacific where sea ice is also influential. The models are in disagreement, despite the common CORE-II atmospheric state, in their spatial pattern of the 20-year trends in the mixed layer depth and sea-ice.

  11. Analysis of Composite Skin-Stiffener Debond Specimens Using Volume Elements and a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that in spite of peak total energy release rates at the free edge the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherents. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents.
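
    In its standard two-dimensional form, the virtual crack closure technique used above recovers the mode-separated strain energy release rates from nodal forces at the delamination front and the relative displacements one element behind it (the notation here is the conventional one, not taken from the paper):

```latex
% Standard 2D virtual crack closure technique (VCCT):
% F_z, F_x are nodal forces at the delamination front; \Delta w, \Delta u
% are the relative opening/sliding displacements one element length
% \Delta a behind it; b is the width associated with the node.
G_{I}  = \frac{F_z \,\Delta w}{2\, b\, \Delta a}, \qquad
G_{II} = \frac{F_x \,\Delta u}{2\, b\, \Delta a}, \qquad
G_T = G_{I} + G_{II} + G_{III}
```

    The mixed-mode ratio (e.g. GII/GT) computed from these quantities along the front is what is correlated with the two-dimensional interlaminar fracture criterion.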

  12. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  13. A mixing evolution model for bidirectional microblog user networks

    NASA Astrophysics Data System (ADS)

    Yuan, Wei-Guo; Liu, Yun

    2015-08-01

    Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of bidirectional friends of a microblog user approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which exhibit not only small-world and scale-free characteristics but also special properties such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structure of the two real networks, we find that the community sizes of both follow an exponential distribution. Based on this empirical analysis, we present a novel evolving network model with mixed connection rules: lognormal-fitness preferential and random attachment, nearest-neighbor interconnection within the same community, and global random association between different communities. The simulation results show that our model is consistent with the real networks in many topological features.
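
    Of the mixed connection rules listed, the lognormal-fitness preferential rule alone can be sketched as below: a target node is chosen with probability proportional to fitness times degree. The full model also includes random attachment and the two community-level rules, and every parameter here is illustrative rather than taken from the paper.

```python
import random

rng = random.Random(42)
n = 50
# Lognormal node fitness, matching the model's fitness distribution:
fitness = [rng.lognormvariate(0.0, 1.0) for _ in range(n)]
degrees = [1] * n   # start every node with degree 1

def attach_once():
    """One edge endpoint chosen with probability proportional to
    fitness_i * degree_i (the lognormal-fitness preferential rule only;
    the published model mixes in random and community-level rules)."""
    weights = [f * d for f, d in zip(fitness, degrees)]
    i = rng.choices(range(n), weights=weights)[0]
    degrees[i] += 1
    return i

for _ in range(500):
    attach_once()
# High-fitness, high-degree nodes accumulate further links, producing
# the heavy-tailed degree distributions the abstract describes.
```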

  14. A Finite Length Cylinder Model for Mixed Oxide-Ion and Electron Conducting Cathodes Suited for Intermediate-Temperature Solid Oxide Fuel Cells

    DOE PAGES

    Jin, Xinfang; Wang, Jie; Jiang, Long; ...

    2016-03-25

    A physics-based model is presented to simulate the electrochemical behavior of mixed ion and electron conducting (MIEC) cathodes for intermediate-temperature solid oxide fuel cells. Analytic solutions for both transient and impedance models based on a finite length cylinder are derived. These solutions are compared to their infinite length counterparts. The impedance solution is also compared to experimental electrochemical impedance spectroscopy data obtained from both a traditional well-established La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathode and a new SrCo0.9Nb0.1O3-δ (SCN) porous cathode. The impedance simulations agree well with the experimental values, demonstrating that the new models can be used to extract electro-kinetic parameters of MIEC SOFC cathodes.

  15. Agent-based modeling of deforestation in southern Yucatán, Mexico, and reforestation in the Midwest United States

    PubMed Central

    Manson, Steven M.; Evans, Tom

    2007-01-01

    We combine mixed-methods research with integrated agent-based modeling to understand land change and economic decision making in the United States and Mexico. This work demonstrates how sustainability science benefits from combining integrated agent-based modeling (which blends methods from the social, ecological, and information sciences) and mixed-methods research (which interleaves multiple approaches ranging from qualitative field research to quantitative laboratory experiments and interpretation of remotely sensed imagery). We test assumptions of utility-maximizing behavior in household-level landscape management in south-central Indiana, linking parcel data, land cover derived from aerial photography, and findings from laboratory experiments. We examine the role of uncertainty and limited information, preferences, differential demographic attributes, and past experience and future time horizons. We also use evolutionary programming to represent bounded rationality in agriculturalist households in the southern Yucatán of Mexico. This approach captures realistic rule of thumb strategies while identifying social and environmental factors in a manner similar to econometric models. These case studies highlight the role of computational models of decision making in land-change contexts and advance our understanding of decision making in general. PMID:18093928

  16. On mixed and displacement finite element models of a refined shear deformation theory for laminated anisotropic plates

    NASA Technical Reports Server (NTRS)

    Reddy, J. N.

    1986-01-01

    An improved plate theory that accounts for the transverse shear deformation is presented, and mixed and displacement finite element models of the theory are developed. The theory is based on an assumed displacement field in which the in-plane displacements are expanded in terms of the thickness coordinate up to the cubic term and the transverse deflection is assumed to be independent of the thickness coordinate. The governing equations of motion for the theory are derived from Hamilton's principle. The theory eliminates the need for shear correction factors because the transverse shear stresses are represented parabolically. A mixed finite element model that uses independent approximations of the displacements and moments, and a displacement model that uses only displacements as degrees of freedom, are developed. A comparison of the numerical results for bending with the exact solutions of the new theory and the three-dimensional elasticity theory shows that the present theory (and hence the finite element models) is more accurate than other plate theories of the same order.
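
    Consistent with the abstract's description (cubic in-plane expansion, thickness-independent deflection), the assumed displacement field in Reddy's refined theory takes the standard third-order form:

```latex
% h: plate thickness; (u_0, v_0, w_0): midplane displacements;
% (\phi_x, \phi_y): rotations of the normal about the y and x axes.
u(x,y,z) = u_0 + z\,\phi_x
  - \frac{4z^3}{3h^2}\left(\phi_x + \frac{\partial w_0}{\partial x}\right)
\qquad
v(x,y,z) = v_0 + z\,\phi_y
  - \frac{4z^3}{3h^2}\left(\phi_y + \frac{\partial w_0}{\partial y}\right)
\qquad
w(x,y,z) = w_0
```

    The cubic term makes the transverse shear strains parabolic through the thickness and vanish on the top and bottom surfaces, which is why no shear correction factor is required.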

  17. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  18. Adjusting case mix payment amounts for inaccurately reported comorbidity data.

    PubMed

    Sutherland, Jason M; Hamm, Jeremy; Hatcher, Jeff

    2010-03-01

    Case mix methods such as diagnosis related groups have become a basis of payment for inpatient hospitalizations in many countries. Specifying cost weight values for case mix system payment has important consequences; recent evidence suggests case mix cost weight inaccuracies influence the supply of some hospital-based services. To begin to address the question of case mix cost weight accuracy, this paper is motivated by the objective of improving the accuracy of cost weight values due to inaccurate or incomplete comorbidity data. The methods are suitable to case mix methods that incorporate disease severity or comorbidity adjustments. The methods are based on the availability of detailed clinical and cost information linked at the patient level and leverage recent results from clinical data audits. A Bayesian framework is used to synthesize clinical data audit information regarding misclassification probabilities into cost weight value calculations. The models are implemented through Markov chain Monte Carlo methods. An example used to demonstrate the methods finds that inaccurate comorbidity data affects cost weight values by biasing cost weight values (and payments) downward. The implications for hospital payments are discussed and the generalizability of the approach is explored.

  19. Studying Mixing in Non-Newtonian Blue Maize Flour Suspensions Using Color Analysis

    PubMed Central

    Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Alvarez, Mario Moisés

    2014-01-01

    Background Non-Newtonian fluids occur in many relevant flow and mixing scenarios at the lab and industrial scale. The addition of acid or base solutions to a non-Newtonian fluid is a common operation, particularly in biotechnology applications, where the pH of non-Newtonian culture broths is usually regulated this way. Methodology and Findings We conducted mixing experiments in agitated vessels using non-Newtonian blue maize flour suspensions. Acid or base pulses were injected to reveal mixing patterns and flow structures and to follow their time evolution. No foreign pH indicator was used, as blue maize flours naturally contain anthocyanins that act as a native, wide-spectrum pH indicator. We describe a novel method to quantitate mixedness and mixing evolution through Dynamic Color Analysis (DCA) in this system. Color readings corresponding to different times and locations within the mixing vessel were taken with a digital camera (or a colorimeter) and translated to the CIELab color scale. We use distances in the Lab space, a 3D color space, between a particular mixing state and the final mixing point to characterize segregation/mixing in the system. Conclusion and Relevance Blue maize suspensions represent an adequate and flexible model for studying mixing (and fluid mechanics in general) in non-Newtonian suspensions using acid/base tracer injections. Simple strategies based on the evaluation of color distances in the CIELab space (or other scales such as HSB) can be adapted to characterize mixedness and mixing evolution in experiments using blue maize suspensions. PMID:25401332
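
    The Lab-space distance used above to quantify mixedness is a Euclidean ΔE between color coordinates. A minimal sketch in which the normalization and all color values are illustrative, not from the paper:

```python
import math

def delta_e(lab1, lab2):
    """Euclidean distance (Delta E) between two CIELab colors
    (L*, a*, b*) tuples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

def segregation_index(lab_now, lab_final, lab_start):
    """Distance of the current local color from the fully mixed color,
    normalized by the initial distance: 1 at the segregated start,
    approaching 0 as mixing completes.  The normalization is an
    illustrative choice, not the paper's exact definition."""
    return delta_e(lab_now, lab_final) / delta_e(lab_start, lab_final)

# Illustrative CIELab readings (L*, a*, b*):
start = (30.0, 10.0, -40.0)   # color right after the acid/base pulse
final = (55.0, 5.0, -10.0)    # color of the fully mixed end state
now = (50.0, 6.0, -16.0)      # a later snapshot at one vessel location
m = segregation_index(now, final, start)
```

    Averaging such distances over camera pixels at each time step yields a mixing curve whose decay toward zero tracks the approach to homogeneity.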

  20. A brief measure of attitudes toward mixed methods research in psychology.

    PubMed

    Roberts, Lynne D; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and a way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items, developed from previous qualitative research on attitudes toward mixed methods research, was administered along with validation measures via an online survey to a convenience sample of 274 psychology students, academics, and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors, 'Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component,' each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used to assess attitudes toward mixed methods research in psychology, to measure change in attitudes as part of the evaluation of mixed methods education, and in larger research programs.

  1. Assessing Argumentative Representation with Bayesian Network Models in Debatable Social Issues

    ERIC Educational Resources Information Center

    Zhang, Zhidong; Lu, Jingyan

    2014-01-01

    This study seeks to obtain argumentation models, which represent argumentative processes and an assessment structure in secondary school debatable issues in the social sciences. The argumentation model was developed based on mixed methods, a combination of both theory-driven and data-driven methods. The coding system provided a combining point by…

  2. Vaporization and Zonal Mixing in Performance Modeling of Advanced LOX-Methane Rockets

    NASA Technical Reports Server (NTRS)

    Williams, George J., Jr.; Stiegemeier, Benjamin R.

    2013-01-01

    Initial modeling of LOX-Methane reaction control (RCE) 100 lbf thrusters and larger, 5500 lbf thrusters with the TDK/VIPER code has shown good agreement with sea-level and altitude test data. However, the vaporization and zonal mixing upstream of the compressible flow stage of the models leveraged empirical trends to match the sea-level data. This was necessary in part because the codes are designed primarily to handle the compressible part of the flow (i.e. contraction through expansion) and in part because there was limited data on the thrusters themselves on which to base a rigorous model. A more rigorous model has been developed which includes detailed vaporization trends based on element type and geometry, radial variations in mixture ratio within each of the "zones" associated with elements and not just between zones of different element types, and, to the extent possible, updated kinetic rates. The Spray Combustion Analysis Program (SCAP) was leveraged to support assumptions in the vaporization trends. Data of both thrusters is revisited and the model maintains a good predictive capability while addressing some of the major limitations of the previous version.

  3. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    NASA Astrophysics Data System (ADS)

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

    To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and of two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and the curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state equation model and the curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.

  4. Simulation of Long Lived Tracers Using an Improved Empirically-Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, Eric L.; Jackman, Charles H.; Stolarski, Richard S.; Considine, David B.

    1998-01-01

    We have developed a new empirically based transport algorithm for use in our GSFC two-dimensional transport and chemistry assessment model. The new algorithm contains planetary wave statistics and parameterizations to account for the effects of gravity waves and equatorial Kelvin waves. We will present an overview of the new algorithm and show various model-data comparisons of long-lived tracers as part of the model validation. We will also show how the new algorithm gives substantially better agreement with observations than our previous model transport. The new model captures much of the qualitative structure and seasonal variability observed in methane, water vapor, and total ozone. These include: isolation of the tropics and the winter polar vortex, the well-mixed surf-zone region of the winter sub-tropics and mid-latitudes, and the propagation of seasonal signals in the tropical lower stratosphere. Model simulations of carbon-14 and strontium-90 compare fairly well with observations in reproducing the peak in mixing ratio at 20-25 km and the decrease of mixing ratio with altitude above 25 km. We also ran time-dependent simulations of SF6 from which the model mean age of air values were derived. The oldest air (5.5 to 6 years) occurred in the high-latitude upper stratosphere during fall and early winter of both hemispheres, and in the southern hemisphere lower stratosphere during late winter and early spring. The latitudinal gradient of the mean ages also compares well with ER-2 aircraft observations in the lower stratosphere.
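
    Mean age of air can be read from a monotonically increasing tracer such as SF6 as the time lag at which the tropospheric reference record last matched the stratospheric mixing ratio. This is a simplified lag-time sketch with illustrative numbers; the study's time-dependent simulations are more involved than this single-lag reading.

```python
def mean_age_from_tracer(strat_value, tropo_series, dt_years):
    """Estimate mean age as the lag (in years) since the tropospheric
    reference series of a monotonically increasing tracer last matched
    the observed stratospheric mixing ratio.  tropo_series is ordered
    oldest-first and sampled every dt_years."""
    for steps_back, ref in enumerate(reversed(tropo_series)):
        if ref <= strat_value:
            return steps_back * dt_years
    return len(tropo_series) * dt_years

# Illustrative linear SF6 growth: 0.25 ppt/year, sampled yearly over 20 years.
tropo = [4.0 + 0.25 * y for y in range(21)]
# Stratospheric air holding 7.75 ppt last matched the troposphere 5 years ago:
age = mean_age_from_tracer(7.75, tropo, 1.0)
```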

  5. Wave models for turbulent free shear flows

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Morris, P. J.

    1991-01-01

    New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large-scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time-dependent motion of the large-scale structure of the mixing region are made. The predictions show good agreement with experimental observations.

  6. Stochastic parameterization for light absorption by internally mixed BC/dust in snow grains for application to climate models

    NASA Astrophysics Data System (ADS)

    Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.

    2014-06-01

    A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  7. Stochastic Parameterization for Light Absorption by Internally Mixed BC/dust in Snow Grains for Application to Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liou, K. N.; Takano, Y.; He, Cenlin

    2014-06-27

    A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo is reduced more by multiple inclusions of BC/dust than by an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  8. Mixing of Supersonic Streams

    NASA Technical Reports Server (NTRS)

    Hawk, C. W.; Landrum, D. B.; Muller, S.; Turner, M.; Parkinson, D.

    1998-01-01

    The Strutjet approach to Rocket Based Combined Cycle (RBCC) propulsion depends upon fuel-rich flows from the rocket nozzles and turbine exhaust products mixing with the ingested air for successful operation in the ramjet and scramjet modes. It is desirable to delay this mixing process in the air-augmented mode of operation present during low-speed flight. A model of the Strutjet device has been built and is undergoing test to investigate the mixing of the streams as a function of distance from the Strutjet exit plane during simulated low-speed flight conditions. Cold flow testing of a 1/6-scale Strutjet model is underway and nearing completion. Planar Laser Induced Fluorescence (PLIF) diagnostic methods are being employed to observe the mixing of the turbine exhaust gas with the gases from both the primary rockets and the ingested air simulating low-speed, air-augmented operation of the RBCC. The ratio of the pressure in the turbine exhaust duct to that on the rocket nozzle wall at the point of their intersection is the independent variable in these experiments. Tests were conducted at values of 1.0, 1.5 and 2.0 for this parameter. Qualitative results illustrate the development of the mixing zone from the exit plane of the model to a distance of about 19 equivalent rocket nozzle exit diameters downstream. These data show the mixing to be confined in the vertical plane for all cases. The lateral expansion is more pronounced at a pressure ratio of 1.0 and suggests that mixing with the ingested flow would likely begin at a distance of about 7 nozzle exit diameters downstream of the nozzle exit plane.

  9. Tropical Cyclone Induced Air-Sea Interactions Over Oceanic Fronts

    NASA Astrophysics Data System (ADS)

    Shay, L. K.

    2012-12-01

    Recent severe tropical cyclones underscore the inherent importance of warm background ocean fronts and their interactions with the atmospheric boundary layer. Central to the question of heat and moisture fluxes, the amount of heat available to the tropical cyclone is predicated on the initial mixed layer depth and the strength of the stratification, which essentially set the level of entrainment mixing at the base of the mixed layer. In oceanic regimes where the ocean mixed layers are thin, shear-induced mixing tends to cool the upper ocean and form cold wakes, which reduce the air-sea fluxes. This is an example of negative feedback. By contrast, in regimes where the ocean mixed layers are deep (usually along the western part of the gyres), warm water advection by the nearly steady currents reduces the levels of turbulent mixing by shear instabilities. As these strong near-inertial shears are arrested, more heat and moisture transfers are available through the enthalpy fluxes (typically 1 to 1.5 kW m-2) into the hurricane boundary layer. When tropical cyclones move into favorable or neutral atmospheric conditions, they have a tendency to rapidly intensify, as observed over the Gulf of Mexico during Isidore and Lili in 2002, Katrina, Rita and Wilma in 2005, Dean and Felix in 2007 in the Caribbean Sea, and Earl in 2010 just north of the Caribbean Islands. To predict these tropical cyclone deepening (as well as weakening) cycles, coupled models must have ocean models with realistic ocean conditions and accurate air-sea and vertical mixing parameterizations. Thus, to constrain these models, having complete 3-D ocean profiles juxtaposed with atmospheric profiler measurements prior to, during, and subsequent to passage is an absolute necessity, framed within regional-scale satellite-derived fields.

  10. Constrained minimization problems for the reproduction number in meta-population models.

    PubMed

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number ([Formula: see text]) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9) reported an increase of 70% in [Formula: see text] when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number ([Formula: see text]), which consists of partial derivatives of [Formula: see text] with respect to the proportions immune [Formula: see text] in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of [Formula: see text] sub-populations are obtained, and the bounds for optimal solutions are derived for [Formula: see text] sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both [Formula: see text] and [Formula: see text] are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
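
As a minimal, self-contained sketch of why heterogeneous mixing raises the basic reproduction number, the standard next-generation-matrix calculation for a two-group SIR model with proportionate mixing can be coded directly. This is an illustration of the general technique, not the authors' derivation, and all parameter values below are hypothetical:

```python
import math

def ngm_proportionate(activities, sizes, beta, gamma):
    """Next-generation matrix for a two-group SIR model with proportionate
    mixing: K[i][j] = expected infections in group i caused by one case in
    group j, where contacts are allocated in proportion to group activity."""
    total = sum(a * n for a, n in zip(activities, sizes))
    return [[beta * aj * (ai * ni / total) / gamma
             for aj in activities]
            for ai, ni in zip(activities, sizes)]

def spectral_radius_2x2(K):
    # Largest eigenvalue of a 2x2 matrix via the quadratic formula.
    tr = K[0][0] + K[1][1]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr + disc) / 2.0

# Hypothetical parameters: group 1 twice as active as group 2.
activities, sizes = [2.0, 1.0], [500.0, 500.0]
beta, gamma = 0.3, 0.1

K = ngm_proportionate(activities, sizes, beta, gamma)
r0_het = spectral_radius_2x2(K)

# Homogeneous-mixing comparison at the population-mean activity level.
a_bar = sum(a * n for a, n in zip(activities, sizes)) / sum(sizes)
r0_hom = beta * a_bar / gamma

print(r0_het, r0_hom)  # heterogeneity raises R0 above the homogeneous value
```

With these toy numbers the heterogeneous R0 (5.0) exceeds the homogeneous value (4.5), consistent with the direction of the effect reported above.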

  11. Responses of Mixed-Phase Cloud Condensates and Cloud Radiative Effects to Ice Nucleating Particle Concentrations in NCAR CAM5 and DOE ACME Climate Models

    NASA Astrophysics Data System (ADS)

    Liu, X.; Shi, Y.; Wu, M.; Zhang, K.

    2017-12-01

    Mixed-phase clouds frequently observed in the Arctic and mid-latitude storm tracks have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on the classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated from Niemand et al. than from DeMott et al. and CNT, especially over the dust source regions, in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the Polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the Polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid water and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) among the three parameterizations reach as much as 20% in the Arctic regions in ACME, while the LWP changes are smaller and confined to the Northern Hemisphere mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and dust indirect effects on mixed-phase clouds are quantified with the three ice nucleation parameterizations in CAM5 and ACME.

  12. Generalization of mixed multiscale finite element methods with applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C S

    Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some types of model reduction techniques are required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of the coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element method (GMsFEM), while the other is based on spectral element-based algebraic multigrid (ρAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on ρAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analysis, and exhaustive numerical experiments, are carried out to validate the proposed enrichment approaches.

  13. Improving deep convolutional neural networks with mixed maxout units

    PubMed Central

    Liu, Fu-xian; Li, Long-yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737

  14. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on the initial vorticity thickness) of 600, with a droplet mass loading of 0.2. The gas phase is computed using an Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the subgrid-scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.
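
The proposed closure (filtered value plus a correction scaled by the SGS standard deviation) can be sketched in one dimension. The top-hat filter, the synthetic field, and the Gaussian form of the random correction below are illustrative assumptions, not the DNS-database computation of the paper:

```python
import math
import random

def box_filter(field, width):
    """Top-hat (box) filter of a periodic 1-D field; width is an odd cell count."""
    n, h = len(field), width // 2
    return [sum(field[(i + j) % n] for j in range(-h, h + 1)) / width
            for i in range(n)]

rng = random.Random(1)
# Hypothetical resolved field: a smooth wave plus small-scale fluctuations.
n = 256
field = [math.sin(2 * math.pi * i / n) + 0.1 * rng.gauss(0.0, 1.0)
         for i in range(n)]

filt = box_filter(field, 9)
filt_sq = box_filter([v * v for v in field], 9)
# SGS standard deviation from the filtered first and second moments.
sgs_std = [math.sqrt(max(fs - f * f, 0.0)) for f, fs in zip(filt, filt_sq)]

# Model of the unfiltered value at a droplet location: the filtered value plus
# a random correction scaled by the local SGS standard deviation (illustrative).
i = 40
modeled = filt[i] + sgs_std[i] * rng.gauss(0.0, 1.0)
print(filt[i], sgs_std[i], modeled)
```

The key point mirrored from the abstract is that the correction term carries local small-scale variability that plain interpolation of the filtered field would miss.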

  15. Improving mixing efficiency of a polymer micromixer by use of a plastic shim divider

    NASA Astrophysics Data System (ADS)

    Li, Lei; Lee, L. James; Castro, Jose M.; Yi, Allen Y.

    2010-03-01

    In this paper, a critical modification to a polymer-based affordable split-and-recombination static micromixer is described. To evaluate the improvement, both the original and the modified design were carefully investigated using an experimental setup and a numerical modeling approach. The structure of the micromixer was designed to take advantage of the process capabilities of both ultraprecision micromachining and microinjection molding. Specifically, the original and the modified designs were numerically simulated using the commercial finite element software ANSYS CFX to assist the redesign of the micromixers. The simulation results show that both designs are capable of performing mixing, while the modified design has a much improved performance. Mixing experiments with two different fluids carried out using the original and the modified mixers again showed significantly improved mixing uniformity for the latter. The measured mixing coefficient was 0.11 for the original design and 0.065 for the improved design. The developed manufacturing process, based on ultraprecision machining and microinjection molding for device fabrication, has the advantages of high dimensional precision, low cost and manufacturing flexibility.

  16. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-user market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decisions in the face of uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method, which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.

  17. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
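
For context, the discrete coalescence/dispersion step whose time discontinuity the paper addresses can be sketched as a particle Monte Carlo: random pairs partially relax toward their pair mean, destroying scalar variance while conserving the mean. This is the classic discrete C/D (Curl-type) step, not the continuous model introduced in the paper, and the ensemble and mixing fraction below are hypothetical:

```python
import random

def curl_cd_step(phi, mix_fraction, rng):
    """One coalescence/dispersion step: a randomly chosen particle pair
    partially relaxes toward its pair mean (mix_fraction = 1 is full Curl
    mixing). The pair sum, and hence the ensemble mean, is conserved."""
    i, j = rng.sample(range(len(phi)), 2)
    mean = 0.5 * (phi[i] + phi[j])
    phi[i] += mix_fraction * (mean - phi[i])
    phi[j] += mix_fraction * (mean - phi[j])

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

rng = random.Random(0)
# Hypothetical scalar field: half the particles at 0, half at 1 (binary pdf).
phi = [0.0] * 500 + [1.0] * 500

v0 = variance(phi)
for _ in range(5000):
    curl_cd_step(phi, mix_fraction=1.0, rng=rng)
v1 = variance(phi)

print(v0, v1)  # mixing drives the scalar variance toward zero
```

The jump each particle takes in a single step is the discontinuity in time that the continuous mixing model removes by coupling mixing smoothly with reaction.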

  18. Mixing behavior of chromophoric dissolved organic matter in the Pearl River Estuary in spring

    NASA Astrophysics Data System (ADS)

    Lei, Xia; Pan, Jiayi; Devlin, Adam T.

    2018-02-01

    The mixing behavior of chromophoric dissolved organic matter (CDOM) in the Pearl River Estuary (PRE) and relevant hydrodynamic parameters such as horizontal transport and vertical mixing are identified and discussed based on a set of sampling data obtained during a cruise in May 2014. Using a theoretical conservative mixing model, the surface CDOM in the PRE in spring is classified into two groups by the CDOM absorption-spectral slope relationship (a(300) vs S(275-295)): first, terrigenous CDOM under non-conservative mixing, for which removal processes such as photobleaching are inferred to occur; and second, marine CDOM, which behaves conservatively during mixing. The mixing of CDOM at the bottom is shown to be conservative. Controlled by the two-layer gravitational circulation in the PRE, the northern and western estuary shows higher CDOM absorption and a lower spectral slope than the southern and eastern estuary, and the surface CDOM presents higher absorption and a lower spectral slope than the bottom. Horizontal transport is hypothesized to be the dominant hydrodynamic mechanism affecting CDOM variation and mixing behavior in the PRE, while vertical mixing has less influence.
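
The conservative-mixing baseline against which CDOM behavior is judged can be sketched as a two-endmember dilution line in salinity: a purely diluting tracer falls on the straight line between the river and seawater endmembers, and departures from the line indicate removal or addition. The endmember and sample values below are hypothetical, not the PRE survey values:

```python
def conservative_mixing(salinity, s_river, s_sea, c_river, c_sea):
    """Theoretical conservative mixing line: predicted tracer concentration
    at a given salinity from linear mixing of two endmembers."""
    f_sea = (salinity - s_river) / (s_sea - s_river)  # seawater fraction
    return (1.0 - f_sea) * c_river + f_sea * c_sea

# Hypothetical endmembers: riverine CDOM absorption a(300) high, marine low.
s_river, s_sea = 0.0, 34.0
c_river, c_sea = 6.0, 0.5   # a(300) in 1/m, illustrative only

predicted = conservative_mixing(17.0, s_river, s_sea, c_river, c_sea)
observed = 2.4  # a hypothetical mid-estuary sample

# A sample below the line suggests removal (e.g. photobleaching);
# a sample above the line suggests an additional CDOM source.
print(predicted, observed < predicted)
```

Here the mid-salinity sample sits below the dilution line, the signature the abstract attributes to removal processes in the terrigenous CDOM group.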

  19. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore, statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based Graphical User Interfaces (GUIs). Using the Shiny framework, we develop a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to run and compare various models for repeated-measures data (GEE, GLMM, HGLM, and GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM, and HGLM give very similar results.

  20. Influence of bulk microphysics schemes upon Weather Research and Forecasting (WRF) version 3.6.1 nor'easter simulations

    NASA Astrophysics Data System (ADS)

    Nicholls, Stephen D.; Decker, Steven G.; Tao, Wei-Kuo; Lang, Stephen E.; Shi, Jainn J.; Mohr, Karen I.

    2017-03-01

    This study evaluated the impact of five single- or double-moment bulk microphysics schemes (BMPSs) on Weather Research and Forecasting model (WRF) simulations of seven intense wintertime cyclones impacting the mid-Atlantic United States; 5-day-long WRF simulations were initialized roughly 24 h prior to the onset of coastal cyclogenesis off the North Carolina coastline. In all, 35 model simulations (five BMPSs and seven cases) were run and their associated microphysics-related storm properties (hydrometeor mixing ratios, precipitation, and radar reflectivity) were evaluated against model analysis and available gridded radar and ground-based precipitation products. Inter-BMPS comparisons of column-integrated mixing ratios and mixing ratio profiles reveal little variability in non-frozen hydrometeor species due to their shared programming heritage, yet their assumptions concerning snow and graupel intercepts, ice supersaturation, snow and graupel density maps, and terminal velocities led to considerable variability in both simulated frozen hydrometeor species and radar reflectivity. WRF-simulated precipitation fields exhibit minor spatiotemporal variability amongst BMPSs, yet their spatial extent is largely conserved. Compared to ground-based precipitation data, WRF simulations demonstrate low-to-moderate (0.217-0.414) threat scores and a rainfall distribution shifted toward higher values. Finally, an analysis of WRF and gridded radar reflectivity data via contoured frequency with altitude diagrams (CFADs) reveals notable variability amongst BMPSs, where better performing schemes favored lower graupel mixing ratios and better underlying aggregation assumptions.
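
The threat scores quoted above follow the standard contingency-table definition: hits divided by hits plus misses plus false alarms, evaluated against an event threshold. The sketch below illustrates the calculation; the forecast/observed values and threshold are made up, not the study's gridded data:

```python
def threat_score(forecast, observed, threshold):
    """Threat score (critical success index): hits / (hits + misses + false
    alarms), over paired forecast/observed values against an event threshold."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_event, o_event = f >= threshold, o >= threshold
        if f_event and o_event:
            hits += 1
        elif o_event:
            misses += 1
        elif f_event:
            false_alarms += 1
    return hits / (hits + misses + false_alarms)

# Illustrative 24-h precipitation totals (mm) at ten grid points.
fcst = [0.0, 2.0, 5.0, 12.0, 0.5, 8.0, 0.0, 3.0, 15.0, 1.0]
obs  = [0.0, 0.0, 6.0, 10.0, 4.0, 0.2, 0.0, 5.0, 20.0, 0.8]

ts = threat_score(fcst, obs, threshold=2.5)
print(ts)
```

Correct negatives (neither forecast nor observed) do not enter the score, which is why it is a stricter measure than simple accuracy for rare events.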

  1. Influence of Bulk Microphysics Schemes upon Weather Research and Forecasting (WRF) Version 3.6.1 Nor'easter Simulations.

    PubMed

    Nicholls, Stephen D; Decker, Steven G; Tao, Wei-Kuo; Lang, Stephen E; Shi, Jainn J; Mohr, Karen I

    2017-01-01

    This study evaluated the impact of five single- or double-moment bulk microphysics schemes (BMPSs) on Weather Research and Forecasting model (WRF) simulations of seven intense wintertime cyclones impacting the mid-Atlantic United States. Five-day-long WRF simulations were initialized roughly 24 hours prior to the onset of coastal cyclogenesis off the North Carolina coastline. In all, 35 model simulations (five BMPSs and seven cases) were run and their associated microphysics-related storm properties (hydrometeor mixing ratios, precipitation, and radar reflectivity) were evaluated against model analysis and available gridded radar and ground-based precipitation products. Inter-BMPS comparisons of column-integrated mixing ratios and mixing ratio profiles reveal little variability in non-frozen hydrometeor species due to their shared programming heritage, yet their assumptions concerning snow and graupel intercepts, ice supersaturation, snow and graupel density maps, and terminal velocities lead to considerable variability in both simulated frozen hydrometeor species and radar reflectivity. WRF-simulated precipitation fields exhibit minor spatio-temporal variability amongst BMPSs, yet their spatial extent is largely conserved. Compared to ground-based precipitation data, WRF simulations demonstrate low-to-moderate (0.217-0.414) threat scores and a rainfall distribution shifted toward higher values. Finally, an analysis of WRF and gridded radar reflectivity data via contoured frequency with altitude (CFAD) diagrams reveals notable variability amongst BMPSs, where better performing schemes favored lower graupel mixing ratios and better underlying aggregation assumptions.

  2. Influence of Bulk Microphysics Schemes upon Weather Research and Forecasting (WRF) Version 3.6.1 Nor'easter Simulations

    PubMed Central

    Nicholls, Stephen D.; Decker, Steven G.; Tao, Wei-Kuo; Lang, Stephen E.; Shi, Jainn J.; Mohr, Karen I.

    2018-01-01

    This study evaluated the impact of five single- or double-moment bulk microphysics schemes (BMPSs) on Weather Research and Forecasting model (WRF) simulations of seven intense wintertime cyclones impacting the mid-Atlantic United States. Five-day-long WRF simulations were initialized roughly 24 hours prior to the onset of coastal cyclogenesis off the North Carolina coastline. In all, 35 model simulations (five BMPSs and seven cases) were run and their associated microphysics-related storm properties (hydrometeor mixing ratios, precipitation, and radar reflectivity) were evaluated against model analysis and available gridded radar and ground-based precipitation products. Inter-BMPS comparisons of column-integrated mixing ratios and mixing ratio profiles reveal little variability in non-frozen hydrometeor species due to their shared programming heritage, yet their assumptions concerning snow and graupel intercepts, ice supersaturation, snow and graupel density maps, and terminal velocities lead to considerable variability in both simulated frozen hydrometeor species and radar reflectivity. WRF-simulated precipitation fields exhibit minor spatio-temporal variability amongst BMPSs, yet their spatial extent is largely conserved. Compared to ground-based precipitation data, WRF simulations demonstrate low-to-moderate (0.217–0.414) threat scores and a rainfall distribution shifted toward higher values. Finally, an analysis of WRF and gridded radar reflectivity data via contoured frequency with altitude (CFAD) diagrams reveals notable variability amongst BMPSs, where better performing schemes favored lower graupel mixing ratios and better underlying aggregation assumptions. PMID:29697705

  3. Influence of Bulk Microphysics Schemes upon Weather Research and Forecasting (WRF) Version 3.6.1 Nor'easter Simulations

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Decker, Steven G.; Tao, Wei-Kuo; Lang, Stephen E.; Shi, Jainn J.; Mohr, Karen Irene

    2017-01-01

    This study evaluated the impact of five single- or double-moment bulk microphysics schemes (BMPSs) on Weather Research and Forecasting model (WRF) simulations of seven intense wintertime cyclones impacting the mid-Atlantic United States; 5-day-long WRF simulations were initialized roughly 24 hours prior to the onset of coastal cyclogenesis off the North Carolina coastline. In all, 35 model simulations (five BMPSs and seven cases) were run and their associated microphysics-related storm properties (hydrometeor mixing ratios, precipitation, and radar reflectivity) were evaluated against model analysis and available gridded radar and ground-based precipitation products. Inter-BMPS comparisons of column-integrated mixing ratios and mixing ratio profiles reveal little variability in non-frozen hydrometeor species due to their shared programming heritage, yet their assumptions concerning snow and graupel intercepts, ice supersaturation, snow and graupel density maps, and terminal velocities led to considerable variability in both simulated frozen hydrometeor species and radar reflectivity. WRF-simulated precipitation fields exhibit minor spatiotemporal variability amongst BMPSs, yet their spatial extent is largely conserved. Compared to ground-based precipitation data, WRF simulations demonstrate low-to-moderate (0.217 to 0.414) threat scores and a rainfall distribution shifted toward higher values. Finally, an analysis of WRF and gridded radar reflectivity data via contoured frequency with altitude (CFAD) diagrams reveals notable variability amongst BMPSs, where better performing schemes favored lower graupel mixing ratios and better underlying aggregation assumptions.

  4. Dynamic Behavior of Wind Turbine by a Mixed Flexible-Rigid Multi-Body Model

    NASA Astrophysics Data System (ADS)

    Wang, Jianhong; Qin, Datong; Ding, Yi

    A mixed flexible-rigid multi-body model is presented to study the dynamic behavior of a horizontal axis wind turbine. Special attention is given to the flexible bodies: the flexible rotor is modeled by a newly developed blade finite element, while support bearing elasticities, variations in the number of teeth in contact, and tooth contact elasticities are the main flexible components in the power train. The coupling conditions between the different subsystems are established by constraint equations. The wind turbine model is generated by coupling the models of the rotor, power train and generator together with the constraint equations. Based on this model, an eigenproblem analysis is carried out to show the mode shapes of the rotor and power train at a few natural frequencies. The dynamic responses and contact forces among gears under constant wind speed and fixed pitch angle are analyzed.

  5. Assessment of Professional Development for Teachers in the Vocational Education and Training Sector: An Examination of the Concerns Based Adoption Model

    ERIC Educational Resources Information Center

    Saunders, Rebecca

    2012-01-01

    The purpose of this article is to describe the use of the Concerns Based Adoption Model (Hall & Hord, 2006) as a conceptual lens and practical methodology for professional development program assessment in the vocational education and training (VET) sector. In this sequential mixed-methods study, findings from the first two phases (two of…

  6. Women's Endorsement of Models of Sexual Response: Correlates and Predictors.

    PubMed

    Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert

    2016-02-01

    Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.

  7. Optimal mix of renewable power generation in the MENA region as a basis for an efficient electricity supply to Europe

    NASA Astrophysics Data System (ADS)

    Alhamwi, Alaa; Kleinhans, David; Weitemeyer, Stefan; Vogt, Thomas

    2014-12-01

    Renewable energy sources are gaining importance in the Middle East and North Africa (MENA) region. The purpose of this study is to quantify the optimal mix of renewable power generation in the MENA region, taking Morocco as a case study. Based on hourly meteorological data and load data, a 100% solar-plus-wind-only scenario for Morocco is investigated. For the optimal mix analyses, a mismatch energy modelling approach is adopted with the objective of minimising the required storage capacities. For a hypothetical Moroccan energy supply system that is entirely based on renewable energy sources, our results show that the minimum storage capacity is achieved at a share of 63% solar and 37% wind power generation.
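
The mismatch-energy idea can be sketched as follows: scale each candidate solar/wind combination so that mean generation matches mean load, take the worst cumulative shortfall as the required storage, and pick the solar share that minimises it. The 24-hour profiles below are toy values, not the Moroccan meteorological and load data used in the study:

```python
def required_storage(solar, wind, load, a):
    """Storage needed for a generation mix a*solar + (1-a)*wind, scaled so
    that mean generation equals mean load (100% renewable on average).
    Returns the largest running energy deficit (a simple reservoir measure)."""
    gen = [a * s + (1.0 - a) * w for s, w in zip(solar, wind)]
    scale = sum(load) / sum(gen)
    deficit, worst = 0.0, 0.0
    for g, l in zip(gen, load):
        deficit = max(deficit + (l - scale * g), 0.0)  # running shortfall
        worst = max(worst, deficit)
    return worst

# Toy 24-h profiles: solar peaks at midday, wind is flatter, load double-peaked.
solar = [0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 8, 7, 6, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0]
wind  = [3, 3, 4, 4, 3, 3, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3]
load  = [2, 2, 2, 2, 2, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 6, 5, 4, 3, 2]

# Sweep the solar share in steps of 0.1 and keep the storage-minimising mix.
best_a = min((a / 10.0 for a in range(11)),
             key=lambda a: required_storage(solar, wind, load, a))
print(best_a)  # share of solar in the storage-minimising mix
```

With real multi-year hourly data the same sweep yields the kind of optimal share (63% solar / 37% wind) reported in the abstract.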

  8. A Spatial Faithful Cooperative System Based on Mixed Presence Groupware Model

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Wang, Xiangyu; Wang, Rui

    Traditional groupware platforms have been found restrictive and cumbersome for supporting geographically dispersed design collaboration. This paper starts with two groupware models, Single Display Groupware and Mixed Presence Groupware, then discusses some of their limitations and argues how these limitations could impair efficient communication among remote designers. Next, it suggests that support for spatial faithfulness and a Tangible User Interface (TUI) could help fill the gap between Face-to-Face (F2F) collaboration and computer-mediated remote collaboration. A spatially faithful groupware system with TUI support is then developed to illustrate this concept.

  9. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    PubMed

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results under varied settings are presented, and the method is applied to the KIRBY21 test-retest dataset.
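
    The GICC itself rests on multivariate probit-linear mixed models, but in the scalar case the underlying idea reduces to the classical intra-class correlation: between-subject variance over total variance. A minimal sketch of that scalar analogue on simulated test-retest data (not the KIRBY21 data, and not the MCMC-EM estimator):

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects ICC(1,1) from an (n_subjects, n_replicates)
    array, via the classical ANOVA mean-square estimator."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)              # between-subject MS
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
subj = rng.normal(0, 2.0, size=(50, 1))          # subject effect, variance 4
data = subj + rng.normal(0, 1.0, size=(50, 2))   # two replicate scans, noise variance 1
print(round(icc_oneway(data), 2))  # should land near 4 / (4 + 1) = 0.8
```

    GICC generalizes this ratio to whole graphs (vectors of edge weights) via the latent multivariate model.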

  10. Hemispheric Differences in Tropical Lower Stratospheric Transport and Tracers Annual Cycle

    NASA Technical Reports Server (NTRS)

    Tweedy, Olga; Waugh, D.; Stolarski, R.; Oman, L.

    2016-01-01

    Transport of long-lived tracers (such as O3, CO, and N2O) in the lower stratosphere largely determines the composition of the entire stratosphere. Stratospheric transport includes the mean residual circulation (with air rising in the tropics and sinking in the polar and middle latitudes), plus two-way isentropic (quasi-horizontal) mixing by eddies. However, the relative importance of the two transport components remains uncertain. Previous studies quantified the relative role of these processes based on tropics-wide average characteristics, under the common assumption of a well-mixed tropics. However, multiple instruments provide evidence of significant differences in the seasonal cycle of ozone between the Northern (0-20N) and Southern (0-20S) tropical lower stratosphere (NT and ST, respectively). In this study we investigate these differences in tracer seasonality and quantify the transport processes affecting tracer annual cycle amplitudes using simulations from the Goddard Earth Observing System Chemistry Climate Model (GEOSCCM) and the Whole Atmosphere Community Climate Model (WACCM), and compare them to observations from the Microwave Limb Sounder (MLS) on the Aura satellite. We detect the observed contrast between the ST and NT in GEOSCCM and WACCM: the annual cycle in ozone and other chemical tracers is larger in the NT than in the ST, but the opposite is true for the annual cycle in vertical advection. Ozone budgets in the models, analyzed within the Transformed Eulerian Mean (TEM) framework, demonstrate a major role of quasi-horizontal mixing and vertical advection in determining the NT-ST ozone distribution and behavior. Analysis of zonal variations in the NT and ST ozone annual cycles further suggests an important role of the North American and Asian summer monsoons (associated with strong isentropic mixing) in lower stratospheric ozone in the NT. Furthermore, a multi-model comparison shows that most CCMs reproduce the observed characteristics of the ozone annual cycle quite well. Thus, latitudinal variations within the tropics have to be considered in order to understand the balance between upwelling and quasi-horizontal mixing in the tropical lower stratosphere, and the paradigm of a well-mixed tropics has to be reconsidered.

  11. Contributions of Heterogeneous Ice Nucleation, Large-Scale Circulation, and Shallow Cumulus Detrainment to Cloud Phase Transition in Mixed-Phase Clouds with NCAR CAM5

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wang, Y.; Zhang, D.; Wang, Z.

    2016-12-01

    Mixed-phase clouds consisting of both liquid and ice water occur frequently at high latitudes and in mid-latitude storm track regions. This cloud type has been shown to play a critical role in the surface energy balance, surface air temperature, and sea ice melting in the Arctic. Cloud phase partitioning between liquid and ice water determines the cloud optical depth of mixed-phase clouds because of the distinct optical properties of liquid and ice hydrometeors. The representation and simulation of cloud phase partitioning in state-of-the-art global climate models (GCMs) are associated with large biases. In this study, the cloud phase partitioning in mixed-phase clouds simulated by the NCAR Community Atmosphere Model version 5 (CAM5) is evaluated against satellite observations. Observation-based supercooled liquid fraction (SLF) is calculated from MODIS and CloudSat CPR radar-detected liquid and ice water paths for clouds with cloud-top temperatures between -40 and 0°C. Sensitivity tests with CAM5 are conducted for different heterogeneous ice nucleation parameterizations with respect to aerosol influence (Wang et al., 2014), different phase transition temperatures for detrained cloud water from shallow convection (Kay et al., 2016), and different CAM5 model configurations (free-run versus nudged winds and temperature, Zhang et al., 2015). A classical nucleation theory-based ice nucleation parameterization in mixed-phase clouds increases the SLF, especially at temperatures colder than -20°C, and significantly improves the model agreement with observations in the Arctic. The change of transition temperature for detrained cloud water increases the SLF at higher temperatures and improves the SLF mostly over the Southern Ocean. Even with the improved SLF from the ice nucleation and shallow cumulus detrainment changes, the low SLF biases in some regions can only be removed through improved circulation with the nudging technique. Our study highlights the challenges of representing large-scale moisture transport, cloud microphysics, ice nucleation, and cumulus detrainment in order to improve the mixed-phase transition in GCMs.

  12. Rapid and Efficient Filtration-Based Procedure for Separation and Safe Analysis of CBRN Mixed Samples

    PubMed Central

    Bentahir, Mostafa; Laduron, Frederic; Irenge, Leonid; Ambroise, Jérôme; Gala, Jean-Luc

    2014-01-01

    Separating CBRN mixed samples that contain both chemical and biological warfare agents (CB mixed samples) in liquid and solid matrices remains a very challenging issue. Parameters were set up to assess the performance of a simple filtration-based method, first optimized on separate C- and B-agents and then assessed on a model CB mixed sample. In this model, MS2 bacteriophage, Autographa californica nuclear polyhedrosis baculovirus (AcNPV), Bacillus atrophaeus and Bacillus subtilis spores were used as biological agent simulants, whereas ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonic acid (PMPA) were used as VX and soman (GD) nerve agent surrogates, respectively. Nanoseparation centrifugal devices with various pore size cut-offs (30 kD up to 0.45 µm) and three RNA extraction methods (Invisorb, EZ1 and Nuclisens) were compared. RNA (MS2) and DNA (AcNPV) quantification was carried out by means of specific and sensitive quantitative real-time PCRs (qPCR). Liquid chromatography coupled to time-of-flight mass spectrometry (LC/TOFMS) was used to quantify EMPA and PMPA. Culture methods and qPCR demonstrated that membranes with a 30 kD cut-off retain more than 99.99% of the biological agents (MS2, AcNPV, Bacillus atrophaeus and Bacillus subtilis spores) tested separately. A rapid and reliable separation of CB mixed sample models (MS2/PEG-400 and MS2/EMPA/PMPA) contained in simple liquid or complex matrices such as sand and soil was also successfully achieved on a 30 kD filter, with more than 99.99% retention of MS2 on the filter membrane and up to 99% recovery of PEG-400, EMPA and PMPA in the filtrate. The whole separation process turnaround time (TAT) was less than 10 minutes. The filtration method appears to be rapid, versatile and extremely efficient. The separation method developed in this work therefore constitutes a useful model for further evaluating and comparing additional separation procedures for the safe handling and preparation of CB mixed samples. PMID:24505375

  13. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    NASA Technical Reports Server (NTRS)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.
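
    As a toy illustration of the setting (not the article's electronics model), a Kalman filter can track a linear degradation state and extrapolate it to a failure threshold to obtain a remaining-useful-life point estimate; propagating the filter covariance instead of just the mean is what yields the RUL density the article discusses. All numbers below are invented:

```python
import numpy as np

# Kalman filter tracking a linear degradation state [health, drift];
# RUL is when the predicted health crosses a failure threshold.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-drift degradation model
H = np.array([[1.0, 0.0]])               # only health is observed
Q = np.diag([1e-4, 1e-6])                # process noise
R = np.array([[0.01]])                   # measurement noise variance

x = np.array([1.0, -0.01])               # initial health and drift estimate
P = np.eye(2) * 0.1

rng = np.random.default_rng(1)
true_health = 1.0 - 0.012 * np.arange(60)
measurements = true_health + rng.normal(0, 0.1, 60)

for z in measurements:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

threshold = 0.2
rul = (threshold - x[0]) / x[1] if x[1] < 0 else np.inf
print(f"health {x[0]:.2f}, drift {x[1]:.4f}, RUL ~ {rul:.0f} steps")
```

    Replacing the point extrapolation with Monte Carlo draws from N(x, P) would produce the RUL probability density whose interpretation the article distinguishes.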

  14. Application of zero-inflated Poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the measures that help protect people from infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. To evaluate the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
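
    A mixed ZIP model adds random effects on top of the zero-inflated likelihood, but the core zero-inflated Poisson idea is easy to sketch. Below, a plain (fixed-effects-only, intercept-only) ZIP model is fitted by maximum likelihood to simulated counts; pi is the structural-zero probability and lambda the Poisson mean, both invented here:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson: with
    probability pi an observation is a structural zero, otherwise
    it is drawn from Poisson(lam)."""
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    lam = np.exp(log_lam)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))          # zeros: either source
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

rng = np.random.default_rng(2)
n = 2000
structural_zero = rng.random(n) < 0.3                    # true pi = 0.3
y = np.where(structural_zero, 0, rng.poisson(2.5, n))    # true lambda = 2.5

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
print(f"pi ~ {pi_hat:.2f}, lambda ~ {lam_hat:.2f}")
```

    Note that the likelihood correctly separates structural zeros from the ordinary Poisson zeros; a mixed version would add a subject-level random effect inside lam.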

  15. Forming groups of aggressive sows based on a predictive test of aggression does not affect overall sow aggression or welfare.

    PubMed

    Verdon, Megan; Morrison, R S; Hemsworth, P H

    2018-05-01

    This experiment examined the effects of group composition on sow aggressive behaviour and welfare. Over 6 time replicates, 360 sows (parity 1-6) were mixed into groups (10 sows per pen, 1.8 m²/sow) composed of animals predicted to be aggressive (n = 18 pens) or of randomly selected animals (n = 18 pens). Predicted aggressive sows were selected based on a model-pig test that has been shown to be related to the aggressive behaviour of parity 2 sows when subsequently mixed in groups. Measurements were taken of aggression delivered post-mixing and around feeding, fresh skin injuries, and plasma cortisol concentrations at days 2 and 24 post-mixing. Live weight gain, litter size (born alive, total born, stillborn piglets), and farrowing rate were also recorded. Manipulating group composition based on predicted sow aggressiveness had no effect (P > 0.05) on aggression delivered at mixing or around feeding, fresh injuries, cortisol, weight gain from day 2 to day 24, farrowing rate, or litter size. The lack of treatment effects in the present experiment could be attributed to (1) a failure of the model-pig test to predict aggression in older sows in groups, or (2) the dependence of the expression of the aggressive phenotype on factors such as social experience and characteristics (e.g., physical size and aggressive phenotype) of pen mates. This research draws attention to the intrinsic difficulties associated with predicting behaviour across contexts, particularly when the behaviour is highly dependent on interactions with conspecifics, and highlights the social complexities involved in the presentation of a behavioural phenotype. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
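
    For orientation, the classical IEM model (interaction by exchange with the mean) is the standard baseline that such time-dependent mixing models aim to improve on: each notional particle's concentration relaxes toward the ensemble mean at a rate set by a mixing frequency, so the mean is conserved while the variance decays. A minimal particle sketch of IEM, not the model proposed in the paper:

```python
import numpy as np

# IEM mixing model: dc_i/dt = -(1/2) * C_phi * omega * (c_i - <c>)
# The mean is conserved exactly; the variance decays like
# exp(-C_phi * omega * t).
rng = np.random.default_rng(3)
c = rng.normal(1.0, 0.5, size=10_000)   # particle concentrations
c_phi, omega, dt = 2.0, 1.0, 0.01

mean0, var0 = c.mean(), c.var()
for _ in range(200):                    # integrate to t = 2
    c += -0.5 * c_phi * omega * (c - c.mean()) * dt

print(round(c.var() / var0, 3))         # theory: exp(-4) ~ 0.018
```

    The paper's contribution can be read as replacing the constant relaxation rate above with a time-dependent one calibrated against the concentration variance equation.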

  17. Models for Temperature and Composition in Uranus from Spitzer, Herschel and Ground-Based Infrared through Millimeter Observations

    NASA Astrophysics Data System (ADS)

    Orton, Glenn; Fletcher, Leigh; Feuchtgruber, Helmut; Lellouch, Emmanuel; Moreno, Raphael; Hartogh, Paul; Jarchow, Christopher; Swinyard, Bruce; Moses, Julianne; Burgdorf, Martin; Hammel, Heidi; Line, Michael; Mainzer, Amy; Hofstadter, Mark; Sandell, Goran; Dowell, Charles

    2014-05-01

    Photometric and spectroscopic observations of Uranus were combined to create self-consistent models of its global-mean temperature profile, bulk composition, and vertical distribution of gases. These were derived from a suite of spacecraft and ground-based observations that includes the Spitzer IRS and the Herschel HIFI, PACS and SPIRE instruments, together with ground-based observations from UKIRT and CSO. Observations of the collision-induced absorption of H2 constrained the temperature structure in the troposphere; this was possible up to atmospheric pressures of ~2 bars. Temperatures in the stratosphere were constrained by H2 quadrupole line emission. We coupled the vertical distribution of CH4 in the stratosphere of Uranus with models for the vertical mixing in a way that is consistent with the mixing ratios of hydrocarbons whose abundances are influenced primarily by mixing rather than chemistry. Spitzer and Herschel data constrain the abundances of CH3, CH4, C2H2, C2H6, C3H4, C4H2, H2O and CO2. The Spitzer IRS data, in concert with photochemical models, show that the homopause of the atmosphere is at much higher pressures than for the other outer planets, with the predominant trace constituents at pressures lower than 10 μbar being H2O and CO2. At millimeter wavelengths, there is evidence that an additional opacity source is required besides the H2 collision-induced absorption and the NH3 absorption needed to match the microwave spectrum; this can reasonably (but not uniquely) be attributed to H2S. These models will be matured by consideration of spatial variability from Voyager IRIS and more recent spatially resolved imaging and mapping from ground-based observatories. The model is of 'programmatic' interest because it serves as a calibration source for Herschel instruments, and it provides a starting point for planning future spacecraft investigations of the atmosphere of Uranus.

  18. Housing Value Projection Model Related to Educational Planning: The Feasibility of a New Methodology. Final Report.

    ERIC Educational Resources Information Center

    Helbock, Richard W.; Marker, Gordon

    This study concerns the feasibility of a Markov chain model for projecting housing values and racial mixes. Such projections could be used in planning the layout of school districts to achieve desired levels of socioeconomic heterogeneity. Based upon the concepts and assumptions underlying a Markov chain model, it is concluded that such a model is…
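
    The projection mechanics of such a Markov chain model are straightforward: a row-stochastic transition matrix is applied repeatedly to the current distribution of housing units over value classes. The classes and transition probabilities below are entirely hypothetical:

```python
import numpy as np

# Hypothetical annual transition probabilities between housing value
# classes (low, mid, high); each row sums to one.
P = np.array([
    [0.85, 0.12, 0.03],   # low  -> low, mid, high
    [0.10, 0.80, 0.10],   # mid  -> low, mid, high
    [0.02, 0.13, 0.85],   # high -> low, mid, high
])
state = np.array([0.5, 0.3, 0.2])   # current share of units in each class

for year in range(10):
    state = state @ P                # one-year projection step

print(np.round(state, 3))            # projected mix after 10 years
```

    Because P is row-stochastic, the projected shares always remain a valid distribution, which is what makes the chain attractive for long-range planning.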

  19. A Structural Equation Model at the Individual and Group Level for Assessing Faking-Related Change

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan; Anguiano-Carrasco, Cristina

    2011-01-01

    This article proposes a comprehensive approach based on structural equation modeling for assessing the amount of trait-level change derived from faking-motivating situations. The model is intended for a mixed 2-wave 2-group design, and assesses change at both the group and the individual level. Theoretically the model adopts an integrative…

  20. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
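
    Setting the error structures aside, the deterministic core of a biotracer mixing model is a simplex-constrained inverse problem: find non-negative source proportions summing to one whose mixed signature matches the consumer's. A noise-free sketch with invented tracer values (not the Bayesian MixSIR/SIAR machinery):

```python
import numpy as np
from scipy.optimize import minimize

# Tracer signatures (rows: delta13C, delta15N) of three hypothetical sources.
S = np.array([[-22.0, -18.0, -12.0],
              [  6.0,  10.0,  14.0]])
true_p = np.array([0.5, 0.3, 0.2])
consumer = S @ true_p                   # noise-free mixture signature

def loss(p):
    """Squared mismatch between the mixed and observed signatures."""
    return np.sum((S @ p - consumer) ** 2)

res = minimize(loss, x0=np.full(3, 1 / 3),
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
print(np.round(res.x, 2))               # should recover [0.5, 0.3, 0.2]
```

    The Bayesian models in the abstract replace this point estimate with a posterior over the simplex, and differ precisely in where they place the residual error this sketch omits.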

  1. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  2. Modeling Power Plant Cooling Water Requirements: A Regional Analysis of the Energy-Water Nexus Considering Renewable Sources within the Power Generation Mix

    NASA Astrophysics Data System (ADS)

    Peck, Jaron Joshua

    Water is used in power generation for cooling processes in thermoelectric power plants and currently withdraws more water than any other sector in the U.S. Reducing water use from power generation will help to alleviate water stress in at-risk areas, where droughts have the potential to strain water resources. The amount of water used for power varies depending on many climatic aspects as well as plant operation factors. This work presents a model that quantifies the water use for power generation for two regions representing different generation fuel portfolios, California and Utah. The analysis of the California Independent System Operator introduces the methods of water-energy modeling by creating an overall water use factor, in volume of water per unit of energy produced, based on the fuel generation mix of the area. The idea of water monitoring based on energy used by a building or region is explored based on live fuel mix data, for the purposes of increasing public awareness of the water associated with personal energy use and helping to promote greater energy efficiency. The Utah case study explores the effects that more renewable, and less water-intensive, forms of energy will have on the overall water use from power generation for the state. Using a model similar to that of the California case study, total water savings are quantified based on power reduction scenarios involving increased use of renewable energy. The plausibility of implementing more renewable energy into Utah’s power grid is also discussed. Data resolution, as well as dispatch methods, economics, and solar variability, introduces some uncertainty into the analysis.

  3. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, R; Gallagher, B; Neville, J

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  4. Assessing and Upgrading Ocean Mixing for the Study of Climate Change

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Fells, J.; Lindo, F.; Tulsee, V.; Canuto, V.; Cheng, Y.; Dubovikov, M. S.; Leboissetier, A.

    2016-12-01

    Climate is critical. Climate variability affects us all; climate change is a burning issue. Droughts, floods, other extreme events, and Global Warming's effects on these, together with problems such as sea-level rise and ecosystem disruption, threaten lives. Citizens must be informed to make decisions concerning climate, such as "business as usual" vs. mitigating emissions to keep warming within bounds. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. To make useful predictions we must realistically model each component of the climate system, including the ocean, whose critical role includes transporting and storing heat and dissolved CO2. We need physically based parameterizations of key ocean processes that can't be represented explicitly in a global climate model, e.g., vertical and lateral mixing. The NASA-GISS turbulence group uses theory to model mixing, including: 1) a comprehensive scheme for small-scale vertical mixing, covering convection and shear, internal waves and double-diffusion, and bottom tides; 2) a new parameterization for the lateral and vertical mixing by mesoscale eddies. For better understanding we write our own programs. To assess the modelling, MATLAB programs visualize and calculate statistics, including means, standard deviations, and correlations, on NASA-GISS OGCM output with different mixing schemes, and help us study drift from observations. We also try to upgrade the schemes, e.g., the bottom tidal mixing parameterization's roughness, calculated from high-resolution topographic data using Gaussian weighting functions with cut-offs. We study the effects of their parameters to improve them. A FORTRAN program extracts topography data subsets of manageable size for a MATLAB program, tested on idealized cases, to visualize and calculate roughness on. 
    Students are introduced to modeling a complex system, gain a deeper appreciation of climate science, programming skills, and familiarity with MATLAB, while furthering climate science by improving our mixing schemes. We are incorporating climate research into our college curriculum. The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, an urban minority-serving institution in central Brooklyn. Supported by NSF Award AGS-1359293.

  5. Study on the Spectral Mixing Model for Mineral Pigments Based on Derivative of Ratio Spectroscopy: Take Vermilion and Stone Yellow as an Example

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Hao, Y.; Liu, X.; Hou, M.; Zhao, X.

    2018-04-01

    Hyperspectral remote sensing is a completely non-invasive technology for the measurement of cultural relics, and has been successfully applied to the identification and analysis of pigments in Chinese historical paintings. Although mixing pigments is very common in Chinese historical paintings, the quantitative analysis of mixed pigments in ancient paintings remains unsolved. In this research, we took two typical mineral pigments, vermilion and stone yellow, as examples, made precisely mixed samples of the two pigments, and measured their spectra in the laboratory. On the mixed spectra, both the fully constrained least squares (FCLS) method and derivative of ratio spectroscopy (DRS) were performed. Experimental results showed that the mixed spectra of vermilion and stone yellow had strongly nonlinear mixing characteristics, but at some bands linear unmixing could also achieve satisfactory results. DRS using strongly linear bands can reach much higher accuracy than FCLS using the full set of bands.
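
    The FCLS step used for comparison can be sketched compactly: a common implementation enforces the sum-to-one constraint by appending a heavily weighted row of ones to the endmember matrix before solving a non-negative least squares problem. The two "endmember" spectra below are synthetic curves standing in for measured vermilion and stone yellow reflectances:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(endmembers, spectrum, delta=1e3):
    """Fully constrained least squares unmixing: abundances are
    non-negative (via NNLS) and sum to one (via a heavily weighted
    appended row of ones)."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(spectrum, delta)
    abundances, _ = nnls(E, y)
    return abundances

# Synthetic two-endmember reflectance curves (illustrative only)
bands = np.linspace(400, 1000, 50)
vermilion = 0.1 + 0.8 / (1 + np.exp(-(bands - 600) / 20))     # red edge ~600 nm
stone_yellow = 0.1 + 0.7 / (1 + np.exp(-(bands - 500) / 25))  # edge ~500 nm
E = np.column_stack([vermilion, stone_yellow])

mix = 0.7 * vermilion + 0.3 * stone_yellow   # linear mixture, no noise
print(np.round(fcls(E, mix), 2))             # recovers ~[0.7, 0.3]
```

    On a perfectly linear, noise-free mixture FCLS recovers the proportions exactly; the nonlinearity the study reports is what degrades full-band FCLS on real pigment mixtures.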

  6. Catalytic oxidation of toluene: comparative study over powder and monolithic manganese-nickel mixed oxide catalysts.

    PubMed

    Duplančić, Marina; Tomašić, Vesna; Gomzi, Zoran

    2017-07-05

    This paper is focused on development of the metal monolithic structure for total oxidation of toluene at low temperature. The well-adhered catalyst, based on the mixed oxides of manganese and nickel, is washcoated on the Al/Al2O3 plates as metallic support. For the comparison purposes, results observed for the manganese-nickel mixed oxide supported on the metallic monolith are compared with those obtained using powder type of the same catalyst. Prepared manganese-nickel mixed oxides in both configurations show remarkable low-temperature activity for the toluene oxidation. The reaction temperature T50 corresponding to 50% of the toluene conversion is observed at temperatures of ca. 400-430 K for the powder catalyst and at ca. 450-490 K for the monolith configuration. The appropriate mathematical models, such as one-dimensional (1D) pseudo-homogeneous model of the fixed bed reactor and the 1D heterogeneous model of the metal monolith reactor, are applied to describe and compare catalytic performances of both reactors. Validation of the applied models is performed by comparing experimental data with theoretical predictions. The obtained results confirmed that the reaction over the monolithic structure is kinetically controlled, while in the case of the powder catalyst the reaction rate is influenced by the intraphase diffusion.

  7. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. 
In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions as case mix is restricted and patients become more homogeneous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of a model's performance. Copyright © 2012 American College of Surgeons. All rights reserved.
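The two diagnostics compared in this record can be sketched concretely. Below is a minimal Python illustration, using simulated rather than NSQIP data, of a pairwise c-statistic and a decile-based predicted-vs-observed calibration table; the function names are ours, not from the study:

```python
import numpy as np

def c_statistic(y_true, y_prob):
    """Concordance probability: chance that a random event case is ranked
    above a random non-event case (ties count one half)."""
    pos = y_prob[y_true == 1]
    neg = y_prob[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

def calibration_deciles(y_true, y_prob, n_bins=10):
    """Predicted vs observed event counts over risk deciles."""
    order = np.argsort(y_prob)
    return [(y_prob[b].sum(), y_true[b].sum())
            for b in np.array_split(order, n_bins)]

# Simulated cohort: risks drawn uniformly, outcomes well calibrated by design
rng = np.random.default_rng(0)
risk = rng.uniform(0, 1, 5000)
event = (rng.uniform(0, 1, 5000) < risk).astype(int)
print(round(c_statistic(event, risk), 3))
for pred, obs in calibration_deciles(event, risk):
    print(round(pred, 1), int(obs))
```

For this well-calibrated toy cohort the predicted and observed counts agree in every decile even though the c-statistic is far from 1, illustrating that the two diagnostics measure different things.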

  8. Design optimization of single mixed refrigerant LNG process using a hybrid modified coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong

    2018-01-01

    Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibilities that degrade the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed algorithm outperformed existing methodologies in finding the optimal operating condition of this complex mixed refrigerant liquefaction process. By applying it, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of liquefaction-process optimization from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
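The coordinate descent idea underlying the paper's HMCD, cycling through decision variables one at a time and shrinking the probe step when no coordinate improves, can be sketched on a toy quadratic objective. This is a generic illustration under assumed settings, not the authors' hybrid algorithm or their Aspen Hysys® flowsheet objective:

```python
import numpy as np

def coordinate_descent(f, x0, step=0.5, tol=1e-8, max_iter=200):
    """Cycle through coordinates, probing a step in each direction and
    halving the step when no coordinate improves (a plain stand-in for
    the paper's hybrid modified coordinate descent)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Toy surrogate objective (not the LNG process model): shifted quadratic
x_opt, f_opt = coordinate_descent(lambda v: (v[0] - 1)**2 + 2 * (v[1] + 0.5)**2,
                                  [0.0, 0.0])
print(x_opt, round(f_opt, 6))
```

Derivative-free probing of one variable at a time is what makes this family of methods convenient when the objective is an external process simulator rather than an analytic function.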

  9. Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.

    PubMed

    Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A

    2013-01-01

    Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
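The first of the four steps above, color-based segmentation of the hand, can be illustrated with a deliberately crude red-dominance test. The thresholds and the `skin_mask` helper are assumptions for illustration only; practical systems calibrate a color model per user and lighting conditions:

```python
import numpy as np

def skin_mask(rgb, r_min=95, margin=15):
    """Very rough color-based hand segmentation: keep pixels whose red
    channel dominates green and blue. Thresholds are illustrative only."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > r_min) & (r - g > margin) & (r - b > margin)

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = (200, 120, 100)      # a small skin-toned patch
print(int(skin_mask(frame).sum()))     # → 4
```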

  10. Elastic-viscoplastic modeling of soft biological tissues using a mixed finite element formulation based on the relative deformation gradient.

    PubMed

    Weickenmeier, J; Jabareen, M

    2014-11-01

    The characteristic highly nonlinear, time-dependent, and often inelastic material response of soft biological tissues can be expressed in a set of elastic-viscoplastic constitutive equations. The specific elastic-viscoplastic model for soft tissues proposed by Rubin and Bodner (2002) is generalized with respect to the constitutive equations for the scalar quantity of the rate of inelasticity and the hardening parameter in order to represent a general framework for elastic-viscoplastic models. A strongly objective integration scheme and a new mixed finite element formulation were developed based on the introduction of the relative deformation gradient-the deformation mapping between the last converged and current configurations. The numerical implementation of both the generalized framework and the specific Rubin and Bodner model is presented. As an example of a challenging application of the new model equations, the mechanical response of facial skin tissue is characterized through an experimental campaign based on the suction method. The measurement data are used for the identification of a suitable set of model parameters that well represents the experimentally observed tissue behavior. Two different measurement protocols were defined to address specific tissue properties with respect to the instantaneous tissue response, inelasticity, and tissue recovery. Copyright © 2014 John Wiley & Sons, Ltd.

  11. High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    NASA Technical Reports Server (NTRS)

    Whorton, M. S.

    1998-01-01

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited by the difficulty of obtaining accurate models for flexible space structures. Achieving performance high enough to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and to tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading off nominal performance against robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines the parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.

  12. Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice

    NASA Astrophysics Data System (ADS)

    Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand

    2018-01-01

    Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained within the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. The present results highlight the need for large-scale climate models to account for the NSTM layer.

  13. Modeling tidal exchange and dispersion in Boston Harbor

    USGS Publications Warehouse

    Signell, Richard P.; Butman, Bradford

    1992-01-01

    Tidal dispersion and the horizontal exchange of water between Boston Harbor and the surrounding ocean are examined with a high-resolution (200 m) depth-averaged numerical model. The strongly varying bathymetry and coastline geometry of the harbor generate complex spatial patterns in the modeled tidal currents which are verified by shipboard acoustic Doppler surveys. Lagrangian exchange experiments demonstrate that tidal currents rapidly exchange and mix material near the inlets of the harbor due to asymmetry in the ebb/flood response. This tidal mixing zone extends roughly a tidal excursion from the inlets and plays an important role in the overall flushing of the harbor. Because the tides can only efficiently mix material in this limited region, however, harbor flushing must be considered a two step process: rapid exchange in the tidal mixing zone, followed by flushing of the tidal mixing zone by nontidal residual currents. Estimates of embayment flushing based on tidal calculations alone therefore can significantly overestimate the flushing time that would be expected under typical environmental conditions. Particle-release simulations from point sources also demonstrate that while the tides efficiently exchange material in the vicinity of the inlets, the exact nature of dispersion from point sources is extremely sensitive to the timing and location of the release, and the distribution of particles is streaky and patchlike. This suggests that high-resolution modeling of dispersion from point sources in these regions must be performed explicitly and cannot be parameterized as a plume with Gaussian-spreading in a larger scale flow field.

  14. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series

    PubMed Central

    MARTINEZ, Josue G.; BOHN, Kirsten M.; CARROLL, Raymond J.

    2013-01-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible. PMID:23997376
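The idea of a variable window overlap chosen from the signal length, so that chirps of different durations map onto a common relative time scale, can be sketched as follows. This is our illustration of the general idea (here via SciPy), not the authors' implementation:

```python
import numpy as np
from scipy.signal import spectrogram

def relative_time_spectrogram(x, fs, n_frames=100, nperseg=128):
    """Spectrogram on a relative time scale: pick the hop (and hence the
    window overlap) from the signal length so every call yields roughly
    n_frames time slices, making variable-length chirps comparable."""
    hop = max(1, (len(x) - nperseg) // (n_frames - 1))
    noverlap = nperseg - hop
    freqs, t, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return freqs, t / t[-1], S  # second output: relative position in [0, 1]

fs = 10_000
for dur in (0.05, 0.12):               # two "chirps" of different duration
    n = int(fs * dur)
    sig = np.sin(2 * np.pi * 1500 * np.arange(n) / fs)
    freqs, t_rel, S = relative_time_spectrogram(sig, fs)
    print(S.shape[1], float(t_rel[-1]))
```

Both signals produce a comparable number of time slices over the same [0, 1] relative axis despite their different lengths, which is the precondition for fitting a functional model across chirps.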

  15. Modeling dam-break flows using finite volume method on unstructured grid

    USDA-ARS?s Scientific Manuscript database

    Two-dimensional shallow water models based on unstructured finite volume method and approximate Riemann solvers for computing the intercell fluxes have drawn growing attention because of their robustness, high adaptivity to complicated geometry and ability to simulate flows with mixed regimes and di...

  16. Evaluating methods to visualize patterns of genetic differentiation on a landscape.

    PubMed

    House, Geoffrey L; Hahn, Matthew W

    2018-05-01

    With advances in sequencing technology, research in the field of landscape genetics can now be conducted at unprecedented spatial and genomic scales. This has been especially evident when using sequence data to visualize patterns of genetic differentiation across a landscape due to demographic history, including changes in migration. Two recent model-based visualization methods that can highlight unusual patterns of genetic differentiation across a landscape, SpaceMix and EEMS, are increasingly used. While SpaceMix's model can infer long-distance migration, EEMS' model is more sensitive to short-distance changes in genetic differentiation, and it is unclear how these differences may affect their results in various situations. Here, we compare SpaceMix and EEMS side by side using landscape genetics simulations representing different migration scenarios. While both methods excel when patterns of simulated migration closely match their underlying models, they can produce either unintuitive or misleading results when the simulated migration patterns match their models less well, and this may be difficult to assess in empirical data sets. We also introduce unbundled principal components (un-PC), a fast, model-free method to visualize patterns of genetic differentiation by combining principal components analysis (PCA), which is already used in many landscape genetics studies, with the locations of sampled individuals. Un-PC has characteristics of both SpaceMix and EEMS and works well with simulated and empirical data. Finally, we introduce msLandscape, a collection of tools that streamline the creation of customizable landscape-scale simulations using the popular coalescent simulator ms and conversion of the simulated data for use with un-PC, SpaceMix and EEMS. © 2017 John Wiley & Sons Ltd.

  17. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two-part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
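The two-part decomposition can be sketched numerically: estimate the probability of a nonzero response, model the positive values on the log scale, and recombine with the lognormal back-transform exp(s²/2). This intercept-only sketch on simulated data omits the random effects and covariates of the actual small area model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
nonzero = rng.uniform(size=n) < 0.3                     # part 1: occurrence
y = np.where(nonzero, rng.lognormal(1.0, 0.5, n), 0.0)  # part 2: positive size

# Part 1: probability of a strictly positive response (intercept-only stand-in
# for the generalized linear mixed model)
p_hat = float((y > 0).mean())
# Part 2: model on the logarithmic scale (intercept-only stand-in for the LMM)
logy = np.log(y[y > 0])
mu, s = float(logy.mean()), float(logy.std(ddof=1))
# Recombine: E[Y] = P(Y > 0) * E[Y | Y > 0], with smearing factor exp(s^2/2)
mean_hat = p_hat * np.exp(mu + 0.5 * s**2)
print(round(mean_hat, 3), round(float(y.mean()), 3))
```

The two-part estimate closely matches the raw sample mean here, while remaining extensible to covariates and area-level random effects in each part.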

  18. Miscibility and Thermodynamics of Mixing of Different Models of Formamide and Water in Computer Simulation.

    PubMed

    Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál

    2017-07-27

    The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered: it yields negative energy-of-mixing values, while the other models yield positive values, in combination with all three water models considered. Experimental data supports this latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility or, at least, approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.
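The thermodynamic integration step can be sketched numerically: the free-energy change is the integral of the ensemble average ⟨dU/dλ⟩ over the coupling parameter λ. Here that average is replaced by an assumed analytic profile, since the real values would come from the Monte Carlo runs:

```python
import numpy as np

# Illustrative <dU/dlambda> profile along the coupling parameter: an assumed
# analytic stand-in for the ensemble averages a Monte Carlo run would supply.
lam = np.linspace(0.0, 1.0, 21)
dU_dlam = 2.0 - 3.0 * lam**2          # arbitrary units

# Thermodynamic integration: Delta F = integral_0^1 <dU/dlambda> dlambda,
# evaluated with the composite trapezoidal rule
delta_F = float(((dU_dlam[:-1] + dU_dlam[1:]) / 2 * np.diff(lam)).sum())
print(round(delta_F, 3))              # → 0.999 (exact value: 1.0)
```

In practice each grid point requires a separate equilibrated simulation at that λ, so the spacing of the grid trades accuracy against simulation cost.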

  19. Microphysical and macrophysical characteristics of ice and mixed-phase clouds compared between in-situ observations from the NSF ORCAS campaign and the NCAR Community Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Diao, M.; D'Alessandro, J.; Wu, C.; Liu, X.; Jensen, J. B.

    2016-12-01

    Large spatial coverage of ice and mixed-phase clouds is frequently observed in the higher latitudinal regions, especially over the Arctic and Antarctica. However, because the microphysical properties of ice and mixed-phase clouds are highly variable in space, major challenges still remain in understanding the characteristics of these clouds on the microscale, as well as in representing the sub-grid scale variabilities of relative humidity in General Circulation Models. In this work, we use the in-situ, airborne observations from the NSF O2/N2 Ratio and CO2 Airborne Southern Ocean (ORCAS) Study (January - February 2016) to analyze the microphysical and macrophysical characteristics of ice and mixed-phase clouds over the Southern Ocean. A total of 18 flights onboard the NSF Gulfstream-V research aircraft are used to quantify the cloud properties and relative humidity distributions at various temperatures, pressures and aerosol backgrounds. New QC/QA water vapor data of the Vertical Cavity Surface Emitting Laser based on the laboratory calibration in summer 2016 will be presented. The statistical distributions of cloud microphysical properties and relative humidity with respect to ice (RHi) derived from in-situ observations will be compared with the NCAR Community Atmospheric Model Version 5 (CAM5). The horizontal extent of ice and mixed-phase clouds, and their formation and evolution, will be derived based on the method of Diao et al. (2013). The occurrence frequency of ice supersaturation (ISS; i.e., RHi > 100%) will be examined in relation to various chemical tracers (i.e., O3 and CO) and total aerosol number concentrations (e.g., aerosols > 0.1 μm and > 0.5 μm) at clear-sky and in-cloud conditions. We will quantify whether these characteristics of ISS are scale-dependent from the microscale to the mesoscale.
Overall, our work will evaluate the spatial variabilities of RHi inside/outside of ice and mixed-phase clouds, the frequency and magnitude of ice supersaturation, as well as the correlations between ice water content and liquid water content in the CAM5 simulations.

  20. Mid-depth temperature maximum in an estuarine lake

    NASA Astrophysics Data System (ADS)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

    The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to the case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer that is sharp enough to prevent the temperature increase with depth from causing convective mixing and double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction, and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum ‘pumping’, resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contributions of the above mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as: a small mixed-layer depth (roughly ≲2 m), transparent water, a daytime wind maximum, and cloudless weather. We exemplify the effect of mixed-layer depth on TeM with a set of selected lakes.

  1. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    PubMed

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
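The one-step prediction idea at the heart of the EKF-based likelihood can be sketched for a scalar linear SDE, where the extended filter reduces to the ordinary Kalman filter. This is a generic illustration, not the PSM package's API; for a correctly specified model the standardized innovations should have roughly unit variance:

```python
import numpy as np

# Scalar Ornstein-Uhlenbeck-type state with noisy observations:
#   dx = -a*x dt + sigma dW,   y_k = x_k + e_k
a, sigma, s_obs, dt = 0.5, 0.3, 0.1, 0.1
F = np.exp(-a * dt)                               # discrete transition
Q = sigma**2 * (1 - np.exp(-2 * a * dt)) / (2 * a)  # exact process noise

rng = np.random.default_rng(2)
x, ys = 1.0, []
for _ in range(200):                              # simulate the true system
    x = F * x + rng.normal(0.0, np.sqrt(Q))
    ys.append(x + rng.normal(0.0, s_obs))

xh, P, resid = 0.0, 1.0, []
for y in ys:
    xh, P = F * xh, F * P * F + Q                 # one-step prediction
    S = P + s_obs**2                              # innovation variance
    resid.append((y - xh) / np.sqrt(S))           # feeds the likelihood
    K = P / S                                     # Kalman gain
    xh, P = xh + K * (y - xh), (1 - K) * P        # measurement update
print(round(float(np.std(resid)), 2))
```

The standardized one-step prediction errors are exactly the quantities whose autocorrelation the SDE extension is designed to remove relative to an ODE fit.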

  2. Learning Strategies Model to Enhance Thai Undergraduate Students' Self-Efficacy Beliefs in EIL Textual Reading Performance

    ERIC Educational Resources Information Center

    Kakew, Jiraporn; Damnet, Anamai

    2017-01-01

    This classroom-based study of a learning strategies model was designed to investigate its application in a mixed-ability classroom. The study built on Oxford's language learning strategies model (1990, 2001), supplementing it with rhetorical strategies to accommodate challenges encountered in the paradigm of English as an international language…

  3. The Effects of a Simulation Game on Mental Models about Organizational Systems

    ERIC Educational Resources Information Center

    Reese, Rebecca M.

    2017-01-01

    This mixed methods study was designed to uncover evidence of change to mental models about organizational systems resulting from participation in a simulation game that is based on a system dynamics model. Thirty participants in a 2 day experiential workshop completed a pretest and posttest to assess learning about particular systems concepts.…

  4. Intercomparison of granular stress and turbulence models for unidirectional sheet flow applications

    NASA Astrophysics Data System (ADS)

    Chauchat, J.; Cheng, Z.; Hsu, T. J.

    2016-12-01

    The intergranular stresses are one of the key elements in two-phase sediment transport models. There are two main existing approaches: the kinetic theory of granular flows (Jenkins and Hanes, 1998; Hsu et al., 2004) and phenomenological rheologies such as the one proposed by Bagnold (Hanes and Bowen, 1985) or the μ(I) dense granular flow rheology (Revil-Baudard and Chauchat, 2013). Concerning the turbulent Reynolds stress, mixing length and k-ɛ turbulence models have been validated in previous studies (Revil-Baudard and Chauchat, 2013; Hsu et al., 2004). Recently, sedFoam was developed based on the kinetic theory of granular flows and a k-ɛ turbulence model (Cheng and Hsu, 2014). In this study, we further extended sedFoam by implementing the mixing length model and the dense granular flow rheology following Revil-Baudard and Chauchat (2013). This allows us to objectively compare the different combinations of intergranular stress closures (kinetic theory or the dense granular flow rheology) and turbulence models (mixing length or k-ɛ) under unidirectional sheet flow conditions. We found that the calibrated mixing length and k-ɛ models predict similar velocity and concentration profiles. The differences observed between the kinetic theory and the dense granular flow rheology require further investigation. In particular, we hypothesize that the extended kinetic theory proposed by Berzi (2011) would probably improve the existing combination of the kinetic theory with a simple Coulomb frictional model in sedFoam. A semi-analytical solution proposed by Berzi and Fraccarollo (2013) for sediment transport rate and sheet layer thickness versus the Shields number is compared with the results obtained using the dense granular flow rheology and the mixing length model. The results are similar, which demonstrates that both the extended kinetic theory and the dense granular flow rheology can be used to model intergranular stresses under sheet flow conditions.
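For reference, the μ(I) dense granular flow rheology is commonly written as μ(I) = μ_s + (μ_2 − μ_s)/(1 + I_0/I), with the inertial number I = γ̇ d / √(P/ρ). A small sketch with illustrative parameter values (assumptions for demonstration, not values fitted in the cited studies):

```python
import math

MU_S, MU_2, I_0 = 0.38, 0.64, 0.3   # illustrative rheological parameters

def mu_I(I, mu_s=MU_S, mu_2=MU_2, i0=I_0):
    """Friction coefficient: mu_s at vanishing inertial number, tending
    toward mu_2 for large I."""
    return mu_s + (mu_2 - mu_s) / (1.0 + i0 / I)

def inertial_number(gamma_dot, d, pressure, rho):
    """I = gamma_dot * d / sqrt(P / rho): shear rate times grain size over
    the pressure-rescaled velocity scale."""
    return gamma_dot * d / math.sqrt(pressure / rho)

# Assumed sheet-flow-like numbers: fine sand sheared under light confinement
I = inertial_number(gamma_dot=50.0, d=2e-4, pressure=100.0, rho=2650.0)
print(round(I, 4), round(mu_I(I), 3))
```

The rate dependence encoded in μ(I) is what distinguishes this closure from a simple Coulomb frictional model with constant friction coefficient.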

  5. Model simulations of dense bottom currents in the Western Baltic Sea

    NASA Astrophysics Data System (ADS)

    Burchard, Hans; Janssen, Frank; Bolding, Karsten; Umlauf, Lars; Rennau, Hannes

    2009-01-01

    Only recently, medium intensity inflow events into the Baltic Sea have gained more awareness because of their potential to ventilate intermediate layers in the Southern Baltic Sea basins. With the present high-resolution model study of the Western Baltic Sea a first attempt is made to obtain model based realistic estimates of turbulent mixing in this area where dense bottom currents resulting from medium intensity inflow events are weakened by turbulent entrainment. The numerical model simulation which is carried out using the General Estuarine Transport Model (GETM) during nine months in 2003 and 2004 is first validated by means of three automatic stations at the Drogden and Darss Sills and in the Arkona Sea. In order to obtain good agreement between observations and model results, the 0.5×0.5 nautical mile bathymetry had to be adjusted in order to account for the fact that even at that scale many relevant topographic features are not resolved. Current velocity, salinity and turbulence observations during a medium intensity inflow event through the Øresund are then compared to the model results. Given the general problems of point to point comparisons between observations and model simulations, the agreement is fairly good with the characteristic features of the inflow event well represented by the model simulations. Two different bulk measures for mixing activity are then introduced, the vertically integrated decay of salinity variance, which is equal to the production of micro-scale salinity variance, and the vertically integrated turbulent salt flux, which is related to an increase of potential energy due to vertical mixing of stably stratified flow. Both measures give qualitatively similar results and identify the Drogden and Darss Sills as well as the Bornholm Channel as mixing hot spots. 
Further regions of strong mixing are the dense bottom current pathways from these sills into the Arkona Sea, areas around Kriegers Flak (a shoal in the western Arkona Sea) and north-west of the island of Rügen.

  6. Late-time mixing and turbulent behavior in high-energy-density shear experiments at high Atwood numbers

    DOE PAGES

    Flippo, K. A.; Doss, F. W.; Merritt, E. C.; ...

    2018-05-30

    The LANL Shear Campaign uses millimeter-scale initially solid shock tubes on the National Ignition Facility to conduct high-energy-density hydrodynamic plasma experiments, capable of reaching energy densities exceeding 100 kJ/cm³. These shock-tube experiments have for the first time reproduced spontaneously emergent coherent structures due to shear-based fluid instabilities [i.e., Kelvin-Helmholtz (KH)], demonstrating hydrodynamic scaling over 8 orders of magnitude in time and velocity. The KH vortices, referred to as “rollers,” and the secondary instabilities, referred to as “ribs,” are used to understand the turbulent kinetic energy contained in the system. Their evolution is used to understand the transition to turbulence and that transition's dependence on initial conditions. Experimental results from these studies are well modeled by the RAGE (Radiation Adaptive Grid Eulerian) hydro-code using the Besnard-Harlow-Rauenzahn turbulent mix model. Information inferred from both the experimental data and the mix model allows us to demonstrate that the specific Turbulent Kinetic Energy (sTKE) in the layer, as calculated from the plan-view structure data, is consistent with the mixing width growth and the RAGE simulations of sTKE.

  7. Late-time mixing and turbulent behavior in high-energy-density shear experiments at high Atwood numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flippo, K. A.; Doss, F. W.; Merritt, E. C.

    The LANL Shear Campaign uses millimeter-scale initially solid shock tubes on the National Ignition Facility to conduct high-energy-density hydrodynamic plasma experiments, capable of reaching energy densities exceeding 100 kJ/cm³. These shock-tube experiments have for the first time reproduced spontaneously emergent coherent structures due to shear-based fluid instabilities [i.e., Kelvin-Helmholtz (KH)], demonstrating hydrodynamic scaling over 8 orders of magnitude in time and velocity. The KH vortices, referred to as “rollers,” and the secondary instabilities, referred to as “ribs,” are used to understand the turbulent kinetic energy contained in the system. Their evolution is used to understand the transition to turbulence and that transition's dependence on initial conditions. Experimental results from these studies are well modeled by the RAGE (Radiation Adaptive Grid Eulerian) hydro-code using the Besnard-Harlow-Rauenzahn turbulent mix model. Information inferred from both the experimental data and the mix model allows us to demonstrate that the specific Turbulent Kinetic Energy (sTKE) in the layer, as calculated from the plan-view structure data, is consistent with the mixing width growth and the RAGE simulations of sTKE.

  8. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing, and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition, and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  9. Rheology and Extrusion of Cement-Fly Ashes Pastes

    NASA Astrophysics Data System (ADS)

    Micaelli, F.; Lanos, C.; Levita, G.

    2008-07-01

    The addition of fly ashes in cement pastes is tested to optimize the forming of cement based material by extrusion. Two sizes of fly ash grains are examined. The rheology of concentrated suspensions of ash mixes is studied with a parallel plates rheometer. In the stationary flow state, the tested suspension viscosities are satisfactorily described by the Krieger-Dougherty model. An "overlapped grain" suspension model able to describe the bimodal suspension behaviour is proposed. For higher values of solid volume fraction, Bingham viscoplastic behaviour is identified. Results showed that the plastic viscosity and plastic yield values reach their minima for the same optimal formulation of bimodal mixes. The rheological study is extended to more concentrated systems using an extruder. Finally, it is observed that the addition of 30 vol.% of the optimized ash mix leads to a significant reduction of the required extrusion load.
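
    The Krieger-Dougherty relation used above, eta_r = (1 - phi/phi_m)^(-[eta]*phi_m), can be sketched in a few lines; phi_m = 0.64 and [eta] = 2.5 are standard rigid-sphere values, assumed here rather than taken from the paper:

```python
def krieger_dougherty(phi, phi_m=0.64, intrinsic_visc=2.5):
    """Relative viscosity of a suspension via the Krieger-Dougherty model.

    phi            : solid volume fraction
    phi_m          : maximum packing fraction (0.64, random close packing)
    intrinsic_visc : intrinsic viscosity [eta] (2.5 for rigid spheres)
    """
    if not 0 <= phi < phi_m:
        raise ValueError("phi must lie in [0, phi_m)")
    return (1.0 - phi / phi_m) ** (-intrinsic_visc * phi_m)

# Viscosity diverges as phi approaches the packing limit phi_m.
print(krieger_dougherty(0.1))
print(krieger_dougherty(0.5))
```

    The bimodal "overlapped grain" idea in the abstract effectively raises phi_m, which in this relation lowers the viscosity at fixed phi.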

  10. Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence

    NASA Astrophysics Data System (ADS)

    Cerqueti, Roy; Fenga, Livio; Ventura, Marco

    2018-06-01

    This paper deals with the theme of contagion in financial markets. To this aim, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of two countries. In so doing, the article defines the theoretical conditions to be satisfied in order to state that one of them, the so-called leader, exercises contagion on the other, the follower. Specifically, we employ an invariant probabilistic result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by implementing an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is long, ranging from 1970 to 2014.
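
    A Mixed Poisson Process draws a random intensity first and then Poisson counts given that intensity. A minimal gamma-mixed sketch (the parameters are illustrative, not fitted to the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_poisson_counts(n, shape=2.0, scale=1.5):
    """Simulate n counts from a gamma-mixed Poisson process:
    Lambda ~ Gamma(shape, scale), N | Lambda ~ Poisson(Lambda).
    Mixing over a random intensity makes the counts over-dispersed
    relative to a plain Poisson (variance exceeds the mean)."""
    lam = rng.gamma(shape, scale, size=n)   # random intensity per observation
    return rng.poisson(lam)

counts = mixed_poisson_counts(100_000)
print(counts.mean(), counts.var())  # variance > mean: over-dispersion
```

    The over-dispersion is the signature that distinguishes a mixed Poisson count series from an ordinary Poisson one.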

  11. Evaluation and linking of effective parameters in particle-based models and continuum models for mixing-limited bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo

    2013-08-01

    Particle-based models and continuum models have been developed to quantify mixing-limited bimolecular reactions for decades. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (such as the interaction radius R) and the continuum model parameter (i.e., the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be established for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t^(-1/2) (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting laboratory experiments further shows that the best-fit factor in R and Kf is on the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics may therefore be linked directly, where the exact linkage may depend on the chemical and physical properties of the system.
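
    The interaction-radius mechanism in particle-based models can be sketched with a toy A + B -> C random walk on a unit square: particles diffuse, and any A-B pair closer than R reacts and is removed. R, D, and the particle counts below are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)

def react_step(a, b, R, D, dt):
    """One step of a toy particle-based irreversible A + B -> C scheme:
    diffuse both species, then remove every A-B pair within distance R
    (each particle reacts at most once per step)."""
    a = (a + rng.normal(0, np.sqrt(2 * D * dt), a.shape)) % 1.0
    b = (b + rng.normal(0, np.sqrt(2 * D * dt), b.shape)) % 1.0
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    keep_a = np.ones(len(a), bool)
    keep_b = np.ones(len(b), bool)
    for i, j in zip(*np.where(dists < R)):
        if keep_a[i] and keep_b[j]:
            keep_a[i] = keep_b[j] = False
    return a[keep_a], b[keep_b]

a = rng.random((200, 2))
b = rng.random((200, 2))
for _ in range(20):
    a, b = react_step(a, b, R=0.02, D=1e-3, dt=1.0)
print(len(a), len(b))  # reactant counts decay, and stay equal by pairwise removal
```

    With equal initial concentrations, the decay rate of the reactant counts is what a continuum model would reproduce through its effective rate coefficient Kf.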

  12. Physiological effects of diet mixing on consumer fitness: a meta-analysis.

    PubMed

    Lefcheck, Jonathan S; Whalen, Matthew A; Davenport, Theresa M; Stone, Joshua P; Duffy, J Emmett

    2013-03-01

    The degree of dietary generalism among consumers has important consequences for population, community, and ecosystem processes, yet the effects on consumer fitness of mixing food types have not been examined comprehensively. We conducted a meta-analysis of 161 peer-reviewed studies reporting 493 experimental manipulations of prey diversity to test whether diet mixing enhances consumer fitness based on the intrinsic nutritional quality of foods and consumer physiology. Averaged across studies, mixed diets conferred significantly higher fitness than the average of single-species diets, but not the best single prey species. More than half of individual experiments, however, showed maximal growth and reproduction on mixed diets, consistent with the predicted benefits of a balanced diet. Mixed diets including chemically defended prey were no better than the average prey type, opposing the prediction that a diverse diet dilutes toxins. Finally, mixed-model analysis showed that the effect of diet mixing was stronger for herbivores than for higher trophic levels. The generally weak evidence for the nutritional benefits of diet mixing in these primarily laboratory experiments suggests that diet generalism is not strongly favored by the inherent physiological benefits of mixing food types, but is more likely driven by ecological and environmental influences on consumer foraging.
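
    Meta-analyses of this kind commonly express each comparison as a log response ratio. A minimal sketch with hypothetical growth values (not taken from the analyzed studies):

```python
import math

def log_response_ratio(mean_mixed, mean_single):
    """Log response ratio lnRR = ln(X_mixed / X_single), a standard
    meta-analytic effect size; positive values mean the mixed diet
    outperformed the comparison diet."""
    return math.log(mean_mixed / mean_single)

# Hypothetical growth rates: mixed diet vs. the average and the best single diet.
print(log_response_ratio(1.30, 1.10))  # > 0: better than the average single diet
print(log_response_ratio(1.30, 1.35))  # < 0: not better than the best single diet
```

    The two comparisons mirror the paper's contrast: mixed diets beat the average single-species diet but not the best one.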

  13. A multi-tracer approach coupled to numerical models to improve understanding of mountain block processes in a high elevation, semi-humid catchment

    NASA Astrophysics Data System (ADS)

    Dwivedi, R.; McIntosh, J. C.; Meixner, T.; Ferré, T. P. A.; Chorover, J.

    2016-12-01

    Mountain systems are critical sources of recharge to adjacent alluvial basins in dryland regions. Yet, mountain systems face poorly defined threats due to climate change in terms of reduced snowpack, precipitation changes, and increased temperatures. Fundamentally, the climate risks to mountain systems are uncertain due to our limited understanding of natural recharge processes. Our goal is to combine measurements and models to provide improved spatial and temporal descriptions of groundwater flow paths and transit times in a headwater catchment located in a sub-humid region. This information is important to quantifying groundwater age and, thereby, to providing more accurate assessments of the vulnerability of these systems to climate change. We are using a combination of geochemical composition data, along with 2H/18O and 3H isotopes, to improve an existing conceptual model of mountain block recharge (MBR) for the Marshall Gulch Catchment (MGC) located within the Santa Catalina Mountains. The current model only focuses on shallow flow paths through the upper unconfined aquifer, with no representation of the catchment's fractured-bedrock aquifer. Groundwater flow, solute transport, and groundwater age will be modeled throughout MGC using COMSOL Multiphysics® software. Competing models in terms of the spatial distribution of required hydrologic parameters, e.g., hydraulic conductivity and porosity, will be proposed, and these models will be used to design discriminatory data collection efforts based on multi-tracer methods. Initial end-member mixing results indicate that baseflow in MGC, if considered the same as streamflow during dry periods, is not represented by the chemistry of deep groundwater in the mountain system. In the ternary mixing space, most of the samples plot outside the mixing curve. Therefore, to further constrain the contributions of water from various reservoirs, we are collecting stable water isotopes, tritium, and solute chemistry of precipitation, shallow groundwater, local spring water, MGC streamflow, and of a drainage location much lower than the MGC outlet, to better define and characterize each end-member of the ternary mixing model. The end-member mixing results are expected to improve our understanding of MBR processes in and beyond MGC.
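
    A ternary (three end-member) mixing calculation like the one described above reduces to a small linear system: two conservative tracers plus the mass-balance constraint that the fractions sum to one. A minimal sketch with hypothetical tracer values (none taken from the MGC data):

```python
import numpy as np

# Rows: mass balance, tracer 1 (e.g. Cl, mg/L), tracer 2 (e.g. d18O, permil).
# Columns: the three end-members (all concentrations hypothetical).
endmembers = np.array([
    [1.0,  1.0,  1.0],    # f1 + f2 + f3 = 1
    [5.0, 20.0, 60.0],    # tracer 1 in each end-member
    [-9.0, -7.0, -11.0],  # tracer 2 in each end-member
])
sample = np.array([1.0, 25.0, -9.2])  # [1, tracer 1, tracer 2] of the mixed sample

fractions = np.linalg.solve(endmembers, sample)
print(fractions, fractions.sum())  # mixing fractions; sum to 1 by construction
```

    A sample that plots outside the ternary mixing space, as most of the MGC samples do, yields negative fractions here, which is the algebraic signature of a missing or mischaracterized end-member.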

  14. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  15. Models of Plumes: Their Flow, Their Geometric Spreading, and Their Mixing with Interplume Flow

    NASA Technical Reports Server (NTRS)

    Suess, Steven T.

    1998-01-01

    There are two types of plume flow models: (1) 1D models using ad hoc spreading functions, f(r); (2) magnetohydrodynamic (MHD) models. 1D models can be multifluid, time dependent, and incorporate very general descriptions of the energetics. They confirm empirical results that plume flow is slow relative to requirements for high-speed wind. But no published 1D model incorporates the rapid local spreading at the base (fl(r)), which has an important effect on mass flux. The one published MHD model is isothermal, but confirms that if β = 8πp/|B|² <

  16. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
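
    The core mixed membership idea, each individual holding a distribution over profiles rather than a single class label, can be sketched as follows (a generic illustrative model, not the dissertation's; the concentration parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each student holds a Dirichlet-distributed membership vector over K
# strategies and, on each problem, samples which strategy to apply from it.
K = 3                                   # number of strategies
alpha = np.array([2.0, 1.0, 0.5])       # hypothetical concentration parameters

membership = rng.dirichlet(alpha)                  # one student's memberships
strategies = rng.choice(K, size=10, p=membership)  # strategy used per problem

print(membership)   # weights sum to 1: partial membership in every strategy
print(strategies)
```

    Unlike a hard clustering, every student belongs to every strategy to some degree, which is what lets the model represent multiple strategy usage.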

  17. Elucidating the Higher Stability of Vanadium (V) Cations in Mixed Acid Based Redox Flow Battery Electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayakumar, M.; Wang, Wei; Nie, Zimin

    2013-11-01

    The Vanadium (V) cation structures in mixed acid based electrolyte solution were analysed by density functional theory (DFT) based computational modelling and 51V and 35Cl Nuclear Magnetic Resonance (NMR) spectroscopy. The Vanadium (V) cation exists as the di-nuclear [V2O3Cl2.6H2O]2+ compound at higher vanadium concentrations (≥1.75M). In particular, at high temperatures (>295K) this di-nuclear compound undergoes a ligand exchange process with nearby solvent chlorine molecules and forms the chlorine-bonded [V2O3Cl2.6H2O]2+ compound. This chlorine-bonded compound might be resistant to the de-protonation reaction, which is the initial step in the precipitation reaction in Vanadium based electrolyte solutions. The combined theoretical and experimental approach reveals that formation of the chlorine-bonded [V2O3Cl2.6H2O]2+ compound might be central to the observed higher thermal stability of mixed acid based Vanadium (V) electrolyte solutions.

  18. Examining the Variability of Sleep Patterns during Treatment for Chronic Insomnia: Application of a Location-Scale Mixed Model.

    PubMed

    Ong, Jason C; Hedeker, Donald; Wyatt, James K; Manber, Rachel

    2016-06-15

    The purpose of this study was to introduce a novel statistical technique called the location-scale mixed model that can be used to analyze the mean level and intra-individual variability (IIV) using longitudinal sleep data. We applied the location-scale mixed model to examine changes from baseline in sleep efficiency on data collected from 54 participants with chronic insomnia who were randomized to an 8-week Mindfulness-Based Stress Reduction (MBSR; n = 19), an 8-week Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1-7), early treatment (days 8-21), late treatment (days 22-63), and post week (days 64-70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency were significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. The location-scale mixed model provides a two-dimensional analysis on the mean and IIV using longitudinal sleep diary data with the potential to reveal insights into treatment mechanisms and outcomes. © 2016 American Academy of Sleep Medicine.
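
    The location/scale distinction can be illustrated descriptively on simulated diary data. Note that the actual location-scale mixed model estimates both components jointly with random effects; this sketch, on hypothetical sleep-efficiency values, only computes the two per-subject summaries the model targets:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated sleep-efficiency diaries: each subject has their own mean level
# (location) and their own intra-individual variability (scale, IIV).
n_subjects, n_days = 5, 14
true_means = rng.uniform(0.70, 0.90, n_subjects)   # per-subject location
true_sds = rng.uniform(0.02, 0.10, n_subjects)     # per-subject scale (IIV)
diaries = rng.normal(true_means[:, None], true_sds[:, None],
                     (n_subjects, n_days))

location = diaries.mean(axis=1)       # estimated mean level per subject
scale = diaries.std(axis=1, ddof=1)   # estimated IIV per subject

print(location.round(3))
print(scale.round(3))
```

    A treatment can move these two summaries independently, which is exactly the study's finding: MBSR changed both the mean level and the IIV, while MBTI changed only the mean level.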

  19. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models can be as large as 0.3 PSU and 0.4 C, respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, the fresh water flux exhibits larger spatial fluctuations than the surface heat flux, because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.
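
    The closing claim, that the salinity differences dominate the density differences, can be checked with a linearized equation of state; the expansion coefficients below are typical near-surface values assumed for illustration, not values from the paper:

```python
# Linearized seawater equation of state: d_rho ~ rho0 * (beta*dS - alpha*dT).
# Compare the density changes implied by the reported mixed-layer differences
# of 0.3 PSU and 0.4 C.
alpha = 2.5e-4   # thermal expansion coefficient, 1/K (assumed typical value)
beta = 7.6e-4    # haline contraction coefficient, 1/PSU (assumed typical value)
rho0 = 1025.0    # reference density, kg/m^3

d_rho_T = rho0 * alpha * 0.4   # density change from the temperature difference
d_rho_S = rho0 * beta * 0.3    # density change from the salinity difference
print(d_rho_T, d_rho_S)
print(d_rho_S > d_rho_T)       # prints True: salinity dominates
```

    With these coefficients the salinity term is roughly twice the temperature term, consistent with the abstract's conclusion.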

  20. A brief measure of attitudes toward mixed methods research in psychology

    PubMed Central

    Roberts, Lynne D.; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and a way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors ('Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component') each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs. PMID:25429281

  1. Modelling ice microphysics of mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    The low-level Arctic mixed-phase clouds have a significant role in the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. In order to solve this problem of modelling mixed-phase clouds, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good base for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with a sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. The SALSA module has now been upgraded to also include ice microphysics. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes comprise the formation, growth, and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow occurs by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared with the model results reported by Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and show a good match. One of the advantages of UCLALES-SALSA is that it can be used to quantify the effect of aerosol scavenging on cloud properties in a precise way.

  2. Turbulence closure for mixing length theories

    NASA Astrophysics Data System (ADS)

    Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.

    2018-05-01

    We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.

  3. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  4. 3D Visualization of Global Ocean Circulation

    NASA Astrophysics Data System (ADS)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.

  5. Detection and quantification of adulteration of sesame oils with vegetable oils using gas chromatography and multivariate data analysis.

    PubMed

    Peng, Dan; Bi, Yanlan; Ren, Xiaona; Yang, Guolong; Sun, Shangde; Wang, Xuede

    2015-12-01

    This study was performed to develop a hierarchical approach for detection and quantification of adulteration of sesame oil with vegetable oils using gas chromatography (GC). At first, a model was constructed to discriminate the difference between authentic sesame oils and adulterated sesame oils using support vector machine (SVM) algorithm. Then, another SVM-based model is developed to identify the type of adulterant in the mixed oil. At last, prediction models for sesame oil were built for each kind of oil using partial least square method. To validate this approach, 746 samples were prepared by mixing authentic sesame oils with five types of vegetable oil. The prediction results show that the detection limit for authentication is as low as 5% in mixing ratio and the root-mean-square errors for prediction range from 1.19% to 4.29%, meaning that this approach is a valuable tool to detect and quantify the adulteration of sesame oil. Copyright © 2015 Elsevier Ltd. All rights reserved.
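
    The first (authentication) stage of the hierarchical approach can be sketched with a generic RBF SVM on synthetic stand-ins for GC fatty-acid features; the data, dimensions, and class shift below are assumptions for illustration, not the paper's dataset or tuning:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic stand-ins for GC peak-area features of authentic vs. adulterated oils.
n, d = 300, 8
authentic = rng.normal(0.0, 1.0, (n, d))
adulterated = rng.normal(0.8, 1.0, (n, d))   # shifted composition (assumed)
X = np.vstack([authentic, adulterated])
y = np.array([0] * n + [1] * n)              # 0 = authentic, 1 = adulterated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # stage 1: authentication SVM
print(clf.score(X_te, y_te))                 # held-out accuracy of stage 1
```

    In the paper's full hierarchy, samples flagged as adulterated would then pass to a second SVM identifying the adulterant type, and finally to a per-oil PLS model quantifying the mixing ratio.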

  6. Upscaling of dilution and mixing using a trajectory based Spatial Markov random walk model in a periodic flow domain

    NASA Astrophysics Data System (ADS)

    Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo

    2017-05-01

    The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems. This SMM is built using particle trajectories, rather than travel times. By applying the proposed SMM to a simple benchmark problem we demonstrate that it can predict mean effective transport, when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution, that do not just depend on mean concentrations, but are strongly impacted by pore-scale concentration fluctuations. We use information from trajectories of particles to downscale and reconstruct pore-scale approximate concentration fields from which mixing and dilution measures are then calculated. The comparison between measurements from fully resolved simulations and predictions with the SMM agree very favorably.
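
    The SMM idea of correlated successive steps can be illustrated with a toy two-state version: each particle crosses successive cells, its travel time per cell is drawn from a fast or slow state, and the state persists between cells with probability p, encoding the velocity correlation the SMM upscales. The persistence probability and travel-time scales below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

dx, n_cells, n_particles, p = 1.0, 50, 10_000, 0.8
dt_state = np.array([0.5, 2.0])   # mean travel time per cell: fast / slow state

state = rng.integers(0, 2, n_particles)     # initial state per particle
arrival = np.zeros(n_particles)
for _ in range(n_cells):
    arrival += rng.exponential(dt_state[state])   # travel time for this cell
    flip = rng.random(n_particles) > p            # with prob 1-p, switch state
    state = np.where(flip, 1 - state, state)

print(arrival.mean())  # mean breakthrough time after 50 cells
```

    Setting p = 0.5 removes the correlation and recovers an ordinary (uncorrelated) random walk; the trajectory-based SMM of the paper additionally stores particle positions, enabling the downscaled concentration reconstruction used for the mixing measures.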

  7. THE SPECTRAL EVOLUTION OF CONVECTIVE MIXING WHITE DWARFS, THE NON-DA GAP, AND WHITE DWARF COSMOCHRONOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Eugene Y.; Hansen, Brad M. S., E-mail: eyc@mail.utexas.edu, E-mail: hansen@astro.ucla.edu

    The spectral distribution of field white dwarfs shows a feature called the 'non-DA gap'. As defined by Bergeron et al., this is a temperature range (5100-6100 K) where relatively few non-DA stars are found, even though such stars are abundant on either side of the gap. It is usually viewed as an indication that a significant fraction of white dwarfs switch their atmospheric compositions back and forth between hydrogen-rich and helium-rich as they cool. In this Letter, we present a Monte Carlo model of the Galactic disk white dwarf population, based on the spectral evolution model of Chen and Hansen. We find that the non-DA gap emerges naturally, even though our model only allows white dwarf atmospheres to evolve monotonically from hydrogen-rich to helium-rich through convective mixing. We conclude by discussing the effects of convective mixing on the white dwarf luminosity function and the use thereof for cosmochronology.

  8. Hydrogeochemistry of sodium-bicarbonate type bedrock groundwater in the Pocheon spa area, South Korea: water rock interaction and hydrologic mixing

    NASA Astrophysics Data System (ADS)

    Chae, Gi-Tak; Yun, Seong-Taek; Kim, Kangjoo; Mayer, Bernhard

    2006-04-01

    The Pocheon spa-land area, South Korea, occurs in a topographically steep, fault-bounded basin and is characterized by a hydraulic upwelling flow zone of thermal water (up to 44 °C) in its central part. Hydrogeochemical and environmental isotope data for groundwater in the study area suggested the occurrence of two distinct water types, a Ca-HCO3 type and a Na-HCO3 type. The former is characterized by relatively high concentrations of Ca, SO4 and NO3, which show significant temporal variation indicating a strong influence of surface processes. In contrast, the Na-HCO3 type waters have high and temporally constant temperature, pH, TDS, Na, Cl, HCO3 and F, indicating the attainment of a chemical steady state with respect to the host rocks (granite and gneiss). Oxygen, hydrogen and tritium isotope data also indicate differences in hydrologic conditions between the two groups: the relatively lower δ18O, δD and tritium values of the Na-HCO3 type waters suggest that they recharged at higher elevations and have comparatively long mean residence times. Considering the geologic and hydrogeologic conditions of the study area, the Na-HCO3 type waters possibly evolved from Ca-HCO3 type waters. Mass balance modeling revealed that the chemistry of the Na-HCO3 type water was regulated by dissolution of silicates and carbonates and concurrent ion exchange; in particular, the low Ca concentrations in Na-HCO3 water were mainly caused by cation exchange. Multivariate mixing and mass balance modeling (M3 modeling) was performed to evaluate the hydrologic mixing and mass transfer between discrete water masses occurring in the shallow peripheral part of the central spa-land area, where hydraulic upwelling occurs. Based on Q-mode factor analysis and mixing modeling using PHREEQC, an ideal mixing among three major water masses (surface water, shallow groundwater of Ca-HCO3 type, deep groundwater of Na-HCO3 type) was proposed. M3 modeling suggests that all the groundwaters in the spa area can be described as mixtures of these end-members. After mixing, the net mole transfer by geochemical reaction was less than that without mixing. It is therefore likely that geochemical reactions are of minor importance in the hydraulic mixing zone and that mixing regulates the groundwater geochemistry.
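The three-end-member mixing step can be illustrated with a small least-squares computation over conservative tracers; the tracer values below are invented for illustration and are not the Pocheon measurements:

```python
import numpy as np

# Invented conservative-tracer values (e.g. Cl in mg/L, δ18O in ‰) for
# three end-members; illustrative only, not the record's data
end_members = np.array([
    [2.0, -7.0],    # surface water
    [10.0, -8.5],   # shallow Ca-HCO3 groundwater
    [40.0, -9.5],   # deep Na-HCO3 groundwater
])
sample = np.array([20.0, -8.8])  # an observed mixed groundwater

# Solve for mixing fractions f, imposing sum(f) = 1 as an extra equation:
# each tracer concentration of the sample is a weighted mean of end-members
A = np.vstack([end_members.T, np.ones(3)])
b = np.append(sample, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With more tracers than end-members the same system becomes overdetermined and the least-squares solution gives a best-fit mixing ratio, which is essentially what M3-style mixing analysis automates.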

  9. The Littoral Combat Ship (LCS) Surface Warfare (SUW) Module: Determining the Surface-To-Surface Missile and Air-To-Surface Missile Mix

    DTIC Science & Technology

    2010-09-01

    …agent-based modeling platform known as MANA. The simulation is exercised over a broad range of different weapon system types with their capabilities… …aerial vehicle (UAV) will have. This study uses freely available data to build a simulation utilizing an agent-based modeling platform known as MANA…

  10. A complex permittivity model for field estimation of soil water contents using time domain reflectometry

    USDA-ARS?s Scientific Manuscript database

    Accurate electromagnetic sensing of soil water contents (θ) under field conditions is complicated by the dependence of permittivity on specific surface area, temperature, and apparent electrical conductivity, all of which may vary across space or time. We present a physically-based mixing model to pred...
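One widely used physically based dielectric mixing rule (not necessarily the exact model of this record) is the complex refractive index model (CRIM), which combines the square-root permittivities of water, solids, and air by volume fraction and inverts cleanly for water content:

```python
import math

def crim_permittivity(theta, porosity, eps_s=5.0, eps_w=80.0, eps_a=1.0):
    """Bulk permittivity from the CRIM mixing rule (exponent alpha = 0.5).

    theta: volumetric water content; porosity: total pore fraction;
    eps_s/eps_w/eps_a: solid, water, air permittivities (typical values).
    """
    root = (theta * math.sqrt(eps_w)
            + (1.0 - porosity) * math.sqrt(eps_s)
            + (porosity - theta) * math.sqrt(eps_a))
    return root ** 2

def crim_water_content(eps_b, porosity, eps_s=5.0, eps_w=80.0, eps_a=1.0):
    """Invert CRIM: recover theta from a measured bulk permittivity."""
    num = (math.sqrt(eps_b)
           - (1.0 - porosity) * math.sqrt(eps_s)
           - porosity * math.sqrt(eps_a))
    return num / (math.sqrt(eps_w) - math.sqrt(eps_a))

eps = crim_permittivity(0.25, 0.4)   # forward: theta = 0.25, porosity = 0.4
theta = crim_water_content(eps, 0.4) # inverse: recovers theta
```

Field complications like those the record lists (surface area, temperature, conductivity) enter through the effective component permittivities, which is what more elaborate models refine.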

  11. Problem Solving Under Time-Constraints.

    ERIC Educational Resources Information Center

    Richardson, Michael; Hunt, Earl

    A model of how automated and controlled processing can be mixed in computer simulations of problem solving is proposed. It is based on previous work by Hunt and Lansman (1983), who developed a model of problem solving that could reproduce the data obtained with several attention and performance paradigms, extending production-system notation to…

  12. Organizational Change, Absenteeism, and Welfare Dependency

    ERIC Educational Resources Information Center

    Roed, Knut; Fevang, Elisabeth

    2007-01-01

    Based on Norwegian register data, we set up a multivariate mixed proportional hazard model (MMPH) to analyze nurses' pattern of work, sickness absence, nonemployment, and social insurance dependency from 1992 to 2000, and how that pattern was affected by workplace characteristics. The model is estimated by means of the nonparametric…

  13. Tropical Cyclone Footprint in the Ocean Mixed Layer Observed by Argo in the Northwest Pacific

    DTIC Science & Technology

    2014-10-25

    …atmospheric circulation [Hart et al., 2007]. Several studies, based on observations and modeling, suggest that TC-induced energy input and mixing may play an important role in climate variability through regulating the oceanic general circulation and its variability [e.g., Emanuel, 2001; Sriver and Huber…

  14. Investigation of 2-Dimensional Isotropy of Under-Ice Roughness in the Beaufort Gyre and Implications for Mixed Layer Ocean Turbulence

    DTIC Science & Technology

    2008-03-01

    …this roughness is important for numerical modeling and prediction of the Arctic air-ice-ocean system, which will play a significant role as the US Navy increases… Model 1 is based on a sequence of plane-parallel layers, each with a constant gradient, whereas Model 2 is based on a series of flat layers of…

  15. Steady state RANS simulations of temperature fluctuations in single phase turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kickhofel, J.; Fokken, J.; Kapulla, R.

    2012-07-01

    Single phase turbulent mixing in nuclear power plant circuits where a strong temperature gradient is present is known to precipitate pipe failure due to thermal fatigue. Experiments in a square mixing channel offer the opportunity to study the phenomenon under simple and easily reproducible boundary conditions. Measurements of this kind have been performed extensively at the Paul Scherrer Institute in Switzerland with a high density of instrumentation in the Generic Mixing Experiment (GEMIX). As a fundamental study of mixing phenomena closely related to the thermal fatigue problem, the experimental results from GEMIX are valuable for the validation of CFD codes striving to accurately simulate both the temperature and velocity fields in single phase turbulent mixing. In the experiments, two iso-kinetic streams meet at a shallow angle of 3 degrees and mix in a straight channel of square cross-section under various degrees of density, temperature, and viscosity stratification, over Reynolds numbers ranging from 5×10³ to 1×10⁵. Conductivity measurements, using wire-mesh and wall sensors, as well as optical measurements, using particle image velocimetry, were conducted with high temporal and spatial resolutions (up to 2.5 kHz and 1 mm in the case of the wire-mesh sensor) in the mixing zone downstream of a splitter plate. The present paper communicates the results of RANS modeling of selected GEMIX tests. Steady-state CFD calculations using a RANS turbulence model represent an inexpensive method for analyzing large and complex components in commercial nuclear reactors, such as the downcomer and reactor pressure vessel heads. Crucial to real-world applicability, however, is the ability to model turbulent heat fluctuations in the flow; the Turbulent Heat Flux Transport model developed by ANSYS CFX is capable, by implementation of a transport equation for turbulent heat fluxes, of readily modeling these values. Furthermore, the closure of the turbulent heat flux transport equation evokes a transport equation for the variance of the enthalpy. It is therefore possible to compare the modeled fluctuations of the liquid temperature directly with the scalar fluctuations recorded experimentally with the wire mesh. Combined with a working Turbulent Heat Flux Transport model, complex mixing problems in large geometries could be better understood. We aim to validate Reynolds Stress based RANS simulations extended by the Turbulent Heat Flux Transport model by modeling the GEMIX experiments in detail. Numerical modeling has been performed using both the BSL and SSG Reynolds Stress Models in a test matrix comprising experimental trials at the GEMIX facility. We expand on the turbulent mixing RANS CFD results of Manera (2009) in a few ways. In the GEMIX facility we introduce density stratification in the flow while removing the characteristic large-scale vorticity encountered in T-junctions, and therefore find better conditions to check the diffusive conditions in the model. Furthermore, we study the performance of the model in a very different, simpler scalar fluctuation spectrum. The paper discusses the performance of the model regarding the dissipation of the turbulent kinetic energy and the dissipation of the enthalpy variance. A novel element is the analysis of cases with density stratification. (authors)
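For contrast, the simplest alternative to a transport-equation closure like the Turbulent Heat Flux Transport model is the algebraic gradient-diffusion hypothesis, which ties the turbulent heat flux to the mean temperature gradient through a turbulent Prandtl number. The values below are illustrative, not GEMIX data:

```python
# Gradient-diffusion closure for the turbulent heat flux:
#   <u'T'> = -alpha_t * dT/dy,  with  alpha_t = nu_t / Pr_t
nu_t = 1.2e-3   # eddy viscosity, m^2/s (illustrative)
Pr_t = 0.9      # turbulent Prandtl number (typical modeling value)
dTdy = -50.0    # mean temperature gradient across the mixing layer, K/m

alpha_t = nu_t / Pr_t          # eddy diffusivity for heat, m^2/s
heat_flux = -alpha_t * dTdy    # kinematic turbulent heat flux, K·m/s
```

The down-gradient form cannot represent counter-gradient transport or carry its own history, which is why transport-equation closures for the heat fluxes and the enthalpy variance, as in the record above, are of interest for stratified mixing.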

  16. Baseline projections for Latin America: base-year assumptions, key drivers and greenhouse emissions

    DOE PAGES

    van Ruijven, Bas J.; Daenzer, Katie; Fisher-Vanden, Karen; ...

    2016-02-14

    This article provides an overview of the base-year assumptions and core baseline projections for the set of models participating in the LAMP and CLIMACAP projects. Here we present the range in core baseline projections for Latin America and identify key differences between model projections, including how these projections compare to historic trends. We find relatively large differences across models in base-year assumptions related to population, GDP, energy and CO2 emissions due to the use of different data sources, but also conclude that this does not influence the range of projections. We find that population and GDP projections across models span a broad range, comparable to the range represented by the set of Shared Socioeconomic Pathways (SSPs). Kaya-factor decomposition indicates that the set of core baseline scenarios mirrors trends experienced over the past decades. Emissions in Latin America are projected to rise as a result of GDP and population growth and a minor shift in the energy mix toward fossil fuels. Most scenarios assume somewhat higher GDP growth than historically observed and a continued decline of population growth. Minor changes in energy intensity or energy mix are projected over the next few decades.
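The Kaya-factor decomposition mentioned above expresses emissions as a product of population, GDP per capita, energy intensity, and carbon intensity; in log-change form, total emissions growth splits exactly into the sum of the four factor growth rates. A minimal sketch with invented factor values (not the article's numbers):

```python
import math

# Illustrative Kaya factors for a base and a future year (invented values):
# pop = population, gdp_pc = GDP per capita ($/person),
# e_int = energy intensity (TJ/$), c_int = carbon intensity (tCO2/TJ)
base   = dict(pop=600e6, gdp_pc=10e3, e_int=8.0e-6, c_int=60.0)
future = dict(pop=750e6, gdp_pc=16e3, e_int=6.5e-6, c_int=65.0)

def kaya_emissions(f):
    # CO2 = P * (GDP/P) * (E/GDP) * (CO2/E)
    return f["pop"] * f["gdp_pc"] * f["e_int"] * f["c_int"]

# Log-change decomposition: the total growth rate is exactly the sum of
# the four factor growth rates
total = math.log(kaya_emissions(future) / kaya_emissions(base))
parts = sum(math.log(future[k] / base[k]) for k in base)
```

This identity is what lets the article attribute projected emissions growth to population, GDP, and energy-mix trends separately.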

  17. Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks

    PubMed Central

    Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh

    2017-01-01

    In wireless sensor networks, sensor nodes collect large amounts of data in each time period. If all of these data were transmitted to a Fusion Center (FC), the power of the sensor nodes would run out rapidly. On the other hand, the data also need to be filtered to remove noise. Therefore, an efficient fusion estimation model, which can save the energy of the sensor nodes while maintaining high accuracy, is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters of the quantization method are discussed and confirmed by an optimization method using prior knowledge. In addition, calculation methods for important parameters are investigated that make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed, and, compared with other related models, the MHEEFE shows better performance in accuracy, energy efficiency and fault tolerance. PMID:29280950
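As background for weighted multi-sensor fusion, the classical baseline (a simple stand-in for the paper's iteration-based weight calculation, not its actual algorithm) is inverse-variance weighting, where each sensor's estimate is weighted by the reciprocal of its noise variance:

```python
import numpy as np

# Per-sensor estimates of the same quantity and their noise variances
# (illustrative numbers, not from the paper)
estimates = np.array([36.8, 37.1, 36.5])
variances = np.array([0.04, 0.01, 0.09])

# Inverse-variance weights: more reliable sensors count for more
w = 1.0 / variances
w /= w.sum()

fused = w @ estimates                      # fused estimate
fused_var = 1.0 / np.sum(1.0 / variances)  # variance of the fused estimate
```

The fused variance is strictly smaller than the best single sensor's variance, which is the basic reason fusion helps; robust schemes like the paper's add iteration and fault tolerance on top of this idea.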

  18. Simulations of Turbulent Flows with Strong Shocks and Density Variations: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanjiva Lele

    2012-10-01

    The target of this SciDAC Science Application was to develop a new capability based on high-order and high-resolution schemes to simulate shock-turbulence interactions and multi-material mixing in planar and spherical geometries, and to study Rayleigh-Taylor and Richtmyer-Meshkov turbulent mixing. These fundamental problems have direct application in high-speed engineering flows, such as inertial confinement fusion (ICF) capsule implosions and scramjet combustion, and also in the natural occurrence of supernovae explosions. Another component of this project was the development of subgrid-scale (SGS) models for large-eddy simulations of flows involving shock-turbulence interaction and multi-material mixing, to be validated against the DNS databases generated during the program. The numerical codes developed are designed for massively-parallel computer architectures, ensuring good scaling performance. Their algorithms were validated by means of a sequence of benchmark problems. The original multi-stage plan for this five-year project included the following milestones: 1) refinement of numerical algorithms for application to the shock-turbulence interaction problem and multi-material mixing (years 1-2); 2) direct numerical simulations (DNS) of canonical shock-turbulence interaction (years 2-3), targeted at improving our understanding of the physics behind the combined two phenomena and also at guiding the development of SGS models; 3) large-eddy simulations (LES) of shock-turbulence interaction (years 3-5), improving SGS models based on the DNS obtained in the previous phase; 4) DNS of planar/spherical RM multi-material mixing (years 3-5), also with the two-fold objective of gaining insight into the relevant physics of this instability and aiding in devising new modeling strategies for multi-material mixing; 5) LES of planar/spherical RM mixing (years 4-5), integrating the improved SGS and multi-material models developed in stages 3 and 4. This final report is outlined as follows. Section 2 shows an assessment of numerical algorithms that are best suited for the numerical simulation of compressible flows involving turbulence and shock phenomena. Sections 3 and 4 deal with the canonical shock-turbulence interaction problem from the DNS and LES perspectives, respectively. Section 5 considers the shock-turbulence interaction in spherical geometry, in particular the interaction of a converging shock with isotropic turbulence, as well as the problem of the blast wave. Section 6 describes the study of shock-accelerated mixing through planar and spherical Richtmyer-Meshkov mixing as well as the shock-curtain interaction problem. In Section 7 we acknowledge the different interactions between Stanford and other institutions participating in this SciDAC project, as well as several external collaborations made possible through it. Section 8 presents a list of publications and presentations that have been generated during the course of this SciDAC project. Finally, Section 9 concludes this report with the list of personnel at Stanford University funded by this SciDAC project.

  19. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual energy X-ray luggage inspection systems used for security at airports and strategic places. In this paper, a novel material detection algorithm based on statistical trainable models using the 2-dimensional power density function (PDF) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category is estimated as a statistical model from the transmission measurement values of low and high energy X-ray images using Gaussian Mixture Models (GMM). The material label of each pixel of an object is determined based on the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme in a neighborhood of the image as a post-processing stage. As a pre-processing procedure, the high and low energy X-ray images are enhanced by background removal and denoising stages. To improve the discrimination capability of the proposed material detection algorithm, the details of the low and high energy X-ray images are added to a constructed color image, which uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.
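The per-category density-based classification step can be sketched with a single 2-D Gaussian per class standing in for the full GMM (one component instead of a mixture, for brevity), trained on synthetic low/high-energy transmission samples rather than real X-ray data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D (low-energy, high-energy) transmission samples per class;
# an illustrative stand-in for labeled dual-energy X-ray training data
train = {
    "organic":  rng.normal([0.70, 0.80], 0.05, (500, 2)),
    "metallic": rng.normal([0.20, 0.40], 0.05, (500, 2)),
    "mixed":    rng.normal([0.45, 0.60], 0.05, (500, 2)),
}

def fit_gaussian(x):
    """Fit a single 2-D Gaussian (one mixture component, for brevity)."""
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    return mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))

models = {name: fit_gaussian(x) for name, x in train.items()}

def log_density(p, model):
    """Log of the 2-D Gaussian density at point p (up to the shared constant)."""
    mu, inv, logdet = model
    d = p - mu
    return -0.5 * (d @ inv @ d + logdet + 2.0 * np.log(2.0 * np.pi))

def classify(pixel):
    """Assign the material class whose density is highest at this pixel."""
    return max(models, key=lambda name: log_density(pixel, models[name]))

labels = [classify(np.array(p)) for p in [(0.70, 0.80), (0.20, 0.40)]]
```

The paper's algorithm replaces the single Gaussian with a trained mixture per category and adds neighborhood majority voting, but the maximum-likelihood labeling rule is the same.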

  20. Comparison of CTT and Rasch-based approaches for the analysis of longitudinal Patient Reported Outcomes.

    PubMed

    Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique

    2011-04-15

    Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most appropriate strategy for analyzing longitudinal latent variables, which can be based either on CTT or on IRT models, remains to be identified. This strategy must take into account the latent character of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches for analyzing longitudinal PRO data with regard to type I error, power, and bias of the time effect estimate. Four methods were compared: the Score and Mixed models (SM) method, based on the CTT approach, and the Rasch and Mixed models (RM), Plausible Values (PV), and Longitudinal Rasch model (LRM) methods, all based on the Rasch model. All methods showed comparable type I error rates, all close to 5 per cent. The LRM and SM methods presented comparable power and unbiased time effect estimations, whereas the RM and PV methods showed low power and biased time effect estimations. This suggests that the RM and PV methods should be avoided when analyzing longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.
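The CTT-based "Score and Mixed models" idea can be sketched end to end: simulate dichotomous item responses from a Rasch model at two occasions, collapse them to sum scores, and estimate the time effect from within-patient change. All parameters below are invented for illustration, and with only two occasions and complete data the random-intercept mixed-model estimate reduces to the mean change used here:

```python
import numpy as np

rng = np.random.default_rng(2)
n_patients, n_items = 2000, 5
true_time_effect = 0.3   # shift of the latent trait between occasions

# Rasch model: P(item = 1) is a logistic function of (trait - difficulty)
difficulty = np.linspace(-1.0, 1.0, n_items)
theta0 = rng.normal(0.0, 1.0, n_patients)   # latent trait at baseline
theta1 = theta0 + true_time_effect          # latent trait at follow-up

def rasch_responses(theta):
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty[None, :])))
    return (rng.random((n_patients, n_items)) < p).astype(int)

# "Score and Mixed" (SM), step 1: collapse item responses into a sum score
score0 = rasch_responses(theta0).sum(axis=1)
score1 = rasch_responses(theta1).sum(axis=1)

# Step 2, simplified: estimate the time effect as mean within-patient change
time_effect = (score1 - score0).mean()
```

The Rasch-based methods compared in the record instead estimate the time effect on the latent scale itself, which is where the power and bias differences the authors report come from.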
