Sample records for mixing model defined

  1. Realistic simplified gaugino-higgsino models in the MSSM

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn

    2018-03-01

    We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ, tan β, M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
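
    The diagonalisation the authors refer to can be illustrated with a short numerical sketch. The block below (Python/NumPy) builds the tree-level neutralino mass matrix in the (bino, wino, down-type higgsino, up-type higgsino) basis under one common sign convention and extracts masses and higgsino content from its eigendecomposition; the convention and all parameter values are assumptions for illustration, not the benchmark points of the paper.

      # Illustrative sketch: tree-level neutralino mass matrix under one common convention;
      # inputs {M1, M2, mu, tan beta} and electroweak constants are purely illustrative.
      import numpy as np

      mZ, sW = 91.19, np.sqrt(0.231)                     # Z mass [GeV], sin(theta_W)
      cW = np.sqrt(1.0 - sW**2)
      M1, M2, mu, tan_beta = 300.0, 600.0, 150.0, 10.0   # underlying parameters
      beta = np.arctan(tan_beta)
      sb, cb = np.sin(beta), np.cos(beta)

      M_N = np.array([
          [M1,             0.0,           -mZ * cb * sW,  mZ * sb * sW],
          [0.0,            M2,             mZ * cb * cW, -mZ * sb * cW],
          [-mZ * cb * sW,  mZ * cb * cW,   0.0,           -mu],
          [ mZ * sb * sW, -mZ * sb * cW,  -mu,             0.0],
      ])

      # Real symmetric matrix: eigendecomposition yields masses (up to a sign) and mixing.
      eigvals, N = np.linalg.eigh(M_N)
      masses = np.abs(eigvals)
      order = np.argsort(masses)
      print("neutralino masses [GeV]:", masses[order])
      # Higgsino content of the lightest state = sum of squared higgsino components.
      print("higgsino content of lightest:", np.sum(N[2:, order[0]] ** 2))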

  2. Measuring case-mix complexity of tertiary care hospitals using DRGs.

    PubMed

    Park, Hayoung; Shin, Youngsoo

    2004-02-01

    The objectives of the study were to develop a model that measures and evaluates case-mix complexity of tertiary care hospitals, and to examine the characteristics of such a model. Physician panels defined three classes of case complexity and assigned disease categories represented by Adjacent Diagnosis Related Groups (ADRGs) to one of three case complexity classes. Three types of scores, indicating proportions of inpatients in each case complexity class standardized by the proportions at the national level, were defined to measure the case-mix complexity of a hospital. Discharge information for about 10% of inpatient episodes at 85 hospitals with bed size larger than 400 and their input structure and research and education activity were used to evaluate the case-mix complexity model. Results show its power to predict hospitals with the expected functions of tertiary care hospitals, i.e. resource intensive care, expensive input structure, and high levels of research and education activities.

  3. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space

  4. The effect of different methods to compute N on estimates of mixing in stratified flows

    NASA Astrophysics Data System (ADS)

    Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey

    2017-11-01

    The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing, which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on the method by which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and on the turbulence activity number Gi, leading to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
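
    A minimal sketch of the two ways of computing N contrasted here, assuming a Boussinesq reference density, a density field stored as a 3-D array with the vertical axis first, and a uniform ascending vertical coordinate (all assumptions for illustration):

      # Two background-N^2 estimates: (i) from a three-dimensionally resorted density field,
      # (ii) from locally resorted vertical profiles. rho has shape (nz, ny, nx); z is
      # ascending with index 0 at the bottom.
      import numpy as np

      g, rho0 = 9.81, 1000.0

      def N2_global_resort(rho, z):
          """Background N^2 from sorting the full 3-D density field (heaviest fluid at the bottom)."""
          rho_sorted = np.sort(rho.ravel())[::-1]
          rho_b = rho_sorted.reshape(rho.shape[0], -1).mean(axis=1)   # one background value per level
          return -(g / rho0) * np.gradient(rho_b, z)

      def N2_local_resort(rho, z):
          """Background N^2 from resorting each vertical density profile separately."""
          rho_col = np.sort(rho, axis=0)[::-1, ...]                   # densest at the bottom of each column
          return -(g / rho0) * np.gradient(rho_col, z, axis=0)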

  5. A multifluid model extended for strong temperature nonequilibrium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chong

    2016-08-08

    We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.

  6. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
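
    As an illustration of the convex-set view, the following sketch unmixes a synthetic pixel spectrum as a convex combination of endmember spectra (abundances nonnegative and summing to one); the sum-to-one constraint is imposed through a heavily weighted augmented row, which is one common device, not necessarily the authors' procedure:

      # Fully constrained linear unmixing of a synthetic mixed pixel (illustrative data only).
      import numpy as np
      from scipy.optimize import nnls

      E = np.array([[0.10, 0.55, 0.30],      # rows: spectral bands, columns: endmembers
                    [0.20, 0.60, 0.25],
                    [0.35, 0.40, 0.45],
                    [0.50, 0.30, 0.60]])
      true_a = np.array([0.2, 0.5, 0.3])
      x = E @ true_a + 0.005 * np.random.default_rng(1).standard_normal(4)

      w = 1e3                                # weight enforcing the sum-to-one constraint
      E_aug = np.vstack([E, w * np.ones(E.shape[1])])
      x_aug = np.append(x, w)
      a, _ = nnls(E_aug, x_aug)              # nonnegative least squares
      print("estimated abundances:", a, "sum:", a.sum())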

  7. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon good understanding of the rainfall process and characteristics. Gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of mean and variance for the sum of two and three independent mixed-gamma variables derived are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the descriptive statistics of the observed sum of rainfall amounts are not significantly different at the 5% significance level from those of the generated sum of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
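
    A brief sketch of the mixed-gamma construction described above, assuming a rainfall amount that is zero with probability 1-p and Gamma(shape, scale) otherwise; the moment formulas follow directly from that definition, and the parameter values are illustrative rather than taken from the Kuantan data:

      # Mixed-gamma rainfall model: moments of one variable and of a sum of independent ones.
      import numpy as np
      from scipy import stats

      def mixed_gamma_mean_var(p, a, s):
          """Mean and variance of a mixed-gamma variable (zero with prob. 1-p, Gamma(a, s) otherwise)."""
          mean = p * a * s
          var = p * a * s**2 + p * (1.0 - p) * (a * s) ** 2
          return mean, var

      def fit_mixed_gamma(rain):
          """p from the wet fraction; (shape, scale) by maximum likelihood on the nonzero amounts."""
          rain = np.asarray(rain, dtype=float)
          wet = rain[rain > 0]
          p = wet.size / rain.size
          a, _, s = stats.gamma.fit(wet, floc=0)      # location fixed at zero
          return p, a, s

      # For independent mixed-gamma variables the means and variances of the sum simply add:
      params = [(0.6, 1.8, 12.0), (0.5, 2.1, 9.0)]    # two illustrative stations/months
      moments = [mixed_gamma_mean_var(*q) for q in params]
      print("sum mean:", sum(m for m, _ in moments), "sum variance:", sum(v for _, v in moments))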

  8. Mixed-method research protocol: defining and operationalizing patient-related complexity of nursing care in acute care hospitals.

    PubMed

    Huber, Evelyn; Kleinknecht-Dolf, Michael; Müller, Marianne; Kugler, Christiane; Spirig, Rebecca

    2017-06-01

    To define the concept of patient-related complexity of nursing care in acute care hospitals and to operationalize it in a questionnaire. The concept of patient-related complexity of nursing care in acute care hospitals has not been conclusively defined in the literature. The operationalization in a corresponding questionnaire is necessary, given the increased significance of the topic, due to shortened lengths of stay and increased patient morbidity. Hybrid model of concept development and embedded mixed-methods design. The theoretical phase of the hybrid model involved a literature review and the development of a working definition. In the fieldwork phase of 2015 and 2016, an embedded mixed-methods design was applied with complexity assessments of all patients at five Swiss hospitals using our newly operationalized questionnaire 'Complexity of Nursing Care' over 1 month. These data will be analysed with structural equation modelling. Twelve qualitative case studies will be embedded. They will be analysed using a structured process of constructing case studies and content analysis. In the final analytic phase, the quantitative and qualitative data will be merged and added to the results of the theoretical phase for a common interpretation. Cantonal Ethics Committee Zurich judged the research programme as unproblematic in December 2014 and May 2015. Following the phases of the hybrid model and using an embedded mixed-methods design can reach an in-depth understanding of patient-related complexity of nursing care in acute care hospitals, a final version of the questionnaire and an acknowledged definition of the concept. © 2016 John Wiley & Sons Ltd.

  9. An efficient use of mixing model for computing the effective dielectric and thermal properties of the human head.

    PubMed

    Mishra, Varsha; Puthucheri, Smitha; Singh, Dharmendra

    2018-05-07

    As a preventive measure against electromagnetic (EM) wave exposure of the human body, EM radiation regulatory authorities such as the ICNIRP and the FCC have defined values of the specific absorption rate (SAR) for the human head during EM wave exposure from mobile phones. SAR quantifies the absorption of EM waves in the human body and mainly depends on the dielectric properties (ε', σ) of the corresponding tissues. The head is more susceptible to EM wave exposure because of the usage of mobile phones. The human head is a complex structure made up of multiple tissues with intermixing of many layers; thus, the accurate measurement of the permittivity (ε') and conductivity (σ) of the tissues of the human head is still a challenge. For computing the SAR, researchers use a multilayer model, which poses challenges in defining the boundaries between layers. Therefore, in this paper, an attempt has been made to propose a method to compute the effective complex permittivity of the human head in the range of 0.3 to 3.0 GHz by applying the De-Loor mixing model. Similarly, for defining the thermal effect in the tissue, the thermal properties of the human head have also been computed using the De-Loor mixing method. The effective dielectric and thermal properties of the equivalent human head model are compared with IEEE Std. 1528.

  10. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
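
    The one-to-one mapping between an F statistic and an R2 can be sketched as below; the numerator and denominator degrees of freedom are left as inputs, since the paper ties them to a particular denominator-degrees-of-freedom approximation that is not reproduced here, and the numbers shown are illustrative:

      # R2 as a monotone function of the F statistic for all fixed effects except the intercept.
      def r2_from_f(F, nu1, nu2):
          ratio = nu1 * F / nu2
          return ratio / (1.0 + ratio)

      def f_from_r2(r2, nu1, nu2):
          """Inverse mapping, confirming the relationship is one-to-one."""
          return (r2 / (1.0 - r2)) * nu2 / nu1

      print(r2_from_f(F=4.2, nu1=3, nu2=60))           # illustrative values only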

  11. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Decision-case mix model for analyzing variation in cesarean rates.

    PubMed

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  13. Modeling of molecular diffusion and thermal conduction with multi-particle interaction in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Tai, Y.; Watanabe, T.; Nagata, K.

    2018-03-01

    A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.

  14. Site-Dependent Vibrational Coupling of CO Adsorbates on Well-Defined Step and Terrace Sites of Monocrystalline Platinum: Mixed-Isotope Studies at Pt(335) and Pt(111) in the Aqueous Electrochemical Environment

    DTIC Science & Technology

    1994-09-15

    and Terrace Sites of Monocrystalline Platinum: Mixed-Isotope Studies at Pt(335) and Pt(111) in the Aqueous Electrochemical Environment by Chung S. Kim... monocrystalline metals. These materials have structurally well-defined step and kink structures, which serve as models for the surface defect sites found on...and molecular interactions at stepped monocrystalline electrode surfaces [3,4]. A notable property of Pt(335)/CO is that the CO occupancy at step and

  15. A multi-tracer approach coupled to numerical models to improve understanding of mountain block processes in a high elevation, semi-humid catchment

    NASA Astrophysics Data System (ADS)

    Dwivedi, R.; McIntosh, J. C.; Meixner, T.; Ferré, T. P. A.; Chorover, J.

    2016-12-01

    Mountain systems are critical sources of recharge to adjacent alluvial basins in dryland regions. Yet, mountain systems face poorly defined threats due to climate change in terms of reduced snowpack, precipitation changes, and increased temperatures. Fundamentally, the climate risks to mountain systems are uncertain due to our limited understanding of natural recharge processes. Our goal is to combine measurements and models to provide improved spatial and temporal descriptions of groundwater flow paths and transit times in a headwater catchment located in a sub-humid region. This information is important to quantifying groundwater age and, thereby, to providing more accurate assessments of the vulnerability of these systems to climate change. We are using: (a) combination of geochemical composition, along with 2H/18O and 3H isotopes to improve an existing conceptual model for mountain block recharge (MBR) for the Marshall Gulch Catchment (MGC) located within the Santa Catalina Mountains. The current model only focuses on shallow flow paths through the upper unconfined aquifer with no representation of the catchment's fractured-bedrock aquifer. Groundwater flow, solute transport, and groundwater age will be modeled throughout MGC using COMSOL Multiphysics® software. Competing models in terms of spatial distribution of required hydrologic parameters, e.g. hydraulic conductivity and porosity, will be proposed and these models will be used to design discriminatory data collection efforts based on multi-tracer methods. Initial end-member mixing results indicate that baseflow in MGC, if considered the same as the streamflow during the dry periods, is not represented by the chemistry of deep groundwater in the mountain system. In the ternary mixing space, most of the samples plot outside the mixing curve. Therefore, to further constrain the contributions of water from various reservoirs we are collecting stable water isotopes, tritium, and solute chemistry of precipitation, shallow groundwater, local spring water, MGC streamflow, and at a drainage location much lower than MGC outlet to better define and characterize each end-member of the ternary mixing model. Consequently, the end-member mixing results are expected to facilitate us in better understanding the MBR processes in and beyond MGC.

  16. A surface temperature and moisture parameterization for use in mesoscale numerical models

    NASA Technical Reports Server (NTRS)

    Tremback, C. J.; Kessler, R.

    1985-01-01

    A modified multi-level soil moisture and surface temperature model is presented for use in defining lower boundary conditions in mesoscale weather models. Account is taken of the hydraulic and thermal diffusion properties of the soil, their variations with soil type, and the mixing ratio at the surface. Techniques are defined for integrating the surface input into the multi-level scheme. Sample simulation runs were performed with the modified model and the original model defined by Pielke et al. (1977, 1981). The models were applied to regional weather forecasting over soils composed of sand and clay loam. The new form of the model avoided iterations necessary in the earlier version of the model and achieved convergence at reasonable profiles for surface temperature and moisture in regions where the earlier version of the model failed.

  17. A numerical study of automotive turbocharger mixed flow turbine inlet geometry for off design performance

    NASA Astrophysics Data System (ADS)

    Leonard, T.; Spence, S.; Early, J.; Filsinger, D.

    2013-12-01

    Mixed flow turbines represent a potential solution to the increasing requirement for high pressure, low velocity ratio operation in turbocharger applications. While literature exists for the use of these turbines at such operating conditions, there is a lack of detailed design guidance for defining the basic geometry of the turbine, in particular, the cone angle - the angle at which the inlet of the mixed flow turbine is inclined to the axis. This paper investigates the effect and interaction of such mixed flow turbine design parameters. Computational fluid dynamics (CFD) was initially used to investigate the performance of a modern radial turbine to create a baseline for subsequent mixed flow designs. Existing experimental data was used to validate this model. Using the CFD model, a number of mixed flow turbine designs were investigated. These included studies varying the cone angle and the associated inlet blade angle. The results of this analysis provide insight into the performance of a mixed flow turbine with respect to cone and inlet blade angle.

  18. Enabling complex queries to drug information sources through functional composition.

    PubMed

    Peters, Lee; Mortensen, Jonathan; Nguyen, Thang; Bodenreider, Olivier

    2013-01-01

    Our objective was to enable an end-user to create complex queries to drug information sources through functional composition, by creating sequences of functions from application program interfaces (API) to drug terminologies. The development of a functional composition model seeks to link functions from two distinct APIs. An ontology was developed using Protégé to model the functions of the RxNorm and NDF-RT APIs by describing the semantics of their input and output. A set of rules were developed to define the interoperable conditions for functional composition. The operational definition of interoperability between function pairs is established by executing the rules on the ontology. We illustrate that the functional composition model supports common use cases, including checking interactions for RxNorm drugs and deploying allergy lists defined in reference to drug properties in NDF-RT. This model supports the RxMix application (http://mor.nlm.nih.gov/RxMix/), an application we developed for enabling complex queries to the RxNorm and NDF-RT APIs.

  19. Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.

    PubMed

    Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng

    2014-06-01

    Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.

  20. A mathematical model of microbial enhanced oil recovery (MEOR) method for mixed type rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitnikov, A.A.; Eremin, N.A.; Ibattulin, R.R.

    1994-12-31

    This paper deals with the microbial enhanced oil recovery method. It covers the following: (1) the mechanism of microbial influence on the reservoir is analyzed; (2) the main groups of metabolites affected by the hydrodynamic characteristics of the reservoir are determined; (3) the criteria for use of the microbial influence method on the reservoir are defined. The mathematical model of microbial influence on the reservoir was made on this basis. The injection of a molasses water solution with Clostridium bacteria into the mixed type of rock was used in this model, and the results of the calculations were compared with experimental data.

  1. Time and frequency domain analysis of sampled data controllers via mixed operation equations

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1981-01-01

    Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations; namely, the operation of zero order hold and digital delay. It is shown that by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It also is shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations which are in a format compatible with follow on time and frequency domain analysis methods.
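
    The zero-order-hold operation introduced above can be sketched numerically: discretise a linear continuous plant over the sample period and model a one-sample digital delay by augmenting the state with the previously commanded input. The plant below is an illustrative damped oscillator, not the paper's mixed-operation formulation.

      # Zero-order-hold discretisation plus a one-sample digital delay (illustrative plant).
      import numpy as np
      from scipy.signal import cont2discrete

      A = np.array([[0.0, 1.0], [-4.0, -0.4]])
      B = np.array([[0.0], [1.0]])
      C, D = np.eye(2), np.zeros((2, 1))
      T = 0.05                                          # sample period

      Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method="zoh")

      n = Ad.shape[0]
      # Augmented state [x_k; u_{k-1}]: x_{k+1} = Ad x_k + Bd u_{k-1}, and the stored input
      # is replaced by the current command u_k each step.
      Ad_aug = np.block([[Ad, Bd], [np.zeros((1, n)), np.zeros((1, 1))]])
      Bd_aug = np.vstack([np.zeros((n, 1)), np.ones((1, 1))])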

  2. Characterizing the detectability of emission signals from a North Korean nuclear detonation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werth, David; Buckley, Robert

    Here, the detectability of emission sources, defined by a low-level of mixing with other sources, was estimated for various locations surrounding the Sea of Japan, including a site within North Korea. A high-resolution meteorological model coupled to a dispersion model was used to simulate plume dynamics for four periods, and two metrics of airborne plume mixing were calculated for each source. While emissions from several known sources in this area tended to blend with others while dispersing downwind, the North Korean plume often remained relatively distinct, thereby making it potentially easier to unambiguously ‘backtrack’ it to its source.

  3. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series.

    PubMed

    Martinez, Josue G; Bohn, Kirsten M; Carroll, Raymond J; Morris, Jeffrey S

    2013-06-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible.
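
    A rough sketch of computing a spectrogram on a relative time scale, with the window hop adapted to the chirp length so that every chirp maps to the same number of time frames; the windowing choices are assumptions for illustration and do not reproduce the authors' settings.

      # Spectrogram on a relative (0-1) time axis; assumes the chirp is several windows long.
      import numpy as np
      from scipy.signal import spectrogram

      def relative_time_spectrogram(x, fs, nperseg=256, n_frames=50):
          step = max(1, (len(x) - nperseg) // (n_frames - 1))   # hop depends on chirp length
          f, t, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg - step)
          rel_t = (t - t[0]) / (t[-1] - t[0])                   # relative chirp position
          grid = np.linspace(0.0, 1.0, n_frames)
          S_rel = np.vstack([np.interp(grid, rel_t, row) for row in S])
          return f, grid, S_rel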

  4. A Multidimensional Partial Credit Model with Associated Item and Test Statistics: An Application to Mixed-Format Tests

    ERIC Educational Resources Information Center

    Yao, Lihua; Schwarz, Richard D.

    2006-01-01

    Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…

  5. Nursing home case mix in Wisconsin. Findings and policy implications.

    PubMed

    Arling, G; Zimmerman, D; Updike, L

    1989-02-01

    Along with many other states, Wisconsin is considering a case mix approach to Medicaid nursing home reimbursement. To support this effort, a nursing home case mix model was developed from a representative sample of 410 Medicaid nursing home residents from 56 facilities in Wisconsin. The model classified residents into mutually exclusive groups that were homogeneous in their use of direct care resources, i.e., minutes of direct care time (weighted for nurse skill level) over a 7-day period. Groups were defined initially by intense, Special, or Routine nursing requirements. Within these nursing requirement categories, subgroups were formed by the presence/absence of behavioral problems and dependency in activities of daily living (ADL). Wisconsin's current Skilled/Intermediate Care (SNF/ICF) classification system was analyzed in light of the case mix model and found to be less effective in distinguishing residents by resource use. The case mix model accounted for 48% of the variance in resource use, whereas the SNF/ICF classification system explained 22%. Comparisons were drawn with nursing home case mix models in New York State (RUG-II) and Minnesota. Despite progress in the study of nursing home case mix and its application to reimbursement reform, methodologic and policy issues remain. These include the differing operational definitions for nursing requirements and ADL dependency, the inconsistency in findings concerning psychobehavioral problems, and the problem of promoting positive health and functional outcomes based on models that may be insensitive to change in resident conditions over time.

  6. How we compute N matters to estimates of mixing in stratified flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.

    We know that most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number Rf, which is often used to parameterize turbulent mixing, but also the turbulence activity number or the Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.

  7. How we compute N matters to estimates of mixing in stratified flows

    DOE PAGES

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.; ...

    2017-10-13

    We know that most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number Rf, which is often used to parameterize turbulent mixing, but also the turbulence activity number or the Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.

  8. Toward topology-based characterization of small-scale mixing in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Suman, Sawan; Girimaji, Sharath

    2011-11-01

    Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.

  9. Hydrogeology, ground-water quality, and source of ground water causing water-quality changes in the Davis well field at Memphis, Tennessee

    USGS Publications Warehouse

    Parks, William S.; Mirecki, June E.; Kingsbury, James A.

    1995-01-01

    NETPATH geochemical model code was used to mix waters from the alluvial aquifer with water from the Memphis aquifer using chloride as a conservative tracer. The resulting models indicated that a mixture containing 3 percent alluvial aquifer water mixed with 97 percent unaffected Memphis aquifer water would produce the chloride concentration measured in water from the Memphis aquifer well most affected by water-quality changes. NETPATH also was used to calculate mixing percentages of alluvial and Memphis aquifer waters based on changes in the concentrations of selected dissolved major inorganic and trace element constituents that define the dominant reactions that occur during mixing. These models indicated that a mixture containing 18 percent alluvial aquifer water and 82 percent unaffected Memphis aquifer water would produce the major constituent and trace element concentrations measured in water from the Memphis aquifer well most affected by water-quality changes. However, these model simulations predicted higher dissolved methane concentrations than were measured in water samples from the Memphis aquifer wells.
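
    The binary, conservative-tracer mixing calculation behind percentages like the 3%/97% split can be sketched in a few lines; the chloride concentrations used here are illustrative and are not the values reported for the Davis well field.

      # Two-end-member mixing fraction from a conservative tracer (chloride).
      def mixing_fraction(c_mix, c_end1, c_end2):
          """Fraction of end member 1 in a binary mixture."""
          return (c_mix - c_end2) / (c_end1 - c_end2)

      cl_alluvial, cl_memphis, cl_observed = 60.0, 4.0, 5.7     # mg/L, illustrative
      f = mixing_fraction(cl_observed, cl_alluvial, cl_memphis)
      print(f"alluvial fraction: {f:.1%}, Memphis fraction: {1 - f:.1%}")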

  10. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series

    PubMed Central

    MARTINEZ, Josue G.; BOHN, Kirsten M.; CARROLL, Raymond J.

    2013-01-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible. PMID:23997376

  11. Lake Number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen

    USGS Publications Warehouse

    Robertson, Dale M.; Imberger, Jorg

    1994-01-01

    Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regimes. A simple model is developed which relates changes in LN values, i.e., the extent of mixing, to changes in near bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated using water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).

  12. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
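
    The core idea of adjusting model parameters to minimize response differences can be sketched with a toy system; the "plant" below is an illustrative damped oscillator, not the hydraulic-mechanical system of the paper, and the measured response is synthesized.

      # Least-squares parameter estimation: tune (k, c) so the model response matches measurements.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      t_meas = np.linspace(0.0, 5.0, 60)

      def response(params, t):
          k, c = params
          rhs = lambda t_, y: [y[1], -k * y[0] - c * y[1]]
          return solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t).y[0]

      y_meas = response([4.0, 0.3], t_meas) + 0.01 * np.random.default_rng(2).standard_normal(t_meas.size)

      fit = least_squares(lambda p: response(p, t_meas) - y_meas, x0=[1.0, 1.0])
      print("identified (k, c):", fit.x)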

  13. Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning

    NASA Astrophysics Data System (ADS)

    Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana

    2018-01-01

    The production process of crude palm oil (CPO) can be defined as the milling of raw material, called fresh fruit bunch (FFB), into the end product, palm oil. The process usually runs through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have its own oil palm plantation; therefore, the FFB are supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations would be appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.
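
    A minimal sketch of a mixed integer linear program for the plantation-selection part of such a supply chain: a binary variable selects each plantation and a continuous variable sets the FFB purchased from it. This is an illustration of the model class, not the authors' formulation or their neighborhood search; all data are invented, and the solver used is scipy.optimize.milp.

      # Plantation selection MILP (illustrative data): maximize CPO profit subject to supply
      # and mill-capacity limits.
      import numpy as np
      from scipy.optimize import milp, LinearConstraint, Bounds

      supply   = np.array([120.0, 80.0, 150.0])      # available FFB per plantation [t/day]
      buy_cost = np.array([95.0, 90.0, 100.0])       # FFB price per tonne
      fixed    = np.array([300.0, 250.0, 400.0])     # fixed cost of contracting a plantation
      oer, cpo_price, capacity = 0.21, 650.0, 250.0  # oil extraction rate, CPO price, mill capacity

      n = len(supply)
      # Decision vector z = [x_1..x_n, y_1..y_n]; milp minimizes, so negate the profit.
      c = np.concatenate([-(oer * cpo_price - buy_cost), fixed])

      A_link = np.hstack([np.eye(n), -np.diag(supply)])         # x_i <= supply_i * y_i
      A_cap = np.concatenate([np.ones(n), np.zeros(n)])[None]   # sum(x_i) <= capacity
      constraints = [LinearConstraint(A_link, -np.inf, 0.0),
                     LinearConstraint(A_cap, -np.inf, capacity)]
      integrality = np.concatenate([np.zeros(n), np.ones(n)])   # x continuous, y binary
      bounds = Bounds(np.zeros(2 * n), np.concatenate([supply, np.ones(n)]))

      res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
      print("profit:", -res.fun, "plan:", res.x)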

  14. Heterosis and outbreeding depression: A multi-locus model and an application to salmon production

    USGS Publications Warehouse

    Emlen, John M.

    1991-01-01

    Both artificial propagation and efforts to preserve or augment natural populations sometimes involve, wittingly or unwittingly, the mixing of different gene pools. The advantages of such mixing vis-à-vis the alleviation of inbreeding depression are well known. Acknowledged, but less well understood, are the complications posed by outbreeding depression. This paper derives a simple model of outbreeding depression and demonstrates that it is reasonably possible to predict the generation-to-generation fitness course of hybrids derived from parents from different origins. Genetic difference, or distance between parental types, is defined by the drop in fitness experienced by one type reared at the site to which the other is locally adapted. For situations where decisions involving stock mixing must be made in the absence of complete information, a sensitivity analysis-based conflict resolution method (the Good-Bad-Ugly model) is described.

  15. Research misconduct oversight: defining case costs.

    PubMed

    Gammon, Elizabeth; Franzini, Luisa

    2013-01-01

    This study uses a sequential mixed method study design to define cost elements of research misconduct among faculty at academic medical centers. Using time driven activity based costing, the model estimates a per case cost for 17 cases of research misconduct reported by the Office of Research Integrity for the period of 2000-2005. Per case cost of research misconduct was found to range from $116,160 to $2,192,620. Research misconduct cost drivers are identified.

  16. SOURCE ASSESSMENT: ASPHALT HOT MIX

    EPA Science Inventory

    This report summarizes data on air emissions from the asphalt hot mix industry. A representative asphalt hot mix plant was defined, based on the results of an industrial survey, to assess the severity of emissions from this industry. Source severity was defined as the ratio of th...

  17. Characterizing the detectability of emission signals from a North Korean nuclear detonation

    DOE PAGES

    Werth, David; Buckley, Robert

    2017-02-01

    Here, the detectability of emission sources, defined by a low-level of mixing with other sources, was estimated for various locations surrounding the Sea of Japan, including a site within North Korea. A high-resolution meteorological model coupled to a dispersion model was used to simulate plume dynamics for four periods, and two metrics of airborne plume mixing were calculated for each source. While emissions from several known sources in this area tended to blend with others while dispersing downwind, the North Korean plume often remained relatively distinct, thereby making it potentially easier to unambiguously ‘backtrack’ it to its source.

  18. Natural remanent magnetization acquisition in bioturbated sediment: General theory and implications for relative paleointensity reconstructions

    NASA Astrophysics Data System (ADS)

    Egli, R.; Zhao, X.

    2015-04-01

    We present a general theory for the acquisition of natural remanent magnetizations (NRM) in sediment under the influence of (a) magnetic torques, (b) randomizing torques, and (c) torques resulting from interaction forces. Dynamic equilibrium between (a) and (b) in the water column and at the sediment-water interface generates a detrital remanent magnetization (DRM), while much stronger randomizing torques may be provided by bioturbation inside the mixed layer. These generate a so-called mixed remanent magnetization (MRM), which is stabilized by mechanical interaction forces. During the time required to cross the surface mixed layer, DRM is lost and MRM is acquired at a rate that depends on bioturbation intensity. Both processes are governed by a MRM lock-in function. The final NRM intensity is controlled mainly by a single parameter γ that is defined as the product of rotational diffusion and mixed-layer thickness, divided by sedimentation rate. This parameter defines three regimes: (1) slow mixing (γ < 0.2) leading to DRM preservation and insignificant MRM acquisition, (2) fast mixing (γ > 10) with MRM acquisition and full DRM randomization, and (3) intermediate mixing. Because the acquisition efficiency of DRM is larger than that of MRM, NRM intensity is particularly sensitive to γ in case of mixed regimes, generating variable NRM acquisition efficiencies. This model explains (1) lock-in delays that can be matched with empirical reconstructions from paleomagnetic records, (2) the existence of small lock-in depths that lead to DRM preservation, (3) specific NRM acquisition efficiencies of magnetofossil-rich sediments, and (4) some relative paleointensity artifacts.
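
    The single control parameter described above lends itself to a very small sketch; the thresholds follow the abstract, while the input values are purely illustrative and the units are only required to make gamma dimensionless (e.g. diffusion in 1/yr, thickness in m, sedimentation rate in m/yr).

      # Regime classification by gamma = rotational diffusion * mixed-layer thickness / sedimentation rate.
      def mixing_regime(rot_diffusion, mixed_layer_thickness, sedimentation_rate):
          gamma = rot_diffusion * mixed_layer_thickness / sedimentation_rate
          if gamma < 0.2:
              return gamma, "slow mixing: DRM preserved, insignificant MRM"
          if gamma > 10.0:
              return gamma, "fast mixing: MRM acquired, DRM fully randomized"
          return gamma, "intermediate mixing: variable NRM acquisition efficiency"

      print(mixing_regime(rot_diffusion=0.5, mixed_layer_thickness=0.1, sedimentation_rate=0.02))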

  19. Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence

    NASA Astrophysics Data System (ADS)

    Cerqueti, Roy; Fenga, Livio; Ventura, Marco

    2018-06-01

    This paper deals with the theme of contagion in financial markets. To this aim, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of two considered countries. In so doing, the article defines the theoretical conditions to be satisfied in order to state that one of them - the so-called leader - exercises contagion on the others - the followers. Specifically, we employ an invariant probabilistic result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by implementing an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower) and the period under scrutiny is very large, ranging from 1970 to 2014.
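
    The basic object used above, a mixed Poisson process, is easy to simulate: the rate is itself a random variable (gamma-distributed in this sketch), and independent thinning is one transformation under which the mixed Poisson structure is preserved. This is only an illustration of the building block; the paper's specific invariance result and contagion test are not reproduced.

      # Simulate mixed Poisson counts and check that thinning rescales the mixing rate.
      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_mixed_poisson(n_paths, horizon, shape=2.0, scale=1.5):
          rates = rng.gamma(shape, scale, size=n_paths)   # random intensity per path
          return rng.poisson(rates * horizon), rates

      def thin(counts, keep_prob):
          """Independent thinning: each event kept with probability keep_prob."""
          return rng.binomial(counts, keep_prob)

      counts, rates = simulate_mixed_poisson(n_paths=100_000, horizon=1.0)
      thinned = thin(counts, keep_prob=0.3)
      print(thinned.mean(), 0.3 * rates.mean())           # both approximate 0.3 * E[rate]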

  20. Active Control of Mixing and Combustion, from Mechanisms to Implementation

    NASA Astrophysics Data System (ADS)

    Ghoniem, Ahmed F.

    2001-11-01

    Implementation of active control in complex processes, of the type encountered in high Reynolds number mixing and combustion, is predicated upon the identification of the underlying mechanisms and the construction of reduced order models that capture their essential characteristics. The mechanisms of interest must be shown to be amenable to external actuations, allowing optimal control strategies to exploit the delicate interactions that lead to the desired outcome. Reduced order models are utilized in defining the form and requisite attributes of actuation, its relationship to the monitoring system and the relevant control algorithms embedded in a feedforward or a feedback loop. The talk will review recent work on active control of mixing in combustion devices in which strong shear zones concur with mixing, combustion stabilization and flame anchoring. The underlying mechanisms, e.g., stability of shear flows, formation/evolution of large vortical structures in separating and swirling flows, their mutual interactions with acoustic fields, flame fronts and chemical kinetics, etc., are discussed in light of their key roles in mixing, burning enhancement/suppression, and combustion instability. Subtle attributes of combustion mechanisms are used to suggest the requisite control strategies.

  1. The kinetic study of hydrogen bacteria and methanotrophs in pure and defined mixed cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arora, D.K.

    The kinetics of pure and mixed cultures of Alcaligenes eutrophus H16 and Methylobacterium organophilum CRL 26 under double-substrate-limited conditions were studied. In pure culture growth kinetics, a non-interactive model was found to fit the experimental data best. The yield of biomass on the limiting substrate was found to vary with the dilution rate. The variation in the biomass yield may be attributed to the change in metabolic pathways resulting from a shift in the limiting substrates. Both species exhibited wall growth in the chemostat under dark conditions. However, under illuminated conditions, there was a significant reduction in wall growth. Poly-β-hydroxybutyric acid was synthesized by both species under ammonia- and oxygen-limiting conditions. The feed gas mixture was optimized to achieve the steady-state coexistence of these two species in a chemostat for the first time. In mixed cultures, the biomass of each species was assayed on the basis of its selective growth on particular compounds: sarcosine and D-arabinose were selected for hydrogen bacteria and methylotrophs, respectively. The kinetic parameters estimated from pure cultures were used to predict the growth kinetics of these species in defined mixed cultures.

  2. Stakeholders' Views of South Korea's Higher Education Internationalization Policy

    ERIC Educational Resources Information Center

    Cho, Young Ha; Palmer, John D.

    2013-01-01

    The study investigated the stakeholders' perceptions of South Korea's higher education internationalization policy. Based on the research framework that defines four policy values--propriety, effectiveness, diversity, and engagement, the convergence model was employed with a concurrent mixed method sampling strategy to analyze the stakeholders'…

  3. Natural remanent magnetization acquisition through sediment mixing: theory and implications for relative paleointensity

    NASA Astrophysics Data System (ADS)

    Egli, Ramon; Zhao, Xiangyu

    2015-04-01

    We present a general theory of the acquisition of natural remanent magnetization (NRM) in sediment under the influence of (a) magnetic torques, (b) randomizing torques (e.g. from bioturbation), and (c) torques resulting from interaction forces between remanence carriers and other particles. Dynamic equilibrium between (a) and (b) in the water column and at the sediment-water interface produces a detrital remanent magnetization (DRM), while much stronger randomizing torques occur in the mixed layer of the sediment due to bioturbation. These generate a so-called mixing remanent magnetization (MRM), which is stabilized by interaction forces. During the time required to cross the mixed layer, DRM is lost and MRM is acquired at a rate that depends on bioturbation intensity. Both processes are governed by the same MRM lock-in function. The final NRM intensity is controlled mainly by a single parameter defined as the product of the rotational diffusion constant and the mixed-layer thickness, divided by the sedimentation rate. This parameter defines three regimes: (1) slow mixing, leading to DRM preservation and insignificant MRM acquisition, (2) fast mixing, with MRM acquisition and full randomization of the original DRM, and (3) intermediate mixing. Because the acquisition efficiency of DRM is expected to be larger than that of MRM, MRM is particularly sensitive to the mixing rate in the intermediate regime, and generates variable NRM acquisition efficiencies. Our model explains (1) lock-in delays that can be matched with empirical reconstructions from paleomagnetic records, (2) the existence of small lock-in depths leading to DRM preservation, (3) NRM acquisition efficiencies of magnetofossil-rich sediments, and (4) relative paleointensity artifacts reported in some recent studies.
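
    In compact form, the control parameter described above can be written as follows (the symbols are our own notation, not necessarily the authors'):

    ```latex
    % alpha = rotational diffusion constant, L = mixed-layer thickness, omega = sedimentation rate
    \Pi = \frac{\alpha \, L}{\omega},
    \qquad
    \Pi \ll 1:\ \text{slow mixing (DRM preserved)},\quad
    \Pi \gg 1:\ \text{fast mixing (MRM dominates)},\quad
    \Pi \sim 1:\ \text{intermediate regime}.
    ```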

  4. Dark-Fermentative Biological Hydrogen Production from Mixed Biowastes Using Defined Mixed Cultures.

    PubMed

    Patel, Sanjay K S; Lee, Jung-Kul; Kalia, Vipin C

    2017-06-01

    Biological hydrogen (H2) production from biowastes is widely recognized as a suitable alternative approach to utilizing low-cost feedstocks instead of costly individual sugars. In the present investigation, pure and mixed biowastes were fermented by defined sets of mixed cultures for hydrolysis and H2 production. Under batch conditions, up to 65, 67 and 70 L H2/kg total solids (2% TS) were evolved from apple pomace (AP), onion peels (OP) and potato peels (PP) using a combination of a hydrolytic mixed culture (MHC5) and mixed microbial cultures (MMC4 or MMC6), respectively. Among the different combinations of mixed biowastes, including AP, OP, PP and pea shells, the combination of OP and PP exhibited maximum H2 production of 73 and 84 L/kg TS with MMC4 and MMC6, respectively. This study suggested that H2 production can be effectively regulated by using defined sets of mixed cultures for hydrolysis and H2 production from pure and mixed biowastes as feed, even under unsterile conditions.

  5. Mixed ice accretion on aircraft wings

    NASA Astrophysics Data System (ADS)

    Janjua, Zaid A.; Turnbull, Barbara; Hibberd, Stephen; Choi, Kwing-So

    2018-02-01

    Ice accretion is a problematic natural phenomenon that affects a wide range of engineering applications, including power cables, radio masts, and wind turbines. Accretion on aircraft wings occurs when supercooled water droplets freeze instantaneously on impact to form rime ice or run back as water along the wing to form glaze ice. Most models to date have ignored the accretion of mixed ice, which is a combination of rime and glaze. To explore the concept of mixed ice accretion, we define a parameter termed the "freezing fraction" as the fraction of a supercooled droplet that freezes on impact with the top surface of the accreted ice. Additionally, we consider different "packing densities" of rime ice, mimicking the different bulk rime densities observed in nature. Ice accretion is considered in four stages: rime, primary mixed, secondary mixed, and glaze ice. Predictions match existing models and experimental data in the limiting rime and glaze cases. The mixed ice formulation, however, provides additional insight into the composition of the overall ice structure, which ultimately influences adhesion and ice thickness, and shows that for similar atmospheric parameter ranges this simple mixed ice description leads to very different accretion rates. A simple one-dimensional energy balance was solved to show how the freezing fraction increases as atmospheric temperature decreases, with lower freezing fractions promoting glaze ice accretion.

  6. Pulse Jet Mixing Tests With Noncohesive Solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Perry A.; Bamberger, Judith A.; Enderlin, Carl W.

    2009-05-11

    This report summarizes results from pulse jet mixing (PJM) tests with noncohesive solids in Newtonian liquid conducted during FY 2007 and 2008 to support the design of mixing systems for the Hanford Waste Treatment and Immobilization Plant (WTP). Tests were conducted at three geometric scales using noncohesive simulants. The test data were used to independently develop mixing models that can be used to predict full-scale WTP vessel performance and to rate current WTP mixing system designs against two specific performance requirements. One requirement is to ensure that all solids have been disturbed during the mixing action, which is important to release gas from the solids. The second requirement is to maintain a suspended solids concentration below 20 weight percent at the pump inlet. The models predict the height to which solids will be lifted by the PJM action and the minimum velocity needed to ensure all solids have been lifted from the floor. From the cloud height estimate we can calculate the concentration of solids at the pump inlet. The velocity needed to lift the solids is slightly more demanding than "disturbing" the solids, and is used as a surrogate for this metric. We applied the models to assess WTP mixing vessel performance with respect to the two performance requirements. Each mixing vessel was evaluated against these two criteria for two defined waste conditions. One of the wastes was defined by design limits and one was derived from Hanford waste characterization reports. The assessment predicts that three vessel types will satisfy the design criteria for all conditions evaluated. Seven vessel types will not satisfy the performance criteria used for any of the conditions evaluated. The remaining three vessel types provide varying assessments when the different particle characteristics are evaluated. The HLP-022 vessel was also evaluated using a 12 m/s pulse jet velocity with 6-in. nozzles, and this design also did not satisfy the criteria for all of the conditions evaluated.

  7. An empirical approach to sufficient similarity in dose-responsiveness: Utilization of statistical distance as a similarity measure.

    EPA Science Inventory

    Using statistical equivalence testing logic and mixed model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals with different ratios ...

  8. Characterization of Ferroplasma acidiphilum growing in pure and mixed culture with Leptospirillum ferriphilum.

    PubMed

    Merino, M P; Andrews, B A; Parada, P; Asenjo, J A

    2016-11-01

    Biomining is defined as the use of biotechnology for metal recovery from minerals, and is promoted by the concerted effort of a consortium of acidophilic prokaryotes comprising members of the Bacteria and Archaea domains. Ferroplasma acidiphilum and Leptospirillum ferriphilum are dominant species in extremely acid environments and are of great use in bioleaching applications; however, the role of each species in this consortium is still a subject of research. The hypothesis of this work is that F. acidiphilum uses the organic matter secreted by L. ferriphilum for growth, maintaining low levels of organic compounds in the culture medium and preventing their toxic effects on L. ferriphilum. To test this hypothesis, a characterization of Ferroplasma acidiphilum strain BRL-115 was performed with the objective of determining its optimal growth conditions. Subsequently, under the optimal conditions, L. ferriphilum and F. acidiphilum were grown in each other's supernatant in order to determine whether there was an exchange of metabolites between the species. With these results, a mixed culture in cyclic batch operation was performed to obtain the main specific growth rates, which were used to evaluate a mixed metabolic model previously developed by our group. It was observed that F. acidiphilum strain BRL-115 is a chemomixotrophic organism, and its growth is maximized with yeast extract at a concentration of 0.04% wt/vol. From the experiments of L. ferriphilum growing on F. acidiphilum supernatant and vice versa, it was observed that in both cases cell growth is favorably affected by the presence of the filtered medium of the other microorganism, indicating a synergistic interaction between these species. Specific growth rates obtained in cyclic batch operation of the mixed culture were used as input data for a flux balance analysis of the mixed metabolic model, yielding reasonable behavior of the metabolic fluxes and of the system as a whole, thereby consolidating the previously developed model. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1390-1396, 2016.

  9. THE SPECTRAL EVOLUTION OF CONVECTIVE MIXING WHITE DWARFS, THE NON-DA GAP, AND WHITE DWARF COSMOCHRONOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Eugene Y.; Hansen, Brad M. S., E-mail: eyc@mail.utexas.edu, E-mail: hansen@astro.ucla.edu

    The spectral distribution of field white dwarfs shows a feature called the 'non-DA gap'. As defined by Bergeron et al., this is a temperature range (5100-6100 K) where relatively few non-DA stars are found, even though such stars are abundant on either side of the gap. It is usually viewed as an indication that a significant fraction of white dwarfs switch their atmospheric compositions back and forth between hydrogen-rich and helium-rich as they cool. In this Letter, we present a Monte Carlo model of the Galactic disk white dwarf population, based on the spectral evolution model of Chen and Hansen. We find that the non-DA gap emerges naturally, even though our model only allows white dwarf atmospheres to evolve monotonically from hydrogen-rich to helium-rich through convective mixing. We conclude by discussing the effects of convective mixing on the white dwarf luminosity function and the use thereof for Cosmochronology.

  10. Globally fixed-time synchronization of coupled neutral-type neural network with mixed time-varying delays.

    PubMed

    Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui

    2018-01-01

    This paper mainly studies the globally fixed-time synchronization of a class of coupled neutral-type neural networks with mixed time-varying delays via discontinuous feedback controllers. Compared with the traditional neutral-type neural network model, the model in this paper is more general. A class of general discontinuous feedback controllers are designed. With the help of the definition of fixed-time synchronization, the upper right-hand derivative and a defined simple Lyapunov function, some easily verifiable and extensible synchronization criteria are derived to guarantee the fixed-time synchronization between the drive and response systems. Finally, two numerical simulations are given to verify the correctness of the results.
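
    For reference, the standard notion of fixed-time synchronization used in this literature can be stated as follows (our paraphrase, with e(t) denoting the drive-response synchronization error; the paper's precise definition may differ):

    ```latex
    % Fixed-time synchronization: the error settles to zero in a finite time whose bound
    % does not depend on the initial conditions.
    \lim_{t \to T(e(0))} \lVert e(t) \rVert = 0, \qquad
    e(t) \equiv 0 \ \ \forall\, t \ge T(e(0)), \qquad
    \sup_{e(0) \in \mathbb{R}^n} T(e(0)) \le T_{\max} < \infty .
    ```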

  11. Globally fixed-time synchronization of coupled neutral-type neural network with mixed time-varying delays

    PubMed Central

    2018-01-01

    This paper mainly studies the globally fixed-time synchronization of a class of coupled neutral-type neural networks with mixed time-varying delays via discontinuous feedback controllers. Compared with the traditional neutral-type neural network model, the model in this paper is more general. A class of general discontinuous feedback controllers are designed. With the help of the definition of fixed-time synchronization, the upper right-hand derivative and a defined simple Lyapunov function, some easily verifiable and extensible synchronization criteria are derived to guarantee the fixed-time synchronization between the drive and response systems. Finally, two numerical simulations are given to verify the correctness of the results. PMID:29370248

  12. Development of an Empirical Method for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets

    NASA Technical Reports Server (NTRS)

    Russell, James W.

    1999-01-01

    This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets. The report presents a detailed analysis of the methodology used in development of the prediction method. The empirical correlations used are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees. At each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.

  13. On complexity and homogeneity measures in predicting biological aggressiveness of prostate cancer; Implication of the cellular automata model of tumor growth.

    PubMed

    Tanase, Mihai; Waliszewski, Przemyslaw

    2015-12-01

    We propose a novel approach for the quantitative evaluation of aggressiveness in prostate carcinomas. The spatial distribution of cancer cell nuclei was characterized by the global spatial fractal dimensions D0, D1, and D2. Two hundred eighteen prostate carcinomas were stratified into the classes of equivalence using results of ROC analysis. A simulation of the cellular automata mix defined a theoretical frame for a specific geometric representation of the cell nuclei distribution called a local structure correlation diagram (LSCD). The LSCD and dispersion Hd were computed for each carcinoma. Data mining generated some quantitative criteria describing tumor aggressiveness. Alterations in tumor architecture along progression were associated with some changes in both shape and the quantitative characteristics of the LSCD consistent with those in the automata mix model. Low-grade prostate carcinomas with low complexity and very low biological aggressiveness are defined by the condition D0 < 1.545 and Hd < 38. High-grade carcinomas with high complexity and very high biological aggressiveness are defined by the condition D0 > 1.764 and Hd < 38. The novel homogeneity measure Hd identifies carcinomas with very low aggressiveness within the class of complexity C1 or carcinomas with very high aggressiveness in the class C7. © 2015 Wiley Periodicals, Inc.
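
    The decision rule stated above can be summarized as a short sketch; the thresholds are quoted from the abstract, while the function name and interface are ours.

    ```python
    # Flag carcinomas as having very low or very high biological aggressiveness from the
    # capacity fractal dimension D0 and the homogeneity measure Hd, using the thresholds
    # reported in the abstract.
    def classify_aggressiveness(d0: float, hd: float) -> str:
        if hd < 38 and d0 < 1.545:
            return "very low aggressiveness (low-grade, low complexity)"
        if hd < 38 and d0 > 1.764:
            return "very high aggressiveness (high-grade, high complexity)"
        return "indeterminate by these criteria"

    print(classify_aggressiveness(1.50, 30))   # -> very low aggressiveness
    print(classify_aggressiveness(1.80, 30))   # -> very high aggressiveness
    print(classify_aggressiveness(1.65, 30))   # -> indeterminate by these criteria
    ```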

  14. Effect of Stability on Mixing in Open Canopies. Chapter 4

    NASA Technical Reports Server (NTRS)

    Lee, Young-Hee; Mahrt, L.

    2005-01-01

    In open canopies, the within-canopy flux from the ground surface and understory can account for a significant fraction of the total flux above the canopy. This study incorporates the important influence of within-canopy stability on turbulent mixing and subcanopy fluxes into a first-order closure scheme. Toward this goal, we analyze within-canopy eddy-correlation data from the old aspen site in the Boreal Ecosystem - Atmosphere Study (BOREAS) and a mature ponderosa pine site in Central Oregon, USA. A formulation of within-canopy transport is framed in terms of a stability- dependent mixing length, which approaches Monin-Obukhov similarity theory above the canopy roughness sublayer. The new simple formulation is an improvement upon the usual neglect of the influence of within-canopy stability in simple models. However, frequent well-defined cold air drainage within the pine subcanopy inversion reduces the utility of simple models for nocturnal transport. Other shortcomings of the formulation are discussed.

  15. Quantifying inter- and intra-population niche variability using hierarchical bayesian stable isotope mixing models.

    PubMed

    Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T

    2009-07-09

    Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
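
    For orientation, the mass-balance relation that underlies stable isotope mixing models is shown below for a single tracer and two sources; the hierarchical model in the paper generalizes this by letting source contributions vary across individuals, social groups, and subpopulations.

    ```latex
    % Two-source, one-tracer mixing: the consumer signature is a convex combination of
    % the source signatures, which determines the diet proportion f_1 in closed form.
    \delta_{\mathrm{mix}} = f_1\,\delta_1 + f_2\,\delta_2, \qquad f_1 + f_2 = 1
    \;\;\Longrightarrow\;\;
    f_1 = \frac{\delta_{\mathrm{mix}} - \delta_2}{\delta_1 - \delta_2}.
    ```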

  16. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, R; Gallagher, B; Neville, J

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  17. DSM-5-defined 'mixed features' and Benazzi's mixed depression: which is practically useful to discriminate bipolar disorder from unipolar depression in patients with depression?

    PubMed

    Takeshima, Minoru; Oka, Takashi

    2015-02-01

    Irritability, psychomotor agitation, and distractibility in a major depressive episode (MDE) should not be counted as manic/hypomanic symptoms of DSM-5-defined mixed features; however, this remains controversial. The practical usefulness of this definition in discriminating bipolar disorder (BP) from major depressive disorder (MDD) in patients with depression was compared with that of Benazzi's mixed depression, which includes these symptoms. The prevalence of both definitions of mixed depression in 217 patients with MDE (57 bipolar II disorder, 35 BP not otherwise specified, and 125 MDD cases), and their operating characteristics regarding BP diagnosis were compared. The prevalence of both Benazzi's mixed depression and DSM-5-defined mixed features was significantly higher in patients with BP than it was in patients with MDD, with the latter being quite low (62.0% vs 12.8% [P < 0.0001], and 7.6% vs 0% [P < 0.0021], respectively). The area under the receiver operating curve for BP diagnosis according to the number of all manic/hypomanic symptoms was numerically larger than that according to the number of manic/hypomanic symptoms excluding the above-mentioned three symptoms (0.798; 95% confidence interval, 0.736-0.859 vs 0.722; 95% confidence interval, 0.654-0.790). The sensitivity/specificity of DSM-5-defined mixed features and Benazzi's mixed depression for BP diagnosis were 5.1%/100% and 55.1%/87.2%, respectively. DSM-5-defined mixed features were too restrictive to discriminate BP from MDD in patients with depression compared with Benazzi's definition. To confirm this finding, studies that include patients with BP-I and using tools to assess manic/hypomanic symptoms during MDE are necessary. © 2014 The Authors. Psychiatry and Clinical Neurosciences © 2014 Japanese Society of Psychiatry and Neurology.

  18. Combustor assembly for use in a turbine engine and methods of assembling same

    DOEpatents

    Uhm, Jong Ho; Johnson, Thomas Edward

    2013-05-14

    A fuel nozzle assembly for use with a turbine engine is described herein. The fuel nozzle assembly includes a plurality of fuel nozzles positioned within an air plenum defined by a casing. Each of the plurality of fuel nozzles is coupled to a combustion liner defining a combustion chamber. Each of the plurality of fuel nozzles includes a housing that includes an inner surface that defines a cooling fluid plenum and a fuel plenum therein, and a plurality of mixing tubes extending through the housing. Each of the mixing tubes includes an inner surface defining a flow channel extending between the air plenum and the combustion chamber. At least one mixing tube of the plurality of mixing tubes including at least one cooling fluid aperture for channeling a flow of cooling fluid from the cooling fluid plenum to the flow channel.

  19. Introducing "Emotioncy" as a Potential Source of Test Bias: A Mixed Rasch Modeling Study

    ERIC Educational Resources Information Center

    Pishghadam, Reza; Baghaei, Purya; Seyednozadi, Zahra

    2017-01-01

    This article attempts to present emotioncy as a potential source of test bias to inform the analysis of test item performance. Emotioncy is defined as a hierarchy, ranging from "exvolvement" (auditory, visual, and kinesthetic) to "involvement" (inner and arch), to emphasize the emotions evoked by the senses. This study…

  20. ROLE OF CANOPY-SCALE PHOTOCHEMISTRY IN MODIFYING BIOGENIC-ATMOSPHERE EXCHANGE OF REACTIVE TERPENE SPECIES: RESULTS FROM THE CELTIC FIELD STUDY

    EPA Science Inventory

    A one-dimensional canopy model was used to quantify the impact of photochemistry in modifying biosphere-atmosphere exchange of trace gases. Canopy escape efficiencies, defined as the fraction of emission that escapes into the well-mixed boundary layer, were calculated for reactiv...

  1. Crossover between structured and well-mixed networks in an evolutionary prisoner's dilemma game

    NASA Astrophysics Data System (ADS)

    Dai, Qionglin; Cheng, Hongyan; Li, Haihong; Li, Yuting; Zhang, Mei; Yang, Junzhong

    2011-07-01

    In a spatial evolutionary prisoner’s dilemma game (PDG), individuals interact with their neighbors and update their strategies according to some rules. As is well known, cooperators are destined to become extinct in a well-mixed population, whereas they could emerge and be sustained on a structured network. In this work, we introduce a simple model to investigate the crossover between a structured network and a well-mixed one in an evolutionary PDG. In the model, each link j is designated a rewiring parameter τj, which defines the time interval between two successive rewiring events for link j. By adjusting the rewiring parameter τ (the mean time interval for any link in the network), we could change a structured network into a well-mixed one. For the link rewiring events, three situations are considered: one synchronous situation and two asynchronous situations. Simulation results show that there are three regimes of τ: large τ where the density of cooperators ρc rises to ρc,∞ (the value of ρc for the case without link rewiring), small τ where the mean-field description for a well-mixed network is applicable, and moderate τ where the crossover between a structured network and a well-mixed one happens.

  2. Taxonomy of Magma Mixing II: Thermochemistry of Mixed Crystal-Bearing Magmas Using the Magma Chamber Simulator

    NASA Astrophysics Data System (ADS)

    Bohrson, W. A.; Spera, F. J.; Neilson, R.; Ghiorso, M. S.

    2013-12-01

    Magma recharge and magma mixing contribute to the diversity of melt and crystal populations, the abundance and phase state of volatiles, and thermal and mass characteristics of crustal magma systems. The literature is replete with studies documenting mixing end-members and associated products, from mingled to hybridized, and a catalytic link between recharge/mixing and eruption is likely. Given its importance and the investment represented by thousands of detailed magma mixing studies, a multicomponent, multiphase magma mixing taxonomy is necessary to systematize the array of governing parameters (e.g., pressure (P), temperature (T), composition (X)) and attendant outcomes. While documenting the blending of two melts to form a third melt is straightforward, quantification of the mixing of two magmas and the subsequent evolution of hybrid magma requires application of an open-system thermodynamic model. The Magma Chamber Simulator (MCS) is a thermodynamic, energy, and mass constrained code that defines thermal, mass and compositional (major, trace element and isotope) characteristics of melt×minerals×fluid phase in a composite magma body-recharge magma-crustal wallrock system undergoing recharge (magma mixing), assimilation, and crystallization. In order to explore fully hybridized products, in MCS, energy and mass of recharge magma (R) are instantaneously delivered to resident magma (M), and M and R are chemically homogenized and thermally equilibrated. The hybrid product achieves a new equilibrium state, which may include crystal resorption or precipitation and/or evolution of a fluid phase. Hundreds of simulations systematize the roles that PTX (and hence mineral identity and abundance) and the mixing ratio (mass of M/mass of R) have in producing mixed products. Combinations of these parameters define regime diagrams that illustrate possible outcomes, including: (1) Mixed melt composition is not necessarily a mass weighted mixture of M and R magmas because crystals may precipitate or resorb. (2) Although a typical expectation is that the mixed magma T is between those of M and R, in some cases, T is lower than both due to the enthalpy cost of mineral resorption. (3) Addition of cooler silicic R to mafic M might be expected to promote crystallization, but in some cases, hybrid melt moves away from phase saturation surface(s) due to compositional effects, and crystallization is suppressed. (4) Addition of R can cause either enhancement or suppression of crystallization, depending on PTX conditions. Phases stable in M may cease to crystallize after mixing, producing a gap in the crystal record. (5) Volatile saturation is likely to be complex, and investigating volatile behavior will help define the thermodynamic states under which mixed magmas may catastrophically vesiculate, perhaps triggering eruption. Use of the magma mixing taxonomy will enhance the ability to quantify key parameters that influence particular magma mixing scenarios and will illuminate MCS enhancements required for handling additional types of magma mixing (e.g., mingling).

  3. Mid-depth temperature maximum in an estuarine lake

    NASA Astrophysics Data System (ADS)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

    The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to this case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer that is sharp enough for the temperature increase with depth not to cause convective mixing or double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum 'pumping', resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above-listed mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as: small mixed-layer depth (roughly <2 m), transparent water, a daytime maximum of wind and cloudless weather. We illustrate the effect of mixed-layer depth on the TeM with a set of selected lakes.

  4. Transversal mixing in the gastrointestinal tract

    NASA Astrophysics Data System (ADS)

    Vainchtein, Dmitri; Orthey, Perry; Parkman, Henry

    2015-11-01

    We discuss results of numerical simulations and analytical modeling of transversal intraluminal mixing in the GI tract produced by segmentation and peristaltic contractions. Particles that start in different parts of the small intestine are traced over several contractions, and mixing is described using the particles' probability distribution function. We show that there is an optimal set of contraction parameters, such as depth and frequency, that produces the most efficient mixing. We show that contractions create well-defined advection patterns in the transversal direction. The research is inspired by several applications. First, there is the study of bacteria populating the walls of the intestine, which rely on fluid mixing for nutrients. Second, there are gastrointestinal diseases, such as Crohn's disease, which can be treated effectively using a drug delivery capsule passing through the GI tract, for which one needs to know how long it takes for a released drug to reach the intestinal wall. Finally, certain neurological and muscular diseases change the parameters of contractions, thus reducing the efficiency of mixing. Understanding the admissible range of these parameters (when mixing is still sufficient for biological purposes) may indicate when medical action is required.

  5. Fuel nozzle assembly for use in turbine engines and methods of assembling same

    DOEpatents

    Uhm, Jong Ho; Johnson, Thomas Edward

    2015-02-03

    A fuel nozzle for use with a turbine engine is described herein. The fuel nozzle includes a housing that is coupled to a combustor liner defining a combustion chamber. The housing includes an endwall that at least partially defines the combustion chamber. A plurality of mixing tubes extends through the housing for channeling fuel to the combustion chamber. Each mixing tube of the plurality of mixing tubes includes an inner surface that extends between an inlet portion and an outlet portion. The outlet portion is oriented adjacent the housing endwall. At least one of the plurality of mixing tubes includes a plurality of projections that extend outwardly from the outlet portion. Adjacent projections are spaced a circumferential distance apart such that a groove is defined between each pair of circumferentially-apart projections to facilitate enhanced mixing of fuel in the combustion chamber.

  6. Health economics, equity, and efficiency: are we almost there?

    PubMed

    Ferraz, Marcos Bosi

    2015-01-01

    Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health.

  7. Health economics, equity, and efficiency: are we almost there?

    PubMed Central

    Ferraz, Marcos Bosi

    2015-01-01

    Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health. PMID:25709481

  8. Effects of mixing on resolved and unresolved scales on stratospheric age of air

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Garny, Hella; Plöger, Felix; Jöckel, Patrick; Cai, Duy

    2017-06-01

    Mean age of air (AoA) is a widely used metric to describe the transport along the Brewer-Dobson circulation. We seek to untangle the effects of different processes on the simulation of AoA, using the chemistry-climate model EMAC (ECHAM/MESSy Atmospheric Chemistry) and the Chemical Lagrangian Model of the Stratosphere (CLaMS). Here, the effects of residual transport and two-way mixing on AoA are calculated. To do so, we calculate the residual circulation transit time (RCTT). The difference of AoA and RCTT is defined as aging by mixing. However, as diffusion is also included in this difference, we further use a method to directly calculate aging by mixing on resolved scales. Comparing these two methods of calculating aging by mixing allows for separating the effect of unresolved aging by mixing (which we term aging by diffusion in the following) in EMAC and CLaMS. We find that diffusion impacts AoA by making air older, but its contribution plays a minor role (order of 10 %) in all simulations. However, due to the different advection schemes of the two models, aging by diffusion has a larger effect on AoA and mixing efficiency in EMAC, compared to CLaMS. Regarding the trends in AoA, in CLaMS the AoA trend is negative throughout the stratosphere except in the Northern Hemisphere middle stratosphere, consistent with observations. This slight positive trend is neither reproduced in a free-running nor in a nudged simulation with EMAC - in both simulations the AoA trend is negative throughout the stratosphere. Trends in AoA are mainly driven by the contributions of RCTT and aging by mixing, whereas the contribution of aging by diffusion plays a minor role.
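
    The bookkeeping implied above can be written compactly as follows (our notation):

    ```latex
    % Age of air decomposes into the residual circulation transit time plus total aging by
    % mixing; the latter contains a resolved part and an unresolved (diffusive) part.
    \Gamma_{\mathrm{AoA}} = \mathrm{RCTT} + \Delta_{\mathrm{mix}},
    \qquad
    \Delta_{\mathrm{mix}} = \Delta_{\mathrm{mix}}^{\mathrm{resolved}} + \Delta_{\mathrm{diff}} .
    ```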

  9. Relevance of workplace social mixing during influenza pandemics: an experimental modelling study of workplace cultures.

    PubMed

    Timpka, T; Eriksson, H; Holm, E; Strömgren, M; Ekberg, J; Spreco, A; Dahlström, Ö

    2016-07-01

    Workplaces are one of the most important regular meeting places in society. The aim of this study was to use simulation experiments to examine the impact of different workplace cultures on influenza dissemination during pandemics. The impact is investigated by experiments with defined social-mixing patterns at workplaces using semi-virtual models based on authentic sociodemographic and geographical data from a North European community (population 136 000). A simulated pandemic outbreak was found to affect 33% of the total population in the community with the reference academic-creative workplace culture; virus transmission at the workplace accounted for 10·6% of the cases. A model with a prevailing industrial-administrative workplace culture generated 11% lower incidence than the reference model, while the model with a self-employed workplace culture (also corresponding to a hypothetical scenario with all workplaces closed) produced 20% fewer cases. The model representing an academic-creative workplace culture with restricted workplace interaction generated 12% lower cumulative incidence compared to the reference model. The results display important theoretical associations between workplace social-mixing cultures and community-level incidence rates during influenza pandemics. Social interaction patterns at workplaces should be taken into consideration when analysing virus transmission patterns during influenza pandemics.

  10. Measuring trends of outpatient antibiotic use in Europe: jointly modelling longitudinal data in defined daily doses and packages.

    PubMed

    Bruyndonckx, Robin; Hens, Niel; Aerts, Marc; Goossens, Herman; Molenberghs, Geert; Coenen, Samuel

    2014-07-01

    To complement analyses of the linear trend and seasonal fluctuation of European outpatient antibiotic use expressed in defined daily doses (DDD) by analyses of data in packages, to assess the agreement between both measures and to study changes in the number of DDD per package over time. Data on outpatient antibiotic use, aggregated at the level of the active substance (WHO version 2011) were collected from 2000 to 2007 for 31 countries and expressed in DDD and packages per 1000 inhabitants per day (DID and PID, respectively). Data expressed in DID and PID were analysed separately using non-linear mixed models while the agreement between these measurements was analysed through a joint non-linear mixed model. The change in DDD per package over time was studied with a linear mixed model. Total outpatient antibiotic and penicillin use in Europe and their seasonal fluctuation significantly increased in DID, but not in PID. The use of combinations of penicillins significantly increased in DID and in PID. Broad-spectrum penicillin use did not increase significantly in DID and decreased significantly in PID. For all but one subgroup, country-specific deviations moved in the same direction whether measured in DID or PID. The correlations are not perfect. The DDD per package increased significantly over time for all but one subgroup. Outpatient antibiotic use in Europe shows contrasting trends, depending on whether DID or PID is used as the measure. The increase of the DDD per package corroborates the recommendation to adopt PID to monitor outpatient antibiotic use in Europe. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
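
    The abstract does not give the fitted model; as a schematic of the kind of trend-plus-seasonality mixed model commonly used for such consumption series, one might write the following (our illustrative notation; the paper fits non-linear mixed models, and a joint version linking DID and PID):

    ```latex
    % Antibiotic use y_{ij} of country i at time t_{ij} (in years): fixed linear trend,
    % yearly seasonal term, and country-level random intercept and slope.
    y_{ij} = \big(\beta_0 + b_{0i}\big) + \big(\beta_1 + b_{1i}\big)\, t_{ij}
           + \gamma_1 \sin\!\big(2\pi t_{ij}\big) + \gamma_2 \cos\!\big(2\pi t_{ij}\big)
           + \varepsilon_{ij},
    \qquad
    (b_{0i}, b_{1i}) \sim \mathcal{N}(\mathbf{0}, \mathbf{D}), \quad
    \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2).
    ```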

  11. 26 CFR 1.1092(b)-4T - Mixed straddles; mixed straddle account (temporary).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Mixed straddles; mixed straddle account... Mixed straddles; mixed straddle account (temporary). (a) In general. A taxpayer may elect (in accordance with paragraph (f) of this section) to establish one or more mixed straddle accounts (as defined in...

  12. Invasion of cooperators in lattice populations: linear and non-linear public good games.

    PubMed

    Vásárhelyi, Zsóka; Scheuring, István

    2013-08-01

    A generalized version of the N-person volunteer's dilemma (NVD) Game has been suggested recently for illustrating the problem of N-person social dilemmas. Using standard replicator dynamics it can be shown that coexistence of cooperators and defectors is typical in this model. However, the question of how a rare mutant cooperator could invade a population of defectors is still open. Here we examined the dynamics of individual based stochastic models of the NVD. We analyze the dynamics in well-mixed and viscous populations. We show in both cases that coexistence between cooperators and defectors is possible; moreover, spatial aggregation of types in viscous populations can easily lead to pure cooperation. Furthermore we analyze the invasion of cooperators in populations consisting predominantly of defectors. In accordance with analytical results, in deterministic systems, we found the invasion of cooperators successful in the well-mixed case only if their initial concentration was higher than a critical threshold, defined by the replicator dynamics of the NVD. In the viscous case, however, not the initial concentration but the initial number determines the success of invasion. We show that even a single mutant cooperator can invade with a high probability, because the local density of aggregated cooperators exceeds the threshold defined by the game. Comparing the results to models using different benefit functions (linear or sigmoid), we show that the role of the benefit function is much more important in the well-mixed than in the viscous case. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    PubMed

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

    Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size. t(δ) is defined as the time when the rate of the wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicates that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
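
    A plausible reading of the pMEE model and of the r-fold reduction time is sketched below in our notation; the published parameterization may differ.

    ```latex
    % Personalized mixed-effects exponential model: wound size y_{ij} of patient i at time
    % t_{ij}, with a patient-specific random deviation b_i of the healing rate.
    y_{ij} = y_{0i}\, \exp\!\big[-(k + b_i)\, t_{ij}\big] + \varepsilon_{ij},
    \qquad b_i \sim \mathcal{N}(0, \sigma_b^2), \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2),
    % under which the r-fold wound-size reduction time for patient i is
    \qquad t_{r\text{-fold},\,i} = \frac{\ln r}{k + b_i}.
    ```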

  14. Experimental and computational fluid dynamic studies of mixing for complex oral health products

    NASA Astrophysics Data System (ADS)

    Garcia, Marti Cortada; Mazzei, Luca; Angeli, Panagiota

    2015-11-01

    Mixing highly viscous non-Newtonian fluids is common in the consumer health industry. This process is often empirical and involves many pilot-plant trials which are product specific. The first step in studying the mixing process is to build knowledge of the rheology of the fluids involved. In this research a systematic approach is used to validate the rheology of two liquids: glycerol and a gel formed by polyethylene glycol and carbopol. Initially, the constitutive equation is determined, which relates the viscosity of the fluids to temperature, shear rate, and concentration. The key variable for the validation is the power required for mixing, which can be obtained both from CFD and experimentally using a stirred tank and impeller of well-defined geometries at different impeller speeds. Good agreement between the two values indicates a successful validation of the rheology and allows the CFD model to be used for the study of mixing in the complex vessel geometries and increased sizes encountered during scale-up.
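
    As an illustration of this validation step, the mixing power can be compared between experiment and CFD through the dimensionless power number, using the standard relations P = 2πNT and Np = P/(ρN³D⁵); the torque values below are invented for illustration.

    ```python
    # Compare experimental and CFD mixing power via the power number. Numbers are illustrative.
    import math

    def power_from_torque(torque_Nm: float, speed_rps: float) -> float:
        """Shaft power P = 2*pi*N*T for rotational speed N [rev/s] and torque T [N*m]."""
        return 2.0 * math.pi * speed_rps * torque_Nm

    def power_number(power_W: float, rho: float, speed_rps: float, d_impeller: float) -> float:
        """Np = P / (rho * N^3 * D^5)."""
        return power_W / (rho * speed_rps**3 * d_impeller**5)

    rho, N, D = 1260.0, 2.0, 0.10          # glycerol-like density [kg/m3], 2 rev/s, 10 cm impeller
    P_exp = power_from_torque(0.045, N)     # hypothetical measured torque
    P_cfd = power_from_torque(0.042, N)     # hypothetical CFD-integrated torque

    print("Np (experiment):", power_number(P_exp, rho, N, D))
    print("Np (CFD):       ", power_number(P_cfd, rho, N, D))
    print("relative error: ", abs(P_exp - P_cfd) / P_exp)
    ```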

  15. Stochastic models to study the impact of mixing on a fed-batch culture of Saccharomyces cerevisiae.

    PubMed

    Delvigne, F; Lejeune, A; Destain, J; Thonart, P

    2006-01-01

    The mechanisms of interaction between microorganisms and their environment in a stirred bioreactor can be modeled by a stochastic approach. The procedure comprises two submodels: a classical stochastic model for microbial cell circulation and a Markov chain model for the concentration gradient calculation. The advantage lies in the fact that the core of each submodel, i.e., the transition matrix (which contains the probabilities of shifting from one perfectly mixed compartment to another in the bioreactor representation), is identical for the two cases. This means that both the particle circulation and the fluid mixing process can be analyzed on the same modeling basis. This assumption has been validated by performing inert tracer (NaCl) and stained yeast cell dispersion experiments, which showed good agreement with simulation results. The stochastic model has been used to define a characteristic concentration profile experienced by the microorganisms during a fermentation test performed in a scale-down reactor. The concentration profiles obtained in this way can explain the scale-down effect in the case of a Saccharomyces cerevisiae fed-batch process. The simulation results are analyzed in order to give some explanation of the effect of the substrate fluctuation dynamics on S. cerevisiae.
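
    A minimal sketch of the Markov-chain ingredient is shown below: a toy three-compartment transition matrix (not the paper's bioreactor compartmentalization) is applied repeatedly to follow the homogenization of an injected tracer pulse.

    ```python
    # Column-stochastic transition matrix over perfectly mixed compartments; the same
    # matrix would drive both tracer dispersion and cell circulation in such a model.
    import numpy as np

    P = np.array([[0.80, 0.15, 0.05],
                  [0.15, 0.70, 0.15],
                  [0.05, 0.15, 0.80]])   # P[i, j]: probability of moving from j to i per step

    c = np.array([1.0, 0.0, 0.0])        # tracer pulse injected in compartment 0
    for step in range(60):
        c = P @ c
        if np.ptp(c) < 1e-3:             # stop once concentrations are nearly uniform
            print(f"mixed after {step + 1} steps:", c)
            break
    ```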

  16. Evaluation of subgrid-scale models in large-eddy simulations of turbulent flow in a centrifugal pump impeller

    NASA Astrophysics Data System (ADS)

    Yang, Zhengjun; Wang, Fujun; Zhou, Peijian

    2012-09-01

    Current research on large eddy simulation (LES) of turbulent flow in pumps mainly concentrates on applying conventional subgrid-scale (SGS) models to simulate the turbulent flow, with the aim of obtaining the flow field in the pump. The selection of the SGS model is usually not considered seriously, so the accuracy and efficiency of the simulation cannot be ensured. Three SGS models, the Smagorinsky-Lilly model, the dynamic Smagorinsky model and the dynamic mixed model, are comparatively studied using the commercial CFD code Fluent combined with its user-defined functions. The simulations are performed for the turbulent flow in a centrifugal pump impeller. The simulation results indicate that the mean flows predicted by the three SGS models agree well with experimental data obtained from detailed measurements of the flow inside the rotating passages of a six-bladed shrouded centrifugal pump impeller, performed using particle image velocimetry (PIV) and laser Doppler velocimetry (LDV). The comparison shows that the dynamic mixed model gives the most accurate results for the mean flow in the centrifugal pump impeller. The SGS stress of the dynamic mixed model is decomposed into a scale-similar part and an eddy-viscosity part. The scale-similar part of the SGS stress plays a significant role in high-curvature regions, such as the leading and trailing edges of the pump blade. It is also found that the dynamic mixed model is better adapted to computing turbulence in the pump impeller. The research results presented are useful for improving the computational accuracy and efficiency of LES for centrifugal pumps, and provide an important reference for carrying out simulations in similar fluid machinery.
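
    For reference, the eddy-viscosity closure underlying the Smagorinsky-type models and the general structure of a mixed (scale-similar plus eddy-viscosity) model can be summarized as follows; this is textbook notation, and the exact constants and filtering procedure used in the study may differ.

    ```latex
    % Eddy-viscosity (Smagorinsky) closure for the deviatoric SGS stress:
    \tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij},
    \qquad
    \nu_t = (C_s \Delta)^2\,\lvert \bar{S} \rvert,
    \qquad
    \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}} .
    % Mixed model: a scale-similar (Bardina-type) term plus a (dynamically determined)
    % eddy-viscosity term:
    \tau_{ij} \simeq
    \big( \overline{\bar{u}_i\,\bar{u}_j} - \bar{\bar{u}}_i\,\bar{\bar{u}}_j \big)
    - 2\,C\,\Delta^2 \lvert \bar{S} \rvert\, \bar{S}_{ij} .
    ```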

  17. Quantitative assessment of the flow pattern in the southern Arava Valley (Israel) by environmental tracers and a mixing cell model

    NASA Astrophysics Data System (ADS)

    Adar, E. M.; Rosenthal, E.; Issar, A. S.; Batelaan, O.

    1992-08-01

    This paper demonstrates the implementation of a novel mathematical model to quantify subsurface inflows from various sources into the arid alluvial basin of the southern Arava Valley, divided between Israel and Jordan. The model is based on the spatial distribution of environmental tracers and is intended for use in basins with a complex hydrogeological structure and/or scarce physical hydrologic information. However, a sufficient number of wells and springs is required to allow water sampling for chemical and isotopic analyses. Environmental tracers are used in a multivariable cluster analysis to define potential sources of recharge, and also to delimit homogeneous mixing compartments within the modeled aquifer. Six mixing cells were identified based on 13 constituents. A quantitative assessment of 11 significant subsurface inflows was obtained. Results revealed that the total recharge into the southern Arava basin is around 12.52 × 10⁶ m³ year⁻¹. The major source of inflow into the alluvial aquifer is the Nubian sandstone aquifer, which comprises 65-75% of the total recharge. Only 19-24% of the recharge, but the most important source of fresh water, originates over the eastern Jordanian mountains and alluvial fans.
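
    A minimal sketch of the mixing-cell idea is given below, assuming a single cell at steady state with non-negative inflow fractions estimated by constrained least squares; the tracer concentrations are invented for illustration, and the actual model solves a coupled multi-cell, multi-tracer problem.

    ```python
    # One mixing cell: tracer mass-balance rows plus a water-balance row are solved for
    # non-negative mixing fractions of candidate sources.
    import numpy as np
    from scipy.optimize import nnls

    # columns = candidate recharge sources, rows = tracers + water balance
    C_sources = np.array([[ 250.0,  900.0,  120.0],   # Cl   [mg/L]
                          [ 180.0,  600.0,   60.0],   # SO4  [mg/L]
                          [  -6.0,   -3.5,   -8.0],   # d18O [permil]
                          [   1.0,    1.0,    1.0]])  # water balance (fractions should sum to ~1)

    c_cell = np.array([420.0, 290.0, -5.4, 1.0])      # hypothetical mixed-cell composition

    fractions, residual = nnls(C_sources, c_cell)     # non-negative mixing fractions
    print("estimated source fractions:", fractions, " residual:", residual)
    ```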

  18. Coupled effects of vertical mixing and benthic grazing on phytoplankton populations in shallow, turbid estuaries

    USGS Publications Warehouse

    Koseff, Jeffrey R.; Holen, Jacqueline K.; Monismith, Stephen G.; Cloern, James E.

    1993-01-01

    Coastal ocean waters tend to have very different patterns of phytoplankton biomass variability from the open ocean, and the connections between physical variability and phytoplankton bloom dynamics are less well established for these shallow systems. Predictions of biological responses to physical variability in these environments are inherently difficult because the recurrent seasonal patterns of mixing are complicated by aperiodic fluctuations in river discharge and the high-frequency components of tidal variability. We might expect, then, less predictable and more complex bloom dynamics in these shallow coastal systems compared with the open ocean. Given this complex and dynamic physical environment, can we develop a quantitative framework to define the physical regimes necessary for bloom inception, and can we identify the important mechanisms of physical-biological coupling that lead to the initiation and termination of blooms in estuaries and shallow coastal waters? Numerical modeling provides one approach to address these questions. Here we present results of simulation experiments with a refined version of Cloern's (1991) model in which mixing processes are treated more realistically to reflect the dynamic nature of turbulence generation in estuaries. We investigated several simple models for the turbulent mixing coefficient. We found that the addition of diurnal tidal variation to Cloern's model greatly reduces biomass growth, indicating that variations of mixing on the time scale of hours are crucial. Furthermore, we found that for conditions representative of South San Francisco Bay, numerical simulations only allowed bloom development when the water column was stratified and when minimal mixing was prescribed in the upper layer. Stratification itself, however, is not sufficient to ensure that a bloom will develop: minimal wind stirring is a further prerequisite to bloom development in shallow turbid estuaries with abundant populations of benthic suspension feeders.

  19. Hawaii Ocean Mixing Experiment: Program Summary

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    It is becoming apparent that insufficient mixing occurs in the pelagic ocean to maintain the large scale thermohaline circulation. Observed mixing rates fall a factor of ten short of classical indices such as Munk's "Abyssal Recipe." The growing suspicion is that most of the mixing in the sea occurs near topography. Exciting recent observations by Polzin et al., among others, fuel this speculation. If topographic mixing is indeed important, it must be acknowledged that its geographic distribution, both laterally and vertically, is presently unknown. The vertical distribution of mixing plays a critical role in the Stommel Arons model of the ocean interior circulation. In recent numerical studies, Samelson demonstrates the extreme sensitivity of flow in the abyssal ocean to the spatial distribution of mixing. We propose to study the topographic mixing problem through an integrated program of modeling and observation. We focus on tidally forced mixing as the global energetics of this process have received (and are receiving) considerable study. Also, the well defined frequency of the forcing and the unique geometry of tidal scattering serve to focus the experiment design. The Hawaiian Ridge is selected as a study site. Strong interaction between the barotropic tide and the Ridge is known to take place. The goals of the Hawaiian Ocean Mixing Experiment (HOME) are to quantify the rate of tidal energy loss to mixing at the Ridge and to identify the mechanisms by which energy is lost and mixing generated. We are challenged to develop a sufficiently comprehensive picture that results can be generalized from Hawaii to the global ocean. To achieve these goals, investigators from five institutions have designed HOME, a program of historic data analysis, modeling and field observation. The Analysis and Modeling efforts support the design of the field experiments. As the program progresses, a global model of the barotropic (depth independent) tide, and two models of the baroclinic (depth varying) tide, all validated with near-Ridge data, will be applied, to reveal the mechanisms of tidal energy conversion along the Ridge, and allow spatial and temporal integration of the rate of conversion. Field experiments include a survey to identify "hot spots" of enhanced mixing and barotropic to baroclinic conversion, a Nearfield study identifying the dominant mechanisms responsible for topographic mixing, and a Farfield program which quantifies the barotropic energy flux convergence at the Ridge and the flux divergence associated with low mode baroclinic waves radiation. The difference is a measure of the tidal power available for mixing at the Ridge. Field work is planned from years 2000 through 2002, with analysis and modeling efforts extending through early 2006. If successful, HOME will yield an understanding of the dominant topographic mixing processes applicable throughout the global ocean. It will advance understanding of two central problems in ocean science, the maintenance of the abyssal stratification, and the dissipation of the tides. HOME data will be used to improve the parameterization of dissipation in models which presently assimilate TOPEX-POSEIDON observations. The improved understanding of the dynamics and spatial distribution of mixing processes will benefit future long-term programs such as CLIVAR.

  20. How Choice of Depth Horizon Influences the Estimated Spatial Patterns and Global Magnitude of Ocean Carbon Export Flux

    NASA Astrophysics Data System (ADS)

    Palevsky, Hilary I.; Doney, Scott C.

    2018-05-01

    Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.
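
    As a hedged illustration of why the depth horizon matters, the sketch below evaluates an idealized Martin-type flux attenuation curve at three horizons and computes the corresponding e-ratio (export flux divided by net primary production); the NPP, reference flux, horizon depths, and exponent are assumed values, not output from the cited earth system model:

```python
npp = 10.0    # net primary production, mol C m^-2 yr^-1 (assumed)
f100 = 3.0    # POC flux at the 100 m reference depth (assumed)
b = 0.86      # Martin curve attenuation exponent (a commonly cited value)

def poc_flux(z, z_ref=100.0, flux_ref=f100, exponent=b):
    """Idealized power-law attenuation of sinking POC flux with depth."""
    return flux_ref * (z / z_ref) ** (-exponent)

for name, z in [("euphotic zone base", 75.0),
                ("fixed 100 m horizon", 100.0),
                ("maximum annual mixed layer", 250.0)]:
    flux = poc_flux(z)
    print(f"{name:28s} flux={flux:.2f}  e-ratio={flux / npp:.2f}")
```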

  1. Scale dependence of entrainment-mixing mechanisms in cumulus clouds

    DOE PAGES

    Lu, Chunsong; Liu, Yangang; Niu, Shengjie; ...

    2014-12-17

    This work empirically examines the dependence of entrainment-mixing mechanisms on the averaging scale in cumulus clouds using in situ aircraft observations during the Routine Atmospheric Radiation Measurement Aerial Facility Clouds with Low Optical Water Depths Optical Radiative Observations (RACORO) field campaign. A new measure of the homogeneous mixing degree is defined that can encompass all types of mixing mechanisms. Analysis of the dependence of the homogeneous mixing degree on the averaging scale shows that, on average, the homogeneous mixing degree decreases with increasing averaging scale, suggesting that the apparent mixing mechanism gradually shifts from homogeneous mixing toward extreme inhomogeneous mixing with increasing scale. The scale dependence can be well quantified by an exponential function, providing a first attempt at developing a scale-dependent parameterization for the entrainment-mixing mechanism. The influences of three factors on the scale dependence are further examined: droplet-free filament properties (size and fraction), microphysical properties (mean volume radius and liquid water content of cloud droplet size distributions adjacent to droplet-free filaments), and relative humidity of entrained dry air. It is found that the rate at which the homogeneous mixing degree decreases with increasing averaging scale becomes larger with larger droplet-free filament size and fraction, larger mean volume radius and liquid water content, or higher relative humidity. The results underscore the necessity and possibility of considering the averaging scale in representations of entrainment-mixing processes in atmospheric models.
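
    The exponential scale dependence described above can be illustrated with a small fitting sketch; the homogeneous mixing degree values below are synthetic stand-ins rather than RACORO observations, and psi(L) = psi0 * exp(-L/L0) is just one plausible way to write such a fit:

```python
import numpy as np
from scipy.optimize import curve_fit

L = np.array([10., 20., 50., 100., 200., 500., 1000.])        # averaging scale (m), assumed
psi = np.array([0.88, 0.85, 0.80, 0.70, 0.55, 0.26, 0.07])    # mixing degree (synthetic)

def expo(L, psi0, L0):
    """Exponential decay of the homogeneous mixing degree with averaging scale."""
    return psi0 * np.exp(-L / L0)

(psi0, L0), _ = curve_fit(expo, L, psi, p0=(1.0, 300.0))
print(f"fitted psi(L) = {psi0:.2f} * exp(-L / {L0:.0f} m)")
```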

  2. Mixed Emotions Across Adulthood: When, Where, and Why?

    PubMed Central

    Charles, Susan T.; Piazza, Jennifer R.; Urban, Emily J.

    2017-01-01

    Psychologists often interpret mixed emotional experiences, defined as experiencing more than one emotion over a given period of time, as indicative of greater emotional complexity and more adaptive functioning. In the present paper, we briefly review studies that have examined these experiences across adulthood. We describe how mixed emotions have been defined in the lifespan literature, and how the various studies examining age differences in this phenomenon have yielded discrepant results. We then discuss future research directions that could clarify the nature of mixed emotions and their utility in adulthood, including the assessment of situational context, understanding when mixed emotions are adaptive in daily life, and determining how cognitive functioning is involved in these experiences. PMID:29085868

  3. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
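
    A rough sketch of the principal coordinate ridge regression idea, under simplifying assumptions (synthetic curves, a plain Euclidean distance rather than dynamic time warping, and a fixed ridge penalty and number of coordinates):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, t = 60, 50
curves = np.cumsum(rng.normal(size=(n, t)), axis=1)      # functional predictors (synthetic)
y = curves[:, -1] + rng.normal(scale=0.5, size=n)        # scalar response (synthetic)

# distance matrix among predictors (Euclidean here; the paper allows other distances)
D2 = ((curves[:, None, :] - curves[None, :, :]) ** 2).sum(-1)

# classical MDS (principal coordinates): double-centre, then eigendecompose
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)
idx = np.argsort(vals)[::-1][:10]                        # keep 10 leading coordinates
coords = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

model = Ridge(alpha=1.0).fit(coords, y)                  # ridge penalty on the coordinates
print("in-sample R^2:", round(model.score(coords, y), 3))
```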

  4. Brine evolution and mineral deposition in hydrologically open evaporite basins

    USGS Publications Warehouse

    Sanford, W.E.; Wood, W.W.

    1991-01-01

    A lumped-parameter, solute mass-balance model is developed to define the role of water outflow from a well-mixed basin. A mass-balance model is analyzed with a geochemical model designed for waters with high ionic strengths. Two typical waters, seawater and a Na-HCO3 ground water, are analyzed to illustrate the control that the leakage ratio (or hydrologic openness of the basin) has on brine evolution and the suite and thicknesses of evaporite minerals deposited. The analysis suggests that brines evolve differently under different leakage conditions. -from Authors
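
    The role of the leakage ratio can be seen in a minimal steady-state sketch of the solute bookkeeping for a well-mixed basin, assuming evaporation removes only water and all solute leaves via leakage; this illustrates the mass balance only, not the authors' high-ionic-strength geochemical model:

```python
def brine_concentration(c_in, leak_ratio):
    """Steady state of a well-mixed basin: solute in (c_in * q_in) must equal solute
    out (c_brine * q_leak), because evaporation removes only water.  With
    leak_ratio = q_leak / q_in this gives c_brine = c_in / leak_ratio."""
    return c_in / leak_ratio

for r in (0.5, 0.1, 0.01):   # hydrologically open -> nearly closed basin
    print(f"leakage ratio {r:4.2f}: brine is concentrated {brine_concentration(1.0, r):.0f}x over inflow")
```

    Smaller leakage ratios (more closed basins) allow far more concentrated brines before steady state is reached, which is why the leakage ratio controls the suite of evaporite minerals that can precipitate.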

  5. Reproduction numbers for epidemic models with households and other social structures. I. Definition and calculation of R0

    PubMed Central

    Pellis, Lorenzo; Ball, Frank; Trapman, Pieter

    2012-01-01

    The basic reproduction number R0 is one of the most important quantities in epidemiology. However, for epidemic models with explicit social structure involving small mixing units such as households, its definition is not straightforward and a wealth of other threshold parameters has appeared in the literature. In this paper, we use branching processes to define R0, we apply this definition to models with households or other more complex social structures and we provide methods for calculating it. PMID:22085761

  6. The Growing Segmentation of the Charter School Sector in North Carolina

    ERIC Educational Resources Information Center

    Ladd, Helen F.; Clotfelter, Charles T.; Holbein, John B.

    2017-01-01

    A defining characteristic of charter schools is that they introduce a strong market element into public education. In this paper, we examine through the lens of a market model the evolution of the charter school sector in North Carolina between 1999 and 2012. We examine trends in the mix of students enrolled in charter schools, the racial…

  7. Predicted Hematologic and Plasma Volume Responses Following Rapid Ascent to Progressive Altitudes

    DTIC Science & Technology

    2014-06-01

    of these changes, and define baseline demographics and physiologic descriptors that are important in predicting these changes. The overall impact of... Using general linear mixed models... accomplished using a comprehensive relational database containing individual ascent profiles, demographics, and physiologic subject descriptors as well as

  8. 24 CFR 960.403 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Mixed Population Projects § 960.403 Applicability. (a) This subpart applies to all dwelling units in mixed population projects (as defined in § 960.405), or portions of mixed population projects, assisted...

  9. 24 CFR 960.403 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Mixed Population Projects § 960.403 Applicability. (a) This subpart applies to all dwelling units in mixed population projects (as defined in § 960.405), or portions of mixed population projects, assisted...

  10. 24 CFR 960.403 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Mixed Population Projects § 960.403 Applicability. (a) This subpart applies to all dwelling units in mixed population projects (as defined in § 960.405), or portions of mixed population projects, assisted...

  11. 24 CFR 960.403 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Mixed Population Projects § 960.403 Applicability. (a) This subpart applies to all dwelling units in mixed population projects (as defined in § 960.405), or portions of mixed population projects, assisted...

  12. 24 CFR 960.403 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Mixed Population Projects § 960.403 Applicability. (a) This subpart applies to all dwelling units in mixed population projects (as defined in § 960.405), or portions of mixed population projects, assisted...

  13. Mixed integer simulation optimization for optimal hydraulic fracturing and production of shale gas fields

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Gong, B.; Wang, H. G.

    2016-08-01

    Optimal development of shale gas fields involves designing the most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design (determining well placement, number of fracturing stages, and fracture lengths) is defined by specifying a set of integer ordered blocks to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables such as bottom hole pressures or production rates for well operations are real valued. Shale gas development problems, therefore, can be mathematically formulated with mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance for a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied for mixed integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.
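
    The mixed-integer structure of the problem (integer fracturing design, continuous well controls, an expensive simulator in the loop) can be sketched as follows. This is not the DSIAS algorithm: integer stage counts are simply enumerated, a toy surrogate stands in for the reservoir simulator, and the bottom-hole pressure bounds are assumed:

```python
from scipy.optimize import minimize_scalar

def surrogate_npv(n_stages, bhp):
    """Toy production-value surrogate (NOT a reservoir simulator): diminishing returns
    in the number of stages, and a preferred bottom-hole pressure near 2500 psi."""
    return (n_stages * 10 - 0.5 * n_stages**2) * (1.0 - ((bhp - 2500.0) / 2000.0) ** 2)

best = None
for n_stages in range(1, 21):                                 # integer design variable
    res = minimize_scalar(lambda p: -surrogate_npv(n_stages, p),
                          bounds=(1000.0, 4000.0), method="bounded")  # continuous control
    cand = (surrogate_npv(n_stages, res.x), n_stages, res.x)
    best = max(best, cand) if best else cand

print(f"best: {best[1]} stages at BHP {best[2]:.0f} psi, value {best[0]:.1f}")
```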

  14. Modelling exhaust plume mixing in the near field of an aircraft

    NASA Astrophysics Data System (ADS)

    Garnier, F.; Brunet, S.; Jacquin, L.

    1997-11-01

    A simplified approach has been applied to analyse the mixing and entrainment processes of the engine exhaust through their interaction with the vortex wake of an aircraft. Our investigation is focused on the near field, extending from the exit nozzle to about 30 s after the wake is generated, in the vortex phase. This study was performed by using an integral model and a numerical simulation for two large civil aircraft: a two-engine Airbus 330 and a four-engine Boeing 747. The influence of the wing-tip vortices on the dilution ratio (defined as a tracer concentration) is shown. The mixing process is also affected by the buoyancy effect, but only after the jet regime, when the trapping in the vortex core has occurred. In the early wake, the engine jet location (i.e. inboard or outboard engine jet) has an important influence on the mixing rate. The plume streamlines inside the vortices are subject to distortion and stretching, and the role of the descent of the vortices on the maximum tracer concentration is discussed. Qualitative comparison with a contrail photograph shows similar features. Finally, tracer concentrations along the inboard engine centreline of the B-747 are compared with other theoretical analyses and measured data.

  15. Analyses and simulations of the upper ocean's response to Hurricane Felix at the Bermuda Testbed Mooring site: 13-23 August 1995

    NASA Astrophysics Data System (ADS)

    Zedler, S. E.; Dickey, T. D.; Doney, S. C.; Price, J. F.; Yu, X.; Mellor, G. L.

    2002-12-01

    The center of Hurricane Felix passed 85 km to the southwest of the Bermuda Testbed Mooring (BTM; 31°44'N, 64°10'W) site on 15 August 1995. Data collected in the upper ocean from the BTM during this encounter provide a rare opportunity to investigate the physical processes that occur in a hurricane's wake. Data analyses indicate that the storm caused a large increase in kinetic energy at near-inertial frequencies, internal gravity waves in the thermocline, and inertial pumping, mixed layer deepening, and significant vertical redistribution of heat, with cooling of the upper 30 m and warming at depths of 30-70 m. The temperature evolution was simulated using four one-dimensional mixed layer models: Price-Weller-Pinkel (PWP), K Profile Parameterization (KPP), Mellor-Yamada 2.5 (MY), and a modified version of MY2.5 (MY2). The primary differences in the model results were in their simulations of temperature evolution. In particular, when forced using a drag coefficient that had a linear dependence on wind speed, the KPP model predicted sea surface cooling, mixed layer currents, and the maximum depth of cooling closer to the observations than any of the other models. This was shown to be partly because of a special parameterization for gradient Richardson number (RgKPP) shear instability mixing in response to resolved shear in the interior. The MY2 model predicted more sea surface cooling and greater depth penetration of kinetic energy than the MY model. In the MY2 model the dissipation rate of turbulent kinetic energy is parameterized as a function of a locally defined Richardson number (RgMY2) allowing for a reduction in dissipation rate for stable Richardson numbers (RgMY2) when internal gravity waves are likely to be present. Sensitivity simulations with the PWP model, which has specifically defined mixing procedures, show that most of the heat lost from the upper layer was due to entrainment (parameterized as a function of bulk Richardson number RbPWP), with the remainder due to local Richardson number (RgPWP) instabilities. With the exception of the MY model the models predicted reasonable estimates of the north and east current components during and after the hurricane passage at 25 and 45 m. Although the results emphasize differences between the modeled responses to a given wind stress, current controversy over the formulation of wind stress from wind speed measurements (including possible sea state and wave age and sheltering effects) cautions against using our results for assessing model skill. In particular, sensitivity studies show that MY2 simulations of the temperature evolution are excellent when the wind stress is increased, albeit with currents that are larger than observed. Sensitivity experiments also indicate that preexisting inertial motion modulated the amplitude of poststorm currents, but that there was probably not a significant resonant response because of clockwise wind rotation for our study site.
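
    As one concrete piece of the machinery mentioned above, the sketch below evaluates the bulk Richardson number used by PWP-type mixed layer models as an entrainment criterion (entrainment proceeds when Rb falls below a critical value, commonly taken near 0.65); the density jump, shear values, and mixed layer depth are illustrative assumptions, not BTM observations:

```python
G = 9.81        # gravitational acceleration (m/s^2)
RHO0 = 1025.0   # reference seawater density (kg/m^3)

def bulk_richardson(drho, du, h):
    """drho: density jump at the mixed layer base (kg/m^3); du: velocity jump across
    the base (m/s); h: mixed layer depth (m)."""
    return G * drho * h / (RHO0 * du**2 + 1e-12)

for du in (0.2, 0.5, 1.0):   # increasing storm-driven shear across the mixed layer base
    rb = bulk_richardson(drho=0.5, du=du, h=30.0)
    print(f"du={du:.1f} m/s -> Rb={rb:.2f} ({'entrainment' if rb < 0.65 else 'stable'})")
```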

  16. MO200: a model for evaluation safeguards through material accountability for a 200 tonne per year mixed-oxide fuel-rod fabrication plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandborn, R.H.

    1976-01-01

    M0200, a computer simulation model, was used to investigate the safeguarding of plutonium dioxide. The computer program operating the model was constructed so that replicate runs could provide data for statistical analysis of the distributions of the randomized variables. The plant model was divided into material balance areas associated with definable unit processes. Indicators of plant operations studied were modified end-of-shift material balances, end-of-blend errors formed by closing material balances between blends, and cumulative sums of the differences between actual and expected performances. (auth)

  17. Predicting the temporal and spatial probability of orographic cloud cover in the Luquillo Experimental Forest in Puerto Rico using generalized linear (mixed) models.

    Treesearch

    Wei Wu; Charles Hall; Lianjun Zhang

    2006-01-01

    We predicted the spatial pattern of hourly probability of cloud cover in the Luquillo Experimental Forest (LEF) in North-Eastern Puerto Rico using four different models. The probability of cloud cover (defined as “the percentage of the area covered by clouds in each pixel on the map” in this paper) at any hour and any place is a function of three topographic variables...
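
    A hedged sketch of the simplest of the model types named above, a (non-mixed) binomial GLM for cloud-cover probability as a function of three topographic variables; the predictor names, coefficients, and data are hypothetical, not the LEF dataset or the authors' fitted models:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
elevation = rng.uniform(100, 1000, n)   # m (hypothetical)
slope = rng.uniform(0, 45, n)           # degrees (hypothetical)
aspect = rng.uniform(0, 360, n)         # degrees (hypothetical)

# synthetic "truth": cloud probability rises with elevation
p_true = 1.0 / (1.0 + np.exp(-(-4.0 + 0.006 * elevation)))
cloud = rng.binomial(1, p_true)

X = sm.add_constant(np.column_stack([elevation, slope, aspect]))
glm = sm.GLM(cloud, X, family=sm.families.Binomial()).fit()
print(glm.params)   # intercept and coefficients on the three topographic variables

# predicted cloud-cover probability at a new (hypothetical) location
x_new = sm.add_constant(np.array([[800.0, 10.0, 180.0]]), has_constant="add")
print("predicted cloud probability:", float(glm.predict(x_new)[0]))
```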

  18. Treatment recommendations for DSM-5-defined mixed features.

    PubMed

    Rosenblat, Joshua D; McIntyre, Roger S

    2017-04-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) mixed features specifier provides a less restrictive definition of mixed mood states, compared to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR), including mood episodes that manifest with subthreshold symptoms of the opposite mood state. A limited number of studies have assessed the efficacy of treatments specifically for DSM-5-defined mixed features in mood disorders. As such, there is currently an inadequate amount of data to appropriately inform evidence-based treatment guidelines of DSM-5 defined mixed features. However, given the high prevalence and morbidity of mixed features, treatment recommendations based on the currently available evidence along with expert opinion may be of benefit. This article serves to provide these interim treatment recommendations while humbly acknowledging the limited amount of evidence currently available. Second-generation antipsychotics (SGAs) appear to have the greatest promise in the treatment of bipolar disorder (BD) with mixed features. Conventional mood stabilizing agents (ie, lithium and divalproex) may also be of benefit; however, they have been inadequately studied. In the treatment of major depressive disorder (MDD) with mixed features, the comparable efficacy of antidepressants versus other treatments, such as SGAs, remains unknown. As such, antidepressants remain first-line treatment of MDD with or without mixed features; however, there are significant safety concerns associated with antidepressant monotherapy when mixed features are present, which merits increased monitoring. Lurasidone is the only SGA monotherapy that has been shown to be efficacious specifically in the treatment of MDD with mixed features. Further research is needed to accurately determine the efficacy, safety, and tolerability of treatments specifically for mood episodes with mixed features to adequately inform future treatment guidelines.

  19. Forest gradient response in Sierran landscapes: the physical template

    USGS Publications Warehouse

    Urban, Dean L.; Miller, Carol; Halpin, Patrick N.; Stephenson, Nathan L.

    2000-01-01

    Vegetation pattern on landscapes is the manifestation of physical gradients, biotic response to these gradients, and disturbances. Here we focus on the physical template as it governs the distribution of mixed-conifer forests in California's Sierra Nevada. We extended a forest simulation model to examine montane environmental gradients, emphasizing factors affecting the water balance in these summer-dry landscapes. The model simulates the soil moisture regime in terms of the interaction of water supply and demand: supply depends on precipitation and water storage, while evapotranspirational demand varies with solar radiation and temperature. The forest cover itself can affect the water balance via canopy interception and evapotranspiration. We simulated Sierran forests as slope facets, defined as gridded stands of homogeneous topographic exposure, and verified simulated gradient response against sample quadrats distributed across Sequoia National Park. We then performed a modified sensitivity analysis of abiotic factors governing the physical gradient. Importantly, the model's sensitivity to temperature, precipitation, and soil depth varies considerably over the physical template, particularly relative to elevation. The physical drivers of the water balance have characteristic spatial scales that differ by orders of magnitude. Across large spatial extents, temperature and precipitation as defined by elevation primarily govern the location of the mixed conifer zone. If the analysis is constrained to elevations within the mixed-conifer zone, local topography comes into play as it influences drainage. Soil depth varies considerably at all measured scales, and is especially dominant at fine (within-stand) scales. Physical site variables can influence soil moisture deficit either by affecting water supply or water demand; these effects have qualitatively different implications for forest response. These results have clear implications about purely inferential approaches to gradient analysis, and bear strongly on our ability to use correlative approaches in assessing the potential responses of montane forests to anthropogenic climatic change.

  20. Apparatus for mixing fuel in a gas turbine nozzle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Carl Robert

    A fuel nozzle in a combustion turbine engine that includes: a fuel plenum defined between a circumferentially extending shroud and axially by a forward tube-sheet and an aft tube-sheet; and a mixing-tube that extends across the fuel plenum that defines a passageway connecting an inlet formed through the forward tube-sheet and an outlet formed through the aft tube-sheet, the mixing-tube comprising one or more fuel ports that fluidly communicate with the fuel plenum. The mixing-tube may include grooves on an outer surface, and be attached to the forward tube-sheet by a connection having a fail-safe leakage path.

  1. Coaxial fuel and air premixer for a gas turbine combustor

    DOEpatents

    York, William D; Ziminsky, Willy S; Lacy, Benjamin P

    2013-05-21

    An air/fuel premixer comprising a peripheral wall defining a mixing chamber, a nozzle disposed at least partially within the peripheral wall comprising an outer annular wall spaced from the peripheral wall so as to define an outer air passage between the peripheral wall and the outer annular wall, an inner annular wall disposed at least partially within and spaced from the outer annular wall, so as to define an inner air passage, and at least one fuel gas annulus between the outer annular wall and the inner annular wall, the at least one fuel gas annulus defining at least one fuel gas passage, at least one air inlet for introducing air through the inner air passage and the outer air passage to the mixing chamber, and at least one fuel inlet for injecting fuel through the fuel gas passage to the mixing chamber to form an air/fuel mixture.

  2. Parameterization of large-scale turbulent diffusion in the presence of both well-mixed and weakly mixed patchy layers

    NASA Astrophysics Data System (ADS)

    Osman, M. K.; Hocking, W. K.; Tarasick, D. W.

    2016-06-01

    Vertical diffusion and mixing of tracers in the upper troposphere and lower stratosphere (UTLS) are not uniform, but primarily occur due to patches of turbulence that are intermittent in time and space. The effective diffusivity of regions of patchy turbulence is related to statistical parameters describing the morphology of turbulent events, such as lifetime, number, width, depth and local diffusivity (i.e., diffusivity within the turbulent patch) of the patches. While this has been recognized in the literature, the primary focus has been on well-mixed layers, with few exceptions. In such cases the local diffusivity is irrelevant, but this is not true for weakly and partially mixed layers. Here, we use both theory and numerical simulations to consider the impact of intermediate and weakly mixed layers, in addition to well-mixed layers. Previous approaches have considered only one dimension (vertical), and only a small number of layers (often one at each time step), and have examined mixing of constituents. We consider a two-dimensional case, with multiple layers (10 and more, up to hundreds and even thousands), having well-defined, non-infinite, lengths and depths. We then provide new formulas to describe cases involving well-mixed layers which supersede earlier expressions. In addition, we look in detail at layers that are not well mixed, and, as an interesting variation on previous models, our procedure is based on tracking the dispersion of individual particles, which is quite different to the earlier approaches which looked at mixing of constituents. We develop an expression which allows determination of the degree of mixing, and show that layers used in some previous models were in fact not well mixed and so produced erroneous results. We then develop a generalized model based on two-dimensional random-walk theory employing Rayleigh distributions which allows us to develop a universal formula for diffusion rates for multiple two-dimensional layers with general degrees of mixing. We show that it is the largest, most vigorous and less common turbulent layers that make the major contribution to global diffusion. Finally, we make estimates of global-scale diffusion coefficients in the lower stratosphere and upper troposphere. For the lower stratosphere, κeff ≈ 2 × 10⁻² m² s⁻¹, assuming no other processes contribute to large-scale diffusion.
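
    The particle-tracking idea described above can be caricatured in a few lines: particles take vertical random-walk steps only while inside intermittent turbulent patches, and the effective diffusivity is estimated from the growth of their spread. The patch fraction and local diffusivity below are assumptions chosen only so the toy estimate lands near the quoted order of magnitude; the expected result is simply the local diffusivity times the fraction of time spent in turbulence:

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps, dt = 5000, 2000, 10.0   # dt in seconds
k_local = 1.0                                 # diffusivity inside a patch (m^2/s), assumed
patch_fraction = 0.02                         # fraction of time a particle sits in turbulence, assumed

z = np.zeros(n_particles)
for _ in range(n_steps):
    in_patch = rng.random(n_particles) < patch_fraction
    # random-walk step with variance 2*k_local*dt, applied only while inside a patch
    z += in_patch * rng.normal(scale=np.sqrt(2.0 * k_local * dt), size=n_particles)

t_total = n_steps * dt
k_eff = z.var() / (2.0 * t_total)
print(f"effective diffusivity ~ {k_eff:.1e} m^2/s "
      f"(expected k_local * patch_fraction = {k_local * patch_fraction:.1e})")
```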

  3. Predicting the mixed-mode I/II spatial damage propagation along 3D-printed soft interfacial layer via a hyperelastic softening model

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Li, Yaning

    2018-07-01

    A methodology was developed to use a hyperelastic softening model to predict the constitutive behavior and the spatial damage propagation of nonlinear materials with damage-induced softening under mixed-mode loading. A user subroutine (ABAQUS/VUMAT) was developed for numerical implementation of the model. A 3D-printed wavy soft rubbery interfacial layer was used as a material system to verify and validate the methodology. The Arruda-Boyce hyperelastic model is combined with the softening model to capture the nonlinear pre- and post-damage behavior of the interfacial layer under mixed Mode I/II loads. To characterize model parameters of the 3D-printed rubbery interfacial layer, a series of scarf-joint specimens were designed, which enabled systematic variation of stress triaxiality via a single geometric parameter, the slant angle. It was found that the important model parameter m is exponentially related to the stress triaxiality. Compact tension specimens of the sinusoidal wavy interfacial layer with different waviness were designed and fabricated via multi-material 3D printing. Finite element (FE) simulations were conducted to predict the spatial damage propagation of the material within the wavy interfacial layer. Compact tension experiments were performed to verify the model prediction. The results show that the model developed is able to accurately predict the damage propagation of the 3D-printed rubbery interfacial layer under complicated stress states without pre-defined failure criteria.

  4. Identifying ontogenetic, environmental and individual components of forest tree growth

    PubMed Central

    Chaubert-Pereira, Florence; Caraglio, Yves; Lavergne, Christian; Guédon, Yann

    2009-01-01

    Background and Aims This study aimed to identify and characterize the ontogenetic, environmental and individual components of forest tree growth. In the proposed approach, the tree growth data typically correspond to the retrospective measurement of annual shoot characteristics (e.g. length) along the trunk. Methods Dedicated statistical models (semi-Markov switching linear mixed models) were applied to data sets of Corsican pine and sessile oak. In the semi-Markov switching linear mixed models estimated from these data sets, the underlying semi-Markov chain represents both the succession of growth phases and their lengths, while the linear mixed models represent both the influence of climatic factors and the inter-individual heterogeneity within each growth phase. Key Results On the basis of these integrative statistical models, it is shown that growth phases are not only defined by average growth level but also by growth fluctuation amplitudes in response to climatic factors and inter-individual heterogeneity and that the individual tree status within the population may change between phases. Species plasticity affected the response to climatic factors while tree origin, sampling strategy and silvicultural interventions impacted inter-individual heterogeneity. Conclusions The transposition of the proposed integrative statistical modelling approach to cambial growth in relation to climatic factors and the study of the relationship between apical growth and cambial growth constitute the next steps in this research. PMID:19684021

  5. Determination of timescales of nitrate contamination by groundwater age models in a complex aquifer system

    NASA Astrophysics Data System (ADS)

    Koh, E. H.; Lee, E.; Kaown, D.; Lee, K. K.; Green, C. T.

    2017-12-01

    The timing and magnitude of nitrate contamination are determined by various factors such as contaminant loading, recharge characteristics, and the geologic system. The elapsed time since recharged water entered the aquifer and traveled to a given outlet location, defined as the groundwater age, provides indirect information on the hydrologic characteristics of the aquifer system. Three major methods are used to date groundwater (apparent ages, lumped parameter models, and numerical models), and they characterize differently the groundwater mixing that results from the various flow pathways in a heterogeneous aquifer system. In this study, we therefore compared the three age models in a complex aquifer system using observed age tracer data and a reconstructed history of nitrate contamination from long-term source loading. The 3H-3He and CFC-12 apparent ages, which do not account for groundwater mixing, gave the most delayed response time and suggested that the period of highest nitrate loading had not yet reached the outlet. The lumped parameter model, in contrast, produced a more recent loading response than the apparent ages, with the peak loading period already influencing water quality. The numerical model could delineate the various groundwater mixing components and their different impacts on nitrate dynamics in the complex aquifer system. The different age estimation methods thus lead to variations in the estimated contaminant loading history, with the discrepancies among them most pronounced in the complex aquifer system.
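
    One of the three approaches, a lumped parameter model, can be illustrated by convolving a loading history with an exponential travel-time distribution; the loading curve and mean age below are synthetic assumptions, not the study-site data:

```python
import numpy as np

years = np.arange(1950, 2021)
loading = np.clip((years - 1960) * 0.5, 0, 25)   # synthetic nitrate input (mg/L): rising, then flat
mean_age = 20.0                                  # mean travel time (yr), assumed

def exponential_mixed(conc_in, t_out, mean_age):
    """Concentration at the outlet in year t_out, weighting past inputs by an
    exponential age distribution (one common lumped parameter model)."""
    ages = t_out - years[years <= t_out]
    weights = np.exp(-ages / mean_age) / mean_age
    weights /= weights.sum()
    return np.sum(conc_in[years <= t_out] * weights)

for t in (1980, 2000, 2020):
    print(f"{t}: input {loading[years == t][0]:.1f} mg/L, "
          f"outlet responds with {exponential_mixed(loading, t, mean_age):.1f} mg/L")
```

    The mixed response lags and smooths the input history, which is why a lumped parameter model infers a more recent (and still arriving) loading signal than unmixed apparent ages do.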

  6. Assessing the interruption of the transmission of human helminths with mass drug administration alone: optimizing the design of cluster randomized trials.

    PubMed

    Anderson, Roy; Farrell, Sam; Turner, Hugo; Walson, Judd; Donnelly, Christl A; Truscott, James

    2017-02-17

    A method is outlined for the use of an individual-based stochastic model of parasite transmission dynamics to assess different designs for a cluster randomized trial in which mass drug administration (MDA) is employed in attempts to eliminate the transmission of soil-transmitted helminths (STH) in defined geographic locations. The hypothesis to be tested is: Can MDA alone interrupt the transmission of STH species in defined settings? Clustering is at a village level and the choice of clusters of villages is stratified by transmission intensity (low, medium and high) and parasite species mix (either Ascaris, Trichuris or hookworm dominant). The methodological approach first uses an age-structured deterministic model to predict the MDA coverage required for treating pre-school aged children (Pre-SAC), school aged children (SAC) and adults (Adults) to eliminate transmission (crossing the breakpoint in transmission created by sexual mating in dioecious helminths) with 3 rounds of annual MDA. Stochastic individual-based models are then used to calculate the positive and negative predictive values (PPV and NPV, respectively, for observing elimination or the bounce back of infection) for a defined prevalence of infection 2 years post the cessation of MDA. For the arm only involving the treatment of Pre-SAC and SAC, the failure rate is predicted to be very high (particularly for hookworm-infected villages) unless transmission intensity is very low (R 0 , or the effective reproductive number R, just above unity in value). The calculations are designed to consider various trial arms and stratifications; namely, community-based treatment and Pre-SAC and SAC only treatment (the two arms of the trial), different STH transmission settings of low, medium and high, and different STH species mixes. Results are considered in the light of the complications introduced by the choice of statistic to define success or failure, varying adherence to treatment, migration and parameter uncertainty.

  7. Comparing the appropriate geographic region for assessing built environmental correlates with walking trips using different metrics and model approaches

    PubMed Central

    Tribby, Calvin P.; Miller, Harvey J.; Brown, Barbara B.; Smith, Ken R.; Werner, Carol M.

    2017-01-01

    There is growing international evidence that supportive built environments encourage active travel such as walking. An unsettled question is the role of geographic regions for analyzing the relationship between the built environment and active travel. This paper examines the geographic region question by assessing walking trip models that use two different regions: walking activity spaces and self-defined neighborhoods. We also use two types of built environment metrics, perceived and audit data, and two types of study design, cross-sectional and longitudinal, to assess these regions. We find that the built environment associations with walking are dependent on the type of metric and the type of model. Audit measures summarized within walking activity spaces better explain walking trips compared to audit measures within self-defined neighborhoods. Perceived measures summarized within self-defined neighborhoods have mixed results. Finally, results differ based on study design. This suggests that results may not be comparable among different regions, metrics and designs; researchers need to consider carefully these choices when assessing active travel correlates. PMID:28237743

  8. Prenatal and early life stress and risk of eating disorders in adolescent girls and young women.

    PubMed

    Su, Xiujuan; Liang, Hong; Yuan, Wei; Olsen, Jørn; Cnattingius, Sven; Li, Jiong

    2016-11-01

    Females are more likely than males to develop eating disorders (EDs) in adolescence and youth, and the etiology remains unclear. We aimed to estimate the effect of severe early life stress following bereavement, the death of a close relative, on the risk of EDs among females aged 10-26 years. This population-based cohort study included girls born in Denmark (from 1973 to 2000) or Sweden (from 1970 to 1997). Girls were categorized as exposed if they were born to mothers who lost a close relative 1 year prior to or during pregnancy or if the girl herself lost a parent or a sibling within the first 10 years of life. All other girls were included in the unexposed group. An ED case was defined by a diagnosis of EDs at ages of 10-26 years, including broadly defined bulimia nervosa, broadly defined anorexia nervosa and mixed EDs. Poisson regression models were used to estimate the incidence rate ratio (IRR) between the exposed and unexposed groups. A total of 64453 (3.05%) girls were included in the exposed group. We identified 9477 girls with a diagnosis of EDs, of whom 307 (3.24%) were from the exposed group. Both prenatal and postnatal exposure following bereavement by unexpected death was associated with an increased overall risk of EDs (IRR prenatal: 1.49, 95% CI: 1.01-2.19 and IRR postnatal: 1.34, 95% CI: 1.05-1.71). We observed similar results for the subtypes of broadly defined bulimia nervosa (IRR: 2.47, 95% CI: 1.67-3.65) and mixed EDs (IRR: 1.45, 95% CI: 1.02-2.07). Our findings suggest that prenatal and early postnatal life stress due to the unexpected death of a close relative is associated with an increased overall risk of eating disorders in adolescent girls and young women. The increased risk might be driven mainly by differences in broadly defined bulimia nervosa and mixed eating disorders, but not broadly defined anorexia nervosa.

  9. Weakening gravity on redshift-survey scales with kinetic matter mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Amico, Guido; Huang, Zhiqi; Mancarella, Michele

    We explore general scalar-tensor models in the presence of a kinetic mixing between matter and the scalar field, which we call Kinetic Matter Mixing. In the frame where gravity is de-mixed from the scalar, this is due to disformal couplings of matter species to the gravitational sector, with disformal coefficients that depend on the gradient of the scalar field. In the frame where matter is minimally coupled, it originates from the so-called beyond Horndeski quadratic Lagrangian. We extend the Effective Theory of Interacting Dark Energy by allowing disformal coupling coefficients to depend on the gradient of the scalar field as well. In this very general approach, we derive the conditions to avoid ghost and gradient instabilities and we define Kinetic Matter Mixing independently of the frame metric used to describe the action. We study its phenomenological consequences for a ΛCDM background evolution, first analytically on small scales. Then, we compute the matter power spectrum and the angular spectra of the CMB anisotropies and the CMB lensing potential, on all scales. We employ the public version of COOP, a numerical Einstein-Boltzmann solver that implements very general scalar-tensor modifications of gravity. Rather uniquely, Kinetic Matter Mixing weakens gravity on short scales, predicting a lower σ8 with respect to the ΛCDM case. We propose this as a possible solution to the tension between the CMB best-fit model and low-redshift observables.

  10. Medicaid payment rates, case-mix reimbursement, and nursing home staffing--1996-2004.

    PubMed

    Feng, Zhanlian; Grabowski, David C; Intrator, Orna; Zinn, Jacqueline; Mor, Vincent

    2008-01-01

    We examined the impact of state Medicaid payment rates and case-mix reimbursement on direct care staffing levels in US nursing homes. We used a recent time series of national nursing home data from the Online Survey Certification and Reporting system for 1996-2004, merged with annual state Medicaid payment rates and case-mix reimbursement information. A 5-category response measure of total staffing levels was defined according to expert recommended thresholds, and examined in a multinomial logistic regression model. Facility fixed-effects models were estimated separately for Registered Nurse (RN), Licensed Practical Nurse (LPN), and Certified Nurse Aide (CNA) staffing levels measured as average hours per resident day. Higher Medicaid payment rates were associated with increases in total staffing levels to meet a higher recommended threshold. However, these gains in overall staffing were accompanied by a reduction of RN staffing and an increase in both LPN and CNA staffing levels. Under case-mix reimbursement, the likelihood of nursing homes achieving higher recommended staffing thresholds decreased, as did levels of professional staffing. Independent of the effects of state, market, and facility characteristics, there was a significant downward trend in RN staffing and an upward trend in both LPN and CNA staffing. Although overall staffing may increase in response to more generous Medicaid reimbursement, it may not translate into improvements in the skill mix of staff. Adjusting for reimbursement levels and resident acuity, total staffing has not increased after the implementation of case-mix reimbursement.

  11. Frontolysis by surface heat flux in the Agulhas Return Current region with a focus on mixed layer processes: observation and a high-resolution CGCM

    NASA Astrophysics Data System (ADS)

    Ohishi, Shun; Tozuka, Tomoki; Komori, Nobumasa

    2016-12-01

    Detailed mechanisms for frontogenesis/frontolysis of the Agulhas Return Current (ARC) Front, defined as the maximum of the meridional sea surface temperature (SST) gradient at each longitude within the ARC region (40°-50°E, 55°-35°S), are investigated using observational datasets. Due to larger (smaller) latent heat release to the atmosphere on the northern (southern) side of the front, the meridional gradient of surface net heat flux (NHF) is found throughout the year. In austral summer, surface warming is weaker (stronger) on the northern (southern) side, and thus the NHF tends to relax the SST front. The weaker (stronger) surface warming, at the same time, leads to the deeper (shallower) mixed layer on the northern (southern) side. This enhances the frontolysis, because deeper (shallower) mixed layer is less (more) sensitive to surface warming. In austral winter, stronger (weaker) surface cooling on the northern (southern) side contributes to the frontolysis. However, deeper (shallower) mixed layer is induced by stronger (weaker) surface cooling on the northern (southern) side and suppresses the frontolysis, because the deeper (shallower) mixed layer is less (more) sensitive to surface cooling. Therefore, the frontolysis by the NHF becomes stronger (weaker) through the mixed layer processes in austral summer (winter). The cause of the meridional gradient of mixed layer depth is estimated using diagnostic entrainment velocity and the Monin-Obukhov depth. Furthermore, the above mechanisms obtained from the observation are confirmed using outputs from a high-resolution coupled general circulation model. Causes of model biases are also discussed.

  12. Nonideal Rayleigh–Taylor mixing

    PubMed Central

    Lim, Hyunkyung; Iwerks, Justin; Glimm, James; Sharp, David H.

    2010-01-01

    Rayleigh–Taylor mixing is a classical hydrodynamic instability that occurs when a light fluid pushes against a heavy fluid. The two main sources of nonideal behavior in Rayleigh–Taylor (RT) mixing are regularizations (physical and numerical), which produce deviations from a pure Euler equation, scale invariant formulation, and nonideal (i.e., experimental) initial conditions. The Kolmogorov theory of turbulence predicts stirring at all length scales for the Euler fluid equations without regularization. We interpret mathematical theories of existence and nonuniqueness in this context, and we provide numerical evidence for dependence of the RT mixing rate on nonideal regularizations; in other words, indeterminacy when modeled by Euler equations. Operationally, indeterminacy shows up as nonunique solutions for RT mixing, parametrized by Schmidt and Prandtl numbers, in the large Reynolds number (Euler equation) limit. Verification and validation evidence is presented for the large eddy simulation algorithm used here. Mesh convergence depends on breaking the nonuniqueness with explicit use of the laminar Schmidt and Prandtl numbers and their turbulent counterparts, defined in terms of subgrid scale models. The dependence of the mixing rate on the Schmidt and Prandtl numbers and other physical parameters will be illustrated. We demonstrate numerically the influence of initial conditions on the mixing rate. Both the dominant short wavelength initial conditions and long wavelength perturbations are observed to play a role. By examination of two classes of experiments, we observe the absence of a single universal explanation, with long and short wavelength initial conditions, and the various physical and numerical regularizations contributing in different proportions in these two different contexts. PMID:20615983

  13. Seasonal cycle of oceanic mixed layer and upper-ocean heat fluxes in the Mediterranean Sea from in-situ observations.

    NASA Astrophysics Data System (ADS)

    Houpert, Loïc; Testor, Pierre; Durrieu de Madron, Xavier; Estournel, Claude; D'Ortenzio, Fabrizio

    2013-04-01

    Heat fluxes across the ocean-atmosphere interface play a crucial role in upper-ocean turbulent mixing. The depth reached by this turbulent mixing is indicated by a homogenization of seawater properties in the surface layer and is defined as the Mixed Layer Depth (MLD). The thickness of the mixed layer also determines the heat content of the layer that directly interacts with the atmosphere. The seasonal variability of these air-sea fluxes is crucial in the calculation of the heat budget. An improvement in the estimate of these fluxes is needed for a better understanding of Mediterranean ocean circulation and climate, in particular in Regional Climate Models. There are few estimates of surface heat fluxes based on oceanic observations in the Mediterranean, and none of them are based on mixed layer observations. We therefore propose new estimates of these upper-ocean heat fluxes based on the mixed layer. We present a high-resolution (0.5°) Mediterranean climatology of the mean MLD based on a comprehensive collection of temperature profiles from the last 43 years (1969-2012). The database includes more than 150,000 profiles, merging CTD, XBT, Argo profiling float, and glider observations. This dataset is first used to describe the seasonal cycle of the mixed layer depth over the whole Mediterranean on a monthly climatological basis. Our analysis discriminates several regions with coherent behaviors, in particular the deep water formation sites, characterized by significant differences in winter mixing intensity. Heat storage rates (HSR) were calculated as the time rate of change of the heat content integrated from the surface down to a specific depth defined as the MLD plus an integration constant. The monthly climatology of net heat flux (NHF) from the ERA-Interim reanalysis was balanced against the 1° × 1° heat storage rate climatology. The local heat budget balance and the seasonal variability of the horizontal heat flux are then discussed, taking into account uncertainties due to errors in the monthly estimates and to intra-annual and inter-annual variability.

  14. 7 CFR 810.801 - Definition of mixed grain.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Definition of mixed grain. 810.801 Section 810.801 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... GRAIN United States Standards for Mixed Grain Terms Defined § 810.801 Definition of mixed grain. Any...

  15. 7 CFR 810.801 - Definition of mixed grain.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Definition of mixed grain. 810.801 Section 810.801 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... GRAIN United States Standards for Mixed Grain Terms Defined § 810.801 Definition of mixed grain. Any...

  16. 7 CFR 810.801 - Definition of mixed grain.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Definition of mixed grain. 810.801 Section 810.801 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... GRAIN United States Standards for Mixed Grain Terms Defined § 810.801 Definition of mixed grain. Any...

  17. 7 CFR 810.801 - Definition of mixed grain.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Definition of mixed grain. 810.801 Section 810.801 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... GRAIN United States Standards for Mixed Grain Terms Defined § 810.801 Definition of mixed grain. Any...

  18. 7 CFR 810.801 - Definition of mixed grain.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Definition of mixed grain. 810.801 Section 810.801 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... GRAIN United States Standards for Mixed Grain Terms Defined § 810.801 Definition of mixed grain. Any...

  19. Use of multispectral satellite remote sensing to assess mixing of suspended sediment downstream of large river confluences

    NASA Astrophysics Data System (ADS)

    Umar, M.; Rhoads, Bruce L.; Greenberg, Jonathan A.

    2018-01-01

    Although past work has noted that contrasts in turbidity often are detectable on remotely sensed images of rivers downstream from confluences, no systematic methodology has been developed for assessing mixing over distance of confluent flows with differing surficial suspended sediment concentrations (SSSC). In contrast to field measurements of mixing below confluences, satellite remote-sensing can provide detailed information on spatial distributions of SSSC over long distances. This paper presents a methodology that uses remote-sensing data to estimate spatial patterns of SSSC downstream of confluences along large rivers and to determine changes in the amount of mixing over distance from confluences. The method develops a calibrated Random Forest (RF) model by relating training SSSC data from river gaging stations to derived spectral indices for the pixels corresponding to gaging-station locations. The calibrated model is then used to predict SSSC values for every river pixel in a remotely sensed image, which provides the basis for mapping of spatial variability in SSSCs along the river. The pixel data are used to estimate average surficial values of SSSC at cross sections spaced uniformly along the river. Based on the cross-section data, a mixing metric is computed for each cross section. The spatial pattern of change in this metric over distance can be used to define rates and length scales of surficial mixing of suspended sediment downstream of a confluence. This type of information is useful for exploring the potential influence of various controlling factors on mixing downstream of confluences, for evaluating how mixing in a river system varies over time and space, and for determining how these variations influence water quality and ecological conditions along the river.
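
    A skeleton of the workflow described above, with hypothetical variable names and synthetic numbers standing in for the imagery and gaging-station data; the coefficient of variation across a cross section is used here as one simple choice of mixing metric, not necessarily the metric used in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# training table: spectral indices at pixels co-located with gaging stations (synthetic)
X_train = rng.uniform(0, 1, size=(200, 4))                        # e.g. band ratios
y_train = 50 + 300 * X_train[:, 0] + rng.normal(0, 10, 200)       # SSSC in mg/L

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# predict SSSC for every river pixel, then group pixels by cross section downstream
X_pixels = rng.uniform(0, 1, size=(5000, 4))
sssc = rf.predict(X_pixels)
section_id = rng.integers(0, 50, size=5000)   # which cross section each pixel belongs to

for s in (0, 25, 49):
    vals = sssc[section_id == s]
    mixing_metric = vals.std() / vals.mean()  # coefficient of variation across the section
    print(f"cross section {s:2d}: mean SSSC {vals.mean():.0f} mg/L, mixing metric {mixing_metric:.2f}")
```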

  20. Perception: a concept analysis.

    PubMed

    McDonald, Susan M

    2012-02-01

    Concept analysis methodology by Walker and Avant (2005) was used to define, describe, and delimit the concept of perception. Nursing literature in the Medline database was searched for definitions of "perception." Definitions, uses, and defining attributes of perception were identified; model and contrary cases were developed; and antecedents, consequences, and empirical referents were determined. An operational definition for the concept was developed. Nurses need to be cognizant of how perceptual differences impact the delivery of nursing care. In research, a mixed methodology approach may yield a richer description of the phenomenon and provide useful information for clinical practice. © 2011, The Author. International Journal of Nursing Knowledge © 2011, NANDA International.

  1. Family nonuniversal Z' models with protected flavor-changing interactions

    NASA Astrophysics Data System (ADS)

    Celis, Alejandro; Fuentes-Martín, Javier; Jung, Martin; Serôdio, Hugo

    2015-07-01

    We define a new class of Z' models with neutral flavor-changing interactions at tree level in the down-quark sector. They are related in an exact way to elements of the quark mixing matrix due to an underlying flavored U(1)' gauge symmetry, rendering these models particularly predictive. The same symmetry implies lepton-flavor nonuniversal couplings, fully determined by the gauge structure of the model. Our models allow us to address presently observed deviations from the standard model, and the specific correlations among the new physics contributions to the Wilson coefficients C9,10(')ℓ can be tested in b → s ℓ+ℓ- transitions. We furthermore predict lepton-universality violations in Z' decays, testable at the LHC.

  2. Gravity Wave Mixing and Effective Diffusivity for Minor Chemical Constituents in the Mesosphere/Lower Thermosphere

    NASA Astrophysics Data System (ADS)

    Grygalashvyly, M.; Becker, E.; Sonnemann, G. R.

    2012-06-01

    The influence of gravity waves (GWs) on the distributions of minor chemical constituents in the mesosphere-lower thermosphere (MLT) is studied on the basis of the effective diffusivity concept. The mixing ratios of chemical species used for calculations of the effective diffusivity are obtained from numerical experiments with an off-line coupled model of the dynamics and chemistry abbreviated as KMCM-MECTM (Kuehlungsborn Mechanistic general Circulation Model—MEsospheric Chemistry-Transport Model). In our control simulation the MECTM is driven with the full dynamical fields from an annual cycle simulation with the KMCM, where mid-frequency GWs down to horizontal wavelengths of 350 km are resolved and their wave-mean flow interaction is self-consistently induced by an advanced turbulence model. A perturbation simulation with the MECTM is defined by eliminating all meso-scale variations with horizontal wavelengths shorter than 1000 km from the dynamical fields by means of spectral filtering before running the MECTM. The response of the MECTM to GW perturbations reveals strong effects on the minor chemical constituents. We show by theoretical arguments and numerical diagnostics that GWs have direct, down-gradient mixing effects on all long-lived minor chemical species that possess a mean vertical gradient in the MLT. Introducing the term wave diffusion (WD) and showing that wave mixing yields approximately the same WD coefficient for different chemical constituents, we argue that it is a useful tool for diagnosing irreversible transport processes. We also present a detailed discussion of the gravity-wave mixing effects on the photochemistry and highlight the consequences for the general circulation of the MLT.

  3. Development of a numerical model for calculating exposure to toxic and nontoxic stressors in the water column and sediment from drilling discharges.

    PubMed

    Rye, Henrik; Reed, Mark; Frost, Tone Karin; Smit, Mathijs G D; Durgut, Ismail; Johansen, Øistein; Ditlevsen, May Kristin

    2008-04-01

    Drilling discharges are complex mixtures of chemical components and particles which might lead to toxic and nontoxic stress in the environment. In order to be able to evaluate the potential environmental consequences of such discharges in the water column and in sediments, a numerical model was developed. The model includes water column stratification, ocean currents and turbulence, natural burial, bioturbation, and biodegradation of organic matter in the sediment. Accounting for these processes, the fate of the discharge is modeled for the water column, including near-field mixing and plume motion, far-field mixing, and transport. The fate of the discharge is also modeled for the sediment, including sea floor deposition, and mixing due to bioturbation. Formulas are provided for the calculation of suspended matter and chemical concentrations in the water column, and burial, change in grain size, oxygen depletion, and chemical concentrations in the sediment. The model is fully 3-dimensional and time dependent. It uses a Lagrangian approach for the water column based on moving particles that represent the properties of the release and an Eulerian approach for the sediment based on calculation of the properties of matter in a grid. The model will be used to calculate the environmental risk, both in the water column and in sediments, from drilling discharges. It can serve as a tool to define risk mitigating measures, and as such it provides guidance towards the "zero harm" goal.

  4. Large-eddy simulation of a turbulent mixing layer

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.

    1978-01-01

    The three dimensional, time dependent (incompressible) vorticity equations were used to simulate numerically the decay of isotropic box turbulence and time developing mixing layers. The vorticity equations were spatially filtered to define the large scale turbulence field, and the subgrid scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive variable equations.
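
    The filtering step that defines the large-scale field can be illustrated with a sharp spectral cutoff applied via FFTs (a generic sketch; the abstract does not state which filter kernel the authors used):

```python
import numpy as np

def sharp_spectral_filter(field, cutoff_frac=0.25):
    """Low-pass filter a periodic field with a sharp spectral cutoff.

    Keeps wavenumbers below cutoff_frac of the Nyquist wavenumber; this mimics
    the 'filter to define the large-scale field' step used in spectral LES.
    """
    fhat = np.fft.fftn(field)
    k = np.meshgrid(*[np.fft.fftfreq(n, d=1.0 / n) for n in field.shape],
                    indexing="ij")
    kmag = np.sqrt(sum(ki**2 for ki in k))
    kcut = cutoff_frac * min(field.shape) / 2.0     # fraction of Nyquist
    fhat[kmag > kcut] = 0.0
    return np.real(np.fft.ifftn(fhat))

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64, 64))               # toy periodic velocity field
u_large = sharp_spectral_filter(u, cutoff_frac=0.25)
print(u.std(), u_large.std())                       # variance removed by the filter
```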

  5. The Case for Mixed-Age Grouping in Early Education.

    ERIC Educational Resources Information Center

    Katz, Lilian G.; And Others

    In six brief chapters, mixed-age grouping of young children in schools and child care centers is explored and advocated. Chapter 1 defines mixed-age grouping, examines limitations of single-age grouping, and points out positive characteristics of mixed-age classes. Chapter 2 discusses social development as seen in children's interactions in…

  6. Insights into hydrologic and hydrochemical processes based on concentration-discharge and end-member mixing analyses in the mid-Merced River Basin, Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Liu, Fengjing; Conklin, Martha H.; Shaw, Glenn D.

    2017-01-01

    Both the concentration-discharge relation and end-member mixing analysis were explored to elucidate the connectivity of hydrologic and hydrochemical processes using chemical data collected during 2006-2008 at Happy Isles (468 km2), Pohono Bridge (833 km2), and Briceburg (1873 km2) in the snowmelt-fed mid-Merced River basin, augmented by chemical data collected by the USGS during 1990-2014 at Happy Isles. The concentration-discharge (C-Q) relation in streamflow was dominated by a well-defined power law, with the magnitude of the exponent (0.02-0.6) and R2 values (p < 0.001) lower on rising than on falling limbs. Concentrations of conservative solutes in streamflow resulted from mixing of two end-members at Happy Isles and Pohono Bridge and three at Briceburg, with relatively constant solute concentrations in end-members. The fractional contribution of groundwater was higher on rising than on falling limbs at all basin scales. The relationship between streamflow and the fractional contributions of subsurface flow and groundwater (F-Q) followed the same form as C-Q, as a result of end-member mixing. The F-Q relation was used as a simple model to simulate subsurface flow and groundwater discharges to Happy Isles from 1990 to 2014 and was successfully validated by solute concentrations measured by the USGS. It was also demonstrated that the consistency of F-Q and C-Q relations is applicable to other catchments where end-members and the C-Q relationships are well defined, suggesting that hydrologic and hydrochemical processes are strongly coupled and mutually predictable. Combining concentration-discharge and end-member mixing analyses could thus serve as a diagnostic tool for understanding streamflow generation and hydrochemical controls in catchment hydrologic studies.
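
    A minimal sketch of the two building blocks named above, fitting the power-law C-Q relation and computing an end-member mixing fraction from a conservative tracer, on synthetic data (not the Merced data set):

```python
import numpy as np

def fit_power_law(Q, C):
    """Fit C = a * Q**b in log-log space (a sketch, not the paper's code)."""
    b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
    return np.exp(log_a), b

def groundwater_fraction(c_stream, c_gw, c_subsurface):
    """Two end-member mixing: fraction of groundwater from a conservative tracer."""
    return (c_stream - c_subsurface) / (c_gw - c_subsurface)

rng = np.random.default_rng(2)
Q = np.exp(rng.uniform(0, 5, 200))                        # synthetic discharge
C = 30.0 * Q**-0.3 * np.exp(0.05 * rng.standard_normal(200))
a, b = fit_power_law(Q, C)
print(f"a = {a:.1f}, b = {b:.2f}")                        # recovers ~30 and ~-0.3
print(groundwater_fraction(c_stream=18.0, c_gw=25.0, c_subsurface=10.0))
```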

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yunfei; Zhang Daxin

    We make a detailed analysis of proton decay in a supersymmetric SO(10) model proposed by K. Babu, I. Gogoladze, P. Nath, and R. Syed. We introduce quark mixing, and find that this model can generate fermion masses without breaking the experimental bound on proton decay. We also predict large Cabibbo-Kobayashi-Maskawa (CKM) unitarity violations. The CKM matrix V in this paper is defined in the standard way, i.e. d'_i = V^{ij} d_j, where i, j run from 1 to 3. The primed field is the weak eigenstate and the unprimed field is the mass eigenstate.
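
    For orientation only, the quoted definition is an ordinary rotation of flavour states by a 3x3 matrix, and a unitarity violation can be quantified by how far V†V departs from the identity; the matrix entries below are illustrative placeholders, not the model's prediction:

```python
import numpy as np

V = np.array([[0.974, 0.225, 0.004],      # illustrative, roughly CKM-sized entries
              [0.225, 0.973, 0.041],
              [0.009, 0.040, 0.999]])

d_mass = np.array([1.0, 0.0, 0.0])        # mass eigenstates (d, s, b) as a basis vector
d_weak = V @ d_mass                       # weak eigenstates d'_i = V^{ij} d_j

# Deviation of V^dagger V from the identity as a simple unitarity-violation measure.
unitarity_violation = np.linalg.norm(V.conj().T @ V - np.eye(3))
print(d_weak, unitarity_violation)
```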

  8. Water Droplet Impingement on Simulated Glaze, Mixed, and Rime Ice Accretions

    NASA Technical Reports Server (NTRS)

    Papadakis, Michael; Rachman, Arief; Wong, See-Cheuk; Yeong, Hsiung-Wei; Hung, Kuohsing E.; Vu, Giao T.; Bidwell, Colin S.

    2007-01-01

    Water droplet impingement data were obtained at the NASA Glenn Icing Research Tunnel (IRT) for a 36-in. chord NACA 23012 airfoil with and without simulated ice using a dye-tracer method. The simulated ice shapes were defined with the NASA Glenn LEWICE 2.2 ice accretion program and included one rime, four mixed, and five glaze ice shapes. The impingement experiments were performed with spray clouds having median volumetric diameters of 20, 52, 111, 154, and 236 microns. Comparisons of LEWICE 2.2 predictions to the experimental data showed good agreement for the rime and mixed shapes at lower drop sizes. For larger drop sizes, LEWICE 2.2 overpredicted the collection efficiencies due to droplet splashing effects which were not modeled in the program. Also, for the more complex glaze ice shapes, interpolation errors resulted in overprediction of collection efficiencies in cove or shadow regions of the ice shapes.

  9. Multiple jet study data correlations. [data correlation for jet mixing flow of air jets

    NASA Technical Reports Server (NTRS)

    Walker, R. E.; Eberhardt, R. G.

    1975-01-01

    Correlations are presented which allow determination of penetration and mixing of multiple cold air jets injected normal to a ducted subsonic heated primary air stream. Correlations were obtained over jet-to-primary stream momentum flux ratios of 6 to 60 for locations from 1 to 30 jet diameters downstream of the injection plane. The range of geometric and operating variables makes the correlations relevant to gas turbine combustors. Correlations were obtained for the mixing efficiency between jets and primary stream using an energy exchange parameter. Also jet centerplane velocity and temperature trajectories were correlated and centerplane dimensionless temperature distributions defined. An assumption of a Gaussian vertical temperature distribution at all stations is shown to result in a reasonable temperature field model. Data are presented which allow comparison of predicted and measured values over the range of conditions specified above.
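
    The Gaussian-profile assumption mentioned above can be written down directly; the following sketch evaluates a dimensionless centerplane temperature profile (variable names and the half-width convention are ours, not the report's):

```python
import numpy as np

def theta_profile(y, y_c, half_width):
    """Dimensionless temperature theta(y) = exp(-ln(2) * ((y - y_c)/half_width)^2),
    i.e. a Gaussian that drops to 0.5 at one half-width from the jet centerline y_c."""
    return np.exp(-np.log(2.0) * ((y - y_c) / half_width) ** 2)

y = np.linspace(0.0, 1.0, 11)              # duct height, normalized (illustrative)
print(theta_profile(y, y_c=0.4, half_width=0.15).round(3))
```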

  10. Mixing characterization of highly underexpanded fluid jets with real gas expansion

    NASA Astrophysics Data System (ADS)

    Förster, Felix J.; Baab, Steffen; Steinhausen, Christoph; Lamanna, Grazia; Ewart, Paul; Weigand, Bernhard

    2018-03-01

    We report a comprehensive speed of sound database for multi-component mixing of underexpanded fuel jets with real gas expansion. The paper presents several reference test cases with well-defined experimental conditions providing quantitative data for validation of computational simulations. Two injectant fluids, fundamentally different with respect to their critical properties, are brought to supercritical state and discharged into cold nitrogen at different pressures. The database features a wide range of nozzle pressure ratios covering the regimes that are generally classified as highly and extremely highly underexpanded jets. Further variation is introduced by investigating different injection temperatures. Measurements are obtained along the centerline at different axial positions. In addition, an adiabatic mixing model based on non-ideal thermodynamic mixture properties is used to extract mixture compositions from the experimental speed of sound data. The concentration data obtained are complemented by existing experimental data and represented by an empirical fit.
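
    The paper's adiabatic mixing model uses non-ideal mixture thermodynamics; as a much simpler stand-in, the sketch below inverts an ideal-gas binary-mixture speed of sound for the injectant mole fraction (all property values are illustrative placeholders):

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # J/(mol K)

def sound_speed(x_fuel, T, M=(0.044, 0.028), cp=(37.0, 29.1)):
    """Ideal-gas speed of sound of a binary fuel/N2 mixture at mole fraction x_fuel.
    M in kg/mol, cp in J/(mol K); defaults loosely resemble propane and nitrogen."""
    x = np.array([x_fuel, 1.0 - x_fuel])
    cp_mix = np.dot(x, cp)
    cv_mix = cp_mix - R
    M_mix = np.dot(x, M)
    return np.sqrt((cp_mix / cv_mix) * R * T / M_mix)

def composition_from_sound_speed(c_measured, T):
    """Root-find the mole fraction whose mixture sound speed matches the measurement."""
    return brentq(lambda x: sound_speed(x, T) - c_measured, 1e-6, 1.0 - 1e-6)

c = sound_speed(0.3, T=300.0)
print(c, composition_from_sound_speed(c, T=300.0))   # recovers x_fuel = 0.3
```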

  11. Development and testing of meteorology and air dispersion models for Mexico City

    NASA Astrophysics Data System (ADS)

    Williams, M. D.; Brown, M. J.; Cruz, X.; Sosa, G.; Streit, G.

    Los Alamos National Laboratory and Instituto Mexicano del Petróleo are completing a joint study of options for improving air quality in Mexico City. We have modified a three-dimensional, prognostic, higher-order turbulence model for atmospheric circulation (HOTMAC) and a Monte Carlo dispersion and transport model (RAPTAD) to treat domains that include an urbanized area. We used the meteorological model to drive models which describe the photochemistry and air transport and dispersion. The photochemistry modeling is described in a separate paper. We tested the model against routine measurements and those of a major field program. During the field program, measurements included: (1) lidar measurements of aerosol transport and dispersion, (2) aircraft measurements of winds, turbulence, and chemical species aloft, (3) aircraft measurements of skin temperatures, and (4) Tethersonde measurements of winds and ozone. We modified the meteorological model to include provisions for time-varying synoptic-scale winds, adjustments for local wind effects, and detailed surface-coverage descriptions. We developed a new method to define mixing-layer heights based on model outputs. The meteorology and dispersion models were able to provide reasonable representations of the measurements and to define the sources of some of the major uncertainties in the model-measurement comparisons.

  12. A Simple Parameterization of Mixing of Passive Scalars in Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Nithianantham, Ajithshanthar; Venayagamoorthy, Karan

    2015-11-01

    A practical model for quantifying the turbulent diascalar diffusivity is proposed as K_s = 1.1 γ' L_T k^{1/2}, where L_T is defined as the Thorpe length scale, k is the turbulent kinetic energy and γ' is one-half of the mechanical to scalar time scale ratio, which was shown by previous researchers to be approximately 0.7. The novelty of the proposed model lies in the use of L_T, which is a widely used length scale in stably stratified flows (almost exclusively used in oceanography), for quantifying turbulent mixing in unstratified flows. L_T can be readily obtained in the field using a Conductivity, Temperature and Depth (CTD) profiler. The turbulent kinetic energy is mostly contained in the large scales of the flow field and hence can be measured in the field or modeled in numerical simulations. Comparisons using DNS data show remarkably good agreement between the predicted and exact diffusivities. Office of Naval Research and National Science Foundation.
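
    A minimal sketch of the proposed parameterization, assuming a measured density profile and a TKE value (the 1.1 prefactor follows the abstract; reading the ~0.7 time-scale ratio from the abstract gives γ' ≈ 0.35, and everything else below is illustrative):

```python
import numpy as np

def thorpe_length(z, rho):
    """RMS Thorpe displacement from an instantaneous density profile.
    z increases upward; re-sorting rho into a statically stable profile gives
    the displacement each sample must move, and L_T is their root-mean-square."""
    order = np.argsort(-rho)                # heaviest water placed at the bottom
    displacements = z - z[order]
    return np.sqrt(np.mean(displacements ** 2))

def diascalar_diffusivity(L_T, k, gamma_prime=0.35):
    """K_s = 1.1 * gamma' * L_T * k**0.5; gamma' taken as one-half of a
    mechanical-to-scalar time-scale ratio of ~0.7 (our reading of the abstract)."""
    return 1.1 * gamma_prime * L_T * np.sqrt(k)

rng = np.random.default_rng(3)
z = np.linspace(-50.0, 0.0, 101)                               # height grid (m), upward
rho = 1026.0 - 0.02 * z + 0.01 * rng.standard_normal(z.size)   # profile with overturns
print(diascalar_diffusivity(thorpe_length(z, rho), k=1e-4))    # m^2/s
```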

  13. Impact of Patient and Procedure Mix on Finances of Perinatal Centres – Theoretical Models for Economic Strategies in Perinatal Centres

    PubMed Central

    Hildebrandt, T.; Kraml, F.; Wagner, S.; Hack, C. C.; Thiel, F. C.; Kehl, S.; Winkler, M.; Frobenius, W.; Faschingbauer, F.; Beckmann, M. W.; Lux, M. P.

    2013-01-01

    Introduction: In Germany, cost and revenue structures of hospitals with defined treatment priorities are currently being discussed to identify uneconomic services. This discussion has also affected perinatal centres (PNCs) and represents a new economic challenge for PNCs. In addition to optimising the time spent in hospital, the hospital management needs to define the “best” patient mix based on costs and revenues. Method: Different theoretical models were proposed based on the cost and revenue structures of the University Perinatal Centre for Franconia (UPF). Multi-step marginal costing was then used to show the impact on operating profits of changes in services and bed occupancy rates. The current contribution margin accounting used by the UPF served as the basis for the calculations. The models demonstrated the impact of changes in services on costs and revenues of a level 1 PNC. Results: Contribution margin analysis was used to calculate profitable and unprofitable DRGs based on average inpatient cost per day. Nineteen theoretical models were created. The current direct costing used by the UPF and a theoretical model with a 100 % bed occupancy rate were used as reference models. Significantly higher operating profits could be achieved by doubling the number of profitable DRGs and halving the number of less profitable DRGs. Operating profits could be increased even more by changing the rates of profitable DRGs per bed occupancy. The exclusive specialisation on pathological and high-risk pregnancies resulted in operating losses. All models which increased the numbers of caesarean sections or focused exclusively on c-sections resulted in operating losses. Conclusion: These theoretical models offer a basis for economic planning. They illustrate the enormous impact potential changes can have on the operating profits of PNCs. Level 1 PNCs require high bed occupancy rates and a profitable patient mix to cover the extremely high costs incurred due to the services they are legally required to offer. Based on our theoretical models it must be stated that spontaneous vaginal births (not caesarean sections) were the most profitable procedures in the current DRG system. Overall, it currently makes economic sense for level I PNCs to treat as many low-risk pregnancies and neonates as possible to cover costs. PMID:24771932

  14. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

    The traditional methods for simulating mechanical gear drives are the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Currently, most research is focused on the description of geometric models and the definition of boundary conditions, but neither addresses the problem fundamentally. To improve simulation efficiency while ensuring accurate results, a mixed model method which uses gear tooth profiles in place of the solid gears to simulate gear movement is presented. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected through the SolidWorks API and used to create fitted curves in Adams; next, the positions of those fitted curves are adjusted according to the location of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact, and supplies mass and inertia data via the solid gear models. The simulation combines the two models to complete the gear driving analysis. In order to verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results agree more closely with theoretical calculations. Consequently, the mixed model method has high application value for studying the dynamics of gear mechanisms.

  15. Skill Mix in the Health Care Workforce: Reviewing the Evidence.

    ERIC Educational Resources Information Center

    Buchan, James; Dal Poz, Mario R.

    2002-01-01

    The reasons a skill mix among health workers is important to health care systems were examined. The analysis was based on a review of studies conducted primarily in the United States. "Skill mix" was defined as the mix of posts, grades, or occupations in an organization and the combinations of activities or skills needed for each job…

  16. The Case for Mixed-Age Grouping in Early Childhood Education Programs.

    ERIC Educational Resources Information Center

    Katz, Lilian G.; And Others

    The seven brief chapters of this paper advocate mixed-age grouping in schools and child care centers. Discussion defines mixed-age grouping and examines some limitations of single-age grouping. Research findings on social and cognitive aspects of mixed-age grouping are reviewed. Social aspects are discussed by considering in turn the following…

  17. Multi-stage mixing in subduction zone: Application to Merapi volcano, Indonesia

    NASA Astrophysics Data System (ADS)

    Debaille, V.; Doucelance, R.; Weis, D.; Schiano, P.

    2003-04-01

    Basalts sampling subduction zone volcanism (IAB) often show binary mixing relationships in classical Sr-Nd, Pb-Pb and Sr-Pb isotopic diagrams, generally interpreted as reflecting the involvement of two components in their source. However, several authors have highlighted the presence of at least three components in such a geodynamical context: the mantle wedge, subducted and altered oceanic crust, and subducted sediments. The overlying continental crust can also contribute by contamination and assimilation in magma chambers and/or during magma ascent. Here we present a multi-stage model to obtain a two end-member mixing from three components (mantle wedge, altered oceanic crust and sediments). The first stage of the model considers the metasomatism of the mantle wedge by fluids and/or melts released by subducted materials (altered oceanic crust and associated sediments), taking into account the mobility and partition coefficients of trace elements in hydrated fluids and silicate melts. This results in the generation of two distinct end-members, reducing the number of components (mantle wedge, oceanic crust, sediments) from three to two. The second stage of the model concerns the binary mixing of the two end-members thus defined: mantle wedge metasomatized by slab-derived fluids and mantle wedge metasomatized by sediment-derived fluids. This model has been applied to a new isotopic data set (Sr, Nd and Pb, analyzed by TIMS and MC-ICP-MS) for Merapi volcano (Java island, Indonesia). Previous studies have suggested three distinct components in the source of Indonesian lavas: mantle wedge, subducted sediments and altered oceanic crust. Moreover, it has been shown that crustal contamination does not significantly affect the isotopic ratios of the lavas. The multi-stage model proposed here is able to reproduce the binary mixing observed in the lavas of Merapi, and a set of numerical values of bulk partition coefficients is given that accounts for the genesis of the lavas.
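
    The binary-mixing stage can be written down in a few lines; the sketch below mixes two end-members with concentration-weighted isotope ratios (end-member values are placeholders, not the Merapi data):

```python
import numpy as np

def isotope_mixing(f, c1, r1, c2, r2):
    """Isotope ratio of a mixture of end-members 1 and 2.

    f : mass fraction of end-member 1; c1, c2 : element concentrations (ppm);
    r1, r2 : isotope ratios (e.g. 87Sr/86Sr). Ratios mix weighted by
    concentration, which produces the hyperbolae seen in isotope-isotope plots.
    """
    return (f * c1 * r1 + (1 - f) * c2 * r2) / (f * c1 + (1 - f) * c2)

f = np.linspace(0.0, 1.0, 5)
# placeholder end-members: a sediment-metasomatized and a slab-fluid-metasomatized mantle
print(isotope_mixing(f, c1=300.0, r1=0.7100, c2=150.0, r2=0.7035))
```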

  18. Breast Radiotherapy with Mixed Energy Photons; a Model for Optimal Beam Weighting.

    PubMed

    Birgani, Mohammadjavad Tahmasebi; Fatahiasl, Jafar; Hosseini, Seyed Mohammad; Bagheri, Ali; Behrooz, Mohammad Ali; Zabiehzadeh, Mansour; Meskani, Reza; Gomari, Maryam Talaei

    2015-01-01

    Utilization of high energy photons (>10 MV) with an optimal weight using a mixed energy technique is a practical way to generate a homogeneous dose distribution while maintaining adequate target coverage in intact breast radiotherapy. This study presents a model for estimating this optimal weight for day-to-day clinical usage. For this purpose, treatment planning computed tomography scans of thirty-three consecutive early stage breast cancer patients following breast conservation surgery were analyzed. After delineation of the breast clinical target volume (CTV) and placing of opposed wedge-paired isocentric tangential portals, dosimetric calculations were conducted and dose volume histograms (DVHs) were generated, first with pure 6 MV photons; these calculations were then repeated ten times with incorporation of 18 MV photons (ten percent increase in weight per step) in each individual patient. For each calculation two indices, the maximum dose in the breast CTV (Dmax) and the volume of the CTV covered by the 95% isodose line (VCTV,95%IDL), were measured from the DVH data, and the normalized values were plotted in a graph. The optimal weight of 18 MV photons was defined as the intersection point of the Dmax and VCTV,95%IDL graphs. To create a model for predicting this optimal weight, multiple linear regression analysis was used based on several breast and tangential field parameters. The best fitting model for prediction of the 18 MV photon optimal weight in breast radiotherapy using the mixed energy technique incorporated chest wall separation plus central lung distance (adjusted R2=0.776). In conclusion, this study presents a model for the estimation of optimal beam weighting in breast radiotherapy using the mixed photon energy technique for routine day-to-day clinical usage.
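
    The crossing point of the two normalized curves can be found by simple interpolation; the sketch below uses made-up Dmax and VCTV,95%IDL curves (not patient data):

```python
import numpy as np

def optimal_weight(weights, dmax_norm, vctv_norm):
    """Return the 18 MV weight where the normalized Dmax and V(CTV,95%IDL)
    curves cross, using linear interpolation between tabulated points."""
    diff = np.asarray(dmax_norm) - np.asarray(vctv_norm)
    i = np.where(np.diff(np.sign(diff)) != 0)[0][0]       # first sign change
    w0, w1, d0, d1 = weights[i], weights[i + 1], diff[i], diff[i + 1]
    return w0 - d0 * (w1 - w0) / (d1 - d0)

weights = np.linspace(0.0, 1.0, 11)                       # 18 MV weight, 10% steps
dmax = 1.00 - 0.08 * weights                              # made-up normalized curves
vctv = 0.94 + 0.05 * weights
print(optimal_weight(weights, dmax, vctv))                # crossing near 0.46
```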

  19. Formation of Intermediate Plutonic Rocks by Magma Mixing: the Shoshonite Suite of Timna, Southern Israel.

    NASA Astrophysics Data System (ADS)

    Fox, S.; Katzir, Y.

    2017-12-01

    In magmatic series considered to form by crystal fractionation, intermediate rocks are usually much less abundant than expected. Yet intermediate plutonic rocks, predominantly monzodiorites, are very abundant in the Neoproterozoic Timna igneous complex, S. Israel. A previously unnoticed plutonic shoshonitic suite was recently defined and mapped in Timna (Litvinovsky et al., 2015). It mostly comprises intermediate rocks in a seemingly 'continuous' trend from monzodiorite through monzonite to quartz syenite. Macroscale textures, including gradational boundaries of mafic and felsic rocks and MME, suggest that magma mixing is central in forming intermediate rocks in Timna. Our petrographic, microtextural and mineral chemistry study delineates the mode of incipient mixing, ultimate mingling and crystal equilibration in hybrid melts. An EMP study of plagioclase from rocks across the suite provides a quantitative evaluation of textures indicative of magma mixing/mingling, including recurrent/patchy zoning, Ca spikes, boxy/sponge cellular texture and anti-Rapakivi texture. Each texture has an affinity to a particular mixing region. A modal count of these textures leads to a kinetic mixing model involving the multiple temporal and spatial scales necessary to form the hybrid intermediate rocks. A 'shell'-like model for varying degrees of mixing is developed, with the more intensive mixing at the core and more abundant felsic and mafic end-members towards the outer layer. REE patterns in zircon show that it originated from both mafic and felsic parent melts. A whole-rock Fe vs. Sr plot suggests a two-stage mixing between the monzogabbro and quartz syenite, producing first a mesocratic syenite and subsequently mixing with a fractionating monzogabbro to yield monzonitic compositions. A fractionating monzogabbro intruded sequentially into a syenitic melt. While slowly cooling, the monzogabbro heated the adjacent syenitic melt, lowering the viscosity and rheological obstruction to overturn the boundary, and thus facilitated mixing. Increasing melt hybridization, in tandem with crystallization, produced mixing textures in the turbulent crystal mush zone, synchronously with 'pure end-member' crystallization. As a result, a large volume of intermediate rock was created through a hybridization process.

  20. A workload model and measures for computer performance evaluation

    NASA Technical Reports Server (NTRS)

    Kerner, H.; Kuemmerle, K.

    1972-01-01

    A generalized workload definition is presented which constructs measurable workloads of unit size from workload elements, called elementary processes. An elementary process makes almost exclusive use of one of the processors (CPU, I/O processor, etc.) and is measured by the cost of its execution. Various kinds of user programs can be simulated by quantitative composition of elementary processes into a type. The character of a type is defined by the weights of its elementary processes, and its structure by the number and sequence of transitions between its elementary processes. A set of types is batched into a mix. Mixes of identical cost are considered equivalent amounts of workload. These formalized descriptions of workloads allow investigators to compare the results of different studies quantitatively. Since workloads of different composition are assigned a unit of cost, these descriptions enable determination of the cost effectiveness of different workloads on a machine. Subsequently, performance parameters such as throughput rate, gain factor, and internal and external delay factors are defined and used to demonstrate the effects of various workload attributes on the performance of a selected large-scale computer system.
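
    To make the composition rules concrete, a hypothetical sketch (our names and cost numbers, not the paper's) of elementary processes, types and mixes:

```python
from dataclasses import dataclass

# Illustrative names and numbers only; the report defines the concepts, not this code.
ELEMENTARY_COST = {"cpu": 1.0, "io": 2.5}     # cost of one elementary-process execution

@dataclass
class WorkloadType:
    name: str
    counts: dict                               # elementary process -> number of executions

    def cost(self):
        return sum(n * ELEMENTARY_COST[p] for p, n in self.counts.items())

def mix_cost(mix):
    """A mix batches (type, multiplicity) pairs; mixes of identical cost
    are treated as equivalent amounts of workload."""
    return sum(mult * wtype.cost() for wtype, mult in mix)

compute_bound = WorkloadType("compute-bound", {"cpu": 8, "io": 1})
io_bound = WorkloadType("io-bound", {"cpu": 2, "io": 4})
print(mix_cost([(compute_bound, 3), (io_bound, 2)]))   # 3*10.5 + 2*12.0 = 55.5
```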

  1. Quantifying urban river-aquifer fluid exchange processes: a multi-scale problem.

    PubMed

    Ellis, Paul A; Mackay, Rae; Rivett, Michael O

    2007-04-01

    Groundwater-river exchanges in an urban setting have been investigated through long term field monitoring and detailed modelling of a 7 km reach of the Tame river as it traverses the unconfined Triassic Sandstone aquifer that lies beneath the City of Birmingham, UK. Field investigations and numerical modelling have been completed at a range of spatial and temporal scales from the metre to the kilometre scale and from event (hourly) to multi-annual time scales. The objective has been to quantify the spatial and temporal flow distributions governing mixing processes at the aquifer-river interface that can affect the chemical activity in the hyporheic zone of this urbanised river. The hyporheic zone is defined to be the zone of physical mixing of river and aquifer water. The results highlight the multi-scale controls that govern the fluid exchange distributions that influence the thickness of the mixing zone between urban rivers and groundwater and the patterns of groundwater flow through the bed of the river. The morphologies of the urban river bed and the adjacent river bank sediments are found to be particularly influential in developing the mixing zone at the interface between river and groundwater. Pressure transients in the river are also found to exert an influence on velocity distribution in the bed material. Areas of significant mixing do not appear to be related to the areas of greatest groundwater discharge and therefore this relationship requires further investigation to quantify the actual remedial capacity of the physical hyporheic zone.

  2. The MAGnet Newsletter on Mixed-Age Grouping in Preschool and Elementary Settings, 1992-1997.

    ERIC Educational Resources Information Center

    McClellan, Diane, Ed.

    1997-01-01

    These 11 newsletter issues provide a forum for discussion and exchange of ideas regarding mixed-age grouping in preschool and elementary schools. The October 1992 issue focuses on the mixed-age approach as an educational innovation, defines relevant terms, and presents advice from Oregon teachers on teaching mixed-age groups. The March 1993 issue…

  3. Evaluation of flow mixing in an ARID-HV algal raceway using statistics of temporal and spatial distribution of fluid particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Ben; Li, Peiwen; Waller, Peter

    2015-02-27

    This paper analyzes and evaluates the flow mixing in an open channel algal raceway for biofuel production. The flow mixing governs the frequency of how algae cells are exposed to sunlight, due to the fluid movement between the surface and the bottom of the algal raceway, thereby affecting algal growth rate. In this work, we investigated the flow mixing performance in a table-sized model of the High Velocity Algae Raceway Integrated Design (ARID-HV). Various geometries of the raceway channels and dams were considered in both the CFD analysis and experimental flow visualization. In the CFD simulation, the pathlines of fluid particles were analyzed to obtain the distribution of the number of times that particles passed across a critical water depth, Dc, defined as a cycle count. In addition, the distribution of the time period fraction that the fluid particles stayed in the zones above and below Dc was recorded. Such information was used to evaluate the flow mixing in the raceway. The CFD evaluation of the flow mixing was validated using experimental flow visualization, which showed a good qualitative agreement with the numerical results. In conclusion, this CFD-based evaluation methodology is recommended for flow field optimization for open channel algal raceways, as well as for other engineering applications in which flow mixing is an important concern.
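
    The cycle-count diagnostic described above is easy to reproduce on a single pathline; a minimal sketch with a synthetic depth record (not the ARID-HV simulation output):

```python
import numpy as np

def cycle_count(depth_series, d_crit):
    """Number of times a particle pathline crosses the critical depth d_crit.
    depth_series : 1D array of the particle's depth along its pathline (m)."""
    above = depth_series < d_crit                 # True when in the upper (lit) zone
    return int(np.count_nonzero(np.diff(above)))

def time_fraction_above(depth_series, d_crit):
    """Fraction of the pathline spent above the critical depth."""
    return float(np.mean(depth_series < d_crit))

rng = np.random.default_rng(4)
depth = 0.1 + 0.08 * np.sin(np.linspace(0, 40, 2000)) + 0.01 * rng.standard_normal(2000)
print(cycle_count(depth, d_crit=0.1), time_fraction_above(depth, d_crit=0.1))
```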

  4. The importance of work organization on workload and musculoskeletal health--Grocery store work as a model.

    PubMed

    Balogh, I; Ohlsson, K; Nordander, C; Björk, J; Hansson, G-Å

    2016-03-01

    We have evaluated the consequences of work organization for musculoskeletal health. Using a postal questionnaire answered by 1600 female grocery store workers, their main work tasks were identified and four work groups were defined (cashier, picking, and delicatessen work, and a mixed group, who performed a mix of these tasks). The crude odds ratios (ORs) for neck/shoulder complaints were 1.5 (95% CI 1.0-2.2), 1.1 (0.7-1.5) and 1.6 (1.1-2.3), respectively, compared to mixed work. Adjusting for individual and psychosocial factors had no effect on these ORs. For elbows/hands, no significant differences were found. Technical measurements of the workload showed large differences between the work groups. Picking work was the most strenuous, while cashier work showed low loads. Quantitative measures of variation revealed, for mixed work, high between-minutes variation and the highest between/within-minutes variation. Combining work tasks with different physical exposure levels increases the variation and may reduce the risk of musculoskeletal complaints. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Hybrid Multiscale Simulation of Hydrologic and Biogeochemical Processes in the River-Groundwater Interaction Zone

    NASA Astrophysics Data System (ADS)

    Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.

    2015-12-01

    The zone in which river water and groundwater mix plays an important role in natural ecosystems as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U. S. Department of Energy's Hanford Site. The region of interest is a 400 x 400 x 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are simulated using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.

  6. Ambient Pressure XPS Study of Mixed Conducting Perovskite-Type SOFC Cathode and Anode Materials under Well-Defined Electrochemical Polarization

    PubMed Central

    2015-01-01

    The oxygen exchange activity of mixed conducting oxide surfaces has been widely investigated, but a detailed understanding of the corresponding reaction mechanisms and the rate-limiting steps is largely still missing. Combined in situ investigation of electrochemically polarized model electrode surfaces under realistic temperature and pressure conditions by near-ambient pressure (NAP) XPS and impedance spectroscopy enables very surface-sensitive chemical analysis and may detect species that are involved in the rate-limiting step. In the present study, acceptor-doped perovskite-type La0.6Sr0.4CoO3-δ (LSC), La0.6Sr0.4FeO3-δ (LSF), and SrTi0.7Fe0.3O3-δ (STF) thin film model electrodes were investigated under well-defined electrochemical polarization as cathodes in oxidizing (O2) and as anodes in reducing (H2/H2O) atmospheres. In oxidizing atmosphere all materials exhibit additional surface species of strontium and oxygen. The polaron-type electronic conduction mechanism of LSF and STF and the metal-like mechanism of LSC are reflected by distinct differences in the valence band spectra. Switching between oxidizing and reducing atmosphere as well as electrochemical polarization cause reversible shifts in the measured binding energy. This can be correlated to a Fermi level shift due to variations in the chemical potential of oxygen. Changes of oxidation states were detected on Fe, which appears as FeIII in oxidizing atmosphere and as mixed FeII/III in H2/H2O. Cathodic polarization in reducing atmosphere leads to the reversible formation of a catalytically active Fe0 phase. PMID:26877827

  7. Chemical Reactions in Turbulent Mixing Flows. Revision.

    DTIC Science & Technology

    1983-08-02

    Nomenclature (excerpt): ..., jet diameter; F2, fluorine; H2, hydrogen; HF, hydrogen fluoride; I(y), instantaneous fluorescence intensity distribution; L-s, flame length measured from the virtual origin of the turbulent region; (L-s), flame length at high Reynolds number; LIF, laser induced fluorescence; N2, nitrogen; PI, product thickness (defined ...). Abstract (fragment): ... mixing is attained as a function of the equivalence ratio. For small values of the equivalence ratio f, the flame length - defined here as the ...

  8. The IGNITE (investigation to guide new insight into translational effectiveness) trial: Protocol for a translational study of an evidenced-based wellness program in fire departments

    PubMed Central

    2010-01-01

    Background Worksites are important locations for interventions to promote health. However, occupational programs with documented efficacy often are not used, and those being implemented have not been studied. The research in this report was funded through the American Reinvestment and Recovery Act Challenge Topic 'Pathways for Translational Research,' to define and prioritize determinants that enable and hinder translation of evidenced-based health interventions in well-defined settings. Methods The IGNITE (investigation to guide new insights for translational effectiveness) trial is a prospective cohort study of a worksite wellness and injury reduction program from adoption to final outcomes among 12 fire departments. It will employ a mixed methods strategy to define a translational model. We will assess decision to adopt, installation, use, and outcomes (reach, individual outcomes, and economic effects) using onsite measurements, surveys, focus groups, and key informant interviews. Quantitative data will be used to define the model and conduct mediation analysis of each translational phase. Qualitative data will expand on, challenge, and confirm survey findings and allow a more thorough understanding and convergent validity by overcoming biases in qualitative and quantitative methods used alone. Discussion Findings will inform worksite wellness in fire departments. The resultant prioritized influences and model of effective translation can be validated and manipulated in these and other settings to more efficiently move science to service. PMID:20932290

  9. Environmental drivers defining linkages among life-history traits: mechanistic insights from a semiterrestrial amphipod subjected to macroscale gradients.

    PubMed

    Gómez, Julio; Barboza, Francisco R; Defeo, Omar

    2013-10-01

    Determining the existence of interconnected responses among life-history traits and identifying underlying environmental drivers are recognized as key goals for understanding the basis of phenotypic variability. We studied potentially interconnected responses among senescence, fecundity, embryos size, weight of brooding females, size at maturity and sex ratio in a semiterrestrial amphipod affected by macroscale gradients in beach morphodynamics and salinity. To this end, multiple modelling processes based on generalized additive mixed models were used to deal with the spatio-temporal structure of the data obtained at 10 beaches during 22 months. Salinity was the only nexus among life-history traits, suggesting that this physiological stressor influences the energy balance of organisms. Different salinity scenarios determined shifts in the weight of brooding females and size at maturity, having consequences in the number and size of embryos which in turn affected sex determination and sex ratio at the population level. Our work highlights the importance of analysing field data to find the variables and potential mechanisms that define concerted responses among traits, therefore defining life-history strategies.

  10. Direct 3D-printing of cell-laden constructs in microfluidic architectures.

    PubMed

    Liu, Justin; Hwang, Henry H; Wang, Pengrui; Whang, Grace; Chen, Shaochen

    2016-04-21

    Microfluidic platforms have greatly benefited the biological and medical fields; however, standard practices entail a high cost of entry in terms of time and energy. The utilization of three-dimensional (3D) printing technologies has greatly enhanced the ability to iterate and build functional devices with unique functions. However, the inability of these technologies to fabricate within microfluidic devices greatly increases the cost of producing several different devices to examine different scientific questions. In this work, a variable height micromixer (VHM) is fabricated using projection 3D-printing combined with soft lithography. Theoretical analysis and flow experiments demonstrate that altering the local z-heights of the VHM improves mixing at lower flow rates than simple geometries. Mixing of two fluids occurs at flow rates as low as 320 μL min^-1 in the VHM, whereas the planar zigzag region requires a flow rate of 2.4 mL min^-1 before full mixing occurs. Following device printing, to further demonstrate the ability of this projection-based method, complex, user-defined cell-laden scaffolds are directly printed inside the VHM. The utilization of this unique ability to produce 3D tissue models within a microfluidic system could offer a unique platform for medical diagnostics and disease modeling.

  11. Developing quality indicators and auditing protocols from formal guideline models: knowledge representation and transformations.

    PubMed

    Advani, Aneel; Goldstein, Mary; Shahar, Yuval; Musen, Mark A

    2003-01-01

    Automated quality assessment of clinician actions and patient outcomes is a central problem in guideline- or standards-based medical care. In this paper we describe a model representation and algorithm for deriving structured quality indicators and auditing protocols from formalized specifications of guidelines used in decision support systems. We apply the model and algorithm to the assessment of physician concordance with a guideline knowledge model for hypertension used in a decision-support system. The properties of our solution include the ability to derive automatically context-specific and case-mix-adjusted quality indicators that can model global or local levels of detail about the guideline parameterized by defining the reliability of each indicator or element of the guideline.

  12. Sudbury project (University of Muenster-Ontario Geological Survey): Isotope systematics support the impact origin

    NASA Technical Reports Server (NTRS)

    Deutsch, A.; Buhl, D.; Brockmeyer, P.; Lakomy, R.; Flucks, M.

    1992-01-01

    Within the framework of the Sudbury project a considerable number of Sr-Nd isotope analyses were carried out on petrographically well-defined samples of different breccia units. Together with isotope data from the literature these data are reviewed under the aspect of a self-consistent impact model. The crucial point of this model is that the Sudbury Igneous Complex (SIC) is interpreted as a differentiated impact melt sheet without any need for an endogenic 'magmatic' component such as 'impact-triggered' magmatism or 'partial' impact melting of the crust and mixing with a mantle-derived magma.

  13. Changes of ns-soot mixing states and shapes in an urban area during CalNex

    NASA Astrophysics Data System (ADS)

    Adachi, Kouji; Buseck, Peter R.

    2013-05-01

    Aerosol particles from megacities influence the regional and global climate as well as the health of their occupants. We used transmission electron microscopes (TEMs) to study aerosol particles collected from the Los Angeles area during the 2010 CalNex campaign. We detected major amounts of ns-soot (defined as consisting of carbon nanospheres), sulfate, sea salt, and organic aerosol (OA), and lesser amounts of brochosome particles from leafhoppers. Ns-soot particle shapes, mixing states, and abundances varied significantly with sampling times and days. Within plumes having high CO2 concentrations, much of the ns-soot was compacted and contained a relatively large number of carbon nanospheres. Ns-soot particles from both the CalNex samples and Mexico City, the latter collected in 2006, had a wide range of shapes when mixed with other aerosol particles, but neither set showed spherical ns-soot or the core-shell configuration that is commonly used in optical calculations. Our TEM observations and light-absorption calculations of modeled particles indicate that, in contrast to ns-soot particles that are embedded within other materials or have the hypothesized core-shell configurations, those attached to other aerosol particles hardly enhance their light absorption. We conclude that the ways in which ns-soot mixes with other particles explain the observations, during the CalNex campaign and presumably in other areas, of smaller light amplification by ns-soot coatings than in model calculations.

  14. 21 CFR 184.1230 - Calcium sulfate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., processing aid as defined in § 170.3(o)(24) of this chapter, stabilizer and thickener as defined in § 170.3(o... dairy desserts and mixes as defined in § 170.3(n)(20) of this chapter, 0.4 percent for gelatins and...

  15. An ideal-typical model for comparing interprofessional relations and skill mix in health care.

    PubMed

    Schönfelder, Walter; Nilsen, Elin Anita

    2016-11-08

    Comparisons of health system performance, including the regulations of interprofessional relations and the skill mix between health professions are challenging. National strategies for regulating interprofessional relations vary widely across European health care systems. Unambiguously defined and generally accepted performance indicators have to remain generic, with limited power for recognizing the organizational structures regulating interprofessional relations in different health systems. A coherent framework for in-depth comparisons of different models for organizing interprofessional relations and the skill mix between professional groups is currently not available. This study aims to develop an ideal-typical framework for categorizing skill mix and interprofessional relations in health care, and to assess the potential impact for different ideal types on care coordination and integrated service delivery. A document analysis of the Health Systems in Transition (HiT) reports published by the European Observatory on Health Systems and Policies was conducted. The HiT reports to 31 European health systems were analyzed using a qualitative content analysis and a process of meaning condensation. The educational tracks available to nurses have an impact on the professional autonomy for nurses, the hierarchy between professional groups, the emphasis given to negotiating skill mix, interdisciplinary teamwork and the extent of cooperation across the health and social service interface. Based on the results of the document analysis, three ideal types for regulating interprofessional relations and skill mix in health care are delimited. For each ideal type, outcomes on service coordination and holistic service delivery are described. Comparisons of interprofessional relations are necessary for proactive health human resource policies. The proposed ideal-typical framework provides the means for in-depth comparisons of interprofessional relations in the health care workforce beyond of what is possible with directly comparable, but generic performance indicators.

  16. Pick a Color MARIA: Adaptive Sampling Enables the Rapid Identification of Complex Perovskite Nanocrystal Compositions with Defined Emission Characteristics.

    PubMed

    Bezinge, Leonard; Maceiczyk, Richard M; Lignos, Ioannis; Kovalenko, Maksym V; deMello, Andrew J

    2018-06-06

    Recent advances in the development of hybrid organic-inorganic lead halide perovskite (LHP) nanocrystals (NCs) have demonstrated their versatility and potential application in photovoltaics and as light sources through compositional tuning of optical properties. That said, due to their compositional complexity, the targeted synthesis of mixed-cation and/or mixed-halide LHP NCs still represents an immense challenge for traditional batch-scale chemistry. To address this limitation, we herein report the integration of a high-throughput segmented-flow microfluidic reactor and a self-optimizing algorithm for the synthesis of NCs with defined emission properties. The algorithm, named Multiparametric Automated Regression Kriging Interpolation and Adaptive Sampling (MARIA), iteratively computes optimal sampling points at each stage of an experimental sequence to reach a target emission peak wavelength based on spectroscopic measurements. We demonstrate the efficacy of the method through the synthesis of multinary LHP NCs, (Cs/FA)Pb(I/Br)3 (FA = formamidinium) and (Rb/Cs/FA)Pb(I/Br)3 NCs, using MARIA to rapidly identify reagent concentrations that yield user-defined photoluminescence peak wavelengths in the green-red spectral region. The procedure returns a robust model around a target output in far fewer measurements than systematic screening of parametric space and additionally enables the prediction of other spectral properties, such as full-width at half-maximum and intensity, for conditions yielding NCs with similar emission peak wavelengths.
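
    MARIA itself combines Kriging regression with adaptive sampling; a generic Gaussian-process stand-in on a made-up response surface (not the published algorithm or chemistry) illustrates the iterate-measure-refit loop:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy response: emission peak (nm) as a function of two normalized reagent flows.
def emission_peak(x):
    return 520.0 + 150.0 * x[:, 0] - 60.0 * x[:, 1]      # stand-in, not real chemistry

def next_sample(gp, target_nm, n_candidates=2000, rng=None):
    """Pick the candidate whose predicted peak is closest to the target,
    discounted by the predictive uncertainty (a simple acquisition rule)."""
    rng = rng or np.random.default_rng()
    cand = rng.uniform(0.0, 1.0, size=(n_candidates, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    return cand[np.argmin(np.abs(mu - target_nm) - sigma)]

rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(5, 2))                    # initial random conditions
y = emission_peak(X)
for _ in range(10):                                       # iterative refinement
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True).fit(X, y)
    x_new = next_sample(gp, target_nm=600.0, rng=rng)
    X = np.vstack([X, x_new])
    y = np.append(y, emission_peak(x_new[None, :]))
print(X[-1], y[-1])                                       # should approach 600 nm
```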

  17. Film processing investigation. [improved chemical mixing system

    NASA Technical Reports Server (NTRS)

    Kelly, J. L.

    1972-01-01

    The present operational chemical mixing system for the Photographic Technology Division is evaluated, and the limitations are defined in terms of meeting the present and programmed chemical supply and delivery requirements. A major redesign of the entire chemical mixing, storage, analysis, and supply system is recommended. Other requirements for immediate and future implementations are presented.

  18. Including scattering within the room acoustics diffusion model: An analytical approach.

    PubMed

    Foy, Cédric; Picaut, Judicaël; Valeau, Vincent

    2016-10-01

    Over the last 20 years, a statistical acoustic model has been developed to predict the reverberant sound field in buildings. This model is based on the assumption that the propagation of the reverberant sound field follows a transport process and, as an approximation, a diffusion process that can be easily solved numerically. This model, initially designed and validated for rooms with purely diffuse reflections, is extended in the present study to mixed reflections, with a proportion of specular and diffuse reflections defined by a scattering coefficient. The proposed mathematical developments lead to an analytical expression of the diffusion constant that is a function of the scattering coefficient, but also on the absorption coefficient of the walls. The results obtained with this extended diffusion model are then compared with the classical diffusion model, as well as with a sound particles tracing approach considering mixed wall reflections. The comparison shows a good agreement for long rooms with uniform low absorption (α = 0.01) and uniform scattering. For a larger absorption (α = 0.1), the agreement is moderate, due to the fact that the proposed expression of the diffusion coefficient does not vary spatially. In addition, the proposed model is for now limited to uniform diffusion and should be extended in the future to more general cases.
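
    For orientation, the classical (purely diffuse) room-acoustics diffusion model uses a diffusion constant built from the mean free path, D = λc/3 with λ = 4V/S; the scattering- and absorption-dependent coefficient derived in this paper is not reproduced here. A minimal sketch:

```python
def diffusion_constant(volume, surface, c=343.0):
    """Classical room-acoustics diffusion constant D = lambda * c / 3,
    with mean free path lambda = 4V/S (purely diffuse reflections assumed)."""
    mean_free_path = 4.0 * volume / surface
    return mean_free_path * c / 3.0

# long room, 30 m x 5 m x 3 m (illustrative geometry)
V = 30.0 * 5.0 * 3.0
S = 2 * (30 * 5 + 30 * 3 + 5 * 3)
print(diffusion_constant(V, S))        # m^2/s
```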

  19. Residual estuarine circulation in the Mandovi, a monsoonal estuary: A three-dimensional model study

    NASA Astrophysics Data System (ADS)

    Vijith, V.; Shetye, S. R.; Baetens, K.; Luyten, P.; Michael, G. S.

    2016-05-01

    Observations in the Mandovi estuary, located on the central west coast of India, have shown that the salinity field in this estuary is remarkably time-dependent and passes through all possible states of stratification (riverine, highly-stratified, partially-mixed and well-mixed) during a year as the runoff into the estuary varies from high values (∼1000 m3 s-1) in the wet season to negligible values (∼1 m3 s-1) at end of the dry season. The time-dependence is forced by the Indian Summer Monsoon (ISM) and hence the estuary is referred to as a monsoonal estuary. In this paper, we use a three-dimensional, open source, hydrodynamic, numerical model to reproduce the observed annual salinity field in the Mandovi. We then analyse the model results to define characteristics of residual estuarine circulation in the Mandovi. Our motivation to study this aspect of the Mandovi's dynamics is derived from the following three considerations. First, residual circulation is important to long-term evolution of an estuary; second, we need to understand how this circulation responds to strongly time-dependent runoff forcing experienced by a monsoonal estuary; and third, Mandovi is among the best studied estuaries that come under the influence of ISM, and has observations that can be used to validate the model. Our analysis shows that the residual estuarine circulation in the Mandovi shows four distinct phases during a year: a river like flow that is oriented downstream throughout the estuary; a salt-wedge type circulation, with flow into the estuary near the bottom and out of the estuary near the surface restricted close to the mouth of the estuary; circulation associated with a partially-mixed estuary; and, the circulation associated with a well-mixed estuary. Dimensional analysis of the field of residual circulation helped us to establish the link between strength of residual circulation at a location and magnitude of river runoff and rate of mixing at the location. We then derive an analytical expression that approximates exchange velocity (bottom velocity minus near freshwater velocity at a location) as a function of freshwater velocity and rate of mixing.

  20. Variability of particle number emissions from diesel and hybrid diesel-electric buses in real driving conditions.

    PubMed

    Sonntag, Darrell B; Gao, H Oliver; Holmén, Britt A

    2008-08-01

    A linear mixed model was developed to quantify the variability of particle number emissions from transit buses tested in real-world driving conditions. Two conventional diesel buses and two hybrid diesel-electric buses were tested throughout 2004 under different aftertreatments, fuels, drivers, and bus routes. The mixed model controlled for the confounding influence of factors inherent to on-board testing. Statistical tests showed that particle number emissions varied significantly according to the aftertreatment, bus route, driver, bus type, and daily temperature, with only minor variability attributable to differences between fuel types. The daily setup and operation of the sampling equipment (electrical low pressure impactor) and mini-dilution system contributed 30-84% of the total random variability of particle measurements among tests with diesel oxidation catalysts. By controlling for the sampling-day variability, the model better defined the differences in particle emissions among bus routes. In contrast, the low particle number emissions measured with diesel particle filters (decreased by over 99%) did not vary according to operating conditions or bus type but did vary substantially with ambient temperature.
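
    A generic sketch of a linear mixed model of this kind, with fixed effects for aftertreatment, route and temperature and a random intercept for sampling day, fitted on synthetic stand-in data (not the study's measurements or final specification):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: the study's variable names, made-up values and effect sizes.
rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame({
    "aftertreatment": rng.choice(["DOC", "DPF"], n),
    "route": rng.choice(["urban", "suburban"], n),
    "temp": rng.uniform(0, 30, n),
    "day": rng.integers(0, 20, n).astype(str),        # sampling-day grouping factor
})
day_effect = dict(zip(map(str, range(20)), rng.normal(0, 0.3, 20)))
df["log_pn"] = (12.0 - 2.0 * (df["aftertreatment"] == "DPF")
                + 0.3 * (df["route"] == "urban") - 0.01 * df["temp"]
                + df["day"].map(day_effect) + rng.normal(0, 0.2, n))

# Random intercept for sampling day absorbs day-to-day setup variability.
model = smf.mixedlm("log_pn ~ aftertreatment + route + temp", df, groups=df["day"])
print(model.fit().summary())
```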

  1. Thermal fluids for CSP systems: Alkaline nitrates/nitrites thermodynamics modelling method

    NASA Astrophysics Data System (ADS)

    Tizzoni, A. C.; Sau, S.; Corsaro, N.; Giaconia, A.; D'Ottavi, C.; Licoccia, S.

    2016-05-01

    Molten salt (MS) mixtures are used for the transport (HTF, heat transfer fluid) and storage of heat (HSM, heat storage material) in concentrating solar power (CSP) plants. In general, alkaline and earth-alkaline nitrate/nitrite mixtures are employed. Along with its upper stability temperature, the melting point (liquidus point) of a MS mixture is one of the main parameters that defines its usefulness as a HTF and HSM medium. We would therefore like to develop a predictive model that allows us to forecast freezing points for different MS mixture compositions, thus circumventing the need to determine the phase diagram of each MS mixture experimentally. To model a ternary/quaternary phase diagram, parameters for the binary subsystems are to be determined, which is the purpose of the present work. In a binary system with components A and B, under phase equilibrium conditions (e.g. liquid and solid) the chemical potentials (partial molar Gibbs energies) of each component in each phase are equal. For an ideal solution the mixing (A+B) Gibbs energy is $\Delta G = \Delta H - T\Delta S = RT(x_A \ln x_A + x_B \ln x_B)$. In the case of non-ideal solid/liquid mixtures, such as the nitrate/nitrite compositions investigated in this work, the actual value differs from the ideal one by an amount defined as the "mixing" (mix) Gibbs free energy. If the resulting mixture is assumed, as indicated in the previous literature, to follow a "regular solution" model, where all the non-ideality is considered to be included in the enthalpy of mixing, then, considering for instance component A, $\Delta G \equiv 0 = (\Delta H_A - T\Delta S_A) + (\Delta\bar{H}_{mix,A}^{L} - T\Delta\bar{S}_{mix,A}^{L}) - (\Delta\bar{H}_{mix,A}^{S} - T\Delta\bar{S}_{mix,A}^{S})$, where the partial molar quantities can be calculated from the total values through the Gibbs-Duhem equation, $\Delta\bar{H}_{mix,A}^{L} = \Delta H_{mix}^{L} - x_B^{L}\,d\Delta H_{mix}^{L}/dx_B^{L}$ and $\Delta\bar{H}_{mix,A}^{S} = \Delta H_{mix}^{S} - x_B^{S}\,d\Delta H_{mix}^{S}/dx_B^{S}$, and, in general, the mixing enthalpy of the liquid and solid phases can be expressed as a function of the mole fraction: $\Delta H_{mix}^{L} = x_A^{L} x_B^{L}(a_1 + b_1 x_A^{L} + c_1 x_A^{L} x_B^{L})$ and $\Delta H_{mix}^{S} = x_A^{S} x_B^{S}(a_2 + b_2 x_A^{S} + c_2 x_A^{S} x_B^{S})$. From the latter expressions it is possible to model the phase diagram of a binary mixture using the pairs of parameters a, b and c. To calculate those coefficients, a method commonly employed in the literature is to measure the mixing enthalpies, or to use a reported value of the enthalpy of mixing (for instance for the liquid state) and calculate the other one from the phase diagram points. A direct ΔH_mix measurement (in the solid or liquid phase) can be difficult to carry out using the common DSC equipment generally present in research laboratories; such determinations can, in principle, be performed, but the obtained data are affected by large experimental errors. On the other hand, it is possible to obtain values with great precision for the algebraic sum of the mixing enthalpies and for the phase diagram trend. For this reason, only the phase diagrams are proposed to be used to calculate the a, b, c parameters, and, subsequently, the total (liquid-solid algebraic sum) enthalpy of mixing is employed to verify their validity. To this end, a C++ code was developed and used. Three binary mixtures were considered by combining NaNO3, KNO3 and NaNO2.
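
    A minimal sketch of the regular-solution expressions above, the ideal mixing term plus the polynomial ΔH_mix, with placeholder a, b, c coefficients (not the fitted values from this work):

```python
import numpy as np

R = 8.314  # J/(mol K)

def dH_mix(x_a, a, b, c):
    """Mixing enthalpy polynomial dH = xA*xB*(a + b*xA + c*xA*xB)."""
    x_b = 1.0 - x_a
    return x_a * x_b * (a + b * x_a + c * x_a * x_b)

def dG_mix_regular(x_a, T, a, b, c):
    """Regular-solution mixing Gibbs energy: dG = dH_mix + RT(xA ln xA + xB ln xB),
    i.e. ideal mixing entropy with all non-ideality lumped into dH_mix."""
    x_b = 1.0 - x_a
    ideal = R * T * (x_a * np.log(x_a) + x_b * np.log(x_b))
    return dH_mix(x_a, a, b, c) + ideal

# Placeholder coefficients for, e.g., a liquid NaNO3-KNO3 solution (not fitted values).
x = np.linspace(0.01, 0.99, 9)
print(dG_mix_regular(x, T=550.0, a=-1500.0, b=300.0, c=0.0).round(1))
```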

  2. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LES to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
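
    As a schematic illustration of the training step (not the authors' architecture, database, or input set), the sketch below fits a small neural network to a synthetic surrogate for a filtered-DNS subgrid scalar flux using scikit-learn; all data are randomly generated stand-ins.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Conceptual sketch only: in the paper the training set comes from filtered
    # DNS fields; here random surrogate "resolved" quantities stand in for them.
    rng = np.random.default_rng(0)
    n = 5000
    grad_c = rng.normal(size=(n, 3))       # resolved scalar gradient (stand-in)
    strain = rng.normal(size=(n, 3))       # resolved strain-rate quantities (stand-in)
    X = np.hstack([grad_c, strain])        # candidate input parameters

    # Surrogate "exact" subgrid scalar-flux component (a made-up nonlinear target)
    y = -0.1 * grad_c[:, 0] * np.abs(strain[:, 0]) + 0.01 * rng.normal(size=n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    ann.fit(X_tr, y_tr)
    print("R^2 on held-out data:", ann.score(X_te, y_te))
    ```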

  3. Integrated payload and mission planning, phase 3. Volume 2: Logic/Methodology for preliminary grouping of spacelab and mixed cargo payloads

    NASA Technical Reports Server (NTRS)

    Rodgers, T. E.; Johnson, J. F.

    1977-01-01

    The logic and methodology for a preliminary grouping of Spacelab and mixed-cargo payloads is proposed in a form that can be readily coded into a computer program by NASA. The logic developed for this preliminary cargo grouping analysis is summarized. Principal input data include the NASA Payload Model, payload descriptive data, Orbiter and Spacelab capabilities, and NASA guidelines and constraints. The first step in the process is a launch interval selection in which the time interval for payload grouping is identified. Logic flow steps are then taken to group payloads and define flight configurations based on criteria that include dedication, volume, area, orbital parameters, pointing, g-level, mass, center of gravity, energy, power, and crew time.
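
    The grouping step lends itself to a simple packing illustration. The toy sketch below applies a first-fit heuristic with mass and volume limits only; the actual logic also covers orbit, pointing, g-level, power, energy, center of gravity, and crew time, and all numbers here are hypothetical.

    ```python
    # Toy first-fit grouping: payloads are packed into flight configurations
    # subject to simple per-flight mass and volume capacities (hypothetical values).
    payloads = [
        {"name": "P1", "mass": 4000, "volume": 30},
        {"name": "P2", "mass": 9000, "volume": 55},
        {"name": "P3", "mass": 2500, "volume": 20},
        {"name": "P4", "mass": 7000, "volume": 60},
        {"name": "P5", "mass": 1500, "volume": 10},
    ]
    MASS_LIMIT, VOLUME_LIMIT = 14000, 90

    flights = []
    for p in sorted(payloads, key=lambda q: q["mass"], reverse=True):
        for f in flights:
            if (f["mass"] + p["mass"] <= MASS_LIMIT and
                    f["volume"] + p["volume"] <= VOLUME_LIMIT):
                f["items"].append(p["name"])
                f["mass"] += p["mass"]
                f["volume"] += p["volume"]
                break
        else:
            # no existing flight can take this payload: open a new configuration
            flights.append({"items": [p["name"]], "mass": p["mass"], "volume": p["volume"]})

    for i, f in enumerate(flights, 1):
        print(f"Flight {i}: {f['items']}  mass={f['mass']}  volume={f['volume']}")
    ```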

  4. Lurasidone for major depressive disorder with mixed features and irritability: a post-hoc analysis.

    PubMed

    Swann, Alan C; Fava, Maurizio; Tsai, Joyce; Mao, Yongcai; Pikalov, Andrei; Loebel, Antony

    2017-04-01

    The aim of this post-hoc analysis was to evaluate the efficacy of lurasidone in treating major depressive disorder (MDD) with mixed features including irritability. The data in this analysis were derived from a study of patients meeting DSM-IV-TR criteria for unipolar MDD, with a Montgomery-Åsberg Depression Rating Scale (MADRS) total score ≥26, presenting with two or three protocol-defined manic symptoms, and who were randomized to 6 weeks of double-blind treatment with either lurasidone 20-60 mg/d (n=109) or placebo (n=100). We defined "irritability" as a score ≥2 on both the Young Mania Rating Scale (YMRS) irritability item (#5) and the disruptive-aggressive item (#9). Endpoint change in the MADRS and YMRS items 5 and 9 were analyzed using a mixed model for repeated measures for patients with and without irritability. Some 20.7% of patients met the criteria for irritability. Treatment with lurasidone was associated with a significant week 6 change vs. placebo in MADRS score in both patients with (-22.6 vs. -9.5, p<0.0001, effect size [ES]=1.4) and without (-19.9 vs. -13.8, p<0.0001, ES=0.7) irritability. In patients with irritable features, treatment with lurasidone was associated with significant week 6 changes vs. placebo in both the YMRS irritability item (-1.4 vs. -0.3, p=0.0012, ES=1.0) and the YMRS disruptive-aggressive item (-1.0 vs. -0.3, p=0.0002, ES=1.2). In our post-hoc analysis of a randomized, placebo-controlled, 6-week trial, treatment with lurasidone significantly improved depressive symptoms in MDD patients with mixed features including irritability. In addition, irritability symptoms significantly improved in patients treated with lurasidone.

  5. Deriving Surface NO2 Mixing Ratios from DISCOVER-AQ ACAM Observations: A Method to Assess Surface NO2 Spatial Variability

    NASA Astrophysics Data System (ADS)

    Silverman, M. L.; Szykman, J.; Chen, G.; Crawford, J. H.; Janz, S. J.; Kowalewski, M. G.; Lamsal, L. N.; Long, R.

    2015-12-01

    Studies have shown that satellite NO2 columns are closely related to ground level NO2 concentrations, particularly over polluted areas. This provides a means to assess surface level NO2 spatial variability over a broader area than what can be monitored from ground stations. The characterization of surface level NO2 variability is important to understand air quality in urban areas, emissions, health impacts, photochemistry, and to evaluate the performance of chemical transport models. Using data from the NASA DISCOVER-AQ campaign in Baltimore/Washington we calculate NO2 mixing ratios from the Airborne Compact Atmospheric Mapper (ACAM), through four different methods to derive surface concentration from column measurements. High spectral resolution lidar (HSRL) mixed layer heights, vertical P3B profiles, and CMAQ vertical profiles are used to scale ACAM vertical column densities. The derived NO2 mixing ratios are compared to EPA ground measurements taken at Padonia and Edgewood. We find similar results from scaling with HSRL mixed layer heights and normalized P3B vertical profiles. The HSRL mixed layer heights are then used to scale ACAM vertical column densities across the DISCOVER-AQ flight pattern to assess spatial variability of NO2 over the area. This work will help define the measurement requirements for future satellite instruments.
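
    A rough, hedged illustration of the column-to-surface scaling idea (my own back-of-the-envelope calculation, not the campaign's algorithms): if the NO2 column is assumed to be confined to a well-mixed boundary layer of depth H, a surface mixing ratio follows from dividing the column by the number of air molecules in that layer.

    ```python
    # Rough illustration: VMR ≈ VCD / (n_air * H), with n_air = P / (kB * T).
    kB = 1.380649e-23          # Boltzmann constant [J/K]
    P, T = 101325.0, 298.0     # surface pressure [Pa] and temperature [K]
    n_air = P / (kB * T)       # air number density [molecules/m^3]

    vcd = 8.0e15               # NO2 vertical column density [molecules/cm^2], hypothetical
    H = 1200.0                 # HSRL-style mixed-layer height [m], hypothetical

    vcd_m2 = vcd * 1.0e4       # convert column to molecules/m^2
    vmr = vcd_m2 / (n_air * H) # dimensionless mole fraction
    print(f"Estimated surface NO2 ≈ {vmr * 1e9:.1f} ppbv")
    ```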

  6. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which the pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size, or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) that it can easily handle missing data by applying likelihood-based ignorable analyses under the missing at random assumption and (2) that it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Carbon deposition model for oxygen-hydrocarbon combustion. Task 6: Data analysis and formulation of an empirical model

    NASA Technical Reports Server (NTRS)

    Makel, Darby B.; Rosenberg, Sanders D.

    1990-01-01

    The formation and deposition of carbon (soot) was studied in the Carbon Deposition Model for Oxygen-Hydrocarbon Combustion Program. An empirical, 1-D model for predicting soot formation and deposition in LO2/hydrocarbon gas generators/preburners was derived. The experimental data required to anchor the model were identified and a test program to obtain the data was defined. In support of the model development, cold flow mixing experiments using a high injection density injector were performed. The purpose of this investigation was to advance the state-of-the-art in LO2/hydrocarbon gas generator design by developing a reliable engineering model of gas generator operation. The model was formulated to account for the influences of fluid dynamics, chemical kinetics, and gas generator hardware design on soot formation and deposition.

  8. Effective algorithm for solving complex problems of production control and of material flows control of industrial enterprise

    NASA Astrophysics Data System (ADS)

    Mezentsev, Yu A.; Baranova, N. V.

    2018-05-01

    A universal economic and mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. The claimed universality allows both production components, including limitations on the ways of converting raw materials and components into sold goods, and resource and logical restrictions on input and output material flows to be taken into account at the system level. The presented model and the generated control problems are developed within the framework of a unified approach that allows logical conditions of any complexity to be implemented and the corresponding formal optimization tasks to be defined. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming tasks are shown to belong to the class NP. An approximate polynomial algorithm is proposed for solving the posed mixed-programming optimization tasks of realistic dimension and high computational complexity. Results of testing the algorithm on tasks over a wide range of dimensions are presented.
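
    As a toy illustration of the kind of mixed (binary plus continuous) decision structure involved, and not the authors' model, the sketch below sets up a small production-planning problem in PuLP with logical set-up variables and a resource constraint.

    ```python
    # Toy mixed-integer sketch: choose which products to run (binary set-up
    # decisions) and how much to produce (continuous amounts) under a raw-material
    # limit, maximising margin. Requires the PuLP package; all numbers hypothetical.
    from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

    products = ["A", "B", "C"]
    margin   = {"A": 40, "B": 55, "C": 30}      # profit per unit
    material = {"A": 2.0, "B": 3.5, "C": 1.5}   # raw material per unit
    setup    = {"A": 200, "B": 300, "C": 100}   # fixed set-up cost
    MATERIAL_AVAILABLE, BIG_M = 500.0, 1000.0

    x = {p: LpVariable(f"qty_{p}", lowBound=0) for p in products}     # continuous
    y = {p: LpVariable(f"run_{p}", cat=LpBinary) for p in products}   # logical

    prob = LpProblem("production_plan", LpMaximize)
    prob += lpSum(margin[p] * x[p] - setup[p] * y[p] for p in products)   # objective
    prob += lpSum(material[p] * x[p] for p in products) <= MATERIAL_AVAILABLE
    for p in products:
        prob += x[p] <= BIG_M * y[p]    # produce only if the set-up decision is made

    prob.solve()
    for p in products:
        print(p, "run:", int(value(y[p])), "qty:", value(x[p]))
    ```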

  9. Estimating growth and yield of mixed stands

    Treesearch

    Stephen R. Shifley; Burnell C. Fischer

    1989-01-01

    A mixed stand is defined as one in which no single species comprises more than 80 percent of the stocking. The growth estimation methods described below can be used not only in mixed stands but in almost any stand, regardless of species composition, age structure, or size structure. The methods described are necessary to accommodate the complex species mixtures and...

  10. Elastic-viscoplastic modeling of soft biological tissues using a mixed finite element formulation based on the relative deformation gradient.

    PubMed

    Weickenmeier, J; Jabareen, M

    2014-11-01

    The characteristic highly nonlinear, time-dependent, and often inelastic material response of soft biological tissues can be expressed in a set of elastic-viscoplastic constitutive equations. The specific elastic-viscoplastic model for soft tissues proposed by Rubin and Bodner (2002) is generalized with respect to the constitutive equations for the scalar quantity of the rate of inelasticity and the hardening parameter in order to represent a general framework for elastic-viscoplastic models. A strongly objective integration scheme and a new mixed finite element formulation were developed based on the introduction of the relative deformation gradient, i.e. the deformation mapping between the last converged and current configurations. The numerical implementation of both the generalized framework and the specific Rubin and Bodner model is presented. As an example of a challenging application of the new model equations, the mechanical response of facial skin tissue is characterized through an experimental campaign based on the suction method. The measurement data are used for the identification of a suitable set of model parameters that well represents the experimentally observed tissue behavior. Two different measurement protocols were defined to address specific tissue properties with respect to the instantaneous tissue response, inelasticity, and tissue recovery. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
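
    A minimal sketch of the kernel idea for genomic data, assuming a simple linear (genomic-relationship-style) kernel built from a 0/1/2 genotype matrix; the positive semidefiniteness requirement mentioned above is checked through the eigenvalues. The genotypes below are random placeholders.

    ```python
    import numpy as np

    # Linear kernel between subjects from a genotype matrix, plus a check that
    # the resulting matrix is positive semidefinite, as required of a valid kernel.
    rng = np.random.default_rng(1)
    G = rng.integers(0, 3, size=(50, 200)).astype(float)   # 50 subjects x 200 markers

    Gc = G - G.mean(axis=0)            # centre each marker
    K = Gc @ Gc.T / G.shape[1]         # linear kernel: subject-by-subject similarity

    eigvals = np.linalg.eigvalsh(K)
    print("smallest eigenvalue:", eigvals.min())       # should be >= 0 up to round-off
    print("kernel is PSD:", eigvals.min() > -1e-8)
    ```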

  12. DSN Array Simulator

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; Mackey, Ryan

    2008-01-01

    The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the number of spacecraft tracked and changes in communication demand for up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes), and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling, for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.

  13. Thrust modeling for hypersonic engines

    NASA Technical Reports Server (NTRS)

    Riggins, D. W.; Mcclinton, C. R.

    1995-01-01

    Expressions for the thrust losses of a scramjet engine are developed in terms of irreversible entropy increases and the degree of incomplete combustion. A method is developed which allows the calculation of the lost vehicle thrust due to different loss mechanisms within a given flow-field. This analysis demonstrates clearly the trade-off between mixing enhancement and resultant increased flow losses in scramjet combustors. An engine effectiveness parameter is defined in terms of thrust loss. Exergy and the thrust-potential method are related and compared.
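
    As a crude, hedged scaling (my own illustration, not the paper's derivation), one can connect an irreversible entropy rise to a thrust decrement by equating the exergy destruction rate to the lost thrust power at flight speed; all numbers below are hypothetical.

    ```python
    # Crude scaling sketch: exergy destruction rate = m_dot * T0 * ds_irr; equating
    # that lost work rate to (thrust decrement) x (flight speed) gives a first-order
    # estimate dF_loss ≈ m_dot * T0 * ds_irr / V_flight. Illustrative numbers only.
    m_dot    = 10.0      # engine mass flow [kg/s]
    T0       = 220.0     # ambient (reference) temperature [K]
    ds_irr   = 150.0     # specific entropy rise from mixing/shock/friction losses [J/(kg K)]
    V_flight = 2400.0    # flight speed [m/s]

    exergy_destruction_rate = m_dot * T0 * ds_irr          # [W]
    thrust_loss = exergy_destruction_rate / V_flight       # [N]
    print(f"Exergy destruction ≈ {exergy_destruction_rate / 1e3:.0f} kW")
    print(f"Estimated thrust decrement ≈ {thrust_loss:.0f} N")
    ```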

  14. A Comparison of Analytical and Numerical Methods for Modeling Dissolution and Other Reactions in Transport Limited Systems

    NASA Astrophysics Data System (ADS)

    Hochstetler, D. L.; Kitanidis, P. K.

    2009-12-01

    Modeling the transport of reactive species is a computationally demanding problem, especially in complex subsurface media, where it is crucial to improve understanding of geochemical processes and the fate of groundwater contaminants. In most of these systems, reactions are inherently fast and actual rates of transformations are limited by the slower physical transport mechanisms. There have been efforts to reformulate multi-component reactive transport problems into systems that are simpler and less demanding to solve. These reformulations include defining conservative species and decoupling of reactive transport equations so that fewer of them must be solved, leaving mostly conservative equations for transport [e.g., De Simoni et al., 2005; De Simoni et al., 2007; Kräutle and Knabner, 2007; Molins et al., 2004]. Complex and computationally cumbersome numerical codes used to solve such problems have also caused De Simoni et al. [2005] to develop more manageable analytical solutions. Furthermore, this work evaluates reaction rates and has reaffirmed that the mixing rate, ∇u^T D ∇u, where u is a solute concentration and D is the dispersion tensor, as defined by Kitanidis [1994], is an important and sometimes dominant factor in determining reaction rates. Thus, mixing of solutions is often reaction-limiting. We will present results from analytical and computational modeling of multi-component reactive-transport problems. The results have applications to dissolution of solid boundaries (e.g., calcite), dissolution of non-aqueous phase liquids (NAPLs) in separate phases, and mixing of saltwater and freshwater (e.g. saltwater intrusion in coastal carbonate aquifers). We quantify reaction rates, compare numerical and analytical results, and analyze under which circumstances each approach is most effective for a given problem. References: De Simoni, M., et al. (2005), A procedure for the solution of multicomponent reactive transport problems, Water Resources Research, 41(W11410). De Simoni, M., et al. (2007), A mixing ratios-based formulation for multicomponent reactive transport, Water Resources Research, 43(W07419). Kitanidis, P. (1994), The Concept of the Dilution Index, Water Resources Research, 30(7), 2011-2026. Kräutle, S., and P. Knabner (2007), A reduction scheme for coupled multicomponent transport-reaction problems in porous media: Generalization to problems with heterogeneous equilibrium reactions, Water Resources Research, 43. Molins, S., et al. (2004), A formulation for decoupling components in reactive transport problems, Water Resources Research, 40, 13.
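
    A short sketch of the mixing-rate quantity cited above, ∇u^T D ∇u, evaluated on a grid for a synthetic concentration field; the dispersion tensor and plume parameters are placeholders.

    ```python
    import numpy as np

    # Mixing rate ∇u^T · D · ∇u (Kitanidis, 1994) on a 2-D grid for a Gaussian
    # concentration field; D and the plume geometry are illustrative placeholders.
    nx, ny, dx = 200, 100, 0.5                      # grid size and spacing [m]
    x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
    u = np.exp(-((x - 30.0) ** 2 / 50.0 + (y - 25.0) ** 2 / 10.0))   # concentration

    D = np.array([[1e-1, 0.0],                      # dispersion tensor [m^2/s]
                  [0.0, 1e-2]])

    du_dx, du_dy = np.gradient(u, dx, dx)
    mixing_rate = (D[0, 0] * du_dx ** 2 +
                   2.0 * D[0, 1] * du_dx * du_dy +
                   D[1, 1] * du_dy ** 2)            # ∇u^T D ∇u at every node

    print("peak mixing rate:", mixing_rate.max())
    print("domain-integrated mixing rate:", mixing_rate.sum() * dx * dx)
    ```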

  15. Developing Quality Indicators and Auditing Protocols from Formal Guideline Models: Knowledge Representation and Transformations

    PubMed Central

    Advani, Aneel; Goldstein, Mary; Shahar, Yuval; Musen, Mark A.

    2003-01-01

    Automated quality assessment of clinician actions and patient outcomes is a central problem in guideline- or standards-based medical care. In this paper we describe a model representation and algorithm for deriving structured quality indicators and auditing protocols from formalized specifications of guidelines used in decision support systems. We apply the model and algorithm to the assessment of physician concordance with a guideline knowledge model for hypertension used in a decision-support system. The properties of our solution include the ability to derive automatically (1) context-specific and (2) case-mix-adjusted quality indicators that (3) can model global or local levels of detail about the guideline (4) parameterized by defining the reliability of each indicator or element of the guideline. PMID:14728124

  16. The titration of carboxyl-terminated monolayers revisited: in situ calibrated fourier transform infrared study of well-defined monolayers on silicon.

    PubMed

    Aureau, D; Ozanam, F; Allongue, P; Chazalviel, J-N

    2008-09-02

    The acid-base equilibrium at the surface of well-defined mixed carboxyl-terminated/methyl-terminated monolayers grafted on silicon (111) has been investigated using in situ calibrated infrared spectroscopy (attenuated total reflectance (ATR)) in the range of 900-4000 cm-1. Spectra of surfaces in contact with electrolytes of various pH provide a direct observation of the COOH <--> COO- conversion process. Quantitative analysis of the spectra shows that ionization of the carboxyl groups starts around pH 6 and extends over more than 6 pH units: approximately 85% ionization is measured at pH 11 (at higher pH, the layers become damaged). Observations are consistently accounted for by a single acid-base equilibrium and discussed in terms of change in ion solvation at the surface and electrostatic interactions between surface charges. The latter effect, which appears to be the main limitation, is qualitatively accounted for by a simple model taking into account the change in the Helmholtz potential associated with the surface charge. Furthermore, comparison of calculated curves with experimental titration curves of mixed monolayers suggests that acid and alkyl chains are segregated in the monolayer.
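
    A minimal sketch of the kind of simple electrostatically corrected acid-base model described above, assuming a single intrinsic pKa, a Helmholtz capacitance linking surface charge to surface potential, and hypothetical parameter values chosen for illustration rather than fitted to the data.

    ```python
    import numpy as np

    # Single acid-base equilibrium with an electrostatic self-limitation:
    #   theta/(1-theta) = 10**(pH - pKa) * exp(e*psi/(kB*T)),  psi = -e*theta*Gamma/C_H
    # Parameter values are hypothetical, not the paper's fit.
    e, kB, T = 1.602e-19, 1.381e-23, 298.0
    pKa   = 5.5        # intrinsic pKa of the surface COOH groups
    Gamma = 1.0e18     # surface density of acid groups [1/m^2]
    C_H   = 0.2        # Helmholtz capacitance [F/m^2]

    def ionisation_fraction(pH, n_iter=500, damp=0.05):
        theta = 0.0
        for _ in range(n_iter):                      # damped fixed-point iteration
            psi = -e * theta * Gamma / C_H           # surface potential [V]
            K = 10.0 ** (pH - pKa) * np.exp(e * psi / (kB * T))
            theta += damp * (K / (1.0 + K) - theta)
        return theta

    for pH in range(4, 12):
        print(f"pH {pH:2d}: ionised fraction = {ionisation_fraction(pH):.2f}")
    ```

    With these placeholder values the ionisation spreads over many pH units, which qualitatively reproduces the broadened titration behaviour described above, although the absolute numbers differ from the measurements.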

  17. Flux-related and Critical Dilution Indices: Quantitative Indicators of Mixing and Mixing-controlled Reactions in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Chiogna, G.; Cirpka, O. A.; Grathwohl, P.; Rolle, M.

    2010-12-01

    The correct quantification of mixing is of utmost importance for modeling reactive transport in porous media and, thereby, for assessing the fate and transport of contaminants in the subsurface. An appropriate measure of mixing in heterogeneous porous formations should correctly capture the effects on mixing intensity of various processes at different scales, such as local dispersion and the effect of mixing enhancement due to heterogeneities. In this work, we use the concept of the flux-related dilution index as a measure of transverse mixing. This quantity expresses the dilution of the mass flux of a tracer solution over the total discharge of the system and is particularly suited to address problems where a compound is continuously injected into the domain. We focus our attention on two-dimensional systems under steady-state flow conditions and investigate both conservative and reactive transport in both homogeneous and heterogeneous porous media at different scales. For mixing-controlled reactive systems, we introduce and illustrate the concept of the critical dilution index, which represents the amount of mixing required for complete degradation of a continuously emitted plume undergoing decay upon mixing with ambient water. We perform two-dimensional numerical experiments at bench and field scales in homogeneous and heterogeneous conductivity fields. These numerical simulations show that the flux-related dilution index quantifies mixing and that the concept of the critical dilution index is a useful measure to relate the mixing of conservative tracers to mixing-controlled turnover of reactive compounds. Finally, we define an effective transverse dispersion coefficient that captures the main characteristics of the physical mechanisms controlling reactive transport at the field scale. Furthermore, we investigate the influence of compound-specific local transverse dispersion coefficients on the flux-related dilution index and on the critical dilution index.
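
    A hedged sketch of the dilution-index calculations, following my reading of the volumetric definition of Kitanidis (1994) and of the flux-related variant described above; the concentration and discharge profiles are synthetic placeholders.

    ```python
    import numpy as np

    # Volumetric dilution index: E = exp(-∫ p ln p dV), p = c / ∫ c dV.
    # Flux-related variant at one cross-section (my reading of the definition):
    #   E_Q(x) = exp(-∫ p_Q ln p_Q q_x dy),  p_Q = c / ∫ c q_x dy.
    ny, dy = 200, 0.1
    y = np.arange(ny) * dy
    c = np.exp(-((y - 10.0) ** 2) / (2 * 2.0 ** 2))    # cross-sectional concentration
    qx = 1.0e-5 * (1.0 + 0.3 * np.sin(0.5 * y))        # specific discharge q_x(y) [m/s]

    # flux-related dilution index at this cross-section
    mass_flux = np.sum(c * qx) * dy
    pQ = c / mass_flux
    EQ = np.exp(-np.sum(pQ * np.log(pQ) * qx) * dy)
    print("flux-related dilution index E_Q:", EQ)

    # volumetric dilution index of the same profile, for comparison
    mass = np.sum(c) * dy
    p = c / mass
    E = np.exp(-np.sum(p * np.log(p)) * dy)
    print("volumetric dilution index E:", E)
    ```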

  18. Guidelines and Parameter Selection for the Simulation of Progressive Delamination

    NASA Technical Reports Server (NTRS)

    Song, Kyongchan; Davila, Carlos G.; Rose, Cheryl A.

    2008-01-01

    Turon's methodology for determining optimal analysis parameters for the simulation of progressive delamination is reviewed. Recommended procedures for determining analysis parameters for efficient delamination growth predictions using the Abaqus/Standard cohesive element and relatively coarse meshes are provided for single and mixed-mode loading. The Abaqus cohesive element, COH3D8, and a user-defined cohesive element are used to develop finite element models of the double cantilever beam specimen, the end-notched flexure specimen, and the mixed-mode bending specimen to simulate progressive delamination growth in Mode I, Mode II, and mixed-mode fracture, respectively. The predicted responses are compared with their analytical solutions. The results show that for single-mode fracture, the predicted responses obtained with the Abaqus cohesive element correlate well with the analytical solutions. For mixed-mode fracture, it was found that the response predicted using COH3D8 elements depends on the damage evolution criterion that is used. The energy-based criterion overpredicts the peak loads and load-deflection response. The results predicted using a tabulated form of the BK criterion correlate well with the analytical solution and with the results predicted with the user-written element.
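
    The tabulated criterion referred to above is the Benzeggagh-Kenane (B-K) law; a minimal sketch of it is given below with placeholder toughness values (not the paper's material data).

    ```python
    # Benzeggagh-Kenane mixed-mode criterion:
    #   Gc = GIc + (GIIc - GIc) * (GII / (GI + GII))**eta
    # Default toughness values and eta below are placeholders for illustration.
    def bk_toughness(GI, GII, GIc=0.24, GIIc=0.74, eta=2.0):
        """Mixed-mode critical energy release rate [kJ/m^2] via the B-K law."""
        GT = GI + GII
        if GT == 0.0:
            return GIc
        return GIc + (GIIc - GIc) * (GII / GT) ** eta

    for mode_mix in (0.0, 0.2, 0.5, 0.8, 1.0):      # GII/GT ratio
        Gc = bk_toughness(1.0 - mode_mix, mode_mix)
        print(f"GII/GT = {mode_mix:.1f}  ->  Gc = {Gc:.3f} kJ/m^2")
    ```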

  19. Methane Provenance Determined by CH2D2 and 13CH3D Abundances

    NASA Astrophysics Data System (ADS)

    Kohl, I. E.; Giunta, T.; Warr, O.; Ash, J. L.; Ruffine, L.; Sherwood Lollar, B.; Young, E. D.

    2017-12-01

    Determining the provenance of naturally occurring methane gases is of major interest to energy companies and atmospheric climate modelers, among others. Bulk isotopic compositions and other geochemical tracers sometimes fail to provide definitive determinations of sources of methane due to complications from mixing and complicated chemical pathways of origin. Recent measurements of doubly-substituted isotopologues of methane, CH2D2 (UCLA) and 13CH3D (UCLA, CalTech, and MIT) have allowed for major improvements in sourcing natural methane gases. Early work has focused on formation temperatures obtained when the relative abundances of both doubly-substituted mass-18 species are consistent with internal equilibrium. When methane gases do not plot on the thermodynamic equilibrium curve in Δ12CH2D2 vs. Δ13CH3D space, temperatures determined from Δ13CH3D values alone are usually spurious, even when appearing reasonable. We find that the equilibrium case is actually rare and almost exclusive to thermogenic gases produced at temperatures exceeding 100°C. All other relevant methane production processes appear to generate gases that are not in isotopologue-temperature equilibrium. When gases show departures from equilibrium as determined by the relationship between CH2D2 and 13CH3D abundances, data fall within empirically defined fields representing formation pathways. These fields are thus far consistent between different geological settings and between lab experiments and natural samples. We have now defined fields for thermogenic gas production, microbial methanogenesis, low temperature abiotic (Sabatier) synthesis and higher temperature FTT synthesis. The majority of our natural methane data can be explained by mixing between end members originating within these production fields. Mixing can appear complex, resulting in both hyper-clumped and anti-clumped isotopologue abundances. In systems where mixtures dominate and end-members are difficult to sample, mixing models can be used to extrapolate end member compositions. Post-formation equilibration with time is evident in some cases and is most likely attributable to anaerobic methane oxidation. Large variation in CH2D2 abundances related to quantum tunneling and/or combinatorial effects is a crucial arbiter for methane sources.

  20. Pdf modeling for premixed turbulent combustion based on the properties of iso-concentration surfaces

    NASA Technical Reports Server (NTRS)

    Vervisch, L.; Kollmann, W.; Bray, K. N. C.; Mantel, T.

    1994-01-01

    In premixed turbulent flames the presence of intense mixing zones located in front of and behind the flame surface leads to a requirement to study the behavior of iso-concentration surfaces defined for all values of the progress variable (equal to unity in burnt gases and to zero in fresh mixtures). To support this study, some theoretical and mathematical tools devoted to level surfaces are first developed. Then a database of direct numerical simulations of turbulent premixed flames is generated and used to investigate the internal structure of the flame brush, and a new pdf model based on the properties of iso-surfaces is proposed.

  1. Challenging the unipolar-bipolar division: does mixed depression bridge the gap?

    PubMed

    Benazzi, Franco

    2007-01-30

    Mixed states, i.e., opposite polarity symptoms in the same mood episode, question the categorical splitting of mood disorders into bipolar disorders and unipolar depressive disorders, and may support a continuum between these disorders. The study aim was to determine whether there is a continuum between hypomania (defining BP-II) and depression (defining MDD), by testing mixed depression as a 'bridge' linking these two disorders. A correlation between intradepressive hypomanic symptoms and depressive symptoms could support such a continuum, but other explanations of a correlation are possible. A consecutive series of 389 BP-II and 261 MDD major depressive episode (MDE) outpatients was interviewed, cross-sectionally, with the Structured Clinical Interview for DSM-IV, the Hypomania Interview Guide (to assess intradepressive hypomanic symptoms) and the Family History Screen, by a mood disorders specialist psychiatrist in a private practice. Patients, who presented voluntarily for treatment of depression, were interviewed drug-free and had many subsequent follow-ups after treatment started. Mixed depression (depressive mixed state) was defined as the combination of MDE (depression) and three or more DSM-IV intradepressive hypomanic symptoms (elevated mood and increased self-esteem were always absent by definition), a definition validated by Akiskal and Benazzi. BP-II, versus MDD, had significantly lower age at onset, more recurrences, atypical and mixed depressions, bipolar family history, MDE symptoms and intradepressive hypomanic symptoms. Mixed depression was present in 64.5% of BP-II and in 32.1% of MDD (p=0.000). There was a significant correlation between the number of MDE symptoms and the number of intradepressive hypomanic symptoms. A dose-response relationship between frequency of mixed depression and number of MDE symptoms was also found. Differences in classic diagnostic validators could support a division between BP-II and MDD. Presence of intradepressive hypomanic symptoms by itself, and correlation between intradepressive hypomanic symptoms and depressive symptoms, could instead support a continuum. Other explanations of such a correlation are possible. Depending on the method used, a BP-II-MDD continuum could be supported or not.

  2. How should "ambidexterity" be estimated?

    PubMed

    Fagard, Jacqueline; Chapelain, Amandine; Bonnet, Philippe

    2015-01-01

    Weak and absent hand preferences have often been associated with developmental disorders or with cognitive functioning in the typical population. The results of different studies in this area, however, are not always coherent. One likely reason for discrepancies in findings is the diversity of cut-offs used to define ambidexterity and mixed right- and mixed left-handedness. Establishing and applying a common criterion would constitute an important step on the way to producing systematically comparable results. We thus decided to try to identify criteria for classifying individuals as ambidextrous, mixed right- or left-handed, or strong right- or left-handed. For that purpose, we first administered a handedness questionnaire to 716 individuals and performed multiple correspondence analyses to define handedness groups. Twenty-four participants were categorized as ambidextrous (3.3%), as opposed to mixed (29.2%) and strong (56%) right-handers, and to mixed (9.1%) and strong (2.4%) left-handers. We then compared this categorization with laterality index (LI)-based categories using different cut-offs and found that it was most correlated with LI cut-offs at -90, -30, +30 and +90, successively delimiting strong left-handedness, mixed left-handedness, ambidexterity (-30 to +30), mixed right-handedness and strong right-handedness. The characteristics of ambidextrous and lateralized individuals are also compared.
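
    A direct translation of the reported LI cut-offs into a classification helper is given below; the handling of the boundary values themselves (e.g. whether exactly -90 counts as strong or mixed left-handed) is my own assumption.

    ```python
    # Classify a laterality index (LI, from -100 extreme left to +100 extreme right)
    # using the cut-offs reported in the abstract: -90, -30, +30, +90.
    def handedness_category(li):
        if li < -90:
            return "strong left-handed"
        if li < -30:
            return "mixed left-handed"
        if li <= 30:
            return "ambidextrous"
        if li <= 90:
            return "mixed right-handed"
        return "strong right-handed"

    for li in (-95, -60, 0, 45, 98):
        print(li, "->", handedness_category(li))
    ```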

  3. Multipartite entangled states in particle mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blasone, M.; INFN Sezione di Napoli, Gruppo collegato di Salerno, Baronissi; Dell'Anno, F.

    2008-05-01

    In the physics of flavor mixing, the flavor states are given by superpositions of mass eigenstates. By using the occupation number to define a multiqubit space, the flavor states can be interpreted as multipartite mode-entangled states. By exploiting a suitable global measure of entanglement, based on the entropies related to all possible bipartitions of the system, we analyze the correlation properties of such states in the instances of three- and four-flavor mixing. Depending on the mixing parameters, and, in particular, on the values taken by the free phases, responsible for the CP-violation, entanglement concentrates in certain bipartitions. We quantify in detail the amount and the distribution of entanglement in the physically relevant cases of flavor mixing in quark and neutrino systems. By using the wave packet description for localized particles, we use the global measure of entanglement, suitably adapted for the instance of multipartite mixed states, to analyze the decoherence, induced by the free evolution dynamics, on the quantum correlations of stationary neutrino beams. We define a decoherence length as the distance associated with the vanishing of the coherent interference effects among massive neutrino states. We investigate the role of the CP-violating phase in the decoherence process.

  4. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

  5. Application of optimization technique for flood damage modeling in river system

    NASA Astrophysics Data System (ADS)

    Barman, Sangita Deb; Choudhury, Parthasarathi

    2018-04-01

    A river system is defined as a network of channels that drains different parts of a basin, uniting downstream to form a common outflow. Applying the various models found in the literature to a river system with multiple upstream inflows is not always straightforward and may involve a lengthy procedure; when data sets are unavailable, model calibration and application can become difficult. For a river system, flow modeling can be simplified to a large extent if the channel network is replaced by an equivalent single channel. In the present work, optimization model formulations based on equivalent flow are proposed, and a mixed-integer-programming-based pre-emptive goal programming model is applied to evaluate flood control alternatives for a real-life river system in India.

  6. Concentration transport calculations by an original C++ program with intermediate fidelity physics through user-defined buildings with an emphasis on release scenarios in radiological facilities

    NASA Astrophysics Data System (ADS)

    Sayre, George Anthony

    The purpose of this dissertation was to develop the C++ program Emergency Dose to calculate transport of radionuclides through indoor spaces using intermediate fidelity physics that provides improved spatial heterogeneity over well-mixed models such as MELCOR and much lower computation times than CFD codes such as FLUENT. Modified potential flow theory, which is an original formulation of potential flow theory with additions of turbulent jet and natural convection approximations, calculates spatially heterogeneous velocity fields that well-mixed models cannot predict. Other original contributions of MPFT are: (1) generation of high fidelity boundary conditions relative to well-mixed-CFD coupling methods (conflation), (2) broadening of potential flow applications to arbitrary indoor spaces, previously restricted to specific applications such as exhaust hood studies, and (3) great reduction of computation time relative to CFD codes without total loss of heterogeneity. Additionally, the Lagrangian transport module, which is discussed in Sections 1.3 and 2.4, showcases an ensemble-based formulation thought to be original to interior studies. Velocity and concentration transport benchmarks against analogous formulations in COMSOL produced favorable results, with discrepancies resulting from the tetrahedral meshing used in COMSOL outperforming the Cartesian method used by Emergency Dose. A performance comparison of the concentration transport modules against MELCOR showed that Emergency Dose held advantages over the well-mixed model, especially in scenarios with many interior partitions and varied source positions. A performance comparison of the velocity module against FLUENT showed that viscous drag provided the largest error between Emergency Dose and CFD velocity calculations, but that Emergency Dose's turbulent jets well approximated the corresponding CFD jets. Overall, Emergency Dose was found to provide a viable intermediate solution method for concentration transport with relatively low computation times.

  7. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    PubMed

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather than the coefficients. Moreover, use of cubic regression splines provides biological meaningful growth velocity and acceleration curves despite increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
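
    A hedged sketch of the modelling strategy (population cubic regression spline plus subject-specific random intercepts and slopes) using statsmodels and patsy, which is not the software reported by the authors; the data are simulated and the continuous AR(1) residual term of the final model is omitted for brevity.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated growth data: a nonlinear population curve plus child-specific
    # random intercepts and slopes and measurement noise.
    rng = np.random.default_rng(42)
    children, visits = 60, 8
    age = np.tile(np.linspace(0.1, 4.0, visits), children)
    cid = np.repeat(np.arange(children), visits)
    b0 = rng.normal(0, 2.0, children)            # child-specific intercepts
    b1 = rng.normal(0, 0.8, children)            # child-specific slopes
    height = (50 + 18 * np.sqrt(age)
              + b0[cid] + b1[cid] * age
              + rng.normal(0, 0.8, children * visits))
    df = pd.DataFrame({"height": height, "age": age, "child": cid})

    # Cubic regression spline (patsy's cr) for the population curve,
    # random intercept and random age slope per child.
    model = smf.mixedlm("height ~ cr(age, df=4)", data=df,
                        groups=df["child"], re_formula="~age")
    result = model.fit()
    print(result.summary())
    ```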

  8. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    PubMed Central

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts: a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793

  9. Campaign datasets for Biomass Burning Observation Project (BBOP)

    DOE Data Explorer

    Kleinman, Larry; Mei, Fan; Arnott, William; Buseck, Peter; Chand, Duli; Comstock, Jennifer; Dubey, Manvendra; Lawson, Paul; Long, Chuck; Onasch, Timothy; Sedlacek, Arthur; Senum, Gunnar; Shilling, John; Springston, Stephen; Tomlinson, Jason; Wang, Jian

    2014-04-24

    This field campaign will address multiple uncertainties in aerosol intensive properties, which are poorly represented in climate models, by means of aircraft measurements in biomass burning plumes. Key topics to be investigated are: 1. Aerosol mixing state and morphology 2. Mass absorption coefficients (MACs) 3. Chemical composition of non-refractory material associated with light-absorbing carbon (LAC) 4. Production rate of secondary organic aerosol (SOA) 5. Microphysical processes relevant to determining aerosol size distributions and single scattering albedo (SSA) 6. CCN activity. These topics will be investigated through measurements near active fires (0-5 hours downwind), where limited observations indicate rapid changes in aerosol properties, and in biomass burning plumes aged >5 hours. Aerosol properties and their time evolution will be determined as a function of fire type, defined according to fuel and the mix of flaming and smoldering combustion at the source.

  10. Dynamics and Hall-edge-state mixing of localized electrons in a two-channel Mach-Zehnder interferometer

    NASA Astrophysics Data System (ADS)

    Bellentani, Laura; Beggi, Andrea; Bordone, Paolo; Bertoni, Andrea

    2018-05-01

    We present a numerical study of a multichannel electronic Mach-Zehnder interferometer, based on magnetically driven noninteracting edge states. The electron path is defined by a full-scale potential landscape on the two-dimensional electron gas at filling factor 2, assuming initially only the first Landau level as filled. We tailor the two beamsplitters with 50 % interchannel mixing and measure Aharonov-Bohm oscillations in the transmission probability of the second channel. We perform time-dependent simulations by solving the electron Schrödinger equation through a parallel implementation of the split-step Fourier method, and we describe the charge-carrier wave function as a Gaussian wave packet of edge states. We finally develop a simplified theoretical model to explain the features observed in the transmission probability, and we propose possible strategies to optimize gate performances.
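
    A minimal one-dimensional sketch of the split-step Fourier scheme itself, for a free Gaussian wave packet with hbar = m = 1; the actual simulations are two-dimensional, include the device potential and magnetic field, and are parallelised.

    ```python
    import numpy as np

    # 1-D split-step Fourier (Strang splitting) for the Schrödinger equation.
    N, L = 1024, 200.0
    dx = L / N
    x = (np.arange(N) - N / 2) * dx
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

    sigma, k0 = 2.0, 1.5
    psi = np.exp(-(x + 40) ** 2 / (4 * sigma ** 2)) * np.exp(1j * k0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)          # normalise

    V = np.zeros_like(x)                                   # free particle here
    dt, nsteps = 0.05, 400
    half_V = np.exp(-0.5j * V * dt)                        # half potential step
    kin = np.exp(-0.5j * k ** 2 * dt)                      # full kinetic step

    for _ in range(nsteps):
        psi = half_V * psi
        psi = np.fft.ifft(kin * np.fft.fft(psi))
        psi = half_V * psi

    print("norm:", np.sum(np.abs(psi) ** 2) * dx)                  # stays ~1
    print("mean position:", np.sum(x * np.abs(psi) ** 2) * dx)     # ~ -40 + k0*t
    ```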

  11. Generic evolution of mixing in heterogeneous media

    NASA Astrophysics Data System (ADS)

    De Dreuzy, J.; Carrera, J.; Dentz, M.; Le Borgne, T.

    2011-12-01

    Mixing in heterogeneous media results from the competition between flow fluctuations and local scale diffusion. Flow fluctuations quickly create concentration contrasts and thus heterogeneity of the concentration field, which is slowly homogenized by local scale diffusion. Mixing first deviates from Gaussian mixing, which represents the potential mixing induced by spreading, before approaching it. This deviation fundamentally expresses the evolution of the interaction between spreading and local scale diffusion. We characterize it by the ratio γ of the non-Gaussian to the Gaussian mixing states. We define the Gaussian mixing state as the integrated squared concentration of the Gaussian plume that has the same longitudinal dispersion as the real plume. The non-Gaussian mixing state is the difference between the overall mixing state defined as the integrated squared concentration and the Gaussian mixing state. The main advantage of this definition is to use the full knowledge previously acquired on dispersion for characterizing mixing even when the solute concentration field is highly non-Gaussian. Using high precision numerical simulations, we show that γ quickly increases, peaks and slowly decreases. γ can be derived from two scales characterizing spreading and local mixing, at least for large flux-weighted solute injection conditions into classically log-normal Gaussian correlated permeability fields. The spreading scale is directly related to the longitudinal dispersion. The local mixing scale is the largest scale over which solute concentrations can be considered locally uniform. More generally, beyond the characteristics of its maximum, γ turns out to have a highly generic scaling form. Its fast increase and slow decrease depend neither on the heterogeneity level, nor on the ratio of diffusion to advection, nor on the injection conditions. They might even not depend on the particularities of the flow fields, as the same generic features also prevail for Taylor dispersion. This generic characterization of mixing can offer new ways to set up transport equations that honor not only advection and spreading (dispersion), but also mixing.
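
    A short sketch of the mixing-state ratio γ as defined above, computed for a synthetic double-peaked plume standing in for a heterogeneous-media result; the Gaussian reference plume carries the same mass and the same longitudinal variance.

    ```python
    import numpy as np

    # Overall mixing state: integrated squared concentration of the actual plume.
    # Gaussian mixing state: same quantity for a Gaussian plume with equal mass
    # and longitudinal variance. gamma = (overall - Gaussian) / Gaussian.
    nx, dx = 4000, 0.05
    x = np.arange(nx) * dx
    c = np.exp(-(x - 80.0) ** 2 / 4.0) + 0.6 * np.exp(-(x - 110.0) ** 2 / 9.0)

    mass = np.sum(c) * dx
    mu = np.sum(x * c) * dx / mass
    var = np.sum((x - mu) ** 2 * c) * dx / mass            # longitudinal spreading

    M_actual = np.sum(c ** 2) * dx                         # overall mixing state
    c_gauss = mass / np.sqrt(2 * np.pi * var) * np.exp(-(x - mu) ** 2 / (2 * var))
    M_gauss = np.sum(c_gauss ** 2) * dx                    # Gaussian mixing state

    gamma = (M_actual - M_gauss) / M_gauss
    print(f"gamma (non-Gaussian / Gaussian mixing state) = {gamma:.3f}")
    ```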

  12. Comment on `Electrical conductance of a sandstone partially saturated with varying concentrations of NaCl solutions' by R. Umezawa, N. Nishiyama, M. Katsura and S. Nakashima

    NASA Astrophysics Data System (ADS)

    Revil, André; Soueid Ahmed, Abdellahi

    2017-11-01

    Umezawa et al. investigated the dependence of the electrical conductivity of rocks with respect to the saturation of the water phase. Four issues can be underlined in their work: (1) The conductivity model they used mixes bulk and surface tortuosities in the same linear equation (i.e., between the conductivity and the conductivity of the pore water). This conflicts with the fact that the conductivity is a concave down increasing function of the pore water conductivity and bulk tortuosity is defined only at high salinity while surface tortuosity is defined only at very low salinity. (2) The specific surface conductance obtained by Umezawa et al. is too low and conflicts with independent evaluations obtained with double layer models for aluminosilicates and silicates. (3) The expression given for the resistivity index conflicts with the inclusion of a surface conductivity term in the conductivity equation.

  13. Workability of hot mix asphalt

    DOT National Transportation Integrated Search

    2003-04-01

    Workability in the field can be defined as a property that describes the ease with which hot mix asphalt (HMA) can be placed, worked by hand and compacted. Use of polymer-modified binders has increase in the U.S. due to the resultant performance bene...

  14. Parents' Reasons for Choosing Non-Public Non-Denominational Elementary Schools for Low Socioeconomic Students in Alabama: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Francis-Thomas, Kyle

    2016-01-01

    The purpose of this mixed-methods study was to determine parents' reasons for choosing Non-Public Non-Denominational Elementary Schools for low socioeconomic students in Alabama. Low socioeconomic students were defined as students who qualified for free/reduced lunches. The research was designed as a mixed methods study with data being collected…

  15. 40 CFR 63.9520 - What procedures must I use to demonstrate initial compliance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the emission limitations in § 63.9500(a) and (b). (1) Record the date and time of each mix batch. (2) Record the identity of each mix batch using a unique batch ID, as defined in § 63.9565. (3) Measure and record the weight of HAP solvent loaded into the solvent mixer for each mix batch. (4) Measure and record...

  16. Adaptation of non-linear mixed amount with zero amount response surface model for analysis of concentration-dependent synergism and safety with midazolam, alfentanil, and propofol sedation.

    PubMed

    Liou, J-Y; Ting, C-K; Teng, W-N; Mandell, M S; Tsou, M-Y

    2018-06-01

    The non-linear mixed amount with zero amounts response surface model can be used to describe drug interactions and predict loss of response to noxious stimuli and respiratory depression. We aimed to determine whether this response surface model could be used to model sedation with the triple drug combination of midazolam, alfentanil and propofol. Sedation was monitored in 56 patients undergoing gastrointestinal endoscopy (modelling group) using modified alertness/sedation scores. A total of 227 combinations of effect-site concentrations were derived from pharmacokinetic models. Accuracy and the area under the receiver operating characteristic curve were calculated. Accuracy was defined as an absolute difference <0.5 between the binary patient responses and the predicted probability of loss of responsiveness. Validation was performed with a separate group (validation group) of 47 patients. Effect-site concentration ranged from 0 to 108 ng ml-1 for midazolam, 0-156 ng ml-1 for alfentanil, and 0-2.6 μg ml-1 for propofol in both groups. Synergy was strongest with midazolam and alfentanil (24.3% decrease in U50, the concentration for half maximal drug effect). Adding propofol, a third drug, offered little additional synergy (25.8% decrease in U50). Two patients (3%) experienced respiratory depression. Model accuracy was 83% and 76%, and the area under the curve was 0.87 and 0.80, for the modelling and validation group, respectively. The non-linear mixed amount with zero amounts triple interaction response surface model predicts patient sedation responses during endoscopy with combinations of midazolam, alfentanil, or propofol that fall within clinical use. Our model also suggests a safety margin of alfentanil fraction <0.12 that avoids respiratory depression after loss of responsiveness. Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.

  17. Phenomenology of manic episodes according to the presence or absence of depressive features as defined in DSM-5: Results from the IMPACT self-reported online survey.

    PubMed

    Vieta, Eduard; Grunze, Heinz; Azorin, Jean-Michel; Fagiolini, Andrea

    2014-03-01

    The aim of this study was to describe the phenomenology of mania and depression in bipolar patients experiencing a manic episode with mixed features as defined in the new Diagnostic and Statistical Manual of Mental Disorders (DSM-5). In this multicenter, international on-line survey (the IMPACT study), 700 participants completed a 54-item questionnaire on demographics, diagnosis, symptomatology, communication of the disease, impact on life, and treatment received. Patients with a manic episode with or without DSM-5 criteria for mixed features were compared using descriptive and inferential statistics. Patients with more than 3 depressive symptoms were more likely to have had a delay in diagnosis, more likely to have experienced shorter symptom-free periods, and were characterized by a marked lower prevalence of typical manic manifestations. All questionnaire items exploring depressive symptomatology, including the DSM-5 criteria defining a manic episode as "with mixed features", were significantly overrepresented in the group of patients with depressive symptoms. Anxiety associated with irritability/agitation was also more frequent among patients with mixed features. Retrospective cross-sectional design, sensitive to recall bias. Two of the 6 DSM-5 required criteria for the specifier "with mixed features" were not explored: suicidality and psychomotor retardation. Bipolar disorder patients with at least 3 depressive symptoms during a manic episode self-reported typical symptomatology. Anxiety with irritability/agitation differentiated patients with depressive symptoms during mania from those with "pure" manic episodes. The results support the use of DSM-5 mixed features specifier and its value in research and clinical practice. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    PubMed

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
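
    For reference, a minimal sketch of the non-compartmental summary measures being compared (trapezoidal AUC, Cmax, and time above a threshold), computed on an invented concentration-time profile.

    ```python
    import numpy as np

    # Non-compartmental summary measures: AUC by the linear trapezoidal rule,
    # Cmax/Tmax, and time above a threshold (TAT) with interpolation at crossings.
    t = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])   # time [h]
    c = np.array([0.0, 1.8, 3.2, 4.1, 3.6, 2.4, 1.1, 0.5, 0.1])      # conc. [mg/L]
    threshold = 1.0                                                   # mg/L

    auc = np.trapz(c, t)                         # AUC [mg*h/L]
    cmax = c.max()
    tmax = t[c.argmax()]

    above = c > threshold
    tat = 0.0
    for i in range(len(t) - 1):
        if above[i] and above[i + 1]:
            tat += t[i + 1] - t[i]
        elif above[i] != above[i + 1]:                       # one crossing here
            frac = (threshold - c[i]) / (c[i + 1] - c[i])
            t_cross = t[i] + frac * (t[i + 1] - t[i])
            tat += (t[i + 1] - t_cross) if above[i + 1] else (t_cross - t[i])

    print(f"AUC = {auc:.2f} mg*h/L, Cmax = {cmax:.1f} mg/L at t = {tmax} h, TAT = {tat:.2f} h")
    ```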

  19. Surface boundary layer turbulence in the Southern ocean

    NASA Astrophysics Data System (ADS)

    Merrifield, Sophia; St. Laurent, Louis; Owens, Breck; Naveira Garabato, Alberto

    2015-04-01

    Due to the remote location and harsh conditions, few direct measurements of turbulence have been collected in the Southern Ocean. This region experiences some of the strongest wind forcing of the global ocean, leading to large inertial energy input. While mixed layers are known to have a strong seasonality and reach 500 m depth, the depth structure of near-surface turbulent dissipation and diffusivity has not been examined using direct measurements. We present data collected during the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) field program. In a range of wind conditions, the wave-affected surface layer (WASL), where surface wave physics are actively forcing turbulence, is contained to the upper 15-20 m. The lag-correlation between wind stress and turbulence shows a strong relationship up to 6 hours (˜1/2 inertial period), with the winds leading the oceanic turbulent response, in the depth range between 20 and 50 m. We find that the following characterize the data: i) Profiles that have a well-defined hydrographic mixed layer show that dissipation decays in the mixed layer inversely with depth, ii) WASLs are typically 15 meters deep and 30% of mixed layer depth, iii) Subject to strong winds, the value of dissipation as a function of depth is significantly lower than predicted by theory. Many dynamical processes are known to be missing from upper-ocean parameterizations of mixing in global models. These include surface-wave-driven processes such as Langmuir turbulence, submesoscale frontal processes, and nonlocal representations of mixing. Using velocity, hydrographic, and turbulence measurements, the existence of coherent structures in the boundary layer is investigated.
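
    The lag-correlation analysis described above can be illustrated with a minimal sketch on synthetic series; the wind-stress and dissipation values, the imposed ~3 h lag, and the 1 h sampling interval are assumptions, not the DIMES data.

```python
# Illustrative sketch only (synthetic data, not DIMES observations): lag correlation
# between a wind-stress time series and a turbulence (dissipation) time series,
# with positive lags meaning the wind leads the oceanic response.
import numpy as np

rng = np.random.default_rng(0)
dt_hours = 1.0
n = 500
tau = np.abs(rng.normal(0.1, 0.05, n))               # wind stress proxy (N m^-2), hypothetical
eps = np.roll(tau, 3) + 0.02 * rng.normal(size=n)    # dissipation proxy lagging the wind by ~3 h

def lag_corr(x, y, max_lag):
    """Pearson correlation of y against x shifted by 0..max_lag samples (x leads y)."""
    out = []
    for lag in range(max_lag + 1):
        xs, ys = x[: n - lag], y[lag:]
        out.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(out)

r = lag_corr(tau, eps, max_lag=12)
best = int(np.argmax(r))
print(f"max correlation r = {r[best]:.2f} at lag = {best * dt_hours:.0f} h")
```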

  20. 21 CFR 184.1835 - Sorbitol.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... nonstandardized jams and jellies, commercial, as defined in § 170.3(n)(28) of this chapter, 30 percent in baked goods and baking mixes as defined in § 170.3(n)(1) of this chapter, 17 percent in frozen dairy desserts...

  1. 21 CFR 184.1835 - Sorbitol.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... nonstandardized jams and jellies, commercial, as defined in § 170.3(n)(28) of this chapter, 30 percent in baked goods and baking mixes as defined in § 170.3(n)(1) of this chapter, 17 percent in frozen dairy desserts...

  2. 21 CFR 184.1835 - Sorbitol.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... nonstandardized jams and jellies, commercial, as defined in § 170.3(n)(28) of this chapter, 30 percent in baked goods and baking mixes as defined in § 170.3(n)(1) of this chapter, 17 percent in frozen dairy desserts...

  3. 21 CFR 184.1835 - Sorbitol.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... nonstandardized jams and jellies, commercial, as defined in § 170.3(n)(28) of this chapter, 30 percent in baked goods and baking mixes as defined in § 170.3(n)(1) of this chapter, 17 percent in frozen dairy desserts...

  4. Experimental Models of C. albicans-Streptococcal Co-infection.

    PubMed

    Sobue, Takanori; Diaz, Patricia; Xu, Hongbin; Bertolini, Martinna; Dongari-Bagtzoglou, Anna

    2016-01-01

    Interactions of C. albicans with co-colonizing bacteria at mucosal sites can be synergistic or antagonistic in disease development, depending on the bacterial species and mucosal site. Mitis group streptococci and C. albicans colonize the oral mucosa of the majority of healthy individuals. These streptococci have been termed "accessory pathogens," defined by their ability to initiate multispecies biofilm assembly and promote the virulence of the mixed bacterial biofilm community in which they participate. To demonstrate whether interactions with Mitis group streptococci limit or promote the potential of C. albicans to become an opportunistic pathogen, in vitro and in vivo co-infection models are needed. Here, we describe two C. albicans-streptococcal co-infection models: an organotypic oral mucosal tissue model that incorporates salivary flow and a mouse model of oral co-infection that requires reduced levels of immunosuppression compared to single fungal infection.

  5. Characterization of Viscoelastic Materials Through an Active Mixer by Direct-Ink Writing

    NASA Astrophysics Data System (ADS)

    Drake, Eric

    The goal of this thesis is two-fold: First, to determine mixing effectiveness of an active mixer attachment to a three-dimensional (3D) printer by characterizing actively-mixed, three-dimensionally printed silicone elastomers. Second, to understand mechanical properties of a printed lattice structure with varying geometry and composition. Ober et al. define mixing effectiveness as a measurable quantity characterized by two key variables: (i) a dimensionless impeller parameter (O) that depends on mixer geometry as well as Peclet number (Pe) and (ii) a coefficient of variation (COV) that describes the mixer effectiveness based upon image intensity. The first objective utilizes tungsten tracer particles distributed throughout a batch of Dow Corning SE1700 (two parts silicone) - ink "A". Ink "B" is made from pure SE1700. Using the in-site active mixer, both inks "A" and "B" coalesce to form a hybrid ink just before extrusion. Two samples of varying mixer speeds and composition ratios are printed and analyzed by microcomputed tomography (MicroCT). A continuous stirred tank reactor (CSTR) model is applied to better understand mixing behavior. Results are then compared with computer models to verify the hypothesis. Data suggest good mixing for the sample with higher impeller speed. A Radial Distribution Function (RDF) macro is used to provide further qualitative analysis of mixing efficiency. The second objective of this thesis utilized three-dimensionally printed samples of varying geometry and composition to ascertain mechanical properties. Samples were printed using SE1700 provided by Lawrence Livermore National Laboratory with a face-centered tetragonal (FCT) structure. Hardness testing is conducted using a Shore OO durometer guided by a computer-controlled, three-axis translation stage to provide precise movements. Data is collected across an 'x-y' plane of the specimen. To explain the data, a simply supported beam model is applied to a single unit cell, which yields basic structural behavior per cell. Characterizing the sample as a whole requires a more rigorous approach, and non-trivial complexities due to varying geometries and compositions exist. The data demonstrates a uniform change in hardness as a function of position. Additionally, the data indicates periodicities in the lattice structure.
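
    The COV-based measure of mixing effectiveness can be sketched as follows; the intensity field, the block size, and the block-averaging choice are illustrative assumptions rather than the thesis' actual image-analysis pipeline.

```python
# Hedged sketch (not the thesis code): coefficient of variation (COV) of image
# intensity as a simple mixedness metric -- lower COV across sub-regions suggests
# a more uniform tracer distribution. The array here stands in for a MicroCT slice.
import numpy as np

rng = np.random.default_rng(1)
slice_img = rng.normal(100.0, 15.0, size=(256, 256))   # hypothetical intensity field

def cov_of_blocks(img, block=32):
    """COV of block-averaged intensities: std / mean over non-overlapping blocks."""
    h, w = img.shape
    means = [img[i:i + block, j:j + block].mean()
             for i in range(0, h - block + 1, block)
             for j in range(0, w - block + 1, block)]
    means = np.array(means)
    return means.std() / means.mean()

print(f"COV = {cov_of_blocks(slice_img):.3f}  (smaller -> better mixed)")
```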

  6. Four-way coupling of a three-dimensional debris flow solver to a Lagrangian Particle Simulation: method and first results

    NASA Astrophysics Data System (ADS)

    von Boetticher, Albrecht; Rickenmann, Dieter; McArdell, Brian; Kirchner, James W.

    2017-04-01

    Debris flows are dense flowing mixtures of water, clay, silt, sand and coarser particles. They are a common natural hazard in mountain regions and frequently cause severe damage. Modeling debris flows to design protection measures is still challenging due to the complex interactions within the inhomogeneous material mixture, and the sensitivity of the flow process to the channel geometry. The open-source, OpenFOAM-based finite-volume debris flow model debrisInterMixing (von Boetticher et al, 2016) defines rheology parameters based on the material properties of the debris flow mixture to reduce the number of free model parameters. As a simplification in this first model version, gravel was treated as a Coulomb-viscoplastic fluid, neglecting grain-to-grain collisions and the coupling between the coarser gravel grains and the interstitial fluid. Here we present an extension of that solver, accounting for the particle-to-particle and particle-to-boundary contacts with a Lagrangian Particle Simulation composed of spherical grains and a user-defined grain size distribution. The grain collisions of the Lagrangian particles add granular flow behavior to the finite-volume simulation of the continuous phases. The two-way coupling exchanges momentum between the phase-averaged flow in a finite volume cell, and among all individual particles contained in that cell, allowing the user to choose from a number of different drag models. The momentum exchange is implemented in the momentum equation and in the pressure equation (ensuring continuity) of the so-called PISO-loop, resulting in a stable 4-way coupling (particle-to-particle, particle-to-boundary, particle-to-fluid and fluid-to-particle) that represents the granular and viscous flow behavior of debris flow material. We will present simulations that illustrate the relative benefits and drawbacks of explicitly representing grain collisions, compared to the original debrisInterMixing solver.
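
    As a rough illustration of the particle-fluid part of such a coupling, the sketch below applies Stokes drag to the particles in a single cell and returns the opposite momentum to the cell-averaged fluid velocity; the material properties, the drag law, and the explicit Euler update are placeholders and not the debrisInterMixing/OpenFOAM implementation.

```python
# Conceptual sketch only (not the solver's drag models): a minimal two-way momentum
# exchange between one finite-volume cell and the Lagrangian particles it contains,
# using Stokes drag and an explicit Euler step.
import numpy as np

rho_f, mu, v_cell = 1200.0, 0.5, 1e-3        # fluid density, viscosity, cell volume (assumed)
rho_p, d_p = 2650.0, 0.01                    # particle density and diameter (assumed)
m_p = rho_p * np.pi / 6.0 * d_p**3           # particle mass

u_f = np.array([2.0, 0.0, 0.0])              # cell-averaged fluid velocity
u_p = np.array([[1.0, 0.0, 0.0],             # velocities of the particles in this cell
                [1.5, 0.2, 0.0]])
dt = 1e-3

tau_p = rho_p * d_p**2 / (18.0 * mu)         # Stokes response time of a particle
du_p = (u_f - u_p) / tau_p * dt              # particle velocity change due to drag
u_p += du_p
# equal and opposite momentum goes back to the fluid cell (particle-to-fluid coupling)
u_f -= (m_p * du_p).sum(axis=0) / (rho_f * v_cell)

print("updated particle velocities:\n", u_p)
print("updated fluid velocity:", u_f)
```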

  7. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE PAGES

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...

    2017-02-06

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.
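
    For reference, the baseline closure that the new model is compared against can be written in a few lines; the turbulence values and the constant C_phi are illustrative assumptions, and conventions for factors of two vary between formulations.

```python
# Sketch of the baseline closure discussed above (not the new model): a constant
# mechanical-to-scalar mixing timescale ratio C_phi relates the scalar mixing rate
# to the turbulence timescale k/eps, i.e. omega_phi = C_phi * eps / k.
import numpy as np

c_phi = 2.0                                  # commonly assumed constant ratio (illustrative)
k = np.array([1.0, 0.5, 0.1])                # turbulent kinetic energy, m^2 s^-2 (hypothetical)
eps = np.array([10.0, 4.0, 0.5])             # dissipation rate, m^2 s^-3 (hypothetical)

tau_turb = k / eps                           # mechanical (turbulence) timescale
omega_phi = c_phi / tau_turb                 # scalar mixing rate handed to the mixing model
print("scalar mixing rates omega_phi [1/s]:", omega_phi)
```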

  8. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.

  9. Energy-exchange collisions of dark-bright-bright vector solitons.

    PubMed

    Radhakrishnan, R; Manikandan, N; Aravinthan, K

    2015-12-01

    We find a dark component guiding the practically interesting bright-bright vector one-soliton to two different parametric domains giving rise to different physical situations by constructing a more general form of three-component dark-bright-bright mixed vector one-soliton solution of the generalized Manakov model with nine free real parameters. Moreover our main investigation of the collision dynamics of such mixed vector solitons by constructing the multisoliton solution of the generalized Manakov model with the help of Hirota technique reveals that the dark-bright-bright vector two-soliton supports energy-exchange collision dynamics. In particular the dark component preserves its initial form and the energy-exchange collision property of the bright-bright vector two-soliton solution of the Manakov model during collision. In addition the interactions between bound state dark-bright-bright vector solitons reveal oscillations in their amplitudes. A similar kind of breathing effect was also experimentally observed in the Bose-Einstein condensates. Some possible ways are theoretically suggested not only to control this breathing effect but also to manage the beating, bouncing, jumping, and attraction effects in the collision dynamics of dark-bright-bright vector solitons. The role of multiple free parameters in our solution is examined to define polarization vector, envelope speed, envelope width, envelope amplitude, grayness, and complex modulation of our solution. It is interesting to note that the polarization vector of our mixed vector one-soliton evolves in sphere or hyperboloid depending upon the initial parametric choices.

  10. Illustration of microphysical processes in Amazonian deep convective clouds in the gamma phase space: introduction and potential applications

    NASA Astrophysics Data System (ADS)

    Cecchini, Micael A.; Machado, Luiz A. T.; Wendisch, Manfred; Costa, Anja; Krämer, Martina; Andreae, Meinrat O.; Afchine, Armin; Albrecht, Rachel I.; Artaxo, Paulo; Borrmann, Stephan; Fütterer, Daniel; Klimach, Thomas; Mahnke, Christoph; Martin, Scot T.; Minikin, Andreas; Molleker, Sergej; Pardo, Lianet H.; Pöhlker, Christopher; Pöhlker, Mira L.; Pöschl, Ulrich; Rosenfeld, Daniel; Weinzierl, Bernadett

    2017-12-01

    The behavior of tropical clouds remains a major open scientific question, resulting in poor representation by models. One challenge is to realistically reproduce cloud droplet size distributions (DSDs) and their evolution over time and space. Many applications, not limited to models, use the gamma function to represent DSDs. However, even though the statistical characteristics of the gamma parameters have been widely studied, there is almost no study dedicated to understanding the phase space of this function and the associated physics. This phase space can be defined by the three parameters that define the DSD intercept, shape, and curvature. Gamma phase space may provide a common framework for parameterizations and intercomparisons. Here, we introduce the phase space approach and its characteristics, focusing on warm-phase microphysical cloud properties and the transition to the mixed-phase layer. We show that trajectories in this phase space can represent DSD evolution and can be related to growth processes. Condensational and collisional growth may be interpreted as pseudo-forces that induce displacements in opposite directions within the phase space. The actually observed movements in the phase space are a result of the combination of such pseudo-forces. Additionally, aerosol effects can be evaluated given their significant impact on DSDs. The DSDs associated with liquid droplets that favor cloud glaciation can be delimited in the phase space, which can help models to adequately predict the transition to the mixed phase. We also consider possible ways to constrain the DSD in two-moment bulk microphysics schemes, in which the relative dispersion parameter of the DSD can play a significant role. Overall, the gamma phase space approach can be an invaluable tool for studying cloud microphysical evolution and can be readily applied in many scenarios that rely on gamma DSDs.
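
    The link between the three gamma parameters and bulk microphysical quantities can be made explicit with a short moment calculation, assuming the standard form N(D) = N0 D^mu exp(-lambda D); the parameter values below are hypothetical and are not taken from the observations discussed here.

```python
# Sketch, assuming the standard gamma DSD form N(D) = N0 * D**mu * exp(-lam*D)
# (parameter values are hypothetical): moments of the DSD give bulk quantities such
# as total droplet number, mean diameter, and liquid water content, which is how
# the (N0, mu, lam) phase space maps onto microphysical properties.
import numpy as np
from scipy.special import gamma as gamma_fn

N0, mu, lam = 4.0e23, 2.0, 2.0e5     # intercept (SI), shape, slope (m^-1); hypothetical
rho_w = 1000.0                        # liquid water density, kg m^-3

def gamma_dsd_moment(p):
    """p-th moment of N(D) = N0 D^mu exp(-lam D), integrated over D in (0, inf)."""
    return N0 * gamma_fn(mu + p + 1.0) / lam ** (mu + p + 1.0)

n_total = gamma_dsd_moment(0)                          # total number concentration, m^-3
d_mean = gamma_dsd_moment(1) / gamma_dsd_moment(0)     # mean diameter, m
lwc = np.pi / 6.0 * rho_w * gamma_dsd_moment(3)        # liquid water content, kg m^-3
print(f"N = {n_total:.2e} m^-3, mean D = {d_mean*1e6:.1f} um, LWC = {lwc*1e3:.2f} g m^-3")
```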

  11. Material Barriers to Diffusive Mixing

    NASA Astrophysics Data System (ADS)

    Haller, George; Karrasch, Daniel

    2017-11-01

    Transport barriers, as zero-flux surfaces, are ill-defined in purely advective mixing in which the flux of any passive scalar is zero through all material surfaces. For this reason, Lagrangian Coherent Structures (LCSs) have been argued to play the role of mixing barriers as most repelling, attracting or shearing material lines. These three kinematic concepts, however, can also be defined in different ways, both within rigorous mathematical treatments and within the realm of heuristic diagnostics. This has led to an ever-growing number of different LCS methods, each generally identifying different objects as transport barriers. In this talk, we examine which of these methods have actual relevance for diffusive transport barriers. The latter barriers are arguably the practically relevant inhibitors in the mixing of physically relevant tracers, such as temperature, salinity, vorticity or potential vorticity. We demonstrate the role of the most effective diffusion barriers in analytical examples and observational data. Supported in part by the DFG Priority Program on Turbulent Superstructures.

  12. Coupled Hf-Nd-Pb isotope co-variations of HIMU oceanic island basalts from Mangaia, Cook-Austral islands, suggest an Archean source component in the mantle transition zone

    NASA Astrophysics Data System (ADS)

    Nebel, Oliver; Arculus, Richard J.; van Westrenen, Wim; Woodhead, Jon D.; Jenner, Frances E.; Nebel-Jacobsen, Yona J.; Wille, Martin; Eggins, Stephen M.

    2013-07-01

    Although it is widely accepted that oceanic island basalts (OIB) sample geochemically distinct mantle reservoirs including recycled oceanic crust, the composition, age, and locus of these reservoirs remain uncertain. OIB with highly radiogenic Pb isotope signatures are grouped as HIMU (high-μ, with μ = 238U/204Pb), and exhibit unique Hf-Nd isotopic characteristics, defined as ΔɛHf, deviant from a terrestrial igneous rock array that includes all other OIB types. Here we combine new Hf isotope data with previous Nd-Pb isotope measurements to assess the coupled, time-integrated Hf-Nd-Pb isotope evolution of the most extreme HIMU location (Mangaia, French Polynesia). In comparison with global MORB and other OIB types, Mangaia samples define a unique trend in coupled Hf-Nd-Pb isotope co-variations (expressed in 207Pb/206Pb vs. ΔɛHf). In a model employing subducted, dehydrated oceanic crust, mixing between present-day depleted MORB mantle (DMM) and small proportions (˜5%) of a HIMU mantle endmember can reproduce the Hf-Nd-Pb isotope systematics of global HIMU basalts (sensu stricto; i.e., without EM-1/EM-2/FOZO components). An age range of 3.5 to <2 Ga is required for HIMU endmember(s) that mix with DMM to account for the observed present-day HIMU isotope compositions, suggesting a range of age distributions rather than a single component in the mantle. Our data suggest that mixing of HIMU mantle endmembers and DMM occurs in the mantle transition zone by entrainment in secondary plumes that rise at the edge of the Pacific Large Low Seismic Velocity Zone (LLSVP). These create either pure HIMU (sensu stricto) or HIMU affected by other enriched mantle endmembers (sensu lato). If correct, this requires isolation of parts of the mantle transition zone for >3 Gyr and implies that OIB chemistry can be used to test geodynamic models.
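
    The ~5% endmember mixing invoked above follows standard two-component mixing systematics; the sketch below applies the generic concentration-weighted mixing relation with placeholder Pb concentrations and isotope ratios, not values from this study.

```python
# Generic binary-mixing sketch (placeholder concentrations and isotope ratios, not
# the values used in the study): for a mixture with mass fraction f of endmember 1,
# the isotope ratio is weighted by each endmember's concentration of the element.
def binary_mix_ratio(f, c1, r1, c2, r2):
    """Isotope ratio of a two-endmember mixture, f = mass fraction of endmember 1."""
    return (f * c1 * r1 + (1.0 - f) * c2 * r2) / (f * c1 + (1.0 - f) * c2)

# hypothetical Pb concentrations (ppm) and 206Pb/204Pb ratios for a HIMU-like
# endmember (1) and depleted MORB mantle (2)
c_himu, r_himu = 0.5, 21.5
c_dmm,  r_dmm  = 0.02, 18.3

for f in (0.0, 0.05, 0.10, 0.20):
    r_mix = binary_mix_ratio(f, c_himu, r_himu, c_dmm, r_dmm)
    print(f"f_HIMU = {f:.2f}: 206Pb/204Pb = {r_mix:.2f}")
```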

  13. Methodological quality and reporting of generalized linear mixed models in clinical medicine (2000-2012): a systematic review.

    PubMed

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L

    2014-01-01

    Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models", and "multilevel generalized linear model", and the search was refined by the research domain Science Technology. Papers reporting methodological considerations without application, and those that were not related to clinical medicine or not written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Much of the useful information about GLMMs was not reported. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in the medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, estimation method, validation, and selection of the model.

  14. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [about 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.
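
    The conditional-PDF construction can be sketched in a few lines on synthetic numbers; the PV threshold, the ozone values, and the bin choice below are illustrative assumptions, not the ozonesonde or HALOE data.

```python
# Schematic sketch with synthetic numbers (not the observations used in the study):
# split ozone samples on a given potential temperature surface into two groups by
# the 300-hPa potential vorticity, then form the conditional probability
# distribution functions as normalized histograms.
import numpy as np

rng = np.random.default_rng(2)
pv_300 = rng.uniform(0.5, 8.0, 2000)                 # PV at 300 hPa, PVU (hypothetical)
ozone = np.where(pv_300 > 3.0,                       # "low tropopause" air: more stratospheric
                 rng.normal(300.0, 80.0, 2000),      # ozone, ppbv (hypothetical)
                 rng.normal(120.0, 40.0, 2000))

bins = np.linspace(0.0, 600.0, 31)
pdf_high_pv, _ = np.histogram(ozone[pv_300 > 3.0], bins=bins, density=True)
pdf_low_pv, _ = np.histogram(ozone[pv_300 <= 3.0], bins=bins, density=True)
print("conditional PDF (high PV), first bins:", np.round(pdf_high_pv[:5], 5))
print("conditional PDF (low PV),  first bins:", np.round(pdf_low_pv[:5], 5))
```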

  15. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [approximately 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.

  16. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
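
    A toy version of the mixed linear-non-linear idea is sketched below: for every Metropolis proposal of the non-linear parameter, the linearly related parameters are solved analytically by least squares, so only the non-linear parameter is sampled. The forward model, the flat priors, and the step size are placeholders, not the geodetic inversion described in the paper.

```python
# Toy sketch of a mixed linear-non-linear Bayesian inversion (not the authors'
# algorithm): data d = G(theta) @ m + noise, where the design matrix G depends
# non-linearly on theta and the parameters m enter linearly.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 50)
theta_true, m_true, sigma = 0.7, np.array([2.0, 1.0]), 0.05

def design(theta):
    # columns: a non-linear basis exp(-theta*x) and a constant offset (linear part)
    return np.column_stack([np.exp(-theta * x), np.ones_like(x)])

d = design(theta_true) @ m_true + sigma * rng.normal(size=x.size)

def neg_log_post(theta):
    """Negative log-posterior after analytically solving for the linear parameters."""
    if theta <= 0.0:
        return np.inf, None
    G = design(theta)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return 0.5 * np.sum((d - G @ m) ** 2) / sigma**2, m

theta = 0.3
nlp, m_hat = neg_log_post(theta)
samples = []
for _ in range(5000):                      # random-walk Metropolis on the non-linear parameter
    prop = theta + 0.05 * rng.normal()
    nlp_p, m_p = neg_log_post(prop)
    if np.log(rng.uniform()) < nlp - nlp_p:
        theta, nlp, m_hat = prop, nlp_p, m_p
    samples.append(theta)

burned = np.array(samples[1000:])
print(f"theta posterior mean ~ {burned.mean():.2f} (true {theta_true}), linear params ~ {m_hat}")
```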

  17. 21 CFR 184.1271 - L-Cysteine.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... dough as a dough strengthener as defined in § 170.3(o)(6) of this chapter in yeast-leavened baked goods and baking mixes as defined in § 170.3(n)(1) of this chapter. (d) This regulation is issued prior to a...

  18. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex

    PubMed Central

    Procyk, Emmanuel; Dominey, Peter Ford

    2016-01-01

    Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to representing diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity, which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input-driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity recorded in the dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics. We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function. PMID:27286251
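
    A minimal echo-state-network sketch in the spirit of the reservoir framework described above is given below; the reservoir size, spectral radius, memory task, and ridge-regression readout are generic illustrative choices, not the model trained on the monkey task.

```python
# Minimal reservoir (echo state network) sketch: a fixed random recurrent network is
# driven by the input, and only a linear readout is trained on the reservoir states.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_res, t_len = 1, 200, 2000

u = rng.uniform(-1.0, 1.0, (t_len, n_in))            # random input stream
target = np.roll(u[:, 0], 5)                          # toy task: recall the input 5 steps back

w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
w = rng.normal(0.0, 1.0, (n_res, n_res))
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))       # scale spectral radius below 1

x = np.zeros(n_res)
states = np.zeros((t_len, n_res))
for t in range(t_len):                                # tanh reservoir update
    x = np.tanh(w_in @ u[t] + w @ x)
    states[t] = x

washout, lam = 100, 1e-6
s, y = states[washout:], target[washout:]
w_out = np.linalg.solve(s.T @ s + lam * np.eye(n_res), s.T @ y)   # ridge readout
pred = s @ w_out
print("readout correlation with target:", np.corrcoef(pred, y)[0, 1].round(3))
```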

  19. Mixed Methods Research: The "Thing-ness" Problem.

    PubMed

    Hesse-Biber, Sharlene

    2015-06-01

    Contemporary mixed methods research (MMR) veers away from a "loosely bounded" to a "bounded" concept that has important negative implications for how qualitatively driven mixed methods approaches are positioned in the field of mixed methods and overall innovation in the praxis of MMR. I deploy the concept of reification defined as taking an object/abstraction and treating it as if it were real such that it takes on the quality of "thing-ness," having a concrete independent existence. I argue that the contemporary reification of mixed methods as a "thing" is fueled by three interrelated factors: (a) the growing formalization of mixed methods as design, (b) the unexamined belief in the "synergy" of mixed methods and, (c) the deployment of a "practical pragmatism" as the "philosophical partner" for mixed methods inquiry. © The Author(s) 2015.

  20. Assessing Affective Constructs in Reading: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Conradi, Kristin

    2011-01-01

    Research investigating affective dimensions in reading has long been plagued by vaguely defined constructs and, consequently, by an array of potentially problematic instruments designed to measure them. This mixed-methods study investigated the relationship among three popular group-administered instruments intended to tap affective constructs in…

  1. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality of life data and recurrence-free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined to a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit-transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.
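
    The longitudinal component of the first approach can be sketched with a logit transform and a standard linear mixed model; the simulated data, the variable names, and the omission of the survival sub-model (and of the joint likelihood) are simplifications for illustration only.

```python
# Sketch of the logit-transform linear mixed model stage only (survival component
# and joint estimation omitted). Data are simulated; names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_pat, n_vis = 100, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), n_vis),
    "time": np.tile(np.arange(n_vis), n_pat),
})
b_i = np.repeat(rng.normal(0.0, 0.5, n_pat), n_vis)               # random intercepts
eta = 0.8 + b_i - 0.15 * df["time"] + rng.normal(0.0, 0.3, len(df))
df["qol"] = 1.0 / (1.0 + np.exp(-eta))                            # proportion in (0, 1)

eps = 1e-4                                                        # guard against exact 0 or 1
p = df["qol"].clip(eps, 1.0 - eps)
df["logit_qol"] = np.log(p / (1.0 - p))

fit = smf.mixedlm("logit_qol ~ time", data=df, groups=df["patient"]).fit()
print(fit.summary())
```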

  2. User's guide to PHREEQC (Version 2) : a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations

    USGS Publications Warehouse

    Parkhurst, David L.; Appelo, C.A.J.

    1999-01-01

    PHREEQC version 2 is a computer program written in the C programming language that is designed to perform a wide variety of low-temperature aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations involving reversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and irreversible reactions, which include specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters, within specified compositional uncertainty limits.New features in PHREEQC version 2 relative to version 1 include capabilities to simulate dispersion (or diffusion) and stagnant zones in 1D-transport calculations, to model kinetic reactions with user-defined rate expressions, to model the formation or dissolution of ideal, multicomponent or nonideal, binary solid solutions, to model fixed-volume gas phases in addition to fixed-pressure gas phases, to allow the number of surface or exchange sites to vary with the dissolution or precipitation of minerals or kinetic reactants, to include isotope mole balances in inverse modeling calculations, to automatically use multiple sets of convergence parameters, to print user-defined quantities to the primary output file and (or) to a file suitable for importation into a spreadsheet, and to define solution compositions in a format more compatible with spreadsheet programs. This report presents the equations that are the basis for chemical equilibrium, kinetic, transport, and inverse-modeling calculations in PHREEQC; describes the input for the program; and presents examples that demonstrate most of the program's capabilities.

  3. The application of mixed methods designs to trauma research.

    PubMed

    Creswell, John W; Zhang, Wanqing

    2009-12-01

    Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.

  4. Medicare home health: a description of total episodes of care.

    PubMed

    Branch, L G; Goldberg, H B; Cheh, V A; Williams, J

    1993-01-01

    The purpose of this study was to present descriptive information on the characteristics of 2,873 Medicare home health clients, to quantify systematically their patterns of service utilization and allowed charges during a total episode of care, and to clarify the bivariate associations between client characteristics and utilization. The modal client was female, 75-84 years of age, living with a spouse, and frail based on a variety of indicators. The mean total episode was approximately 23 visits, with allowed charges of $1,238 (1986 dollars). Specific subgroups of clients, defined by their morbidities and frailties, used identifiable clusters of services. Implications for case-mix models and implications for capitation payments under health care reform are discussed.

  5. Mixed finite element - discontinuous finite volume element discretization of a general class of multicontinuum models

    NASA Astrophysics Data System (ADS)

    Ruiz-Baier, Ricardo; Lunati, Ivan

    2016-10-01

    We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; on the other hand, the approach can be seen as a generalization of the simplified models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate. An extensive set of numerical test cases involving two- and three-dimensional porous media is presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The applicability of the method is not limited to flow in porous media; it can also be employed to describe many other physical systems governed by a similar set of equations, including e.g. multi-component materials.

  6. Long-path measurements of pollutants and micrometeorology over Highway 401 in Toronto

    NASA Astrophysics Data System (ADS)

    You, Yuan; Staebler, Ralf M.; Moussa, Samar G.; Su, Yushan; Munoz, Tony; Stroud, Craig; Zhang, Junhua; Moran, Michael D.

    2017-11-01

    Traffic emissions contribute significantly to urban air pollution. Measurements were conducted over Highway 401 in Toronto, Canada, with a long-path Fourier transform infrared (FTIR) spectrometer combined with a suite of micrometeorological instruments to identify and quantify a range of air pollutants. Results were compared with simultaneous in situ observations at a roadside monitoring station, and with output from a special version of the operational Canadian air quality forecast model (GEM-MACH). Elevated mixing ratios of ammonia (0-23 ppb) were observed, of which 76 % were associated with traffic emissions. Hydrogen cyanide was identified at mixing ratios between 0 and 4 ppb. Using a simple dispersion model, an integrated emission factor of, on average, 2.6 g km-1 of carbon monoxide was calculated for this defined section of Highway 401, which agreed well with estimates based on vehicular emission factors and observed traffic volumes. Based on the same dispersion calculations, average vehicular emission factors of 0.04, 0.36, and 0.15 g km-1 were calculated for ammonia, nitrogen oxide, and methanol, respectively.
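
    A back-of-the-envelope version of the emission-factor arithmetic is sketched below using a simple box estimate rather than the dispersion model employed in the study; the concentration enhancement, wind component, mixing height, and traffic flow are all hypothetical.

```python
# Box/mass-balance sketch (not the study's dispersion model; all numbers hypothetical):
# convert a cross-road concentration enhancement into a line-source strength and then
# into a per-vehicle emission factor in g km^-1.
delta_c = 150e-6          # CO enhancement over background, g m^-3 (hypothetical)
u_perp = 2.0              # wind component perpendicular to the road, m s^-1
h_mix = 30.0              # effective vertical mixing height of the plume, m
traffic = 7000.0 / 3600.0 # traffic flow, vehicles s^-1 (7000 veh/h, hypothetical)

q_line = delta_c * u_perp * h_mix          # line-source strength, g s^-1 per m of road
ef = q_line / traffic * 1000.0             # g km^-1 per vehicle
print(f"line source: {q_line:.3f} g s^-1 m^-1, emission factor: {ef:.1f} g km^-1 per vehicle")
```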

  7. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    PubMed

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the ICC based on LMM, even when the negative binomial data were logarithm- and square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
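
    The "traditional" transformed-data approach referred to above can be sketched as follows (this is not the negative binomial ICC itself); the simulated pen structure and the one-way variance-components estimator are illustrative assumptions.

```python
# Sketch of the traditional approach (log-transform, then a variance-components ICC),
# not the negative binomial ICC. Overdispersed counts are simulated with a pen-level
# random effect; the ICC is the share of variance between pens on the log scale.
import numpy as np

rng = np.random.default_rng(6)
n_pens, n_reps = 30, 4
pen_effect = rng.normal(0.0, 0.6, n_pens)                       # log-scale random effect
lam = np.exp(1.5 + pen_effect)[:, None] * np.ones((n_pens, n_reps))
counts = rng.poisson(lam)                                        # overdispersed across pens

y = np.log(counts + 1.0)                                         # common transformation
grand, pen_means = y.mean(), y.mean(axis=1)
ms_between = n_reps * np.sum((pen_means - grand) ** 2) / (n_pens - 1)
ms_within = np.sum((y - pen_means[:, None]) ** 2) / (n_pens * (n_reps - 1))
sigma2_b = max((ms_between - ms_within) / n_reps, 0.0)           # between-pen variance component
icc = sigma2_b / (sigma2_b + ms_within)
print(f"ICC on the log scale ~ {icc:.2f}")
```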

  8. Numerical prediction of algae cell mixing feature in raceway ponds using particle tracing methods.

    PubMed

    Ali, Haider; Cheema, Taqi A; Yoon, Ho-Sung; Do, Younghae; Park, Cheol W

    2015-02-01

    In the present study, a novel technique, which involves numerical computation of the mixing length of algae particles in raceway ponds, was used to evaluate the mixing process. A value of mixing length that is higher than the maximum streamwise distance (MSD) of algae cells indicates that the cells experienced an adequate turbulent mixing in the pond. A coupling methodology was adapted to map the pulsating effects of a 2D paddle wheel on a 3D raceway pond in this study. The turbulent mixing was examined based on the computations of mixing length, residence time, and algae cell distribution in the pond. The results revealed that the use of particle tracing methodology is an improved approach to define the mixing phenomenon more effectively. Moreover, the algae cell distribution aided in identifying the degree of mixing in terms of mixing length and residence time. © 2014 Wiley Periodicals, Inc.
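
    The comparison described above can be illustrated on a synthetic trajectory; the step statistics and the definition of mixing length as cumulative path length are assumptions for illustration, not the CFD output of the study.

```python
# Sketch on a synthetic particle track (not the raceway-pond simulation): compare the
# cumulative path length ("mixing length") of an algae particle with its maximum
# streamwise distance (MSD); a mixing length exceeding the MSD indicates that the
# particle also moved across the mean flow, i.e. experienced turbulent mixing.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
dx = 0.02 + 0.005 * rng.normal(size=n)        # streamwise steps, m (hypothetical)
dy = 0.01 * rng.normal(size=n)                # lateral steps from turbulent mixing
dz = 0.005 * rng.normal(size=n)               # vertical steps

path_length = np.sum(np.sqrt(dx**2 + dy**2 + dz**2))     # "mixing length"
msd = np.max(np.abs(np.cumsum(dx)))                      # maximum streamwise distance
print(f"mixing length = {path_length:.1f} m, MSD = {msd:.1f} m, ratio = {path_length/msd:.2f}")
```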

  9. Radiographic appearance of bronchoalveolar carcinoma in nine cats.

    PubMed

    Ballegeer, Elizabeth A; Forrest, Lisa J; Stepien, Rebecca L

    2002-01-01

    Thoracic radiographs of nine cats with confirmed bronchoalveolar carcinoma (BAC) were reviewed retrospectively. Radiographic appearance of BAC was divided into three categories: mixed bronchoalveolar pattern, ill-defined alveolar mass, or mass with cavitation. In addition to these radiographic signs, all nine cats had evidence of some form of bronchial disease. Cavitary lesions were the most common finding (n = 5). In addition, three cats in this category had diffuse bronchointerstitial opacity and one cat had focal peribronchial cuffing. Five cats had either a mixed bronchoalveolar pattern with bronchiectasis (n = 3) or an ill-defined alveolar mass with peribronchial cuffing (n = 2). One cat had both a mixed bronchoalveolar pattern and a cavitary mass. Each of these nine cats had some form of bronchial disease (bronchointerstitial pattern, peribronchial cuffing, or bronchiectasis), which aids in the radiographic diagnosis of bronchoalveolar carcinoma and may represent airway metastasis.

  10. Combustor nozzles in gas turbine engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Thomas Edward; Keener, Christopher Paul; Stewart, Jason Thurman

    2017-09-12

    A micro-mixer nozzle for use in a combustor of a combustion turbine engine, the micro-mixer nozzle including: a fuel plenum defined by a shroud wall connecting a periphery of a forward tube sheet to a periphery of an aft tubesheet; a plurality of mixing tubes extending across the fuel plenum for mixing a supply of compressed air and fuel, each of the mixing tubes forming a passageway between an inlet formed through the forward tubesheet and an outlet formed through the aft tubesheet; and a wall mixing tube formed in the shroud wall.

  11. Explaining prescription drug use and expenditures using the adjusted clinical groups case-mix system in the population of British Columbia, Canada.

    PubMed

    Hanley, Gillian E; Morgan, Steve; Reid, Robert J

    2010-05-01

    Given that prescription drugs have become a major financial component of health care, there is an increased need to explain variations in the use of and expenditure on medicines. Case-mix systems built from existing administrative datasets may prove very useful for such prediction. We estimated the concurrent and prospective predictive validity of the adjusted clinical groups (ACG) system in pharmaceutical research and compared the ACG system with the Charlson index of comorbidity. We ran generalized linear models to examine the predictive validity of the ACG system and the Charlson index and report the correlation between the predicted and observed expenditures. We reported mean predictive ratios across medical condition and cost-defined groups. When predicting use of medicines, we used C-statistics to summarize the area under the receiver operating characteristic curve. The study population comprised the 3,908,533 British Columbia residents who were registered for the universal health care plan for 275+ days in the calendar years 2004 and 2005. Outcomes were total pharmaceutical expenditures, use of any medicines, and use of medicines from 4+ different therapeutic categories. The ACG case-mix system predicted drug expenditures better than the Charlson index. The mean predictive ratios for the ACG system models were all within 4% of the actual costs when examining medical condition groups, and the C-statistics for the 2 dichotomous outcomes were between 0.82 and 0.89. ACG case-mix adjusters are a valuable predictor of pharmaceutical use and expenditures with much higher predictive power than age, sex, and the Charlson index of comorbidity.
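
    The two validation quantities reported above can be computed as in the sketch below on synthetic numbers; the cost distribution, the risk score, and the grouping are invented for illustration and do not correspond to the British Columbia data.

```python
# Illustrative calculation of a mean predictive ratio (predicted / observed cost) and
# a C-statistic (area under the ROC curve) for a dichotomous use-of-medicines outcome.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
observed_cost = rng.gamma(2.0, 400.0, 5000)                     # hypothetical expenditures
predicted_cost = observed_cost * rng.normal(1.0, 0.25, 5000)    # a model's predictions
predictive_ratio = predicted_cost.mean() / observed_cost.mean()

used_meds = rng.integers(0, 2, 5000)                            # any-medicine-use indicator
risk_score = (used_meds * rng.normal(0.7, 0.2, 5000)            # higher scores for users
              + (1 - used_meds) * rng.normal(0.4, 0.2, 5000))
c_stat = roc_auc_score(used_meds, risk_score)
print(f"mean predictive ratio = {predictive_ratio:.2f}, C-statistic = {c_stat:.2f}")
```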

  12. Micromixer-based time-resolved NMR: applications to ubiquitin protein conformation.

    PubMed

    Kakuta, Masaya; Jayawickrama, Dimuthu A; Wolters, Andrew M; Manz, Andreas; Sweedler, Jonathan V

    2003-02-15

    Time-resolved NMR spectroscopy is used to study changes in protein conformation based on the elapsed time after a change in the solvent composition of a protein solution. The use of a micromixer and a continuous-flow method is described where the contents of two capillary flows are mixed rapidly, and then the NMR spectra of the combined flow are recorded at precise time points. The distance after mixing of the two fluids and the flow rates define the solvent-protein interaction time; this method allows the measurement of NMR spectra at precise mixing time points independent of spectral acquisition time. Integration of a micromixer and a microcoil NMR probe enables low-microliter volumes to be used without losing significant sensitivity in the NMR measurement. Ubiquitin, the model compound, changes its conformation from native to A-state at low pH and in 40% or higher methanol/water solvents. Proton NMR resonances of the His-68 and the Tyr-59 of ubiquitin are used to probe the conformational changes. Mixing ubiquitin and methanol solutions under low pH at microliter per minute flow rates yields both native and A-states. As the flow rate decreases, yielding longer reaction times, the population of the A-state increases. The micromixer-NMR system can probe reaction kinetics on a time scale of seconds.
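
    The statement that the post-mixer distance and the flow rates define the interaction time reduces to a simple residence-time calculation, sketched below with hypothetical capillary dimensions and flow rate (not the published setup).

```python
# Residence-time arithmetic: interaction time = swept capillary volume between the
# mixer outlet and the NMR coil divided by the total flow rate. Numbers are hypothetical.
import numpy as np

inner_diameter = 100e-6                      # capillary inner diameter, m (assumed)
distance = 0.30                              # mixer outlet to NMR detection coil, m (assumed)
flow_total = 4.0                             # combined flow of the two streams, uL/min (assumed)

volume_ul = np.pi * (inner_diameter / 2.0) ** 2 * distance * 1e9   # m^3 -> uL
t_interaction = volume_ul / flow_total * 60.0                      # seconds
print(f"swept volume = {volume_ul:.2f} uL, interaction time ~ {t_interaction:.1f} s")
```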

  13. CHARACTERIZING CONTAINERIZED MIXED LOW-LEVEL WASTE FOR TREATMENT - A WORKSHOP PROCEEDINGS

    EPA Science Inventory

    This report is the product of a technical workshop held in May 1993 in Las Vegas, Nevada addressing Mixed Low-Level Waste (MLLW). The workshop was conducted by the Environmental Protection Agency (EPA) and the Department of Energy (DOE). Its purpose was to define the characterizati...

  14. New Colors: Mixed Race Families Still Find a Mixed Reception.

    ERIC Educational Resources Information Center

    Steel, Melissa; Valentine, Glenda

    1995-01-01

    Describes the struggles children of multiracial families face in their daily lives and at school where they commonly experience the social isolation of not belonging to a defined group. The commentary, "Shades of Grey," explores the debate over racial categories, explaining its base in changing social standards. (SLD)

  15. 40 CFR 227.29 - Initial mixing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Initial mixing is defined to be that dispersion or diffusion of liquid, suspended particulate, and solid... adequate to predict initial dispersion and diffusion of the waste, these shall be used, if necessary, in.... (2) When field data on the dispersion and diffusion of a waste of characteristics similar to that...

  16. 40 CFR 227.29 - Initial mixing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Initial mixing is defined to be that dispersion or diffusion of liquid, suspended particulate, and solid... adequate to predict initial dispersion and diffusion of the waste, these shall be used, if necessary, in.... (2) When field data on the dispersion and diffusion of a waste of characteristics similar to that...

  17. 40 CFR 227.29 - Initial mixing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Initial mixing is defined to be that dispersion or diffusion of liquid, suspended particulate, and solid... adequate to predict initial dispersion and diffusion of the waste, these shall be used, if necessary, in.... (2) When field data on the dispersion and diffusion of a waste of characteristics similar to that...

  18. 40 CFR 227.29 - Initial mixing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Initial mixing is defined to be that dispersion or diffusion of liquid, suspended particulate, and solid... adequate to predict initial dispersion and diffusion of the waste, these shall be used, if necessary, in.... (2) When field data on the dispersion and diffusion of a waste of characteristics similar to that...

  19. Outcome differences in adolescent blunt severe polytrauma patients managed at pediatric versus adult trauma centers.

    PubMed

    Rogers, Amelia T; Gross, Brian W; Cook, Alan D; Rinehart, Cole D; Lynch, Caitlin A; Bradburn, Eric H; Heinle, Colin C; Jammula, Shreya; Rogers, Frederick B

    2017-12-01

    Previous research suggests adolescent trauma patients can be managed equally effectively at pediatric and adult trauma centers. We sought to determine whether this association would be upheld for adolescent severe polytrauma patients. We hypothesized that no difference in adjusted outcomes would be observed between pediatric trauma centers (PTCs) and adult trauma centers (ATCs) for this population. All severely injured adolescent (aged 12-17 years) polytrauma patients were extracted from the Pennsylvania Trauma Outcomes Study database from 2003 to 2015. Polytrauma was defined as an Abbreviated Injury Scale (AIS) score ≥3 for two or more AIS-defined body regions. Dead on arrival, transfer, and penetrating trauma patients were excluded from analysis. ATC were defined as adult-only centers, whereas standalone pediatric hospitals and adult centers with pediatric affiliation were considered PTC. Multilevel mixed-effects logistic regression models assessed the adjusted impact of center type on mortality and total complications while controlling for age, shock index, Injury Severity Score, Glasgow Coma Scale motor score, trauma center level, case volume, and injury year. A generalized linear mixed model characterized functional status at discharge (FSD) while controlling for the same variables. A total of 1,606 patients met inclusion criteria (PTC: 868 [54.1%]; ATC: 738 [45.9%]), 139 (8.66%) of which died in-hospital. No significant difference in mortality (adjusted odds ratio [AOR]: 1.10, 95% CI 0.54-2.24; p = 0.794; area under the receiver operating characteristic: 0.89) was observed between designations in adjusted analysis; however, FSD (AOR: 0.38, 95% CI 0.15-0.97; p = 0.043) was found to be lower and total complication trends higher (AOR: 1.78, 95% CI 0.98-3.32; p = 0.058) at PTC for adolescent polytrauma patients. Contrary to existing literature on adolescent trauma patients, our results suggest patients aged 12-17 presenting with polytrauma may experience improved overall outcomes when managed at adult compared to pediatric trauma centers. Epidemiologic study, level III.

  20. Search for heavy Majorana neutrinos in μ±μ± + jets and e±e± + jets events in pp collisions at √s = 7 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.

    A search is performed for heavy Majorana neutrinos (N) using an event signature defined by two same-sign charged leptons of the same flavour and two jets. The data correspond to an integrated luminosity of 4.98 inverse femtobarns of pp collisions at a centre-of-mass energy of 7 TeV collected with the CMS detector at the Large Hadron Collider. No excess of events is observed beyond the expected standard model background and therefore upper limits are set on the square of the mixing parameter, |V_ℓN|^2, for ℓ = e, μ, as a function of heavy Majorana-neutrino mass. These are the first direct upper limits on the heavy Majorana-neutrino mixing for m_N > 90 GeV.

  1. On call at the mall: a mixed methods study of U.S. medical malls

    PubMed Central

    2013-01-01

    Background The decline of the traditional U.S. shopping mall and a focus on more consumer-centered care have created an opportunity for “medical malls”. Medical malls are defined as former retail spaces repurposed for healthcare tenants or mixed-use medical/retail facilities. We aimed to describe the current reach of healthcare services in U.S. malls, characterize the medical mall model and emerging trends, and assess the potential of these facilities to serve low-income populations. Methods We used a mixed methods approach which included a comprehensive literature review, key informant interviews, and a descriptive analysis of the Directory of Major Malls, an online retail database. Results Six percent (n = 89) of large, enclosed shopping malls in the U.S. include at least one non-optometry or dental healthcare tenant. We identified a total of 28 medical malls across the U.S., the majority of which opened in the past five years and serve middle or high income populations. Stakeholders felt the key strengths of medical malls were more convenient access including public transportation, greater familiarity for patients, and “one stop shopping” for primary care and specialty services as well as retail needs. Conclusions While medical malls currently account for a small fraction of malls in the US, they are a new model for healthcare with significant potential for growth. PMID:24209495

  2. On call at the mall: a mixed methods study of U.S. medical malls.

    PubMed

    Uscher-Pines, Lori; Mehrotra, Ateev; Chari, Ramya

    2013-11-09

    The decline of the traditional U.S. shopping mall and a focus on more consumer-centered care have created an opportunity for "medical malls". Medical malls are defined as former retail spaces repurposed for healthcare tenants or mixed-use medical/retail facilities. We aimed to describe the current reach of healthcare services in U.S. malls, characterize the medical mall model and emerging trends, and assess the potential of these facilities to serve low-income populations. We used a mixed methods approach which included a comprehensive literature review, key informant interviews, and a descriptive analysis of the Directory of Major Malls, an online retail database. Six percent (n = 89) of large, enclosed shopping malls in the U.S. include at least one non-optometry or dental healthcare tenant. We identified a total of 28 medical malls across the U.S., the majority of which opened in the past five years and serve middle or high income populations. Stakeholders felt the key strengths of medical malls were more convenient access including public transportation, greater familiarity for patients, and "one stop shopping" for primary care and specialty services as well as retail needs. While medical malls currently account for a small fraction of malls in the US, they are a new model for healthcare with significant potential for growth.

  3. A new approach to mixed H2/H infinity controller synthesis using gradient-based parameter optimization methods

    NASA Technical Reports Server (NTRS)

    Ly, Uy-Loi; Schoemig, Ewald

    1993-01-01

    In the past few years, the mixed H(sub 2)/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H(sub 2)/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H(sub 2)-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H(sub 2)-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions and hence multiple performance criteria and H-infinity constraints and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.
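
    The core idea of the method, folding an H-infinity-type requirement into a differentiable scalar penalty and handing the result to a gradient-based optimizer over the controller parameters, can be shown on a toy problem. The sketch below is not the paper's algorithm: h2_cost and hinf_constraint are hypothetical stand-ins for the exact H2 norm and the H-infinity constraint evaluated at a controller parameter vector theta.

      # Toy sketch of gradient-based design with a differentiable penalty.
      # h2_cost and hinf_constraint are hypothetical placeholders, not the
      # actual H2 norm or H-infinity constraint of a closed-loop system.
      import numpy as np
      from scipy.optimize import minimize

      def h2_cost(theta):            # placeholder performance measure
          return np.sum((theta - np.array([1.0, -0.5, 0.2])) ** 2)

      def hinf_constraint(theta):    # placeholder: must stay <= 0
          return np.sum(theta ** 2) - 1.0

      def penalized(theta, weight=100.0):
          # Smooth exterior penalty: only active when the constraint is violated.
          violation = np.maximum(hinf_constraint(theta), 0.0)
          return h2_cost(theta) + weight * violation ** 2

      result = minimize(penalized, x0=np.zeros(3), method="BFGS")
      print(result.x, hinf_constraint(result.x))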

  4. A multi-species reactive transport model to estimate biogeochemical rates based on single-well push-pull test data

    NASA Astrophysics Data System (ADS)

    Phanikumar, Mantha S.; McGuire, Jennifer T.

    2010-08-01

    Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model has the ability to describe an arbitrary number of species and user-defined reaction rate expressions including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical model based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
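
    As a rough illustration of the kind of calculation such a code performs, the sketch below advances a single solute with first-order decay on a radial advection-dispersion-reaction grid during the injection ("push") phase. It is not the PPTEST code; the geometry, parameter values and the simple explicit scheme are assumptions made only for illustration.

      # Explicit finite-difference sketch of radial advection-dispersion-
      # reaction during the "push" phase, one solute, first-order decay.
      # All parameters and the numerical scheme are illustrative assumptions.
      import numpy as np

      Q = 1.0e-3            # injection rate (m3/s), assumed
      b, theta = 2.0, 0.3   # aquifer thickness (m) and porosity, assumed
      alpha_L = 0.1         # dispersivity (m)
      k = 1.0e-5            # first-order reaction rate (1/s)

      r = np.linspace(0.1, 5.0, 200)         # radial grid (m), excludes the well
      dr = r[1] - r[0]
      v = Q / (2.0 * np.pi * r * b * theta)  # radial pore velocity
      D = alpha_L * v                        # velocity-dependent dispersion

      C = np.zeros_like(r)
      C_in = 1.0                             # injected concentration
      dt = 0.25 * dr**2 / D.max()            # crude stability limit

      for _ in range(20000):
          adv = -v * np.gradient(C, dr)                            # advection
          disp = np.gradient(r * D * np.gradient(C, dr), dr) / r   # dispersion
          C += dt * (adv + disp - k * C)
          C[0] = C_in                        # injection boundary
          C[-1] = C[-2]                      # open outer boundary

      print("front position ~", r[np.argmax(C < 0.5 * C_in)], "m")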

  5. The mixing effects for real gases and their mixtures

    NASA Astrophysics Data System (ADS)

    Gong, M. Q.; Luo, E. C.; Wu, J. F.

    2004-10-01

    The definitions of the adiabatic and isothermal mixing effects in the mixing processes of real gases were presented in this paper. Eight substances with boiling-point temperatures ranging from cryogenic to ambient were selected, based on their relevance to low-temperature refrigeration, to study their binary and multicomponent mixing effects. Detailed analyses were made of the mixing-process parameters to determine their influence on the mixing effects. Those parameters include the temperatures, pressures, and mole fraction ratios of the pure substances before mixing. The results show that the maximum temperature variation occurs at the saturation state of each component in the mixing process. Those components with higher boiling-point temperatures have higher isothermal mixing effects. The maximum temperature variation, which is defined as the adiabatic mixing effect, can reach up to 50 K, and the isothermal mixing effect can reach about 20 kJ/mol. The possible applications of the mixing cooling effect in both open-cycle and closed-cycle refrigeration systems were also discussed.

  6. A novel fermentation strategy for removing the key inhibitor acetic acid and efficiently utilizing the mixed sugars from lignocellulosic hydrolysates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark A. Eiteman, PhD; Elliot Altman, PhD

    2009-02-11

    As part of preliminary research efforts, we have completed several experiments which demonstrate 'proof of concept.' These experiments addressed the following three questions: (1) Can a synthetic mixed sugar solution of glucose and xylose be efficiently consumed using the multi-organism approach? (2) Can this approach be used to accumulate a model product? (3) Can this approach be applied to the removal of an inhibitor, acetate, selectively from mixtures of xylose and glucose? To answer the question of whether this multi-organism approach can effectively consume synthetic mixed sugar solutions, we first tested substrate-selective uptake using two strains, one unable to consume glucose and one unable to consume xylose. The xylose-selective strain ALS998 has mutations in the three genes involved in glucose uptake, rendering it unable to consume glucose: ptsG codes for the Enzyme IICB^Glc of the phosphotransferase system (PTS) for carbohydrate transport (Postma et al., 1993), manZ codes for the IID^Man domain of the mannose PTS permease (Huber, 1996), and glk codes for glucokinase (Curtis and Epstein 1975). We also constructed strain ALS1008 which has a knockout in the xylA gene encoding for xylose isomerase, rendering ALS1008 unable to consume xylose. Two batch experiments and one continuous bioprocess were completed. In the first experiment, each strain was grown separately in a defined medium of 8 g/L xylose and 15 g/L glucose which represented xylose and glucose concentrations that can be generated by actual biomass. In the second experiment, the two strains were grown together in batch in the same defined, mixed-sugar medium. In a third experiment, we grew the strains continuously in a 'chemostat', except that we shifted the concentrations of glucose and xylose periodically to observe how the system would respond. (For example, we shifted the glucose concentration suddenly from 15 g/L to 30 g/L in the feed).

  7. Lurasidone for major depressive disorder with mixed features and anxiety: a post-hoc analysis of a randomized, placebo-controlled study.

    PubMed

    Tsai, Joyce; Thase, Michael E; Mao, Yongcai; Ng-Mak, Daisy; Pikalov, Andrei; Loebel, Antony

    2017-04-01

    The aim of this post-hoc analysis was to evaluate the efficacy of lurasidone in treating patients with major depressive disorder (MDD) with mixed features who present with mild and moderate-to-severe levels of anxiety. The data in this analysis were derived from a study of patients meeting the DSM-IV-TR criteria for unipolar MDD, with a Montgomery-Åsberg Depression Rating Scale (MADRS) total score ≥26, presenting with two or three protocol-defined manic symptoms, who were randomized to 6 weeks of double-blind treatment with either lurasidone 20-60 mg/day (n=109) or placebo (n=100). Anxiety severity was evaluated using the Hamilton Anxiety Rating Scale (HAM-A). To evaluate the effect of baseline anxiety on response to lurasidone, the following two anxiety groups were defined: mild anxiety (HAM-A≤14) and moderate-to-severe anxiety (HAM-A≥15). Change from baseline in MADRS total score was analyzed for each group using a mixed model for repeated measures. Treatment with lurasidone was associated with a significant week 6 change versus placebo in MADRS total score for patients with both mild anxiety (-18.4 vs. -12.8, p<0.01, effect size [ES]=0.59) and moderate-to-severe anxiety (-22.0 vs. -13.0, p<0.001, ES=0.95). Treatment with lurasidone was associated with a significant week 6 change versus placebo in HAM-A total score for patients with both mild anxiety (-7.6 vs. -4.0, p<0.01, ES=0.62), and moderate-to-severe anxiety (-11.4 vs. -6.1, p<0.0001, ES=0.91). In this post-hoc analysis of an MDD with mixed features and anxiety population, treatment with lurasidone was associated with significant improvement in both depressive and anxiety symptoms in subgroups with mild and moderate-to-severe levels of anxiety at baseline.
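
    A repeated-measures analysis of this kind can be approximated in Python with a linear mixed model, as in the hedged sketch below. The trial's MMRM (treatment-by-visit terms, unstructured covariance) is richer than this random-intercept illustration, and the file and column names are hypothetical.

      # Simplified stand-in for a mixed model for repeated measures:
      # random intercept per subject rather than a full MMRM.
      # File and column names are hypothetical.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("madrs_visits.csv")   # one row per subject and visit

      model = smf.mixedlm(
          "madrs_change ~ treatment * week + baseline_madrs + baseline_hama",
          data=df,
          groups=df["subject_id"],
      ).fit()
      print(model.summary())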

  8. Time Perception and Depressive Realism: Judgment Type, Psychophysical Functions and Bias

    PubMed Central

    Kornbrot, Diana E.; Msetfi, Rachel M.; Grimwood, Melvyn J.

    2013-01-01

    The effect of mild depression on time estimation and production was investigated. Participants made both magnitude estimation and magnitude production judgments for five time intervals (specified in seconds) from 3 sec to 65 sec. The parameters of the best fitting psychophysical function (power law exponent, intercept, and threshold) were determined individually for each participant in every condition. There were no significant effects of mood (high BDI, low BDI) or judgment (estimation, production) on the mean exponent, n = .98, 95% confidence interval (.96–1.04) or on the threshold. However, the intercept showed a ‘depressive realism’ effect, where high BDI participants had a smaller deviation from accuracy and a smaller difference between estimation and judgment than low BDI participants. Accuracy bias was assessed using three measures of accuracy: difference, defined as psychological time minus physical time, ratio, defined as psychological time divided by physical time, and a new logarithmic accuracy measure defined as ln (ratio). The ln (ratio) measure was shown to have approximately normal residuals when subjected to a mixed ANOVA with mood as a between groups explanatory factor and judgment and time category as repeated measures explanatory factors. The residuals of the other two accuracy measures flagrantly violated normality. The mixed ANOVAs of accuracy also showed a strong depressive realism effect, just like the intercepts of the psychophysical functions. There was also a strong negative correlation between estimation and production judgments. Taken together these findings support a clock model of time estimation, combined with additional cognitive mechanisms to account for the depressive realism effect. The findings also suggest strong methodological recommendations. PMID:23990960
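
    The three accuracy measures are simple transformations of the paired psychological and physical durations, as the short sketch below shows with made-up example values.

      # Difference, ratio and ln(ratio) accuracy measures; values are invented.
      import numpy as np

      physical = np.array([3.0, 8.0, 20.0, 40.0, 65.0])        # seconds
      psychological = np.array([3.4, 7.5, 22.0, 36.0, 70.0])   # judged seconds

      difference = psychological - physical      # signed error in seconds
      ratio = psychological / physical           # multiplicative bias
      log_accuracy = np.log(ratio)               # ln(ratio), ~normal residuals

      for d, rt, la in zip(difference, ratio, log_accuracy):
          print(f"diff={d:+.1f}s  ratio={rt:.2f}  ln(ratio)={la:+.3f}")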

  9. Molecular fingerprinting of complex grass allergoids: size assessments reveal new insights in epitope repertoires and functional capacities.

    PubMed

    Starchenka, S; Bell, A J; Mwange, J; Skinner, M A; Heath, M D

    2017-01-01

    Subcutaneous allergen immunotherapy (SCIT) is a well-documented treatment for allergic disease which involves injections of native allergen or modified (allergoid) extracts. The use of allergoid vaccines is a growing sector of the allergy immunotherapy market, associated with shorter-course therapy. The aim of this study was the structural and immunological characterisation of group 1 (Lol p 1) IgG-binding epitopes within a complex mix grass allergoid formulation containing rye grass. HP-SEC was used to resolve a mix grass allergoid preparation of high molecular weight into several distinct fractions with defined molecular weight and elution profiles. Allergen verification of the HP-SEC allergoid fractions was confirmed by mass spectrometry analysis. IgE and IgG immunoreactivity of the allergoid preparations was explored and Lol p 1 specific IgG-binding epitopes mapped by SPOT synthesis technology (PepSpot™) with structural analysis based on a Lol p 1 homology model. Grass specific IgE reactivity of the mix grass modified extract (allergoid) was diminished in comparison with the mix grass native extract. A difference in IgG profiles was observed between an intact mix grass allergoid preparation and HP-SEC allergoid fractions, which indicated enhancement of accessible reactive IgG epitopes across size distribution profiles of the mix grass allergoid formulation. Detailed analysis of the epitope specificity showed retention of six Lol p 1 IgG-binding epitopes in the mix grass modified extract. The structural and immunological changes which take place following the grass allergen modification process was further unravelled revealing distinct IgG immunological profiles. All epitopes were mapped on the solvent exposed area of Lol p 1 homology model accessible for IgG binding. One of the epitopes was identified as an 'immunodominant' Lol p 1 IgG-binding epitope (62-IFKDGRGCGSCFEIK-76) and classified as a novel epitope. The results from this study support the concept that modification allows shorter-course therapy options as a result of providing an IgG epitope repertoire important for efficacy. Additionally, the work paves the way to help further develop methods for standardising allergoid platforms.

  10. Monolithic Microfluidic Mixing-Spraying Devices for Time-Resolved Cryo-Electron Microscopy

    PubMed Central

    Lu, Zonghuan; Shaikh, Tanvir R.; Barnard, David; Meng, Xing; Mohamed, Hisham; Yassin, Aymen; Mannella, Carmen A.; Agrawal, Rajendra K.; Lu, Toh-Ming

    2009-01-01

    The goal of time-resolved cryo-electron microscopy is to determine structural models for transient functional states of large macromolecular complexes such as ribosomes and viruses. The challenge of time-resolved cryo-electron microscopy is to rapidly mix reactants, and then, following a defined time interval, to rapidly deposit them as a thin film and freeze the sample to the vitreous state. Here we describe a methodology in which reaction components are mixed and allowed to react, and are then sprayed onto an EM grid as it is being plunged into cryogen. All steps are accomplished by a monolithic, microfabricated silicon device that incorporates a mixer, reaction channel, and pneumatic sprayer in a single chip. We have found that microdroplets produced by air atomization spread to sufficiently thin films on a millisecond time scale provided that the carbon supporting film is made suitably hydrophilic. The device incorporates two T-mixers flowing into a single channel of four butterfly-shaped mixing elements that ensure effective mixing, followed by a microfluidic reaction channel whose length can be varied to achieve the desired reaction time. The reaction channel is flanked by two ports connected to compressed humidified nitrogen gas (at 50 psi) to generate the spray. The monolithic mixer-sprayer is incorporated into a computer-controlled plunging apparatus. To test the mixing performance and the suitability of the device for preparation of biological macromolecules for cryo-EM, ribosomes and ferritin were mixed in the device and sprayed onto grids. Three-dimensional reconstructions of the ribosomes demonstrated retention of native structure, and 30S and 50S subunits were shown to be capable of reassociation into ribosomes after passage through the device. PMID:19683579

  11. Trophic structure of mesopelagic fishes in the Gulf of Mexico revealed by gut content and stable isotope analyses

    USGS Publications Warehouse

    McClain-Counts, Jennifer P.; Demopoulos, Amanda W.J.; Ross, Steve W.

    2017-01-01

    Mesopelagic fishes represent an important component of the marine food web due to their global distributions, high abundances and ability to transport organic material throughout a large part of the water column. This study combined stable isotope (SIAs) and gut content analyses (GCAs) to characterize the trophic structure of mesopelagic fishes in the North-Central Gulf of Mexico. Additionally, this study examined whether mesopelagic fishes utilized chemosynthetic energy from cold seeps. Specimens were collected (9–25 August 2007) over three deep (>1,000 m) cold seeps at discrete depths (surface to 1,503 m) over the diurnal cycle. GCA classified 31 species (five families) of mesopelagic fishes into five feeding guilds: piscivores, large crustacean consumers, copepod consumers, generalists and mixed zooplanktivores. However, these guilds were less clearly defined based on stable isotope mixing model (MixSIAR) results, suggesting diets may be more mixed over longer time periods (weeks–months) and across co-occurring species. Copepods were likely important for the majority of mesopelagic fishes, consistent with GCA (this study) and previous literature. MixSIAR results also identified non-crustacean prey items, including salps and pteropods, as potentially important prey items for mesopelagic fishes, including those fishes not analysed in GCA (Sternoptyx spp. and Melamphaidae). Salps and other soft-bodied species are often missed in GCAs. Mesopelagic fishes had δ13C results consistent with particulate organic matter serving as the baseline organic carbon source, fueling up to three trophic levels. Fishes that undergo diel vertical migration were depleted in 15N relative to weak migrators, consistent with depth-specific isotope trends in sources and consumers, and assimilation of 15N-depleted organic matter in surface waters. Linear correlations between fish size and δ15N values suggested ontogenetic changes in fish diets for several species. While there was no direct measure of mesopelagic fishes assimilating chemosynthetic material, detection of infrequent consumption of this food resource may be hindered by the assimilation of isotopically enriched photosynthetic organic matter. By utilizing multiple dietary metrics (e.g. GCA, δ13C, δ15N, MixSIAR), this study better defined the trophic structure of mesopelagic fishes and allowed for insights on feeding, ultimately providing useful baseline information from which to track mesopelagic trophodynamics over time and space.
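
    MixSIAR itself is a Bayesian R package, but the mass-balance idea behind isotope mixing models can be shown with a deterministic calculation: with two isotopes and three sources, the source contributions follow from a small linear system. All source, consumer and trophic discrimination values below are hypothetical.

      # Deterministic two-isotope, three-source mixing calculation (not MixSIAR).
      # All numbers are hypothetical illustrations.
      import numpy as np

      # rows: mass balance, d13C, d15N; columns: copepods, salps, pteropods
      sources = np.array([
          [1.0,   1.0,   1.0],
          [-20.5, -22.8, -19.0],   # source d13C (permil)
          [6.0,   4.5,   7.5],     # source d15N (permil)
      ])
      tdf = np.array([0.0, 0.4, 3.4])          # assumed trophic discrimination
      consumer = np.array([1.0, -20.3, 9.4])   # leading 1, fish d13C, d15N

      fractions = np.linalg.solve(sources, consumer - tdf)
      print(dict(zip(["copepods", "salps", "pteropods"], fractions.round(2))))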

  12. Individualising Chronic Care Management by Analysing Patients' Needs - A Mixed Method Approach.

    PubMed

    Timpel, P; Lang, C; Wens, J; Contel, J C; Gilis-Januszewska, A; Kemple, K; Schwarz, P E

    2017-11-13

    Modern health systems are increasingly faced with the challenge to provide effective, affordable and accessible health care for people with chronic conditions. As evidence on the specific unmet needs and their impact on health outcomes is limited, practical research is needed to tailor chronic care to individual needs of patients with diabetes. Qualitative approaches to describe professional and informal caregiving will support understanding the complexity of chronic care. Results are intended to provide practical recommendations to be used for systematic implementation of sustainable chronic care models. A mixed method study was conducted. A standardised survey (n = 92) of experts in chronic care using mail responses to open-ended questions was conducted to analyse existing chronic care programs focusing on effective, problematic and missing components. An expert workshop (n = 22) of professionals and scientists of a European funded research project MANAGE CARE was used to define a limited number of unmet needs and priorities of elderly patients with type 2 diabetes mellitus and comorbidities. This list was validated and ranked using a multilingual online survey (n = 650). Participants of the online survey included patients, health care professionals and other stakeholders from 56 countries. The survey indicated that current care models need to be improved in terms of financial support, case management and the consideration of social care. The expert workshop identified 150 patient needs which were summarised in 13 needs dimensions. The online survey of these pre-defined dimensions revealed that financial issues, education of both patients and professionals, availability of services as well as health promotion are the most important unmet needs for both patients and professionals. The study uncovered competing demands which are not limited to medical conditions. The findings emphasise that future care models need to focus stronger on individual patient needs and promote their active involvement in co-design and implementation. Future research is needed to develop new chronic care models providing evidence-based and practical implications for the regional care setting.

  13. Retrospective analysis of Bluetongue farm risk profile definition, based on biology, farm management practices and climatic data.

    PubMed

    Cappai, Stefano; Loi, Federica; Coccollone, Annamaria; Contu, Marino; Capece, Paolo; Fiori, Michele; Canu, Simona; Foxi, Cipriano; Rolesu, Sandro

    2018-07-01

    Bluetongue (BT) is a vector-borne disease transmitted by species of Culicoides midges (Diptera: Ceratopogonidae). Many studies have contributed to clarifying various aspects of its aetiology, epidemiology and vector dynamic; however, BT remains a disease of epidemiological and economic importance that affects ruminants worldwide. Since 2000, the Sardinia region has been the most affected area of the Mediterranean basin. The region is characterised by wide pastoral areas for sheep and represents the most likely candidate region for the study of Bluetongue virus (BTV) distribution and prevalence in Italy. Furthermore, specific information on the farm level and epidemiological studies needs to be provided to increase the knowledge on the disease's spread and to provide valid mitigation strategies in Sardinia. This study conducted a punctual investigation into the spatial patterns of BTV transmission to define a risk profile for all Sardinian farms by using a logistic multilevel mixed model that takes into account agro-meteorological aspects, as well as farm characteristics and management. Data about animal density (i.e. sheep, goats and cattle), vaccination, previous outbreaks, altitude, land use, rainfall, evapotranspiration, water surface, and farm management practices (i.e. use of repellents, treatment against insect vectors, storage of animals in shelter overnight, cleaning, presence of mud and manure) were collected for 12,277 farms for the years 2011-2015. The logistic multilevel mixed model showed the fundamental role of climatic factors in disease development and the protective role of good management, vaccination, outbreak in the previous year and altitude. Regional BTV risk maps were developed, based on the predictor values of logistic model results, and updated every 10 days. These maps were used to identify, 20 days in advance, the areas at highest risk. The risk farm profile, as defined by the model, would provide specific information about the role of each factor for all Sardinian institutions involved in devising BT prevention and control strategies. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  14. MixSIAR: advanced stable isotope mixing models in R

    EPA Science Inventory

    Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...

  15. Interhospital differences and case-mix in a nationwide prevalence survey.

    PubMed

    Kanerva, M; Ollgren, J; Lyytikäinen, O

    2010-10-01

    A prevalence survey is a time-saving and useful tool for obtaining an overview of healthcare-associated infection (HCAI) either in a single hospital or nationally. Direct comparison of prevalence rates is difficult. We evaluated the impact of case-mix adjustment on hospital-specific prevalences. All five tertiary care, all 15 secondary care and 10 (25% of 40) other acute care hospitals took part in the first national prevalence survey in Finland in 2005. US Centers for Disease Control and Prevention criteria served to define HCAI. The information collected included demographic characteristics, severity of the underlying disease, use of catheters and a respirator, and previous surgery. Patients with HCAI related to another hospital were excluded. Case-mix-adjusted HCAI prevalences were calculated by using a multivariate logistic regression model for HCAI risk and an indirect standardisation method. Altogether, 587 (7.2%) of 8118 adult patients had at least one infection; hospital-specific prevalences ranged between 1.9% and 12.6%. Risk factors for HCAI that were previously known or identified by univariate analysis (age, male gender, intensive care, high Charlson comorbidity and McCabe indices, respirator, central venous or urinary catheters, and surgery during stay) were included in the multivariate analysis for standardisation. Case-mix-adjusted prevalences varied between 2.6% and 17.0%, and ranked the hospitals differently from the observed rates. In 11 (38%) hospitals, the observed prevalence rank was lower than predicted by the case-mix-adjusted figure. Case-mix should be taken into consideration in the interhospital comparison of prevalence rates. Copyright 2010 The Hospital Infection Society. Published by Elsevier Ltd. All rights reserved.
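
    The indirect standardisation step can be sketched as follows: fit a patient-level risk model on the pooled survey data, sum the predicted risks per hospital, and scale the observed-to-expected ratio by the national prevalence. The file and column names below are hypothetical; the survey's model used the specific risk factors listed above.

      # Sketch of case-mix adjustment by indirect standardisation.
      # File and column names are hypothetical.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("prevalence_survey.csv")   # patient-level records

      risk_model = smf.logit(
          "hcai ~ age + male + icu + charlson + mccabe + respirator"
          " + cvc + urinary_cath + surgery",
          data=df,
      ).fit()
      df["expected"] = risk_model.predict(df)     # predicted HCAI risk per patient

      overall = df["hcai"].mean()                 # national prevalence
      by_hosp = df.groupby("hospital").agg(observed=("hcai", "sum"),
                                           expected=("expected", "sum"))
      # Standardised prevalence ratio scaled to the national prevalence.
      by_hosp["adjusted"] = by_hosp["observed"] / by_hosp["expected"] * overall
      print(by_hosp.sort_values("adjusted"))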

  16. Zones of life in the subsurface of hydrothermal vents: A synthesis

    NASA Astrophysics Data System (ADS)

    Larson, B. I.; Houghton, J.; Meile, C. D.

    2011-12-01

    Subsurface microbial communities in Mid-ocean Ridge (MOR) hydrothermal systems host a wide array of unique metabolic strategies, but the spatial distribution of biogeochemical transformations is poorly constrained. Here we present an approach that reexamines chemical measurements from diffuse fluids with models of convective transport to delineate likely reaction zones. Chemical data have been compiled from bare basalt surfaces at a wide array of mid-ocean ridge systems, including 9°N, East Pacific Rise, Axial Seamount, Juan de Fuca, and Lucky Strike, Mid-Atlantic Ridge. Co-sampled end-member fluid from Ty (EPR) was used to constrain reaction path models that define diffuse fluid compositions as a function of temperature. The degree of mixing between hot vent fluid (350 deg. C) and seawater (2 deg. C) governs fluid temperature; Fe-oxide mineral precipitation is suppressed, and aqueous redox reactions are prevented from equilibrating, consistent with sluggish kinetics. Quartz and pyrite are predicted to precipitate, consistent with field observations. Most reported samples of diffuse fluids from EPR and Axial Seamount fall along the same predicted mixing line only when pyrite precipitation is suppressed, but Lucky Strike fluids do not follow the same trend. The predicted fluid composition as a function of temperature is then used to calculate the free energy available to autotrophic microorganisms for a variety of catabolic strategies in the subsurface. Finally, the relationship between temperature and free energy is combined with modeled temperature fields (Lowell et al., 2007, Geochem. Geophys. Geosyst.) over a 500 m x 500 m region extending downward from the seafloor and outward from the high temperature focused hydrothermal flow to define areas that are energetically most favorable for a given metabolic process as well as below the upper temperature limit for life (~120 deg. C). In this way, we can expand the relevance of geochemical model predictions of bioenergetics by predicting functionally-defined 'Zones of Life' and placing them spatially within the boundary of the 120 deg. C isotherm, estimating the extent of the subsurface biosphere beneath mid-ocean ridge hydrothermal systems. Preliminary results indicate that methanogenesis yields the most energy per kg of vent fluid, consistent with the elevated CH4(aq) seen at all three sites, but may be constrained by temperatures too hot for microbial life, while available energy from the oxidation of Fe(II) peaks near regions of the crust that are more hospitable.

  17. Mapping the Mixed Methods–Mixed Research Synthesis Terrain

    PubMed Central

    Sandelowski, Margarete; Voils, Corrine I.; Leeman, Jennifer; Crandell, Jamie L.

    2012-01-01

    Mixed methods–mixed research synthesis is a form of systematic review in which the findings of qualitative and quantitative studies are integrated via qualitative and/or quantitative methods. Although methodological advances have been made, efforts to differentiate research synthesis methods have been too focused on methods and not focused enough on the defining logics of research synthesis—each of which may be operationalized in different ways—or on the research findings themselves that are targeted for synthesis. The conduct of mixed methods–mixed research synthesis studies may more usefully be understood in terms of the logics of aggregation and configuration. Neither logic is preferable to the other nor tied exclusively to any one method or to any one side of the qualitative/quantitative binary. PMID:23066379

  18. Nanopatterns by phase separation of patterned mixed polymer monolayers

    DOEpatents

    Huber, Dale L; Frischknecht, Amalie

    2014-02-18

    Micron-size and sub-micron-size patterns on a substrate can direct the self-assembly of surface-bonded mixed polymer brushes to create nanoscale patterns in the phase-separated mixed polymer brush. The larger scale features, or patterns, can be defined by a variety of lithographic techniques, as well as other physical and chemical processes including but not limited to etching, grinding, and polishing. The polymer brushes preferably comprise vinyl polymers, such as polystyrene and poly(methyl methacrylate).

  19. Contractor Logistics Support in the U.S. Air Force

    DTIC Science & Technology

    2009-01-01

    limits), or it can engage in a mix of the two approaches. This monograph addresses CLS, which is defined as contractor sustainment of a weapon system... organic facilities; it can pay contractors to do the work (subject to some congressional limits); or it can apply a mix of the two approaches. Organic... levels are largely stable and represent a mix of services, including contractor-operated facilities and instal... (Figure 3.1, Air Force CSS for Weapon...)

  20. Selection of an actinobacteria mixed culture for chlordane remediation. Pesticide effects on microbial morphology and bioemulsifier production.

    PubMed

    Fuentes, María S; Colin, Verónica L; Amoroso, María J; Benimeli, Claudia S

    2016-02-01

    Chlordane bioremediation using actinobacteria mixed culture is an attractive clean-up technique. Their ability to produce bioemulsifiers could increase the bioavailability of this pesticide. In order to select a defined actinobacteria mixed culture for chlordane remediation, compatibility assays were performed among six Streptomyces strains. The strains did not show growth inhibition, and they were assayed for chlordane removal, either as pure or as mixed cultures. In pure cultures, all of the strains showed specific dechlorination activity (1.42-24.20 EU mg(-1)) and chlordane removal abilities (91.3-95.5%). The specific dechlorination activity was mainly improved with cultures of three or four microorganisms. The mixed culture consisting of Streptomyces sp. A2-A5-A13 was selected. Their ability to produce bioemulsifiers in the presence of glucose or chlordane was tested, but no significant differences were observed (p > 0.05). However, the stability of the emulsions formed was linked to the carbon source used. Only in the presence of chlordane did the emulsions retain 100% of their initial height. Finally, the selected consortium showed a high degree of sporulation in the presence of the pesticide. This is the first study on the effects that chlordane exerts on microbial morphology and emulsifier production for a defined mixed culture of Streptomyces with the ability to remediate the pesticide. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Defining the ecologically relevant mixed-layer depth for Antarctica's coastal seas

    NASA Astrophysics Data System (ADS)

    Carvalho, Filipa; Kohut, Josh; Oliver, Matthew J.; Schofield, Oscar

    2017-01-01

    Mixed-layer depth (MLD) has been widely linked to phytoplankton dynamics in Antarctica's coastal regions; however, inconsistent definitions have made intercomparisons among region-specific studies difficult. Using a data set with over 20,000 water column profiles corresponding to 32 Slocum glider deployments in three coastal Antarctic regions (Ross Sea, Amundsen Sea, and West Antarctic Peninsula), we evaluated the relationship between MLD and phytoplankton vertical distribution. Comparisons of these MLD estimates to an applied definition of phytoplankton bloom depth, as defined by the deepest inflection point in the chlorophyll profile, show that the maximum of buoyancy frequency is a good proxy for an ecologically relevant MLD. A quality index is used to filter profiles where MLD is not determined. Despite the different regional physical settings, we found that the MLD definition based on the maximum of buoyancy frequency best describes the depth to which phytoplankton can be mixed in Antarctica's coastal seas.
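
    A minimal version of the proposed estimate, the depth of maximum buoyancy frequency in a single profile, is sketched below on a synthetic density profile; the paper additionally applies a quality index before accepting an MLD value.

      # MLD as the depth of maximum buoyancy frequency; the profile is synthetic.
      import numpy as np

      g, rho0 = 9.81, 1027.0                    # gravity, reference density
      depth = np.arange(1.0, 201.0, 1.0)        # m, positive downward
      rho = 1026.5 + 0.5 / (1 + np.exp(-(depth - 40.0) / 5.0))  # synthetic pycnocline

      # N^2 = -(g/rho0) d(rho)/dz with z positive up, i.e. +(g/rho0) d(rho)/d(depth)
      n2 = (g / rho0) * np.gradient(rho, depth)

      mld = depth[np.argmax(n2)]
      print(f"MLD (max buoyancy frequency) = {mld:.1f} m")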

  2. Separating Internal Waves and Vortical Motions: Analysis of LatMix -EM-APEX Float Measurements

    DTIC Science & Technology

    2015-09-30

    vortical motions and internal waves and quantify their effects on horizontal dispersion and diapycnal mixing. WORK COMPLETED... defined as Π = (f ẑ + ∇×U) ⋅ ∇(z − η) (e.g., Kunze and Sanford 1993), where f is the Coriolis frequency, U the velocity vector, z the vertical coordinate

  3. Scaffolding Argumentation about Water Quality: A Mixed-Method Study in a Rural Middle School

    ERIC Educational Resources Information Center

    Belland, Brian R.; Gu, Jiangyue; Armbrust, Sara; Cook, Brant

    2015-01-01

    A common way for students to develop scientific argumentation abilities is through argumentation about socioscientific issues, defined as scientific problems with social, ethical, and moral aspects. Computer-based scaffolding can support students in this process. In this mixed method study, we examined the use and impact of computer based…

  4. An Emic, Mixed-Methods Approach to Defining and Measuring Positive Parenting among Low-Income Black Families

    ERIC Educational Resources Information Center

    McWayne, Christine M.; Mattis, Jacqueline S.; Green Wright, Linnie E.; Limlingan, Maria Cristina; Harris, Elise

    2017-01-01

    Research Findings: This within-group exploratory sequential mixed-methods investigation sought to identify how ethnically diverse, urban-residing, low-income Black families conceptualize positive parenting. During the item development phase 119 primary caregivers from Head Start programs participated in focus groups and interviews. These…

  5. Estimates of water source contributions in a dynamic urban water supply system inferred via a Bayesian stable isotope mixing model

    NASA Astrophysics Data System (ADS)

    Jameel, M. Y.; Brewer, S.; Fiorella, R.; Tipple, B. J.; Bowen, G. J.; Terry, S.

    2017-12-01

    Public water supply systems (PWSS) are complex distribution systems and critical infrastructure, making them vulnerable to physical disruption and contamination. Exploring the susceptibility of PWSS to such perturbations requires detailed knowledge of the supply system structure and operation. Although the physical structure of supply systems (i.e., pipeline connection) is usually well documented for developed cities, the actual flow patterns of water in these systems are typically unknown or estimated based on hydrodynamic models with limited observational validation. Here, we present a novel method for mapping the flow structure of water in a large, complex PWSS, building upon recent work highlighting the potential of stable isotopes of water (SIW) to document water management practices within complex PWSS. We sampled a major water distribution system of the Salt Lake Valley, Utah, measuring SIW of water sources, treatment facilities, and numerous sites within the supply system. We then developed a hierarchical Bayesian (HB) isotope mixing model to quantify the proportion of water supplied by different sources at sites within the supply system. Known production volumes and spatial distance effects were used to define the prior probabilities for each source; however, we did not include other physical information about the supply system. Our results were in general agreement with those obtained by hydrodynamic models and provide quantitative estimates of contributions of different water sources to a given site along with robust estimates of uncertainty. Secondary properties of the supply system, such as regions of "static" and "dynamic" sourcing (e.g., regions supplied dominantly by one source vs. those experiencing active mixing between multiple sources), can be inferred from the results. The isotope-based HB mixing model offers a new investigative technique for analyzing PWSS and documenting aspects of supply system structure and operation that are otherwise challenging to observe. The method could allow water managers to document spatiotemporal variation in PWSS flow patterns, critical for interrogating the distribution system to inform operational decision making or disaster response, optimize water supply, and monitor and enforce water rights.
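
    Stripped of its hierarchy and of the volume- and distance-informed priors, the basic inference can be illustrated with a one-site, two-source grid posterior over the mixing fraction, as in the sketch below; all isotope values are hypothetical.

      # Non-hierarchical, two-source illustration of the mixing-fraction idea.
      # Source signatures, site value and analytical sd are hypothetical.
      import numpy as np

      d18O_A, d18O_B = -16.0, -12.5      # source isotope signatures (permil)
      d18O_site, sigma = -14.0, 0.3      # measured tap value and analytical sd

      f = np.linspace(0.0, 1.0, 1001)    # candidate fraction of source A
      predicted = f * d18O_A + (1 - f) * d18O_B

      # Flat prior on f; Gaussian measurement likelihood.
      log_like = -0.5 * ((d18O_site - predicted) / sigma) ** 2
      posterior = np.exp(log_like - log_like.max())
      posterior /= posterior.sum()       # normalise over the grid

      mean_f = (f * posterior).sum()
      print(f"posterior mean fraction of source A: {mean_f:.2f}")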

  6. The origin of pallasites. A combined experimental and numerical approach.

    NASA Astrophysics Data System (ADS)

    Golabek, G.; Solferino, G. F. D.

    2017-12-01

    Pallasites are simple stony-iron meteorites made of olivine, FeNi, FeS +/- pyroxene. The presence of olivine as well-rounded grains or highly angular fragments, and occasionally both types (mixed-type pallasites), combined with the dunite-like mineralogy makes it difficult to define a robust scenario for pallasite genesis. It has been suggested that mixing of Fe-Ni-S and olivine was caused by a non-destructive collision among planetesimals. Yet, this hypothesis needs to be tested, and hitherto no attempt to reproduce the simultaneous presence of olivine, solid Fe(Ni) and molten FeS has been made. In this study we performed experiments with olivine plus partially molten Fe(Ni)-S, a composition most similar to those of pallasite meteorites. The main goal was to define the grain growth rate of olivine surrounded by a matrix of Fe(Ni) and FeS melt. Additionally, a 2D finite-difference numerical model was used to define a realistic scenario (e.g., time of impact, depth of intrusion of the Fe-Ni-S) for the formation of rounded- and mixed-type pallasites for the first time. Olivine grain growth in partially molten Fe-S follows: d^n - d_0^n = k_0 exp(-E_a/RT) t, where d is the grain size at time t, d_0 is the starting grain size, n = 3.70 (61) the growth exponent, k_0 = 3.20 μm^n s^-1 a characteristic constant, E_a = 101 (78) kJ/mol the activation energy for a specific growth process, R the gas constant, and T the absolute temperature. This is substantially slower grain growth than in the case of olivine surrounded by FeS melt (i.e., n = 2.42), but significantly faster than for olivine+FeNi or olivine+Ni (n > 4 or 5). We concluded that the olivine grain growth limiting factor is the coarsening rate of solid Fe(Ni), which is in agreement with previous studies. Yet, we proved that the presence of FeS melt in contact with Fe(Ni) catalyzes the ripening of the latter. The overarching conclusion of this study is that all main phases known to be present during annealing of a given silicate mineral must be reproduced experimentally in order to accurately define its growth rate, with simplified systems not suited for the scope.
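
    For illustration, the reported growth law can be rearranged to predict grain size after a given annealing time, as in the sketch below; the temperature, duration and starting size are arbitrary example inputs rather than values from the study.

      # Evaluate d from d^n - d0^n = k0 * exp(-Ea/(R*T)) * t using the central
      # fitted values quoted above; the inputs below are arbitrary examples.
      import numpy as np

      n, k0, Ea, R = 3.70, 3.20, 101e3, 8.314   # k0 in micron^n/s, Ea in J/mol

      def olivine_grain_size(d0_um, T_kelvin, t_seconds):
          """Grain size (microns) after annealing for t seconds at T kelvin."""
          return (d0_um**n + k0 * np.exp(-Ea / (R * T_kelvin)) * t_seconds) ** (1.0 / n)

      seconds_per_year = 3.156e7
      print(olivine_grain_size(d0_um=10.0, T_kelvin=1500.0,
                               t_seconds=1e6 * seconds_per_year))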

  7. Device and method for screening crystallization conditions in solution crystal growth

    NASA Technical Reports Server (NTRS)

    Carter, Daniel C. (Inventor)

    1995-01-01

    A device and method for detecting optimum protein crystallization conditions and for growing protein crystals in either 1g or microgravity environments comprising a housing, defining at least one pair of chambers for containing crystallization solutions is presented. The housing further defines an orifice therein for providing fluid communication between the chambers. The orifice is adapted to receive a tube which contains a gelling substance for limiting the rate of diffusive mixing of the crystallization solutions. The solutions are diffusively mixed over a period of time defined by the quantity of gelling substance sufficient to achieve equilibration and to substantially reduce density driven convection disturbances therein. The device further includes endcaps to seal the first and second chambers. One of the endcaps includes a dialysis chamber which contains protein solution in which protein crystals are grown. Once the endcaps are in place, the protein solution is exposed to the crystallization solutions wherein the solubility of the protein solution is reduced at a rate responsive to the rate of diffusive mixing of the crystallization solutions. This allows for a controlled approach to supersaturation and allows for screening of crystal growth conditions at preselected intervals.

  8. Device and Method for Screening Crystallization Conditions in Solution Crystal Growth

    NASA Technical Reports Server (NTRS)

    Carter, Daniel C. (Inventor)

    1997-01-01

    A device and method for detecting optimum protein crystallization conditions and for growing protein crystals in either 1 g or microgravity environments comprising a housing defining at least one pair of chambers for containing crystallization solutions. The housing further defines an orifice therein for providing fluid communication between the chambers. The orifice is adapted to receive a tube which contains a gelling substance for limiting the rate of diffusive mixing of the crystallization solutions. The solutions are diffusively mixed over a period of time defined by the quantity of gelling substance sufficient to achieve equilibration and to substantially reduce density driven convection disturbances therein. The device further includes endcaps to seal the first and second chambers. One of the endcaps includes a dialysis chamber which contains protein solution in which protein crystals are grown. Once the endcaps are in place, the protein solution is exposed to the crystallization solutions wherein the solubility of the protein solution is reduced at a rate responsive to the rate of diffusive mixing of the crystallization solutions. This allows for a controlled approach to supersaturation and allows for screening of crystal growth conditions at preselected intervals.

  9. Chemical Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Aquino, T.; Dentz, M.

    2017-12-01

    Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
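
    The generalized Gillespie idea described above can be sketched as follows: reactions are still selected in proportion to their propensities, but the waiting time between reactions is drawn from an arbitrary distribution (here a gamma with matching mean) instead of the exponential of the well-mixed case. The toy reaction network and parameters are illustrative assumptions, not taken from the paper.

      # Toy "generalized Gillespie" step with gamma-distributed waiting times.
      # Reaction network, rates and the gamma shape are illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)

      def propensities(state, k1=1.0, k2=0.5):
          a, b, c = state
          return np.array([k1 * a * b,   # A + B -> C
                           k2 * c])      # C -> A + B

      stoich = np.array([[-1, -1, +1],
                         [+1, +1, -1]])

      def waiting_time(total_rate, shape=0.5):
          # Gamma delay with mean 1/total_rate; shape != 1 gives
          # non-exponential (non-Markovian) inter-reaction statistics.
          return rng.gamma(shape, 1.0 / (shape * total_rate))

      state, t = np.array([100, 80, 0]), 0.0
      while t < 10.0 and propensities(state).sum() > 0:
          a = propensities(state)
          t += waiting_time(a.sum())
          reaction = rng.choice(len(a), p=a / a.sum())
          state = state + stoich[reaction]

      print("t =", round(t, 2), "state (A, B, C) =", state)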

  10. When the Mannequin Dies, Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

    PubMed

    Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W

    2016-06-01

    Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and degree of presession anxiety (P < 0.001). Debriefing was found to significantly reduce negative emotion and enhance satisfaction. Sixty-nine percent of respondents indicated that mannequin death enhanced learning. These results were used to modify our framework. Using the previous approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.

  11. Defining standardized protocols for determining the efficacy of a postmilking teat disinfectant following experimental exposure of teats to mastitis pathogens.

    PubMed

    Schukken, Y H; Rauch, B J; Morelli, J

    2013-04-01

    The objective of this paper was to define standardized protocols for determining the efficacy of a postmilking teat disinfectant following experimental exposure of teats to both Staphylococcus aureus and Streptococcus agalactiae. The standardized protocols describe the selection of cows and herds and define the critical points in performing experimental exposure, performing bacterial culture, evaluating the culture results, and finally performing statistical analyses and reporting of the results. The protocols define both negative control and positive control trials. For negative control trials, the protocol states that an efficacy of reducing new intramammary infections (IMI) of at least 40% is required for a teat disinfectant to be considered effective. For positive control trials, noninferiority to a control disinfectant with a published efficacy of reducing new IMI of at least 70% is required. Sample sizes for both negative and positive control trials are calculated. Positive control trials are expected to require a large trial size. Statistical analysis methods are defined and, in the proposed methods, the rate of IMI may be analyzed using generalized linear mixed models. The efficacy of the test product can be evaluated while controlling for important covariates and confounders in the trial. Finally, standards for reporting are defined and reporting considerations are discussed. The use of the defined protocol is shown through presentation of the results of a recent trial of a test product against a negative control. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
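
    The headline efficacy criterion reduces to a comparison of new-IMI rates between quarters exposed on test and control teats; a minimal sketch with hypothetical counts is shown below, keeping in mind that the protocol itself analyses IMI rates with generalized linear mixed models and covariates.

      # Efficacy as percent reduction in new-IMI rate versus control, with the
      # 40% threshold for a negative control trial. Counts are hypothetical.
      new_imi_control, quarters_control = 60, 400
      new_imi_test, quarters_test = 30, 410

      rate_control = new_imi_control / quarters_control
      rate_test = new_imi_test / quarters_test

      efficacy = 1.0 - rate_test / rate_control
      print(f"efficacy = {efficacy:.0%} -> meets 40% threshold? {efficacy >= 0.40}")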

  12. An Aircraft-Based Upper Troposphere Lower Stratosphere O3, CO, and H2O Climatology for the Northern Hemisphere

    NASA Technical Reports Server (NTRS)

    Tilmes, S.; Pan, L. L.; Hoor, P.; Atlas, E.; Avery, M. A.; Campos, T.; Christensen, L. E.; Diskin, G. S.; Gao, R.-S.; Herman, R. L.

    2010-01-01

    We present a climatology of O3, CO, and H2O for the upper troposphere and lower stratosphere (UTLS), based on a large collection of high-resolution research aircraft data taken between 1995 and 2008. To group aircraft observations with sparse horizontal coverage, the UTLS is divided into three regimes: the tropics, subtropics, and the polar region. These regimes are defined using a set of simple criteria based on tropopause height and multiple tropopause conditions. Tropopause-referenced tracer profiles and tracer-tracer correlations show distinct characteristics for each regime, which reflect the underlying transport processes. The UTLS climatology derived here shows many features of earlier climatologies. In addition, mixed air masses in the subtropics, identified by O3-CO correlations, show two characteristic modes in the tracer-tracer space that are a result of mixed air masses in layers above and below the tropopause (TP). A thin layer of mixed air (1.2 km around the tropopause) is identified for all regions and seasons, where tracer gradients across the TP are largest. The most pronounced influence of mixing between the tropical transition layer and the subtropics was found in spring and summer in the region above 380 K potential temperature. The vertical extent of mixed air masses between UT and LS reaches up to 5 km above the TP. The tracer correlations and distributions in the UTLS derived here can serve as a reference for model and satellite data evaluation.

  13. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
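
    The curation strategy favoured by the authors, restricting the training set to one assay protocol and screening out compounds whose replicate activities diverge widely, can be sketched with pandas as below; the file and column names are hypothetical placeholders for a mapped ChEMBL/Integrity export.

      # Keep records from a single assay protocol and drop compounds whose
      # replicate activities span more than one log unit. Names are hypothetical.
      import pandas as pd

      records = pd.read_csv("hiv1_rt_records.csv")   # merged database export

      one_assay = records[records["assay_description"] == "HIV-1 RT enzymatic IC50"]

      spread = one_assay.groupby("compound_id")["pIC50"].agg(["mean", "min", "max"])
      spread["inconsistent"] = (spread["max"] - spread["min"]) > 1.0

      bad_ids = spread[spread["inconsistent"]].index
      training = one_assay[~one_assay["compound_id"].isin(bad_ids)]
      print(len(records), "records ->", training["compound_id"].nunique(), "compounds")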

  14. Optimal sensor placement for control of a supersonic mixed-compression inlet with variable geometry

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth Thomas

    A method of using fluid dynamics models for the generation of models that are useable for control design and analysis is investigated. The problem considered is the control of the normal shock location in the VDC inlet, which is a mixed-compression, supersonic, variable-geometry inlet of a jet engine. A quasi-one-dimensional set of fluid equations incorporating bleed and moving walls is developed. An object-oriented environment is developed for simulation of flow systems under closed-loop control. A public interface between the controller and fluid classes is defined. A linear model representing the dynamics of the VDC inlet is developed from the finite difference equations, and its eigenstructure is analyzed. The order of this model is reduced using the square root balanced model reduction method to produce a reduced-order linear model that is suitable for control design and analysis tasks. A modification to this method that improves the accuracy of the reduced-order linear model for the purpose of sensor placement is presented and analyzed. The reduced-order linear model is used to develop a sensor placement method that quantifies as a function of the sensor location the ability of a sensor to provide information on the variable of interest for control. This method is used to develop a sensor placement metric for the VDC inlet. The reduced-order linear model is also used to design a closed loop control system to control the shock position in the VDC inlet. The object-oriented simulation code is used to simulate the nonlinear fluid equations under closed-loop control.
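
    The square-root balanced truncation used to obtain the reduced-order model can be illustrated in a self-contained way, as in the sketch below for a randomly generated stable state-space model; this is a generic textbook-style implementation, not the dissertation's code or its modified variant for sensor placement.

      # Square-root balanced truncation of a random stable SISO system.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

      rng = np.random.default_rng(1)
      n, r = 8, 3                                   # full and reduced orders
      A = rng.standard_normal((n, n))
      A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # force stability
      B = rng.standard_normal((n, 1))
      C = rng.standard_normal((1, n))

      # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
      P = solve_continuous_lyapunov(A, -B @ B.T)
      Q = solve_continuous_lyapunov(A.T, -C.T @ C)

      Lp = cholesky(P, lower=True)
      Lq = cholesky(Q, lower=True)
      U, s, Vt = svd(Lq.T @ Lp)                     # s holds Hankel singular values

      T = Lp @ Vt.T @ np.diag(s ** -0.5)            # balancing transformation
      Tinv = np.diag(s ** -0.5) @ U.T @ Lq.T

      Ar = (Tinv @ A @ T)[:r, :r]                   # keep the r largest states
      Br = (Tinv @ B)[:r, :]
      Cr = (C @ T)[:, :r]
      print("Hankel singular values:", np.round(s, 4))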

  15. The Satellite View of Extra-Tropical Stratosphere-Troposphere Exchange and the UT/LS

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark R.

    2004-01-01

    This talk will review satellite studies which have helped define the UT/LS and stratosphere-troposphere exchange. Satellites have provided a global perspective but have had limited temporal and spatial measurements for stratosphere-troposphere exchange (STE) studies. Nonetheless, long lived tracer measurements from satellites can be used as proxies for age-of-air can thus provide estimates of mixing and transport processes in the UT/LS. These measurements can be compared to model estimates of the mean age-of-air and trace gas fluxes providing an important model diagnostic. With the launch of EOS Aura, the potential for satellite trace gas measurements of the lower-most stratosphere and STE is significantly improved, and Aura s mission will be briefly described.

  16. A computational investigation of fuel mixing in a hypersonic scramjet

    NASA Technical Reports Server (NTRS)

    Fathauer, Brett W.; Rogers, R. C.

    1993-01-01

    A parabolized Navier-Stokes code, SHIP3D, is used to numerically investigate the mixing between air injection and hydrogen injection from a swept ramp injector configuration into either a mainstream low-enthalpy flow or a hypervelocity test flow. The mixing comparisons between air and hydrogen injection reveal the importance of matching injectant-to-mainstream mass flow ratios. In flows with the same injectant-to-mainstream dynamic pressure ratio, the mixing definition was altered for the air injection cases. Comparisons of the computed results indicate that the air injection cases overestimate the mixing performance associated with hydrogen injection simulation. A lifting length parameter, which accounts for the time a fluid particle traverses the mixing region, is defined and used to relate injectant mixing in hypervelocity flows to that in nonreactive, low-enthalpy flows.

  17. An Efficient Alternative Mixed Randomized Response Procedure

    ERIC Educational Resources Information Center

    Singh, Housila P.; Tarray, Tanveer A.

    2015-01-01

    In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…

  18. Energy drink consumption and the risk of alcohol use disorder among a national sample of adolescents and young adults.

    PubMed

    Emond, Jennifer A; Gilbert-Diamond, Diane; Tanski, Susanne E; Sargent, James D

    2014-12-01

    To assess the association between energy drink use and hazardous alcohol use among a national sample of adolescents and young adults. Cross-sectional analysis of 3342 youth aged 15-23 years recruited for a national survey about media and alcohol use. Energy drink use was defined as recent use or ever mixed-use with alcohol. Outcomes were ever alcohol use and 3 hazardous alcohol use outcomes measured with the Alcohol Use Disorders Identification Test (AUDIT): ever consuming 6 or more drinks at once (6+ binge drinking) and clinical criteria for hazardous alcohol use as defined for adults (8+AUDIT) and for adolescents (4+AUDIT). Among 15-17 year olds (n = 1508), 13.3% recently consumed an energy drink, 9.7% ever consumed an energy drink mixed with alcohol, and 47.1% ever drank alcohol. Recent energy drink use predicted ever alcohol use among 15-17-year-olds only (OR 2.58; 95% CI 1.77-3.77). Of these 15-17-year-olds, 17% met the 6+ binge drinking criteria, 7.2% met the 8+AUDIT criteria, and 16.0% met the 4+AUDIT criteria. Rates of energy drink use and all alcohol use outcomes increased with age. Ever mixed-use with alcohol predicted 6+ binge drinking (OR 4.69; 95% CI 3.70-5.94), 8+AUDIT (OR 3.25; 95% CI 2.51-4.21), and 4+AUDIT (OR 4.15; 95% CI 3.27-5.25) criteria in adjusted models among all participants, with no evidence of modification by age. Positive associations between energy drink use and hazardous alcohol use behaviors are not limited to youth in college settings. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Energy drink consumption and the risk of alcohol use disorder among a national sample of adolescents and young adults

    PubMed Central

    Emond, Jennifer A.; Gilbert-Diamond, Diane; Tanski, Susanne E.; Sargent, James D.

    2014-01-01

    Objective To assess the association between energy drink use and hazardous alcohol use among a national sample of adolescents and young adults. Study design Cross-sectional analysis of 3,342 youth aged 15-23 years recruited for a national survey about media and alcohol use. Energy drink use was defined as recent use or ever mixed-use with alcohol. Outcomes were ever alcohol use and three hazardous alcohol use outcomes measured with the Alcohol Use Disorders Identification Test (AUDIT): ever consuming 6 or more drinks at once (6+ binge drinking) and clinical criteria for hazardous alcohol use as defined for adults (8+AUDIT) and for adolescents (4+AUDIT). Results Among 15-17-year-olds (n=1,508), 13.3% recently consumed an energy drink, 9.7% ever consumed an energy drink mixed with alcohol, and 47.1% ever drank alcohol. Recent energy drink use predicted ever alcohol use among 15-17-year-olds only (OR: 2.58; 95% CI: 1.77-3.77). Of these 15-17-year-olds, 17% met the 6+ binge drinking criteria, 7.2% met the 8+AUDIT criteria, and 16.0% met the 4+AUDIT criteria. Rates of energy drink use and all alcohol use outcomes increased with age. Ever mixed-use with alcohol predicted 6+ binge drinking (OR 4.69; 95% CI: 3.70-5.94), 8+AUDIT (OR 3.25; 95% CI: 2.51-4.21), and 4+AUDIT (OR 4.15; 95% CI: 3.27-5.25) criteria in adjusted models among all participants, with no evidence of modification by age. Conclusions Positive associations between energy drink use and hazardous alcohol use behaviors are not limited to youth in college settings. PMID:25294603

  20. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
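
    A minimal numerical illustration of the decomposition described above, under one plausible reading of the definitions: aging by mixing is the difference between AoA and RCTT, and the mixing efficiency is the relative increase of AoA by mixing, i.e. (AoA - RCTT) / RCTT. The numbers are made up for illustration and are not model output from CCMVal-2 or CCMI-1.

      import numpy as np

      aoa  = np.array([4.2, 5.1, 3.6])   # years, hypothetical mean age of air per model
      rctt = np.array([2.6, 2.8, 2.9])   # years, hypothetical residual circulation transit times

      aging_by_mixing   = aoa - rctt
      mixing_efficiency = aging_by_mixing / rctt
      print(mixing_efficiency)           # compare with the 0.24-1.02 spread reported above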

  1. A 3-D Coupled CFD-DSMC Solution Method With Application to the Mars Sample Return Orbiter

    NASA Technical Reports Server (NTRS)

    Glass, Christopher E.; Gnoffo, Peter A.

    2000-01-01

    A method to obtain coupled Computational Fluid Dynamics-Direct Simulation Monte Carlo (CFD-DSMC) 3-D flow field solutions for highly blunt bodies at low incidence is presented and applied to one concept of the Mars Sample Return Orbiter vehicle as a demonstration of the technique. CFD is used to solve the high-density blunt forebody flow, defining an inflow boundary condition for a DSMC solution of the afterbody wake flow. By combining the two techniques in the flow regions where each is most applicable, the entire mixed flow field is modeled in an appropriate manner.

  2. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, correlation structures including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) structure improved the fitting precision of the branch diameter and length mixed-effect models significantly, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe the heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function improved the fitting effect of the branch angle mixed model significantly, whereas the CF2 function improved the fitting effect of the branch diameter and length mixed models significantly. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compared with the traditional regression model, for branch size prediction in Pinus koraiensis plantations.
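
    The core of such a model is a random intercept per tree on top of tree-level fixed effects. The sketch below shows that structure with statsmodels; the study itself used SAS's MIXED procedure, and the column names ('branch_diameter', 'dbh', 'tree_height', 'tree_id') are hypothetical. The AR(1) residual correlation and variance functions mentioned above would typically be specified with, e.g., R's nlme (corAR1, varPower), which statsmodels' MixedLM does not provide.

      import pandas as pd
      import statsmodels.formula.api as smf

      def fit_branch_model(branches: pd.DataFrame):
          # Fixed effects for tree-level predictors, random intercept grouped by tree.
          model = smf.mixedlm("branch_diameter ~ dbh + tree_height",
                              data=branches, groups=branches["tree_id"])
          result = model.fit()
          print(result.summary())
          return result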

  3. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    NASA Astrophysics Data System (ADS)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo (MCMC) method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
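
    A hedged sketch of the emulate-then-calibrate workflow described above: a Gaussian process is trained on toy "model output" at a set of parameter settings, and a simple random-walk Metropolis sampler draws from the posterior given one synthetic observation. The toy forward model, prior bounds, and observation error are illustrative assumptions, not the UVic ESCM setup.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)

      def forward_model(theta):                 # stand-in for an expensive climate model
          s, kv = theta
          return 0.8 * s + 0.3 * np.log(kv)

      # Design points (parameter settings at which the "model" was run).
      design = rng.uniform([1.0, 0.1], [6.0, 2.0], size=(40, 2))
      outputs = np.array([forward_model(t) for t in design])
      emulator = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0])).fit(design, outputs)

      obs, obs_sigma = 2.5, 0.2                 # synthetic observation and its error

      def log_post(theta):
          if not (1.0 <= theta[0] <= 6.0 and 0.1 <= theta[1] <= 2.0):
              return -np.inf                    # uniform prior bounds
          pred = emulator.predict(theta.reshape(1, -1))[0]
          return -0.5 * ((pred - obs) / obs_sigma) ** 2

      # Random-walk Metropolis over the two parameters.
      theta, chain = np.array([3.0, 1.0]), []
      lp = log_post(theta)
      for _ in range(5000):
          prop = theta + rng.normal(scale=[0.3, 0.1])
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta.copy())
      print(np.mean(chain, axis=0))             # posterior mean of the two parameters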

  4. Atmospheric measurements of peroxyacetyl nitrate and other organic nitrates at high latitudes - Possible sources and sinks

    NASA Technical Reports Server (NTRS)

    Singh, H. B.; O'Hara, D.; Herlth, D.; Bradshaw, J. D.; Sandholm, S. T.; Gregory, G. L.; Sachse, G. W.; Blake, D. R.; Crutzen, P. J.; Kanakidou, M. A.

    1992-01-01

    Measurements of PAN and other reactive nitrogen species during the NASA Arctic Boundary Layer Expedition (ABLE 3A) are described, their north-south and east-west gradients in the free troposphere are characterized, and the sources and sinks of PAN and NO(y) are assessed. Large concentrations of PAN and NO(y) are present in the Arctic/sub-Arctic troposphere of the Northern Hemisphere during the summer. Mixing ratios of PAN and a variety of other molecules are more abundant in the free troposphere compared to the boundary layer. Coincident PAN and O3 atmospheric structures suggest that phenomena that define PAN also define the corresponding O3 behavior. Model calculations, correlations between NO(y) and anthropogenic tracers, and the compositions of NO(y) itself suggest that the Arctic/sub-Arctic reactive nitrogen measured during ABLE 3A is predominantly of anthropogenic origin with a minor component from the stratosphere.

  5. A one-dimensional sectional aerosol model integrated with mesoscale meteorological data to study marine boundary layer aerosol dynamics

    NASA Astrophysics Data System (ADS)

    Caffrey, Peter F.; Hoppel, William A.; Shi, Jainn J.

    2006-12-01

    The dynamics of aerosols in the marine boundary layer are simulated with a one-dimensional, multicomponent, sectional aerosol model using vertical profiles of turbulence, relative humidity, temperature, vertical velocity, cloud cover, and precipitation provided by 3-D mesoscale meteorological model output. The Naval Research Laboratory's (NRL) sectional aerosol model MARBLES (Fitzgerald et al., 1998a) was adapted to use hourly meteorological input taken from NRL's Coupled Ocean-Atmosphere Prediction System (COAMPS). COAMPS-generated turbulent mixing coefficients and large-scale vertical velocities determine vertical exchange within the marine boundary layer and exchange with the free troposphere. Air mass back trajectories were used to define the air column history along which the meteorology was retrieved for use with the aerosol model. Details on the integration of these models are described here, as well as a description of improvements made to the aerosol model, including transport by large-scale vertical motions (such as subsidence and lifting), a revised sea-salt aerosol source function, and separate tracking of sulfate mass from each of the five sources (free tropospheric, nucleated, condensed from gas phase oxidation products, cloud-processed, and produced from heterogeneous oxidation of S(IV) on sea-salt aerosol). Results from modeling air masses arriving at Oahu, Hawaii, are presented, and the relative contribution of free-tropospheric sulfate particles versus sea-salt aerosol from the surface to CCN concentrations is discussed. Limitations and benefits of the method are presented, as are sensitivity analyses of the effect of large-scale vertical motions versus turbulent mixing.

  6. Towards the next generation of simplified Dark Matter models

    NASA Astrophysics Data System (ADS)

    Albert, Andreas; Bauer, Martin; Brooke, Jim; Buchmueller, Oliver; Cerdeño, David G.; Citron, Matthew; Davies, Gavin; de Cosa, Annapaola; De Roeck, Albert; De Simone, Andrea; Du Pree, Tristan; Flaecher, Henning; Fairbairn, Malcolm; Ellis, John; Grohsjean, Alexander; Hahn, Kristian; Haisch, Ulrich; Harris, Philip C.; Khoze, Valentin V.; Landsberg, Greg; McCabe, Christopher; Penning, Bjoern; Sanz, Veronica; Schwanenberger, Christian; Scott, Pat; Wardle, Nicholas

    2017-06-01

    This White Paper is an input to the ongoing discussion about the extension and refinement of simplified Dark Matter (DM) models. It is not intended as a comprehensive review of the discussed subjects, but instead summarises ideas and concepts arising from a brainstorming workshop that can be useful when defining the next generation of simplified DM models (SDMM). In this spirit, based on two concrete examples, we show how existing SDMM can be extended to provide a more accurate and comprehensive framework to interpret and characterise collider searches. In the first example we extend the canonical SDMM with a scalar mediator to include mixing with the Higgs boson. We show that this approach not only provides a better description of the underlying kinematic properties that a complete model would possess, but also offers the option of using this more realistic class of scalar mixing models to compare and combine consistently searches based on different experimental signatures. The second example outlines how a new physics signal observed in a visible channel can be connected to DM by extending a simplified model including effective couplings. In the next part of the White Paper we outline other interesting options for SDMM that could be studied in more detail in the future. Finally, we review important aspects of supersymmetric models for DM and use them to propose how to develop more complete SDMMs. This White Paper is a summary of the brainstorming meeting "Next generation of simplified Dark Matter models" that took place at Imperial College, London on May 6, 2016, and corresponding follow-up studies on selected subjects.

  7. Low-cloud characteristics over the tropical western Pacific from ARM observations and CAM5 simulations

    DOE PAGES

    Chandra, Arunchandra S.; Zhang, Chidong; Klein, Stephen A.; ...

    2015-09-10

    Here, this study evaluates the ability of the Community Atmospheric Model version 5 (CAM5) to reproduce low clouds observed by the Atmospheric Radiation Measurement (ARM) cloud radar at Manus Island of the tropical western Pacific during the Years of Tropical Convection. Here low clouds are defined as clouds with their tops below the freezing level and bases within the boundary layer. Low-cloud statistics in CAM5 simulations and ARM observations are compared in terms of their general occurrence, mean vertical profiles, fraction of precipitating versus nonprecipitating events, diurnal cycle, and monthly time series. Other types of clouds are included to put the comparison in a broader context. The comparison shows that the model overproduces total clouds and their precipitation fraction but underestimates low clouds in general. The model, however, produces excessive low clouds in a thin layer between 954 and 930 hPa, which coincides with excessive humidity near the top of the mixed layer. This suggests that the erroneously excessive low clouds stem from parameterization of both cloud and turbulence mixing. The model also fails to produce the observed diurnal cycle in low clouds, not exclusively due to the model's coarse grid spacing that does not resolve Manus Island. Lastly, this study demonstrates the utility of ARM long-term cloud observations in the tropical western Pacific in verifying low clouds simulated by global climate models, illustrates issues of using ARM observations in model validation, and provides an example of severe model biases in producing observed low clouds in the tropical western Pacific.

  8. Preventable mix-ups of tuberculin and vaccines: reports to the US Vaccine and Drug Safety Reporting Systems.

    PubMed

    Chang, Soju; Pool, Vitali; O'Connell, Kathryn; Polder, Jacquelyn A; Iskander, John; Sweeney, Colleen; Ball, Robert; Braun, M Miles

    2008-01-01

    Errors involving the mix-up of tuberculin purified protein derivative (PPD) and vaccines leading to adverse reactions and unnecessary medical management have been reported previously. To determine the frequency of PPD-vaccine mix-ups reported to the US Vaccine Adverse Event Reporting System (VAERS) and the Adverse Event Reporting System (AERS), characterize adverse events and clusters involving mix-ups and describe reported contributory factors. We reviewed AERS reports from 1969 to 2005 and VAERS reports from 1990 to 2005. We defined a mix-up error event as an incident in which a single patient or a cluster of patients inadvertently received vaccine instead of a PPD product or received a PPD product instead of vaccine. We defined a cluster as inadvertent administration of PPD or vaccine products to more than one patient in the same facility within 1 month. Of 115 mix-up events identified, 101 involved inadvertent administration of vaccines instead of PPD. Product confusion involved PPD and multiple vaccines. The annual number of reported mix-ups increased from an average of one event per year in the early 1990s to an average of ten events per year in the early part of this decade. More than 240 adults and children were affected and the majority reported local injection site reactions. Four individuals were hospitalized (all recovered) after receiving the wrong products. Several patients were inappropriately started on tuberculosis prophylaxis as a result of a vaccine local reaction being interpreted as a positive tuberculin skin test. Reported potential contributory factors involved both system factors (e.g. similar packaging) and human errors (e.g. failure to read label before product administration). To prevent PPD-vaccine mix-ups, proper storage, handling and administration of vaccine and PPD products is necessary.

  9. BEYOND MIXING-LENGTH THEORY: A STEP TOWARD 321D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnett, W. David; Meakin, Casey; Viallet, Maxime

    2015-08-10

    We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier–Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier–Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier–Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.

  10. Beyond Mixing-length Theory: A Step Toward 321D

    NASA Astrophysics Data System (ADS)

    Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav

    2015-08-01

    We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier-Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier-Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier-Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.

  11. Anatomy of a metabentonite: nucleation and growth of illite crystals and their coalescence into mixed-layer illite/smectite

    USGS Publications Warehouse

    Eberl, D.D.; Blum, A.E.; Serravezza, M.

    2011-01-01

    The illite layer content of mixed-layer illite/smectite (I/S) in a 2.5 m thick, zoned, metabentonite bed from Montana decreases regularly from the edges to the center of the bed. Traditional X-ray diffraction (XRD) pattern modeling using Markovian statistics indicated that this zonation results from a mixing in different proportions of smectite-rich R0 I/S and illite-rich R1 I/S, with each phase having a relatively constant illite layer content. However, a new method for modeling XRD patterns of I/S indicates that R0 and R1 I/S in these samples are not separate phases (in the mineralogical sense of the word), but that the samples are composed of illite crystals that have continuous distributions of crystal thicknesses, and of 1 nm thick smectite crystals. The shapes of these distributions indicate that the crystals were formed by simultaneous nucleation and growth. XRD patterns for R0 and R1 I/S arise by interparticle diffraction from a random stacking of the crystals, with swelling interlayers formed at interfaces between crystals from water or glycol that is sorbed on crystal surfaces. It is the thickness distributions of smectite and illite crystals (also termed fundamental particles, or Nadeau particles), rather than XRD patterns for mixed-layer I/S, that are the more reliable indicators of geologic history, because such distributions are composed of well-defined crystals that are not affected by differences in surface sorption and particle arrangements, and because their thickness distribution shapes conform to the predictions of crystal growth theory, which describes their genesis.

  12. On Mixed Methods: Approaches to Language and Literacy Research (An NCRLL Volume). Language & Literacy Series (NCRLL Collection)

    ERIC Educational Resources Information Center

    Calfee, Robert; Sperling, Melanie

    2010-01-01

    This book examines the use of mixed methods for conducting language and literacy research, defining how and why this approach is successful for solving problems and clarifying issues that researchers encounter. Using research findings, the authors explore how an intermingling of multiple methods expands the possibilities of observation and…

  13. Integrating Quantitative and Qualitative Data in Mixed Methods Research--Challenges and Benefits

    ERIC Educational Resources Information Center

    Almalki, Sami

    2016-01-01

    This paper is concerned with investigating the integration of quantitative and qualitative data in mixed methods research and whether, in spite of its challenges, it can be of positive benefit to many investigative studies. The paper introduces the topic, defines the terms with which this subject deals and undertakes a literature review to outline…

  14. 21 CFR 184.1027 - Mixed carbohydrase and protease enzyme product.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... in § 170.3(o)(9) of this chapter, to hydrolyze proteins or carbohydrates. (2) The ingredient is used... beverages, as defined in § 170.3(n)(2) of this chapter, candy, nutritive sweeteners, and protein... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Mixed carbohydrase and protease enzyme product...

  15. 21 CFR 184.1027 - Mixed carbohydrase and protease enzyme product.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... in § 170.3(o)(9) of this chapter, to hydrolyze proteins or carbohydrates. (2) The ingredient is used... beverages, as defined in § 170.3(n)(2) of this chapter, candy, nutritive sweeteners, and protein... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Mixed carbohydrase and protease enzyme product...

  16. 21 CFR 184.1027 - Mixed carbohydrase and protease enzyme product.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... in § 170.3(o)(9) of this chapter, to hydrolyze proteins or carbohydrates. (2) The ingredient is used... beverages, as defined in § 170.3(n)(2) of this chapter, candy, nutritive sweeteners, and protein... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Mixed carbohydrase and protease enzyme product...

  17. 21 CFR 184.1027 - Mixed carbohydrase and protease enzyme product.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... in § 170.3(o)(9) of this chapter, to hydrolyze proteins or carbohydrates. (2) The ingredient is used... beverages, as defined in § 170.3(n)(2) of this chapter, candy, nutritive sweeteners, and protein... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Mixed carbohydrase and protease enzyme product. 184...

  18. Oligonuclear ferrocene amides: mixed-valent peptides and potential redox-switchable foldamers.

    PubMed

    Siebler, Daniel; Linseis, Michael; Gasi, Teuta; Carrella, Luca M; Winter, Rainer F; Förster, Christoph; Heinze, Katja

    2011-04-11

    Trinuclear ferrocene tris-amides were synthesized from an Fmoc- or Boc-protected ferrocene amino acid, and hydrogen-bonded zigzag conformations were determined by NMR spectroscopy, molecular modelling, and X-ray diffraction. In these ordered secondary structures orientation of the individual amide dipole moments approximately in the same direction results in a macrodipole moment similar to that of α-helices composed of α-amino acids. Unlike ordinary α-amino acids, the building blocks in these ferrocene amides with defined secondary structure can be sequentially oxidized to mono-, di-, and trications. Singly and doubly charged mixed-valent cations were probed experimentally by Vis/NIR, paramagnetic ¹H NMR and Mössbauer spectroscopy and investigated theoretically by DFT calculations. According to the appearance of intervalence charge transfer (IVCT) bands in solution, the ferrocene/ferrocenium amides are described as Robin-Day class II mixed-valent systems. Mössbauer spectroscopy indicates trapped valences in the solid state. The secondary structure of trinuclear ferrocene tris-amides remains intact (coiled form) upon oxidation to mono- and dications according to DFT calculations, while oxidation to the trication should break the intramolecular hydrogen bonding and unfold the ferrocene peptide (uncoiled form).

  19. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
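
    For orientation, the sketch below implements the classical coalescence/dispersion (Curl) mixing step that the paper modifies: at discrete events, random particle pairs jump to their pair mean, which conserves the mean and destroys variance. This is the discrete jump model whose time discontinuity the paper addresses, not the continuous variant proposed there; the population size and mixing fraction are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)
      phi = rng.choice([0.0, 1.0], size=10000)      # initially unmixed scalar (double-delta pdf)

      def curl_mixing_step(phi, mix_fraction=0.1):
          n_pairs = int(mix_fraction * phi.size / 2)
          idx = rng.choice(phi.size, size=2 * n_pairs, replace=False)
          a, b = idx[:n_pairs], idx[n_pairs:]
          mean = 0.5 * (phi[a] + phi[b])
          phi[a] = mean                             # both partners jump to the pair mean
          phi[b] = mean
          return phi

      for _ in range(200):
          phi = curl_mixing_step(phi)
      print(phi.mean(), phi.var())                  # mean conserved, variance decays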

  20. Developing A New Predictive Dispersion Equation Based on Tidal Average (TA) Condition in Alluvial Estuaries

    NASA Astrophysics Data System (ADS)

    Anak Gisen, Jacqueline Isabella; Nijzink, Remko C.; Savenije, Hubert H. G.

    2014-05-01

    Dispersion is the mathematical representation of tidal mixing between sea water and fresh water in alluvial estuaries. The definition of dispersion somehow remains unclear, as it is not directly measurable. The role of dispersion is only meaningful if it is related to the appropriate temporal and spatial scale of mixing, which are identified as the tidal period, tidal excursion (longitudinal), width of estuary (lateral) and mixing depth (vertical). Moreover, the mixing pattern determines the salt intrusion length in an estuary. If a physically based description of the dispersion is defined, this would allow the analytical solution of the salt intrusion problem. The objective of this study is to develop a predictive equation for estimating the dispersion coefficient at tidal average (TA) condition, which can be applied in the salt intrusion model to predict the salinity profile for any estuary during different events. Utilizing available data of 72 measurements in 27 estuaries (including 6 recently studied estuaries in Malaysia), regression analysis has been performed with various combinations of dimensionless parameters. The predictive dispersion equations have been developed for two different locations, at the mouth D0TA and at the inflection point D1TA (where the convergence length changes). Regressions have been carried out with two separate datasets: 1) more reliable data for calibration; and 2) less reliable data for validation. The combination of dimensionless ratios that gives the best performance is selected as the final outcome, which indicates that the dispersion coefficient depends on the tidal excursion, tidal range, tidal velocity amplitude, friction and the Richardson number. A limitation of the newly developed equation is that the friction is generally unknown. In order to compensate for this problem, further analysis has been performed adopting the hydraulic model of Cai et al. (2012) to estimate the friction and depth. Keywords: dispersion, alluvial estuaries, mixing, salt intrusion, predictive equation
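
    A hedged sketch of the regression step described above: a power-law predictive equation for a dimensionless dispersion coefficient is fitted to dimensionless predictors by least squares in log space. The predictors and the synthetic data are placeholders, not the 72-measurement calibration/validation set of the study.

      import numpy as np

      def fit_power_law(D_dimless, ratios):
          """ratios: (n_samples, n_ratios) array of dimensionless predictors.
          Fits D* = c * prod(ratio_k ** b_k) by linear regression on logarithms."""
          X = np.column_stack([np.ones(len(D_dimless)), np.log(ratios)])
          coef, *_ = np.linalg.lstsq(X, np.log(D_dimless), rcond=None)
          c, exponents = np.exp(coef[0]), coef[1:]
          return c, exponents

      # Synthetic example with two dimensionless predictors and lognormal scatter.
      rng = np.random.default_rng(2)
      ratios = rng.uniform(0.5, 5.0, size=(72, 2))
      D_true = 0.8 * ratios[:, 0] ** 1.2 * ratios[:, 1] ** -0.5
      c, exps = fit_power_law(D_true * rng.lognormal(sigma=0.1, size=72), ratios)
      print(c, exps)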

  1. Lagrange constraint neural network for audio varying BSS

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.

    2002-03-01

    The Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume an artificial neural network (ANN) model at all but is derived from the first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) − λ·C(S, A(x,t)), which incorporates the measurement constraint C(S, A(x,t)) = λ([A]S − X) + (λ₀ − 1)(Σᵢ sᵢ − 1) using the vector Lagrange multiplier λ and the a priori Shannon entropy f(S) = −Σᵢ sᵢ log sᵢ as the contrast function for an unknown number of independent sources sᵢ. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatially and temporally varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatially-temporally varying BSS in speech and music audio mixing. We review and compare LCNN with the popular a-posteriori maximum entropy methodologies defined by an ANN weight matrix [W] with sigmoid post-processing, H(Y = σ([W]X)), by Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). The two are mirror-symmetric MaxEnt methodologies and both work for a constant unknown mixing matrix [A]; the major difference is whether the ensemble average is taken over neighborhood pixel data X (in BSAO) or over the a priori source variables S (in LCNN), which dictates which method works for a spatially-temporally varying [A(x,t)] that would not allow the neighborhood pixel average. We expect sharper de-mixing by the LCNN method, demonstrated in a controlled ground-truth experiment on a simulated time-varying mixture of two music pieces of similar kurtosis (15 seconds of Saint-Saëns' Swan and a Rachmaninov cello concerto).
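
    The LCNN method itself is not available as a library; the sketch below only shows the constant-mixing-matrix ICA baseline (the BSAO family mentioned above) using scikit-learn's FastICA on two synthetic "sources". It does not implement the LCNN approach for a space-time varying A(x,t).

      import numpy as np
      from sklearn.decomposition import FastICA

      t = np.linspace(0, 8, 4000)
      sources = np.column_stack([np.sin(2 * np.pi * 3 * t),            # stand-in for "music 1"
                                 np.sign(np.sin(2 * np.pi * 5 * t))])  # stand-in for "music 2"
      A = np.array([[1.0, 0.6], [0.4, 1.0]])                           # constant mixing matrix
      mixtures = sources @ A.T

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(mixtures)        # estimated sources (up to scale and order)
      print(ica.mixing_.shape)                       # estimated mixing matrix, shape (2, 2)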

  2. Mixing effects on geothermometric calculations of the Newdale geothermal area in the Eastern Snake River Plain, Idaho

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanashayam Neupane; Earl D. Mattson; Travis L. McLing

    The Newdale geothermal area in Madison and Fremont Counties in Idaho is a known geothermal resource area whose thermal anomaly is expressed by high thermal gradients and numerous wells producing warm water (up to 51 °C). Geologically, the Newdale geothermal area is located within the Eastern Snake River Plain (ESRP) that has a time-transgressive history of sustained volcanic activities associated with the passage of Yellowstone Hotspot from the southwestern part of Idaho to its current position underneath Yellowstone National Park in Wyoming. Locally, the Newdale geothermal area is located within an area that was subjected to several overlapping and nested caldera complexes. The Tertiary caldera forming volcanic activities and associated rocks have been buried underneath Quaternary flood basalts and felsic volcanic rocks. Two southeast dipping young faults (Teton dam fault and an unnamed fault) in the area provide the structural control for this localized thermal anomaly zone. Geochemically, water samples from numerous wells in the area can be divided into two broad groups – Na-HCO3 and Ca-(Mg)-HCO3 type waters and are considered to be the product of water-rhyolite and water-basalt interactions, respectively. Each type of water can further be subdivided into two groups depending on their degree of mixing with other water types or interaction with other rocks. For example, some bivariate plots indicate that some Ca-(Mg)-HCO3 water samples have interacted only with basalts whereas some samples of this water type also show limited interaction with rhyolite or mixing with Na-HCO3 type water. Traditional geothermometers [e.g., silica variants, Na-K-Ca (Mg-corrected)] indicate lower temperatures for this area; however, a traditional silica-enthalpy mixing model results in higher reservoir temperatures. We applied a new multicomponent equilibrium geothermometry tool (e.g., Reservoir Temperature Estimator, RTEst) that is based on inverse geochemical modeling which explicitly accounts for boiling, mixing, and CO2 degassing. RTEst modeling results indicate that the well water samples are mixed with up to 75% of the near surface groundwater. Relatively, Ca-(Mg)-HCO3 type water samples are more diluted than the Na-HCO3 type water samples. However, both water types result in similar reservoir temperatures, up to 150 °C. Samples in the vicinity of faults produced higher reservoir temperatures than samples away from the faults. Although both the silica-enthalpy mixing and RTEst models indicated promising geothermal reservoir temperatures, evaluation of the subsurface permeability and extent of the thermal anomaly is needed to define the hydrothermal potential of the Newdale geothermal resource.
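
    A hedged sketch of why mixing matters for geothermometry: if a well sample contains a fraction of cold groundwater, the thermal end-member silica can be back-calculated before a quartz geothermometer is applied, raising the inferred reservoir temperature. The quartz equation used here is a commonly quoted Fournier-type no-steam-loss form (reproduced from memory), and the silica concentrations and mixing fraction are illustrative values, not Newdale data or the RTEst method.

      import numpy as np

      def quartz_temperature_C(sio2_mg_per_kg):
          # Fournier-type quartz geothermometer (conductive cooling, no steam loss).
          return 1309.0 / (5.19 - np.log10(sio2_mg_per_kg)) - 273.15

      def unmixed_silica(sio2_sample, sio2_cold, cold_fraction):
          # Remove the diluting cold-water contribution from the measured silica.
          return (sio2_sample - cold_fraction * sio2_cold) / (1.0 - cold_fraction)

      sio2_sample, sio2_cold, f_cold = 60.0, 20.0, 0.75    # mg/kg, mg/kg, cold-water fraction
      print(quartz_temperature_C(sio2_sample))             # apparent temperature, uncorrected
      print(quartz_temperature_C(unmixed_silica(sio2_sample, sio2_cold, f_cold)))  # mixing-corrected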

  3. Optimal generator bidding strategies for power and ancillary services

    NASA Astrophysics Data System (ADS)

    Morinec, Allen G.

    As the electric power industry transitions to a deregulated market, power transactions are made upon price rather than cost. Generator companies are interested in maximizing their profits rather than overall system efficiency. A method to equitably compensate generation providers for real power, and ancillary services such as reactive power and spinning reserve, will ensure a competitive market with an adequate number of suppliers. Optimizing the generation product mix during bidding is necessary to maximize a generator company's profits. The objective of this research work is to determine and formulate appropriate optimal bidding strategies for a generation company in both the energy and ancillary services markets. These strategies should incorporate the capability curves of their generators as constraints to define the optimal product mix and price offered in the day-ahead and real-time spot markets. In order to achieve such a goal, a two-player model was composed to simulate market auctions for power generation. A dynamic game methodology was developed to identify Nash equilibria and mixed-strategy Nash equilibria as optimal generation bidding strategies for two-player non-cooperative variable-sum matrix games with incomplete information. These games integrated the generation product mix of real power, reactive power, and spinning reserve, with the generators' capability curves as constraints. The research includes simulations of market auctions, where strategies were tested for generators with different unit constraints, costs, types of competitors, strategies, and demand levels. Studies on the capability of large hydrogen-cooled synchronous generators were utilized to derive useful equations that define the exact shape of the capability curve from the intersections of the arcs defined by the centers and radial vectors of the rotor, stator, and steady-state stability limits. The available reactive reserve and spinning reserve were calculated given a generator operating point in the P-Q plane. Four computer programs were developed to automatically perform the market auction simulations using the equal incremental cost rule. The software calculates the payoffs for the two competitors, dispatches six generators, and allocates ancillary services for 64 combinations of bidding strategies, three levels of system demand, and three different types of competitors. Matrix game theory was utilized to calculate Nash equilibrium solutions and mixed-strategy Nash solutions as the optimal generator bidding strategies. A method to incorporate ancillary services into the generation bidding strategy, to assure an adequate supply of ancillary services, and to allocate these necessary resources to the on-line units was devised. The optimal generator bid strategy in a power auction was shown to be the Nash equilibrium solution found in two-player variable-sum matrix games.
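
    A minimal sketch of the game-theoretic step described above: finding pure-strategy Nash equilibria of a two-player bidding game by best-response checking. The payoff matrices are illustrative placeholders, not the simulated market auction payoffs of the dissertation, and mixed-strategy equilibria are not computed here.

      import numpy as np

      def pure_nash_equilibria(payoff_a, payoff_b):
          """Return (row, col) strategy pairs where neither player can gain by deviating
          unilaterally. payoff_a[i, j] and payoff_b[i, j] are the payoffs of players A and B
          when A plays strategy i and B plays strategy j."""
          equilibria = []
          for i in range(payoff_a.shape[0]):
              for j in range(payoff_a.shape[1]):
                  best_for_a = payoff_a[i, j] >= payoff_a[:, j].max()
                  best_for_b = payoff_b[i, j] >= payoff_b[i, :].max()
                  if best_for_a and best_for_b:
                      equilibria.append((i, j))
          return equilibria

      # Two generators, each choosing a "low" or "high" bid strategy (hypothetical payoffs).
      A = np.array([[30.0, 55.0], [20.0, 40.0]])
      B = np.array([[30.0, 20.0], [55.0, 40.0]])
      print(pure_nash_equilibria(A, B))            # [(0, 0)] for these payoffs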

  4. Cloud microphysics and aerosol indirect effects in the global climate model ECHAM5-HAM

    NASA Astrophysics Data System (ADS)

    Lohmann, U.; Stier, P.; Hoose, C.; Ferrachat, S.; Roeckner, E.; Zhang, J.

    2007-03-01

    The double-moment cloud microphysics scheme from ECHAM4 has been coupled to the size-resolved aerosol scheme ECHAM5-HAM. ECHAM5-HAM predicts the aerosol mass and number concentrations and the aerosol mixing state. This results in a much better agreement with observed vertical profiles of the black carbon and aerosol mass mixing ratios than with the previous version ECHAM4, where only the different aerosol mass mixing ratios were predicted. Also, the simulated liquid, ice and total water content and the cloud droplet and ice crystal number concentrations as a function of temperature in stratiform mixed-phase clouds between 0 and -35°C agree much better with aircraft observations in the ECHAM5 simulations. ECHAM5 performs better because more realistic aerosol concentrations are available for cloud droplet nucleation and because the Bergeron-Findeisen process is parameterized as being more efficient. The total anthropogenic aerosol effect includes the direct, semi-direct and indirect effects and is defined as the difference in the top-of-the-atmosphere net radiation between present-day and pre-industrial times. It amounts to -1.8 W m-2 in ECHAM5, when a relative humidity dependent cloud cover scheme and present-day aerosol emissions representative for the year 2000 are used. It is larger when either a statistical cloud cover scheme or a different aerosol emission inventory are employed.

  5. Survival and synergistic growth of mixed cultures of bifidobacteria and lactobacilli combined with prebiotic oligosaccharides in a gastrointestinal tract simulator

    PubMed Central

    Adamberg, Signe; Sumeri, Ingrid; Uusna, Riin; Ambalam, Padma; Kondepudi, Kanthi Kiran; Adamberg, Kaarel; Wadström, Torkel; Ljungh, Åsa

    2014-01-01

    Background Probiotics, especially in combination with non-digestible oligosaccharides, may balance the gut microflora, while multistrain preparations may express improved functionality over single-strain cultures. In vitro gastrointestinal models enable testing of the survival and growth dynamics of mixed-strain probiotics in a controlled, replicable manner. Methods The robustness and compatibility of multistrain probiotics composed of bifidobacteria and lactobacilli combined with mixed prebiotics (galacto-, fructo- and xylo-oligosaccharides or galactooligosaccharides and soluble starch) were studied using a dynamic gastrointestinal tract simulator (GITS). The exposure to acid and bile of the upper gastrointestinal tract was followed by dilution with a continuous decrease of the dilution rate (de-celerostat) to simulate the descending nutrient availability of the large intestine. The bacterial numbers and metabolic products were analyzed and the growth parameters determined. Results The most acid- and bile-resistant strains were Lactobacillus plantarum F44 and L. paracasei F8. Bifidobacterium breve 46 had the highest specific growth rate and, although sensitive to bile exposure, recovered during the dilution phase in most experiments. B. breve 46, L. plantarum F44, and L. paracasei F8 were selected as the most promising strains for further studies. Conclusions De-celerostat cultivation can be applied to study mixed bacterial cultures under defined conditions of decreasing nutrient availability and to select a compatible set of strains. PMID:25045346

  6. Application of zero-inflated Poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the approaches that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
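
    A hedged sketch of the zero-inflated Poisson likelihood itself (fixed effects only, without the random-effect part used in the study): a logistic component for "structural" zeros is mixed with a Poisson count component and fitted by maximum likelihood with scipy. The covariate and the simulated data are placeholders.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln, expit

      def zip_negloglik(params, X, y):
          k = X.shape[1]
          beta, gamma = params[:k], params[k:]          # Poisson and zero-inflation coefficients
          mu = np.exp(X @ beta)
          pi = expit(X @ gamma)                          # probability of a structural zero
          loglik_zero = np.log(pi + (1.0 - pi) * np.exp(-mu))
          loglik_pos = np.log(1.0 - pi) - mu + y * np.log(mu) - gammaln(y + 1.0)
          return -np.sum(np.where(y == 0, loglik_zero, loglik_pos))

      rng = np.random.default_rng(4)
      n = 500
      X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
      true_mu = np.exp(0.5 + 0.8 * X[:, 1])
      structural_zero = rng.uniform(size=n) < 0.3
      y = np.where(structural_zero, 0, rng.poisson(true_mu))

      fit = minimize(zip_negloglik, x0=np.zeros(4), args=(X, y), method="BFGS")
      print(fit.x)    # [beta0, beta1, gamma0, gamma1]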

  7. Combustor cap having non-round outlets for mixing tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael John; Boardman, Gregory Allen; McConnaughhay, Johnie Franklin

    2016-12-27

    A system includes a combustor cap configured to be coupled to a plurality of mixing tubes of a multi-tube fuel nozzle, wherein each mixing tube of the plurality of mixing tubes is configured to mix air and fuel to form an air-fuel mixture. The combustor cap includes multiple nozzles integrated within the combustor cap. Each nozzle of the multiple nozzles is coupled to a respective mixing tube of the multiple mixing tubes. In addition, each nozzle of the multiple nozzles includes a first end and a second end. The first end is coupled to the respective mixing tube of the multiple mixing tubes. The second end defines a non-round outlet for the air-fuel mixture. Each nozzle of the multiple nozzles includes an inner surface having first and second portions; the first portion radially diverges along an axial direction from the first end to the second end, and the second portion radially converges along the axial direction from the first end to the second end.

  8. Prediction of gene expression with cis-SNPs using mixed models and regularization methods.

    PubMed

    Zeng, Ping; Zhou, Xiang; Huang, Shuiping

    2017-05-11

    It has been shown that gene expression in human tissues is heritable, thus predicting gene expression using only SNPs becomes possible. The prediction of gene expression can offer important insights into the genetic architecture of individual functionally associated SNPs and further interpretation of the molecular basis underlying human diseases. We compared three types of methods for predicting gene expression using only cis-SNPs, including the polygenic model, i.e. the linear mixed model (LMM), two sparse models, i.e. Lasso and the elastic net (ENET), and a hybrid of the LMM and sparse models, i.e. the Bayesian sparse linear mixed model (BSLMM). The three kinds of prediction methods make very different assumptions about the underlying genetic architecture. These methods were evaluated using simulations under various scenarios and were applied to the Geuvadis gene expression data. The simulations showed that these four prediction methods (i.e. Lasso, ENET, LMM and BSLMM) behaved best when their respective modeling assumptions were satisfied, but BSLMM had a robust performance across a range of scenarios. According to the R² of these models in the Geuvadis data, the four methods performed quite similarly. We did not observe any clustering or enrichment of predictive genes (defined as genes with R² ≥ 0.05) across the chromosomes, nor any clear relationship between the proportion of predictive genes and the proportion of genes in each chromosome. However, an interesting finding in the Geuvadis data was that highly predictive genes (e.g. R² ≥ 0.30) may have sparse genetic architectures, since Lasso, ENET and BSLMM outperformed LMM for these genes; this observation was validated in another gene expression dataset. We further showed that the predictive genes were enriched in approximately independent LD blocks. Gene expression can be predicted with only cis-SNPs using well-developed prediction models, and these predictive genes were enriched in some approximately independent LD blocks. The prediction of gene expression can shed light on the functional interpretation of SNPs identified in GWASs.
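
    A hedged sketch of the sparse-model part of this comparison (Lasso and the elastic net; LMM and BSLMM are not included), using cross-validated R² on simulated genotypes with a sparse genetic architecture. The genotype and expression data are simulated placeholders, not the Geuvadis data.

      import numpy as np
      from sklearn.linear_model import LassoCV, ElasticNetCV
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n_samples, n_snps = 300, 500
      genotypes = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # 0/1/2 allele counts
      effects = np.zeros(n_snps)
      effects[:5] = rng.normal(scale=0.5, size=5)       # a sparse genetic architecture
      expression = genotypes @ effects + rng.normal(scale=1.0, size=n_samples)

      for model in (LassoCV(cv=5), ElasticNetCV(cv=5)):
          r2 = cross_val_score(model, genotypes, expression, cv=5, scoring="r2")
          print(type(model).__name__, round(r2.mean(), 3))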

  9. Emile: Software-Realized Scaffolding for Science Learners Programming in Mixed Media

    NASA Astrophysics Data System (ADS)

    Guzdial, Mark Joseph

    Emile is a computer program that facilitates students using programming to create models of kinematics (physics of motion without forces) and executing these models as simulations. Emile facilitates student programming and model-building with software-realized scaffolding (SRS). Emile integrates a range of SRS and provides mechanisms to fade (or diminish) most scaffolding. By fading Emile's SRS, students can adapt support to their individual needs. Programming in Emile involves graphic and text elements (as compared with more traditional text-based programming). For example, students create graphical objects which can be dragged on the screen, and when dropped, fall as if in a gravitational field. Emile supports a simplified, HyperCard-like mixed media programming framework. Scaffolding is defined as support which enables student performance (called the immediate benefit of scaffolding) and which facilitates student learning (called the lasting benefit of scaffolding). Scaffolding provides this support through three methods: Modeling, coaching, and eliciting articulation. For example, Emile has tools to structure the programming task (modeling), menus identify the next step in the programming and model-building process (coaching), and prompts for student plans and predictions (eliciting articulation). Five students used Emile in a summer workshop (45 hours total) focusing on creating kinematics simulations and multimedia demonstrations. Evaluation of Emile's scaffolding addressed use of scaffolding and the expected immediate and lasting benefits. Emile created records of student interactions (log files) which were analyzed to determine how students used Emile's SRS and how they faded that scaffolding. Student projects and articulations about those projects were analyzed to assess success of student's model-building and programming activities. Clinical interviews were conducted before and after the workshop to determine students' conceptualizations of kinematics and programming and how they changed. The results indicate that students were successful at model-building and programming, learned physics and programming, and used and faded Emile's scaffolding over time. These results are from a small sample who were self -selected and highly-motivated. Nonetheless, this study provides a theory and operationalization for SRS, an example of a successful model-building environment, and a description of student use of mixed media programming.

  10. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
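
    For context, the sketch below shows the widely used IEM (interaction by exchange with the mean) closure, which is often the baseline that improved mixing models are compared against; it is not the time-dependent model proposed in the paper. Each notional particle's concentration relaxes toward the ensemble mean at a prescribed mixing frequency.

      import numpy as np

      rng = np.random.default_rng(6)
      c = rng.choice([0.0, 1.0], size=20000)     # initially segregated concentrations
      omega, dt, nsteps = 2.0, 0.01, 400         # mixing frequency, time step, number of steps

      for _ in range(nsteps):
          c += -0.5 * omega * (c - c.mean()) * dt   # IEM: relax toward the ensemble mean

      print(c.mean(), c.var())                   # mean preserved, variance decays ~ exp(-omega t)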

  11. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
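    To make the role of the error structure concrete, the sketch below evaluates the likelihood of a toy two-source, one-tracer mixing model in which proportion-weighted source variability and an additive residual term are combined; it is a hand-rolled illustration with made-up numbers, not the MixSIR/SIAR code or the authors' new parameterization.

```python
import numpy as np

# Hypothetical two-source, one-tracer example; all values are illustrative only.
source_means = np.array([-24.0, -12.0])   # mean tracer value of sources A and B
source_sds   = np.array([1.0, 1.5])       # isotopic spread of each source
consumer_obs = np.array([-18.2, -17.5, -19.1, -16.8])  # observed consumer values

def log_likelihood(p_a, resid_sd):
    """Log-likelihood of the consumer observations given the proportion of source A.

    The mixture mean is the proportion-weighted source mean; the variance combines
    proportion-weighted source variances with an additive residual term, loosely
    mirroring the combined error structures discussed above.
    """
    p = np.array([p_a, 1.0 - p_a])
    mix_mean = np.sum(p * source_means)
    mix_var = np.sum((p * source_sds) ** 2) + resid_sd ** 2
    return np.sum(-0.5 * np.log(2.0 * np.pi * mix_var)
                  - (consumer_obs - mix_mean) ** 2 / (2.0 * mix_var))

# Coarse grid evaluation stands in for a full Bayesian sampler.
grid = np.linspace(0.01, 0.99, 99)
best = max(grid, key=lambda pa: log_likelihood(pa, resid_sd=0.5))
print(f"proportion of source A maximising the likelihood: {best:.2f}")
```

    In a full Bayesian treatment this likelihood would be combined with priors on the proportions and error terms and sampled with MCMC rather than evaluated on a grid.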

  12. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.
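    As a point of reference for how surface heat and freshwater fluxes enter such a bulk budget, the sketch below advances mixed-layer temperature and salinity under prescribed fluxes with a fixed layer depth; it omits the entrainment physics of the Garwood (1977) model actually used in the study, and all numbers are illustrative.

```python
import numpy as np

RHO_W = 1025.0    # seawater density, kg/m^3
CP_W = 3990.0     # seawater heat capacity, J/(kg K)

def mixed_layer_step(T, S, h, q_net, evap, precip, dt):
    """Advance mixed-layer temperature and salinity by one time step.

    T       mixed-layer temperature (deg C)
    S       mixed-layer salinity (PSU)
    h       mixed-layer depth (m), held fixed here (no entrainment physics)
    q_net   net surface heat flux into the ocean (W/m^2)
    evap, precip   evaporation and precipitation rates (m/s)
    dt      time step (s)
    """
    dT = q_net / (RHO_W * CP_W * h) * dt    # surface heating spread over the layer
    dS = S * (evap - precip) / h * dt       # freshwater flux dilutes/concentrates salt
    return T + dT, S + dS

# One day of heavy rain (30 mm/day), weak evaporation, and weak net cooling
# over a 20 m deep layer, stepped hourly.
T, S = 29.0, 34.5
for _ in range(24):
    T, S = mixed_layer_step(T, S, h=20.0, q_net=-20.0,
                            evap=1.0e-8, precip=30e-3 / 86400.0, dt=3600.0)
print(f"T = {T:.3f} degC, S = {S:.4f} PSU")
```

    The salinity tendency S(E - P)/h is what lets heavy rainfall freshen a shallow layer quickly, the effect at the heart of the subduction and barrier-layer discussion above.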

  13. Meaning of Parental Involvement among Korean Immigrant Parents: A Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Kim, Yanghee Anna; An, Sohyun; Kim, Hyun Chu Leah; Kim, Jihye

    2018-01-01

    The authors' goal was to identify ways in which Korean immigrant parents define the concept of parental involvement and to examine the statistical significances of interrelationships among these meanings. Seventy-seven parents responded to an open-ended question that asked them to define the meaning of parental involvement; 141 responses were…

  14. Design for application of the DETOX℠ wet oxidation process to mixed wastes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, R.A.; Dhooge, P.M.

    1994-04-01

    Conceptual engineering has been performed for application of the DETOX℠ wet oxidation process to treatment of specific mixed waste types. Chemical compositions, mass balances, energy balances, temperatures, pressures, and flows have been used to define design parameters for treatment units capable of destroying 5 kg per hour of polychlorinated biphenyls and 25 kg per hour of tributyl phosphate. Equipment for the units has been sized and materials of construction have been specified. Secondary waste streams have been defined. Environmental safety and health issues in design have been addressed. Capital and operating costs have been estimated based on the conceptual designs.

  15. Detecting Habitats and Ecosystem Functions Considering the Mesozooplankton Size and Diversity Structures and Environmental Conditions in the Gulf of Lion, NW Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Carlotti, F.; Espinasse, B.; Zhou, M.; Jean-Luc, D.

    2016-02-01

    Environmental conditions and zooplankton size structure and taxonomic diversity were investigated in the Gulf of Lion in May 2010 and January 2011. The integrated physical and biological measurements provided a 3D view with high spatial resolution of the physical and biological variables and their correlations over the whole gulf. The effects of physical processes such as freshwater input, coastal upwelling, and water column mixing by winds on phytoplankton and zooplankton distributions were analyzed using these data. Several analytic tests were performed in order to define several ecoregions representing different habitats of plankton communities. Three habitats were distinguished based on statistical analysis performed on biological and physical variables: (1) the coastal area characterized by shallow waters, high chl a concentrations, and a steep slope of the normalized biomass size spectrum (NBSS); (2) the area affected by the Rhône with high stratification and flat NBSS slope; and (3) the continental shelf with a deep mixed layer, relatively low particle concentrations, and moderate NBSS slope. The zooplankton diversity was characterized by spatial differences in community composition among the Rhône plume area, the coastal shelf, and shelf break waters. Defining habitat is a relevant approach to designing new zooplankton sampling strategies, validating distribution models and including the zooplankton compartment in trophodynamic studies.
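    The normalized biomass size spectrum (NBSS) slope used above to separate the habitats is conventionally obtained by regressing log-normalized biomass on log size class; the sketch below shows one common way to compute it, with purely illustrative bin values rather than data from the survey.

```python
import numpy as np

# Hypothetical size-class data: nominal biovolume per class (mm^3) and total
# biomass collected in each class (mg m^-3); values are illustrative only.
size_class = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
bin_width  = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])   # width of each size bin
biomass    = np.array([12.0, 9.0, 5.5, 2.8, 1.1, 0.4])

normalized_biomass = biomass / bin_width          # biomass per unit size interval
slope, intercept = np.polyfit(np.log10(size_class),
                              np.log10(normalized_biomass), deg=1)
print(f"NBSS slope = {slope:.2f}")  # steeper (more negative) slope means
                                    # relatively fewer large organisms
```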

  16. Compatible-strain mixed finite element methods for incompressible nonlinear elasticity

    NASA Astrophysics Data System (ADS)

    Faghih Shojaei, Mostafa; Yavari, Arash

    2018-05-01

    We introduce a new family of mixed finite elements for incompressible nonlinear elasticity - compatible-strain mixed finite element methods (CSFEMs). Based on a Hu-Washizu-type functional, we write a four-field mixed formulation with the displacement, the displacement gradient, the first Piola-Kirchhoff stress, and a pressure-like field as the four independent unknowns. Using the Hilbert complexes of nonlinear elasticity, which describe the kinematics and the kinetics of motion, we identify the solution spaces of the independent unknown fields. In particular, we define the displacement in H1, the displacement gradient in H (curl), the stress in H (div), and the pressure field in L2. The test spaces of the mixed formulations are chosen to be the same as the corresponding solution spaces. Next, in a conforming setting, we approximate the solution and the test spaces with some piecewise polynomial subspaces of them. Among these approximation spaces are the tensorial analogues of the Nédélec and Raviart-Thomas finite element spaces of vector fields. This approach results in compatible-strain mixed finite element methods that satisfy both the Hadamard compatibility condition and the continuity of traction at the discrete level independently of the refinement level of the mesh. By considering several numerical examples, we demonstrate that CSFEMs have a good performance for bending problems and for bodies with complex geometries. CSFEMs are capable of capturing very large strains and accurately approximating stress and pressure fields. Using CSFEMs, we do not observe any numerical artifacts, e.g., checkerboarding of pressure, hourglass instability, or locking in our numerical examples. Moreover, CSFEMs provide an efficient framework for modeling heterogeneous solids.
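    As a schematic illustration of such a four-field formulation (not necessarily the authors' exact functional), a Hu-Washizu-type potential consistent with the solution spaces listed above reads

    \[
    \Pi(\varphi, \mathbf{F}, \mathbf{P}, p) \;=\; \int_{\mathcal{B}} \Big[\, W(\mathbf{F}) \;+\; \mathbf{P} : \big(\nabla\varphi - \mathbf{F}\big) \;+\; p\,\big(\det\mathbf{F} - 1\big) \Big]\, \mathrm{d}V \;-\; \Pi_{\mathrm{ext}}(\varphi),
    \]

    with \(\varphi \in H^1\), \(\mathbf{F} \in H(\mathrm{curl})\), \(\mathbf{P} \in H(\mathrm{div})\) and \(p \in L^2\); stationarity with respect to \(\mathbf{P}\), \(\mathbf{F}\), \(\varphi\) and \(p\) recovers, respectively, the compatibility condition \(\mathbf{F} = \nabla\varphi\), the constitutive relation, equilibrium, and the incompressibility constraint \(\det\mathbf{F} = 1\).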

  17. A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.

    2017-12-01

    Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of the larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the at-scale simulators (pre-selected) and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scale, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is packed heterogeneously such that the mixing front geometry is more complex and not known a priori. To address those challenges, the generalized hybrid multiscale modeling approach is further developed to 1) adaptively define the locations of pore-scale subdomains, 2) provide a suite of physical boundary coupling schemes and 3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparing with single-scale simulations in terms of velocities, reactive concentrations and computing cost.

  18. Verification of a One-Dimensional Model of CO2 Atmospheric Transport Inside and Above a Forest Canopy Using Observations at the Norunda Research Station

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan; Avila, Rodolfo; Mölder, Meelis; Kovalets, Sophia; Lindroth, Anders

    2018-02-01

    A model of CO2 atmospheric transport in vegetated canopies is tested against measurements of the flow, as well as CO2 concentrations at the Norunda research station located inside a mixed pine-spruce forest. We present the results of simulations of wind-speed profiles and CO2 concentrations inside and above the forest canopy with a one-dimensional model of profiles of the turbulent diffusion coefficient above the canopy accounting for the influence of the roughness sub-layer on turbulent mixing according to Harman and Finnigan (Boundary-Layer Meteorol 129:323-351, 2008; hereafter HF08). Different modelling approaches are used to define the turbulent exchange coefficients for momentum and concentration inside the canopy: (1) the modified HF08 theory—numerical solution of the momentum and concentration equations with a non-constant distribution of leaf area per unit volume; (2) empirical parametrization of the turbulent diffusion coefficient using empirical data concerning the vertical profiles of the Lagrangian time scale and root-mean-square deviation of the vertical velocity component. For neutral, daytime conditions, the second-order turbulence model is also used. The flexibility of the empirical model enables the best fit of the simulated CO2 concentrations inside the canopy to the observations, with the results of simulations for daytime conditions inside the canopy layer only successful provided the respiration fluxes are properly considered. The application of the developed model for radiocarbon atmospheric transport released in the form of ^{14}CO2 is presented and discussed.
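    The empirical parametrization in approach (2) is of the standard Lagrangian far-field form, in which the in-canopy scalar diffusivity is built from the two measured profiles (a sketch of the relation only, not a statement of the authors' coefficients):

    \[
    K_c(z) \;=\; \sigma_w^2(z)\, T_L(z),
    \]

    where \(\sigma_w(z)\) is the root-mean-square vertical velocity and \(T_L(z)\) the Lagrangian time scale.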

  19. Verification of a One-Dimensional Model of CO2 Atmospheric Transport Inside and Above a Forest Canopy Using Observations at the Norunda Research Station

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan; Avila, Rodolfo; Mölder, Meelis; Kovalets, Sophia; Lindroth, Anders

    2018-07-01

    A model of CO2 atmospheric transport in vegetated canopies is tested against measurements of the flow, as well as CO2 concentrations at the Norunda research station located inside a mixed pine-spruce forest. We present the results of simulations of wind-speed profiles and CO2 concentrations inside and above the forest canopy with a one-dimensional model of profiles of the turbulent diffusion coefficient above the canopy accounting for the influence of the roughness sub-layer on turbulent mixing according to Harman and Finnigan (Boundary-Layer Meteorol 129:323-351, 2008; hereafter HF08). Different modelling approaches are used to define the turbulent exchange coefficients for momentum and concentration inside the canopy: (1) the modified HF08 theory—numerical solution of the momentum and concentration equations with a non-constant distribution of leaf area per unit volume; (2) empirical parametrization of the turbulent diffusion coefficient using empirical data concerning the vertical profiles of the Lagrangian time scale and root-mean-square deviation of the vertical velocity component. For neutral, daytime conditions, the second-order turbulence model is also used. The flexibility of the empirical model enables the best fit of the simulated CO2 concentrations inside the canopy to the observations, with the results of simulations for daytime conditions inside the canopy layer only successful provided the respiration fluxes are properly considered. The application of the developed model for radiocarbon atmospheric transport released in the form of ^{14}CO2 is presented and discussed.

  20. Defining the Intrinsic Cardiac Risks of Operations to Improve Preoperative Cardiac Risk Assessments.

    PubMed

    Liu, Jason B; Liu, Yaoming; Cohen, Mark E; Ko, Clifford Y; Sweitzer, Bobbie J

    2018-02-01

    Current preoperative cardiac risk stratification practices group operations into broad categories, which might inadequately consider the intrinsic cardiac risks of individual operations. We sought to define the intrinsic cardiac risks of individual operations and to demonstrate how grouping operations might lead to imprecise estimates of perioperative cardiac risk. Elective operations (based on Common Procedural Terminology codes) performed from January 1, 2010 to December 31, 2015 at hospitals participating in the American College of Surgeons National Surgical Quality Improvement Program were studied. A composite measure of perioperative adverse cardiac events was defined as either cardiac arrest requiring cardiopulmonary resuscitation or acute myocardial infarction. Operations' intrinsic cardiac risks were derived from mixed-effects models while controlling for patient mix. Resultant risks were sorted into low-, intermediate-, and high-risk categories, and the most commonly performed operations within each category were identified. Intrinsic operative risks were also examined using a representative grouping of operations to portray within-group variation. Sixty-six low, 30 intermediate, and 106 high intrinsic cardiac risk operations were identified. Excisional breast biopsy had the lowest intrinsic cardiac risk (overall rate, 0.01%; odds ratio, 0.11; 95% CI, 0.02 to 0.25) relative to the average, whereas aorto-bifemoral bypass grafting had the highest (overall rate, 4.1%; odds ratio, 6.61; 95% CI, 5.54 to 7.90). There was wide variation in the intrinsic cardiac risks of operations within the representative grouping (median odds ratio, 1.40; interquartile range, 0.88 to 2.17). A continuum of intrinsic cardiac risk exists among operations. Grouping operations into broad categories inadequately accounts for the intrinsic cardiac risk of individual operations.
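    A hedged sketch of the kind of mixed-effects logistic model implied above, with an operation-level random intercept supplying the intrinsic risk (the notation is ours, not necessarily the authors'):

    \[
    \operatorname{logit}\!\big(\Pr(\text{cardiac event}_{ij})\big) \;=\; \boldsymbol{\beta}^{\mathsf T}\mathbf{x}_{ij} \;+\; u_j, \qquad u_j \sim \mathcal{N}(0, \sigma^2),
    \]

    where \(\mathbf{x}_{ij}\) are the patient-mix covariates of patient \(i\) undergoing operation \(j\); the operation-specific odds ratio relative to the average operation is then \(\exp(u_j)\), which is what the low-, intermediate- and high-risk categories summarize.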

  1. Miscibility and Thermodynamics of Mixing of Different Models of Formamide and Water in Computer Simulation.

    PubMed

    Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál

    2017-07-27

    The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal mixing, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered. Thus, this model results in negative energy of mixing values, while the other ones yield positive values, in combination with all three water models considered. Experimental data supports this latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility, or, at least, approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.
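    For reference, the ideal-mixing terms against which such simulated quantities are usually judged have the standard forms (per mole of a binary mixture at temperature T):

    \[
    \Delta S_{\mathrm{mix}}^{\mathrm{id}} = -R\,\big(x_1\ln x_1 + x_2\ln x_2\big), \qquad
    \Delta U_{\mathrm{mix}}^{\mathrm{id}} = 0, \qquad
    \Delta A_{\mathrm{mix}} = \Delta U_{\mathrm{mix}} - T\,\Delta S_{\mathrm{mix}},
    \]

    so near-ideal mixing corresponds to a small energy of mixing and an entropy of mixing close to the combinatorial term, while a negative \(\Delta A_{\mathrm{mix}}\) alone does not guarantee full miscibility when the free-energy curve develops a double-well shape.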

  2. Role of emerging private hospitals in a post-Soviet mixed health system: a mixed methods comparative study of private and public hospital inpatient care in Mongolia.

    PubMed

    Tsevelvaanchig, Uranchimeg; Gouda, Hebe; Baker, Peter; Hill, Peter S

    2017-05-01

    The collapse of the Soviet Union in 1990 severely impacted the health sector in Mongolia. Limited public funding for the post-Soviet model public system and a rapid growth of poorly regulated private providers have been pressing issues for a government seeking to re-establish universal health coverage. However, the evidence available on the role of private providers that would inform sector management is very limited. This study analyses the current contribution of private hospitals in Mongolia for the improvement of accessibility of health care and efficiency. We used mixed research methods. A descriptive analysis of nationally representative hospital admission records from 2013 was followed by semi-structured interviews that were carried out with purposively selected key informants (N = 45), representing the main actors in Mongolia's mixed health system. Private-for-profit hospitals are concentrated in urban areas, where their financial model is most viable. The result is the duplication of private and public inpatient services, both in terms of their geographical location and the range of services delivered. The combination of persistent inpatient-oriented care and perverse financial incentives that privilege admission over outpatient management, have created unnecessary health costs. The engagement of the private sector to improve population health outcomes is constrained by a series of issues of governance, regulation and financing and the failure of the state to manage the private sector as an integral part of its health system planning. For a mixed system like in Mongolia, a comprehensive policy and plan which defines the complementary role of private providers to optimize private public service mix is critical in the early stages of the private sector development. It further supports the importance of a system perspective that combines regulation and incentives in consistent policy, rather than an isolated approach to provide regulation. © The Author 2016. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  3. Shades of white: diffusion properties of T1- and FLAIR-defined white matter signal abnormalities differ in stages from cognitively normal to dementia.

    PubMed

    Riphagen, Joost M; Gronenschild, Ed H B M; Salat, David H; Freeze, Whitney M; Ivanov, Dimo; Clerx, Lies; Verhey, Frans R J; Aalten, Pauline; Jacobs, Heidi I L

    2018-08-01

    The underlying pathology of white matter signal abnormalities (WMSAs) is heterogeneous and may vary depending on the magnetic resonance imaging contrast used to define them. We investigated differences in white matter diffusivity as an indicator for white matter integrity underlying WMSA based on T1-weighted and fluid-attenuated inversion recovery (FLAIR) imaging contrast. In addition, we investigated which white matter region of interest (ROI) could predict clinical diagnosis best using diffusion metrics. One hundred three older individuals with varying cognitive impairment levels were included and underwent neuroimaging. Diffusion metrics were extracted from WMSA areas based on T1 and FLAIR contrast and from their overlapping areas, the border surrounding the WMSA and the normal-appearing white matter (NAWM). Regional diffusivity differences were calculated with linear mixed effects models. Multinomial logistic regression determined which ROI diffusion values classified individuals best into clinically defined diagnostic groups. T1-based WMSA showed lower white matter integrity compared to FLAIR WMSA-defined regions. Diffusion values of NAWM predicted diagnostic group best compared to other ROIs. To conclude, T1- or FLAIR-defined WMSA provides distinct information on the underlying white matter integrity associated with cognitive decline. Importantly, not the "diseased" but the NAWM is a potentially sensitive indicator for cognitive brain health status. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Fractional solubility of aerosol iron: Synthesis of a global-scale data set

    NASA Astrophysics Data System (ADS)

    Sholkovitz, Edward R.; Sedwick, Peter N.; Church, Thomas M.; Baker, Alexander R.; Powell, Claire F.

    2012-07-01

    Aerosol deposition provides a major input of the essential micronutrient iron to the open ocean. A critical parameter with respect to biological availability is the proportion of aerosol iron that enters the oceanic dissolved iron pool - the so-called fractional solubility of aerosol iron (%FeS). Here we present a global-scale compilation of total aerosol iron loading (FeT) and estimated %FeS values for ∼1100 samples collected over the open ocean, the coastal ocean, and some continental sites, including a new data set from the Atlantic Ocean. Despite the wide variety of methods that have been used to define 'soluble' aerosol iron, our global-scale compilation reveals a remarkably consistent trend in the fractional solubility of aerosol iron as a function of total aerosol iron loading, with the great bulk of the data defining an hyperbolic trend. The hyperbolic trends that we observe for both global- and regional-scale data are adequately described by a simple two-component mixing model, whereby the fractional solubility of iron in the bulk aerosol reflects the conservative mixing of 'lithogenic' mineral dust (high FeT and low %FeS) and non-lithogenic 'combustion' aerosols (low FeT and high %FeS). An increasing body of empirical and model-based evidence points to anthropogenic fuel combustion as the major source of these non-lithogenic 'combustion' aerosols, implying that human emissions are a major determinant of the fractional solubility of iron in marine aerosols. The robust global-scale relationship between %FeS and FeT provides a simple heuristic method for estimating aerosol iron solubility at the regional to global scale.
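    A sketch of why such a two-component conservative mixing model yields an hyperbolic trend (the symbols here are ours): writing the bulk aerosol as a lithogenic component with iron loading \(F_L\) and solubility \(s_L\) plus a combustion component with loading \(F_C\) and solubility \(s_C\),

    \[
    \mathrm{Fe_T} = F_L + F_C,
    \qquad
    \%\mathrm{Fe_S} \;=\; \frac{s_L F_L + s_C F_C}{F_L + F_C}
    \;=\; s_L \;+\; \big(s_C - s_L\big)\,\frac{F_C}{\mathrm{Fe_T}},
    \]

    so for a roughly constant combustion contribution \(F_C\), the fractional solubility decays hyperbolically toward the low lithogenic end-member value \(s_L\) as the total loading \(\mathrm{Fe_T}\) increases.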

  5. Predicting Recovery from Episodes of Major Depression

    PubMed Central

    Solomon, David A.; Leon, Andrew C.; Coryell, William; Mueller, Timothy I.; Posternak, Michael; Endicott, Jean; Keller, Martin B.

    2008-01-01

    Background This study examined psychosocial functioning as a predictor of recovery from episodes of unipolar major depression. Methods 231 subjects diagnosed with major depressive disorder according to Research Diagnostic Criteria were prospectively followed for up to 20 years as part of the NIMH Collaborative Depression Study. The association between psychosocial functioning and recovery from episodes of unipolar major depression was analyzed with a mixed-effects logistic regression model which controlled for cumulative morbidity, defined as the amount of time ill with major depression during prospective follow-up. Recovery was defined as at least eight consecutive weeks with either no symptoms of major depression, or only one or two symptoms at a mild level of severity. Results In the mixed-effects model, a one standard deviation increase in psychosocial impairment was significantly associated with a 22% decrease in the likelihood of subsequent recovery from an episode of major depression (OR = 0.78, 95% CI: 0.74–0.82, Z = −3.17, p < 0.002). Also, a one standard deviation increase in cumulative morbidity was significantly associated with a 61% decrease in the probability of recovery (OR = 0.3899, 95% CI: 0.3894–0.3903, Z = −7.21, p < 0.001). Limitations The generalizability of the study is limited in so far as subjects were recruited as they sought treatment at academic medical centers. The analyses examined the relationship between psychosocial functioning and recovery from major depression, and did not include episodes of minor depression. Furthermore, this was an observational study and the investigators did not control treatment. Conclusions Assessment of psychosocial impairment may help identify patients less likely to recover from an episode of major depression. PMID:17920692

  6. A validation of the 3H/3He method for determining groundwater recharge

    NASA Astrophysics Data System (ADS)

    Solomon, D. K.; Schiff, S. L.; Poreda, R. J.; Clarke, W. B.

    1993-09-01

    Tritium and He isotopes have been measured at a site where groundwater flow is nearly vertical for a travel time of 100 years and where recharge rates are spatially variable. Because the mid-1960s 3H peak (arising from aboveground testing of thermonuclear devices) is well-defined, the vertical groundwater velocity is known with unusual accuracy at this site. Utilizing 3H and its stable daughter 3He to determine groundwater ages, we compute a recharge rate of 0.16 m/yr, which agrees to within about 5% of the value based on the depth of the 3H peak (measured both in 1986 and 1991) and two-dimensional modeling in an area of high recharge. Zero 3H/3He age occurs at a depth that is approximately equal to the average depth of the annual low water table, even though the capillary fringe extends to land surface during most of the year at the study site. In an area of low recharge (0.05 m/yr) where the 3H peak (and hence the vertical velocity) is also well-defined, the 3H/3He results could not be used to compute recharge because samples were not collected sufficiently far above the 3H peak; however, modeling indicates that the 3H/3He age gradient near the water table is an accurate measure of vertical velocities in the low-recharge area. Because 3H and 3He have different diffusion coefficients, and because the amount of mechanical mixing is different in the area of high recharge than in the low-recharge area, we have separated the dispersive effects of mechanical mixing from molecular diffusion. We estimate a longitudinal dispersivity of 0.07 m and effective diffusion coefficients for 3H (3HHO) and 3He of 2.4×10⁻⁵ and 1.3×10⁻⁴ m²/day, respectively. Although the 3H/3He age gradient is an excellent indicator of vertical groundwater velocities above the mid-1960s 3H peak, dispersive mixing and diffusive loss of 3He perturb the age gradient near and below the 3H peak.
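    For context, the apparent age at the heart of the method follows from the standard tritium-helium relation (a sketch of the principle, not of the authors' full treatment of dispersion and diffusion):

    \[
    t_{^3\mathrm{H}/^3\mathrm{He}} \;=\; \frac{1}{\lambda}\,\ln\!\left(1 + \frac{[\,^3\mathrm{He}_{\mathrm{trit}}\,]}{[\,^3\mathrm{H}\,]}\right),
    \qquad
    \lambda = \frac{\ln 2}{t_{1/2}}, \quad t_{1/2} \approx 12.3\ \mathrm{yr},
    \]

    and the recharge rate then follows from the age-depth gradient just below the water table, \(R \approx \theta\, \mathrm{d}z/\mathrm{d}t\), with \(\theta\) the effective porosity.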

  7. Vertical Subsurface Flow Mixing and Horizontal Anisotropy in Coarse Fluvial Aquifers: Structural Aspects

    NASA Astrophysics Data System (ADS)

    Huggenberger, P.; Huber, E.

    2014-12-01

    Detailed descriptions of subsurface heterogeneities in coarse fluvial gravel aquifers often lack concepts to distinguish between the essence and the noise of a permeability structure, as well as the ability to extrapolate site-specific hydraulic information at the scale of tens to several hundred meters. At this scale the heterogeneity strongly influences the anisotropies of the flow field and the mixing processes in groundwater. However, in many hydrogeological models the complexity of natural systems is oversimplified. Understanding the link between the dynamics of the surface processes of braided-river systems and the resulting subsurface sedimentary structures is the key to characterizing the complexity of horizontal and vertical mixing processes in groundwater. Of the different depositional elements of coarse braided-river systems, the largest permeability contrasts can be observed in the scour-fills. Other elements (e.g. different types of gravel sheets) show much smaller variabilities and could be considered as a kind of matrix. Field experiments on the river Tagliamento (Northeast Italy) based on morphological observation and ground-penetrating radar (GPR) surveys, as well as outcrop analyses of gravel pit exposures (Switzerland), allowed us to define the shape, sizes, spatial distribution and preservation potential of scour-fills. In vertical sections (e.g. 2D GPR data, vertical outcrop), the spatial density of remnant erosional bounding surfaces of scours is an indicator for the dynamics of the braided-river system (lateral mobility of the active floodplain, rate of sediment net deposition and spatial distribution of the confluence scours). In the case of combined low aggradation rate and low lateral mobility, the deposits may be dominated by a complex overprinting of scour-fills. The delineation of the erosional bounding surfaces that are coherent over the survey area is based on the identification of angular discontinuities of the reflectors. Fence diagrams and horizontal time-slices from GPR data are used to construct simplified 3D hydraulic property distribution models and to derive anisotropy patterns. On the basis of this work, conceptual models could be designed and implemented into numerical models to simulate the flow field and mixing in heterogeneous braided-river deposits.

  8. Upscaling anomalous reactive kinetics (A+B-->C) from pore scale Lagrangian velocity analysis

    NASA Astrophysics Data System (ADS)

    De Anna, P.; Tartakovsky, A. M.; Le Borgne, T.; Dentz, M.

    2011-12-01

    Natural flow fields in porous media display a complex spatio-temporal organization due to heterogeneous geological structures at different scales. This multiscale disorder implies anomalous dispersion, mixing and reaction kinetics (Berkowitz et al. RG 2006, Tartakovsky PRE 2010). Here, we focus on the upscaling of anomalous kinetics arising from pore scale, non Gaussian and correlated, velocity distributions. We consider reactive front simulations, where a component A displaces a component B that saturates initially the porous domain. The reactive component C is produced at the dispersive front located at interface between the A and B domains. The simulations are performed with the SPH method. As the mixing zone grows, the total mass of C produced increases with time. The scaling of this evolution with time is different from that which would be obtained from the homogeneous advection dispersion reaction equation. This anomalous kinetics property is related to spatial structure of the reactive mixture, and its evolution with time under the combined action of advective and diffusive processes. We discuss the different scaling regimes arising depending on the dominant process that governs mixing. In order to upscale these processes, we analyze the Lagrangian velocity properties, which are characterized by the non Gaussian distributions and long range temporal correlation. The main origin of these properties is the existence of very low velocity regions where solute particles can remain trapped for a long time. Another source of strong correlation is the channeling of flow in localized high velocity regions, which created finger-like structures in the concentration field. We show the spatial Markovian, and temporal non Markovian, nature of the Lagrangian velocity field. Therefore, an upscaled model can be defined as a correlated Continuous Time Random Walk (Le Borgne et al. PRL 2008). A key feature of this model is the definition of a transition probability density for Lagrangian velocities across a characteristic correlation distance. We quantify this transition probability density from pore scale simulations and use it in the effective stochastic model. In this framework, we investigate the ability of this effective model to represent correctly dispersion and mixing.
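    The upscaled model described above can be illustrated with a minimal spatial-Markov (correlated CTRW) random walk in which particles cross a fixed correlation length per step and their velocity class evolves through a transition matrix; the matrix and velocity classes below are made up for illustration, whereas in the study they are measured from the pore-scale Lagrangian statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-class velocity discretization (slow, intermediate, fast), in m/day.
v_classes = np.array([0.01, 0.1, 1.0])

# Illustrative transition probabilities between velocity classes over one
# correlation length; in practice these are estimated from pore-scale
# Lagrangian trajectories as described above.
P = np.array([[0.70, 0.25, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])

ell = 0.5          # spatial transition (correlation) length, m
n_steps = 200      # number of spatial steps per particle
n_particles = 5000

arrival_times = np.zeros(n_particles)
for i in range(n_particles):
    state = rng.integers(3)                      # random initial velocity class
    t = 0.0
    for _ in range(n_steps):
        t += ell / v_classes[state]              # time to cross one correlation length
        state = rng.choice(3, p=P[state])        # correlated jump to next velocity class
    arrival_times[i] = t

print("median / 95th percentile arrival time (days):",
      np.median(arrival_times), np.percentile(arrival_times, 95))
```

    Heavy late-time tails in the resulting arrival-time distribution are the signature of the anomalous transport and incomplete mixing discussed above.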

  9. Conditions That Facilitate Music Learning among Students with Special Needs: A Mixed-Methods Inquiry

    ERIC Educational Resources Information Center

    Gerrity, Kevin W.; Hourigan, Ryan M.; Horton, Patrick W.

    2013-01-01

    The purpose of this mixed-methods study was to identify and define the conditions that facilitate learning in music among students with special needs. Children with special needs met once a week for 10 consecutive weeks and received instruction in primarily music as well as the other arts. The children completed pre- and posttest evaluations that…

  10. 21 CFR 184.1027 - Mixed carbohydrase and protease enzyme product.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... practice conditions of use: (1) The ingredient is used as an enzyme, as defined in § 170.3(o)(9) of this... (c) In accordance with § 184.1(b)(1), the ingredient is used in food with no...

  11. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    NASA Astrophysics Data System (ADS)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.

  12. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  13. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models could be as large as 0.3 PSU and 0.4 C, respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  14. Prediction of stock markets by the evolutionary mix-game model

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

    This paper presents the use of the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We apply three methods to improve the original mix-game model by giving agents the ability to evolve their strategies, and then use the resulting evolutionary mix-game model to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve prediction accuracy when proper parameters are chosen.

  15. Using High-Resolution Comparison of Bedrock Properties and Channel Morphology to Empirically Characterize Erodibility in Fluvial Settings

    NASA Astrophysics Data System (ADS)

    Chilton, K.; Spotila, J. A.

    2017-12-01

    Bedrock erodibility exerts a primary control on landscape evolution and fluvial morphodynamics, but the relationships between erodibility and the many factors that influence it (rock strength, spacing and orientation of discontinuities, weathering susceptibility, erosive process, etc.) remain poorly defined. This results in oversimplification of erodibility in landscape evolution models, the primary example being the stream power incision model, which groups together factors which may influence erodibility into a single coefficient. There is therefore need to better define how bedrock properties influence erodibility and, in turn, channel form and evolution. This study seeks to deconvolve the relationships between bedrock material properties and erodibility by quantifying empirical relationships between substrate characteristics and bedrock channel morphology (slope, steepness index, width, form) at a high spatial resolution (5-10 m scale) in continuous and mixed alluvial-bedrock channels. We specifically focus on slowly eroding channels with minimal evidence for landscape transience, such that variations in channel morphology are mainly due to bedrock properties. We also use channels cut into sedimentary rock, which exhibit extreme variation (yet predictability and continuity) in discontinuity spacing. Here we present preliminary data comparing the morphology and bedrock properties of 1st through 4th order channels in the tectonically inactive Valley and Ridge province of the Appalachian Mountains, SW Virginia. Field surveys of channel slope, width, substrate, and form consist of 0.5 km long, continuous stream reaches through different intervals of tilted Paleozoic siliciclastic stratigraphy. Some surveys exhibit nearly complete bedrock exposure, whereas others are predominantly mixed, with localized bedrock reaches in high-slope knickzones. We statistically analyze relationships between fluvial morphology and lithology, strength (based on field and laboratory measurements), and discontinuity spacing and orientation. Results are informative for models of landscape evolution, and specifically provide insight into the controls on erosive process dominance (i.e., plucking vs. abrasion) and on the development and evolution of knickpoints in non-transient settings.

  16. Investigating cloud absorption effects: Global absorption properties of black carbon, tar balls, and soil dust in clouds and aerosols

    NASA Astrophysics Data System (ADS)

    Jacobson, Mark Z.

    2012-03-01

    This study examines modeled properties of black carbon (BC), tar ball (TB), and soil dust (SD) absorption within clouds and aerosols to understand better Cloud Absorption Effects I and II, which are defined as the effects on cloud heating of absorbing inclusions in hydrometeor particles and of absorbing aerosol particles interstitially between hydrometeor particles at their actual relative humidity (RH), respectively. The globally and annually averaged modeled 550 nm aerosol mass absorption coefficient (AMAC) of externally mixed BC was 6.72 (6.3-7.3) m2/g, within the laboratory range (6.3-8.7 m2/g). The global AMAC of internally mixed (IM) BC was 16.2 (13.9-18.2) m2/g, less than the measured maximum at 100% RH (23 m2/g). The resulting AMAC amplification factor due to internal mixing was 2.41 (2-2.9), with highest values in high RH regions. The global 650 nm hydrometeor mass absorption coefficient (HMAC) due to BC inclusions was 17.7 (10.6-19) m2/g, ˜9.3% higher than that of the IM-AMAC. The 650 nm HMACs of TBs and SD were half and 1/190th, respectively, that of BC. Modeled aerosol absorption optical depths were consistent with data. In column tests, BC inclusions in low and mid clouds (CAE I) gave column-integrated BC heating rates ˜200% and 235%, respectively, those of interstitial BC at the actual cloud RH (CAE II), which itself gave heating rates ˜120% and ˜130%, respectively, those of interstitial BC at the clear-sky RH. Globally, cloud optical depth increased then decreased with increasing aerosol optical depth, consistent with boomerang curves from satellite studies. Thus, CAEs, which are largely ignored, heat clouds significantly.

  17. A mixed state core for melancholia: an exploration in history, art and clinical science.

    PubMed

    Akiskal, H S; Akiskal, K K

    2007-01-01

    We argue for a mixed state core for melancholia, comparing concepts of melancholia across centuries using examples from art, history and the scientific literature. Literature reviews focus on studies from Kraepelin onward, DSM-IV classification and viewpoints from clinical experience, highlighting phenomenologic and biologic features as predictors of bipolar outcome in prospective studies of depression. Despite the chemical pathology implied in the term endogenous/melancholic depression and the frequently reported glucocorticoid and sleep neurophysiologic abnormalities, there is little evidence that melancholia is inherited independently from more broadly defined depressions. Prospective follow-up of 'neurotic' depressions has shown melancholic outcomes in as many as a third; hypomania has also been observed in such follow-up. Overall, these findings and considerations suggest that melancholia as defined today is more closely aligned with the depressive and/or mixed phase of bipolar disorder. Given the high suicidality among many of these patients, the practice of treating them with antidepressant monotherapy needs re-evaluation.

  18. Combustor with fuel preparation chambers

    NASA Technical Reports Server (NTRS)

    Zelina, Joseph (Inventor); Myers, Geoffrey D. (Inventor); Srinivasan, Ram (Inventor); Reynolds, Robert S. (Inventor)

    2001-01-01

    An annular combustor having fuel preparation chambers mounted in the dome of the combustor. The fuel preparation chamber comprises an annular wall extending axially from an inlet to an exit that defines a mixing chamber. Mounted to the inlet are an air swirler and a fuel atomizer. The air swirler provides swirled air to the mixing chamber while the atomizer provides a fuel spray. On the downstream side of the exit, the fuel preparation chamber has an inwardly extending conical wall that compresses the swirling mixture of fuel and air exiting the mixing chamber.

  19. Geometric phase of mixed states for three-level open systems

    NASA Astrophysics Data System (ADS)

    Jiang, Yanyan; Ji, Y. H.; Xu, Hualan; Hu, Li-Yun; Wang, Z. S.; Chen, Z. Q.; Guo, L. P.

    2010-12-01

    The geometric phase of a mixed state for a three-level open system is defined by connecting the density matrix with a nonunit vector ray in a three-dimensional complex Hilbert space. Because the geometric phase depends only on the smooth curve in this space, it is formulated entirely in terms of geometric structures. In the pure-state limit, our approach agrees with the Berry phase, the Pancharatnam phase, and the Aharonov-Anandan phase. We find, furthermore, that the Berry phase of the mixed state is correlated with population inversions of the three-level open system.

  20. Effective surface and boundary conditions for heterogeneous surfaces with mixed boundary conditions

    NASA Astrophysics Data System (ADS)

    Guo, Jianwei; Veran-Tissoires, Stéphanie; Quintard, Michel

    2016-01-01

    To deal with multi-scale problems involving transport from a heterogeneous and rough surface characterized by a mixed boundary condition, an effective surface theory is developed, which replaces the original surface by a homogeneous and smooth surface with specific boundary conditions. A typical example corresponds to a laminar flow over a soluble salt medium which contains insoluble material. To develop the concept of effective surface, a multi-domain decomposition approach is applied. In this framework, velocity and concentration at micro-scale are estimated with an asymptotic expansion of deviation terms with respect to macro-scale velocity and concentration fields. Closure problems for the deviations are obtained and used to define the effective surface position and the related boundary conditions. The evolution of some effective properties and the impact of surface geometry, Péclet, Schmidt and Damköhler numbers are investigated. Finally, comparisons are made between the numerical results obtained with the effective models and those from direct numerical simulations with the original rough surface, for two kinds of configurations.

  1. Determinants of performance failure in the nursing home industry

    PubMed Central

    Zinn, Jacqueline; Mor, Vincent; Feng, Zhanlian; Intrator, Orna

    2013-01-01

    This study investigates the determinants of performance failure in U.S. nursing homes. The sample consisted of 91,168 surveys from 10,901 facilities included in the Online Survey Certification and Reporting system from 1996 to 2005. Failed performance was defined as termination from the Medicare and Medicaid programs. Determinants of performance failure were identified as core structural change (ownership change), peripheral change (related diversification), prior financial and quality of care performance, size and environmental shock (Medicaid case mix reimbursement and prospective payment system introduction). Additional control variables that could contribute to the likelihood of performance failure were included in a cross-sectional time series generalized estimating equation logistic regression model. Our results support the contention, derived from structural inertia theory, that where in an organization’s structure change occurs determines whether it is adaptive or disruptive. In addition, while poor prior financial and quality performance and the introduction of case mix reimbursement increases the risk of failure, larger size is protective, decreasing the likelihood of performance failure. PMID:19128865

  2. Zero Boil-Off Tank (ZBOT) Experiment

    NASA Technical Reports Server (NTRS)

    Mcquillen, John

    2016-01-01

    The Zero-Boil-Off Tank (ZBOT) experiment has been developed as a small scale ISS experiment aimed at delineating important fluid flow, heat and mass transport, and phase change phenomena that affect cryogenic storage tank pressurization and pressure control in microgravity. The experiments use a transparent, low-boiling-point simulant fluid (PnP) in a sealed transparent Dewar to study and quantify: (a) fluid flow and thermal stratification during pressurization; (b) mixing, thermal destratification, depressurization, and jet-ullage penetration during pressure control by jet mixing. The experiment will provide valuable microgravity empirical two-phase data associated with the above-mentioned physical phenomena through highly accurate local wall and fluid temperature and pressure measurements, full-field phase-distribution and flow visualization. Moreover, the experiments are performed under tightly controlled and definable heat transfer boundary conditions to provide reliable high-fidelity data and precise input as required for validation and verification of state-of-the-art two-phase CFD models developed as part of this research and by other groups in the international scientific and cryogenic fluid management communities.

  3. In vitro evaluation of nutrients that selectively confer a competitive advantage to lactobacilli.

    PubMed

    Vongsa, R A; Minerath, R A; Busch, M A; Tan, J; Koenig, D W

    2016-01-01

    An assay was developed that tested the ability of Lactobacillus acidophilus to outcompete a challenge of Escherichia coli in a mixed culture containing different test nutrients. Using this assay, addition of fructo-oligosaccharide to the media allowed L. acidophilus to outcompete a challenge of E. coli, whereas in a mixed culture without the prebiotic the trend was reversed. Growth curves generated for E. coli in a single culture showed that fructo-oligosaccharide did not affect growth, indicating that the carbohydrate was not toxic to E. coli. This indicates that fructo-oligosaccharides may increase the ability of beneficial microbes to outcompete a pathogenic challenge. These results were confirmed using a skin simulant model that incorporates growth of the organisms at an air-surface interface to mimic the vulvar environment. It is possible to use a co-culture assay as an in vitro screening tool to define nutrients that confer a competitive advantage to beneficial flora specific to the female urogenital tract.

  4. Weak form of Stokes-Dirac structures and geometric discretization of port-Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Kotyczka, Paul; Maschke, Bernhard; Lefèvre, Laurent

    2018-05-01

    We present the mixed Galerkin discretization of distributed parameter port-Hamiltonian systems. On the prototypical example of hyperbolic systems of two conservation laws in arbitrary spatial dimension, we derive the main contributions: (i) A weak formulation of the underlying geometric (Stokes-Dirac) structure with a segmented boundary according to the causality of the boundary ports. (ii) The geometric approximation of the Stokes-Dirac structure by a finite-dimensional Dirac structure is realized using a mixed Galerkin approach and power-preserving linear maps, which define minimal discrete power variables. (iii) With a consistent approximation of the Hamiltonian, we obtain finite-dimensional port-Hamiltonian state space models. By the degrees of freedom in the power-preserving maps, the resulting family of structure-preserving schemes allows for trade-offs between centered approximations and upwinding. We illustrate the method on the example of Whitney finite elements on a 2D simplicial triangulation and compare the eigenvalue approximation in 1D with a related approach.
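    As a schematic of the prototypical system referred to above, a hyperbolic system of two conservation laws can be written in distributed port-Hamiltonian form as (up to sign factors that depend on the degrees of the differential forms)

    \[
    \frac{\partial}{\partial t}
    \begin{pmatrix} \alpha_q \\ \alpha_p \end{pmatrix}
    =
    \begin{pmatrix} 0 & \mathrm{d} \\ \mathrm{d} & 0 \end{pmatrix}
    \begin{pmatrix} e_q \\ e_p \end{pmatrix},
    \qquad
    e_q = \frac{\delta H}{\delta \alpha_q}, \quad e_p = \frac{\delta H}{\delta \alpha_p},
    \]

    where \(\mathrm{d}\) is the exterior derivative and \(H\) the Hamiltonian; the time derivative of \(H\) reduces to a boundary term in the traces of \(e_q\) and \(e_p\) (the boundary ports), and it is this power balance that the mixed Galerkin discretization preserves at the finite-dimensional level.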

  5. Determinants of performance failure in the nursing home industry.

    PubMed

    Zinn, Jacqueline; Mor, Vincent; Feng, Zhanlian; Intrator, Orna

    2009-03-01

    This study investigates the determinants of performance failure in U.S. nursing homes. The sample consisted of 91,168 surveys from 10,901 facilities included in the Online Survey Certification and Reporting system from 1996 to 2005. Failed performance was defined as termination from the Medicare and Medicaid programs. Determinants of performance failure were identified as core structural change (ownership change), peripheral change (related diversification), prior financial and quality of care performance, size and environmental shock (Medicaid case mix reimbursement and prospective payment system introduction). Additional control variables that could contribute to the likelihood of performance failure were included in a cross-sectional time series generalized estimating equation logistic regression model. Our results support the contention, derived from structural inertia theory, that where in an organization's structure change occurs determines whether it is adaptive or disruptive. In addition, while poor prior financial and quality performance and the introduction of case mix reimbursement increases the risk of failure, larger size is protective, decreasing the likelihood of performance failure.

  6. Collaborative Analysis of DRD4 and DAT Genotypes in Population-Defined ADHD Subtypes

    ERIC Educational Resources Information Center

    Todd, Richard D.; Huang, Hongyan; Smalley, Susan L.; Nelson, Stanley F.; Willcutt, Erik G.; Pennington, Bruce F.; Smith, Shelley D.; Faraone, Stephen V.; Neuman, Rosalind J.

    2005-01-01

    Background: It has been proposed that some of the variability in reporting of associations between attention deficit hyperactivity disorder (ADHD) and candidate genes may result from mixing of genetically heterogeneous forms of ADHD using DSM-IV criteria. The goal of the current study is to test whether population-based ADHD subtypes defined by…

  7. Minimalist model of ice microphysics in mixed-phase stratiform clouds

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Ovchinnikov, Mikhail; Shaw, Raymond A.

    2013-07-01

    The question of whether persistent ice crystal precipitation from supercooled layer clouds can be explained by time-dependent, stochastic ice nucleation is explored using an approximate, analytical model and a large-eddy simulation (LES) cloud model. The updraft velocity in the cloud defines an accumulation zone, where small ice particles cannot fall out until they are large enough, which will increase the residence time of ice particles in the cloud. Ice particles reach a quasi-steady state between growth by vapor deposition and fall speed at cloud base. The analytical model predicts that ice water content (wi) has a 2.5 power-law relationship with ice number concentration (ni). wi and ni from a LES cloud model with stochastic ice nucleation confirm the 2.5 power-law relationship, and initial indications of the scaling law are observed in data from the Indirect and Semi-Direct Aerosol Campaign. The prefactor of the power law is proportional to the ice nucleation rate and therefore provides a quantitative link to observations of ice microphysical properties.

  8. Minimalist Model of Ice Microphysics in Mixed-phase Stratiform Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F.; Ovchinnikov, Mikhail; Shaw, Raymond A.

    The question of whether persistent ice crystal precipitation from supercooled layer clouds can be explained by time-dependent, stochastic ice nucleation is explored using an approximate, analytical model, and a large-eddy simulation (LES) cloud model. The updraft velocity in the cloud defines an accumulation zone, where small ice particles cannot fall out until they are large enough, which will increase the residence time of ice particles in the cloud. Ice particles reach a quasi-steady state between growth by vapor deposition and fall speed at cloud base. The analytical model predicts that ice water content (wi) has a 2.5 power-law relationship with ice number concentration (ni). wi and ni from a LES cloud model with stochastic ice nucleation also confirm the 2.5 power-law relationship. The prefactor of the power law is proportional to the ice nucleation rate, and therefore provides a quantitative link to observations of ice microphysical properties.

  9. Modeling optimal treatment strategies in a heterogeneous mixing model.

    PubMed

    Choe, Seoyun; Lee, Sunmi

    2015-11-25

    Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of the heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases including influenza. Mixing patterns are considered to be one of the critical factors for infectious disease modeling. A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with two different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, the optimal control problem is formulated in this two-group influenza model to identify the group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios. The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing pattern approaches proportionate mixing, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is due to the fact that the number of infected people increases only slightly in the higher activity level group, while the number of infected people increases more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective treatment regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of different group activity levels and group population sizes. Mixing patterns can play a critical role in the effectiveness of optimal treatments. As the mixing becomes more like-with-like mixing, treating the higher activity group in the population is almost as effective as treating the entire population, since it reduces the number of disease cases effectively while requiring a similar number of treatments. The gain becomes more pronounced as the basic reproduction number increases. This can be a critical issue which must be considered for future pandemic influenza interventions, especially when there are limited resources available.
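
    To illustrate the mixing structure discussed above, the sketch below builds a two-group preferred-mixing matrix (proportionate mixing when the preference parameters are zero, like-with-like mixing as they approach one) and estimates the basic reproduction number as the spectral radius of a simple next-generation matrix. The parameter values and the SIR-type bookkeeping are illustrative assumptions, not the paper's model.

      # Hedged sketch: two-group preferred mixing and an approximate R0.
      import numpy as np

      a = np.array([10.0, 2.0])       # daily contact rates (high- and low-activity groups)
      N = np.array([2000.0, 8000.0])  # group sizes
      eps = np.array([0.5, 0.5])      # preference for within-group contacts
      beta, gamma = 0.05, 0.2         # transmission probability per contact, recovery rate

      # Preferred-mixing proportions p[i, j]: fraction of group i's contacts made with group j
      f = (1 - eps) * a * N / np.sum((1 - eps) * a * N)
      p = np.diag(eps) + np.outer(1 - eps, f)

      # Next-generation matrix for a frequency-dependent two-group SIR
      # (one common bookkeeping; other conventions normalise p differently).
      K = (beta / gamma) * a[:, None] * p * N[:, None] / N[None, :]
      R0 = max(abs(np.linalg.eigvals(K)))
      print(f"R0 = {R0:.2f}")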

  10. Predicting Upscaled Behavior of Aqueous Reactants in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Wright, E. E.; Hansen, S. K.; Bolster, D.; Richter, D. H.; Vesselinov, V. V.

    2017-12-01

    When modeling reactive transport, reaction rates are often overestimated due to the improper assumption of perfect mixing at the support scale of the transport model. In reality, fronts tend to form between participants in thermodynamically favorable reactions, leading to segregation of reactants into islands or fingers. When such a configuration arises, reactions are limited to the interface between the reactive solutes. Closure methods for estimating control-volume-effective reaction rates in terms of quantities defined at the control volume scale do not presently exist, but their development is crucial for effective field-scale modeling. We attack this problem through a combination of analytical and numerical means. Specifically, we numerically study reactive transport through an ensemble of realizations of two-dimensional heterogeneous porous media. We then employ regression analysis to calibrate an analytically-derived relationship between reaction rate and various dimensionless quantities representing conductivity-field heterogeneity and the respective strengths of diffusion, reaction and advection.

  11. Biomass supply chain optimisation for Organosolv-based biorefineries.

    PubMed

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
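
    A heavily simplified sketch of the kind of mixed-integer formulation described above is shown below: it picks one biorefinery site and allocates biomass flows at minimum transport-plus-capital cost. The sets, costs, and constraints are hypothetical placeholders, and the PuLP library is used only for illustration; the paper does not specify this tool.

      # Hedged sketch: toy biorefinery location / biomass allocation MILP (PuLP).
      from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

      regions = ["R1", "R2", "R3"]                       # biomass supply regions
      sites = ["S1", "S2"]                               # candidate biorefinery sites
      supply = {"R1": 50e3, "R2": 80e3, "R3": 60e3}      # t/yr available per region
      demand = 120e3                                     # t/yr required by the plant
      transport = {("R1", "S1"): 12, ("R1", "S2"): 20,   # delivered cost, EUR/t
                   ("R2", "S1"): 15, ("R2", "S2"): 9,
                   ("R3", "S1"): 22, ("R3", "S2"): 14}
      capital = {"S1": 1.8e6, "S2": 2.1e6}               # annualised capital cost, EUR

      prob = LpProblem("organosolv_supply_chain", LpMinimize)
      x = {(r, s): LpVariable(f"x_{r}_{s}", lowBound=0) for r in regions for s in sites}
      y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in sites}

      prob += lpSum(transport[r, s] * x[r, s] for r in regions for s in sites) \
              + lpSum(capital[s] * y[s] for s in sites)
      prob += lpSum(y[s] for s in sites) == 1                       # build exactly one plant
      for r in regions:
          prob += lpSum(x[r, s] for s in sites) <= supply[r]        # regional availability
      for s in sites:
          prob += lpSum(x[r, s] for r in regions) == demand * y[s]  # feed only the chosen plant

      prob.solve()
      print({s: value(y[s]) for s in sites}, value(prob.objective))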

  12. Investigation of the Bitumen Modification Process Regime Parameters Influence on Polymer-Bitumen Bonding Qualitative Indicators

    NASA Astrophysics Data System (ADS)

    Belyaev, P. S.; Mishchenko, S. V.; Belyaev, V. P.; Belousov, O. A.; Frolov, V. A.

    2018-04-01

    The objects of this study are petroleum road bitumen and the polymer-bitumen binder for road surfaces obtained by modifying bitumen with polymer materials. The subject of the study is monitoring changes in polymer-bitumen binder quality as the parameters of the bitumen modification process are varied. The purpose of the work is to identify the patterns of the modification process and to build a mathematical model that supports the calculation and selection of technological equipment. It is shown that polymer-bitumen binder with the specified quality parameters can be produced in apparatuses with agitators operating in turbulent mode, without the use of colloidal mills. Limiting indicators for the bitumen mix and the modifying additives, which can be used as inequality constraints in the mathematical model, are defined. A mathematical model for polymer-bitumen binder preparation has been developed and its adequacy confirmed.

  13. Candida Biofilms and the Host: Models and New Concepts for Eradication

    PubMed Central

    Tournu, Hélène; Van Dijck, Patrick

    2012-01-01

    Biofilms define mono- or multispecies communities embedded in a self-produced protective matrix, which is strongly attached to surfaces. They are often considered a general threat not only in industry but also in medicine. They constitute a permanent source of contamination, and they can disturb the proper usage of the material onto which they develop. This paper reviews some of the most recent approaches that have been developed to eradicate Candida biofilms, based on the vast effort put into ever-improving models of biofilm formation in vitro and in vivo, including novel flow systems, high-throughput techniques and mucosal models. Mixed biofilms, sustaining antagonistic or beneficial cooperation between species, and their interplay with the host immune system are also prevalent topics. Alternative strategies against biofilms include the lock therapy and immunotherapy approaches, and material coating and improvements. The host-biofilm interactions are also discussed, together with their potential applications in Candida biofilm elimination. PMID:22164167

  14. Modeling viscosity and diffusion of plasma mixtures across coupling regimes

    NASA Astrophysics Data System (ADS)

    Arnault, Philippe

    2014-10-01

    Viscosity and diffusion of plasma for pure elements and multicomponent mixtures are modeled from the high-temperature low-density weakly coupled regime to the low-temperature high-density strongly coupled regime. Thanks to an atom-in-jellium model, the effect of electron screening on the ion-ion interaction is incorporated through a self-consistent definition of the ionization. This defines an effective One Component Plasma, or an effective Binary Ionic Mixture, that is representative of the strength of the interaction. For the viscosity and the interdiffusion of mixtures, approximate kinetic expressions are supplemented by mixing laws applied to the excess viscosity and self-diffusion of pure elements. The comparisons with classical and quantum molecular dynamics results reveal deviations in the range of 20-40% on average, with almost no predictions off by more than a factor of 2 over many decades of variation. Applications in the inertial confinement fusion context could help in predicting the growth of hydrodynamic instabilities.

  15. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
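
    The IEM closure mentioned above relaxes each notional particle's composition toward the local mean at a rate set by the mixing frequency. A minimal particle-level sketch follows; it uses a constant, assumed mixing frequency rather than the dynamically computed one proposed in the study.

      # Hedged sketch: IEM (interaction by exchange with the mean) mixing of a
      # passive scalar carried by notional PDF particles. Values are illustrative.
      import numpy as np

      rng = np.random.default_rng(1)
      phi = rng.choice([0.0, 1.0], size=10_000)  # initial double-delta scalar PDF
      c_phi, omega, dt = 2.0, 5.0, 1e-3          # mixing constant, mixing frequency [1/s], time step [s]

      for _ in range(2000):
          # IEM model: dphi/dt = -(C_phi / 2) * omega * (phi - <phi>)
          phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

      print(f"mean = {phi.mean():.3f}, variance = {phi.var():.4f}")  # variance decays, mean is conserved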

  16. LOX/hydrocarbon fuel carbon formation and mixing data analysis

    NASA Technical Reports Server (NTRS)

    Fang, J.

    1983-01-01

    By applying the Priem-Heidmann Generalized-Length vaporization correlation, the computer model developed by the present study predicts the spatial variation of propellant vaporization rate using the injector cold flow results to define the streamtubes. The calculations show that the overall and local propellant vaporization rate and mixture ratio change drastically as the injection element type or the injector operating condition is changed. These results are compared with the regions of carbon formation observed in the photographic combustion testing. The correlation shows that the fuel vaporization rate and the local mixture ratio produced by the injector element have first order effects on the degree of carbon formation.

  17. NREL Screens Universities for Solar and Battery Storage Potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In support of the U.S. Department of Energy's SunShot initiative, NREL provided solar photovoltaic (PV) screenings in 2016 for eight universities seeking to go solar. NREL conducted an initial technoeconomic assessment of PV and storage feasibility at the selected universities using the REopt model, an energy planning platform that can be used to evaluate RE options, estimate costs, and suggest a mix of RE technologies to meet defined assumptions and constraints. NREL provided each university with customized results, including the cost-effectiveness of PV and storage, recommended system size, estimated capital cost to implement the technology, and estimated life cycle cost savings.

  18. The effective compliance of spatially evolving planar wing-cracks

    NASA Astrophysics Data System (ADS)

    Ayyagari, R. S.; Daphalapurkar, N. P.; Ramesh, K. T.

    2018-02-01

    We present an analytic closed form solution for anisotropic change in compliance due to the spatial evolution of planar wing-cracks in a material subjected to largely compressive loading. A fully three-dimensional anisotropic compliance tensor is defined and evaluated considering the wing-crack mechanism, using a mixed-approach based on kinematic and energetic arguments to derive the coefficients in incremental compliance. Material, kinematic and kinetic parametric influences on the increments in compliance are studied in order to understand their physical implications on material failure. Model verification is carried out through comparisons to experimental uniaxial compression results to showcase the predictive capabilities of the current study.

  19. Performance analysis of quantum Diesel heat engines with a two-level atom as working substance

    NASA Astrophysics Data System (ADS)

    Huang, X. L.; Shang, Y. F.; Guo, D. Y.; Yu, Qian; Sun, Qi

    2017-07-01

    A quantum Diesel cycle, which consists of one quantum isobaric process, one quantum isochoric process and two quantum adiabatic processes, is established with a two-level atom as working substance. The parameter R in this model is defined as the ratio of the time in quantum isochoric process to the timescale for the potential width movement. The positive work condition, power output and efficiency are obtained, and the optimal performance is analyzed with different R. The effects of dissipation, the mixed state in the cycle and the results of other working substances are also discussed at the end of this analysis.

  20. Galaxy Formation in Sterile Neutrino Dark Matter Models

    NASA Astrophysics Data System (ADS)

    Menci, N.; Grazian, A.; Lamastra, A.; Calura, F.; Castellano, M.; Santini, P.

    2018-02-01

    We investigate galaxy formation in models with dark matter (DM) constituted by sterile neutrinos. Given their large parameter space, defined by the combinations of sterile neutrino mass m_ν and mixing parameter sin²(2θ) with active neutrinos, we focus on models with m_ν = 7 keV, consistent with the tentative 3.5 keV line detected in several X-ray spectra of clusters and galaxies. We consider (1) two resonant production models with sin²(2θ) = 5 × 10⁻¹¹ and sin²(2θ) = 2 × 10⁻¹⁰, to cover the range of mixing parameters consistent with the 3.5 keV line; (2) two scalar-decay models, representative of the two possible cases characterizing such a scenario: a freeze-in and a freeze-out case. We also consider thermal warm DM with particle mass m_X = 3 keV. Using a semianalytic model, we compare the predictions for the different DM scenarios with a wide set of observables. We find that comparing the predicted evolution of the stellar mass function, the abundance of satellites of Milky Way–like galaxies, and the global star formation history of galaxies with observations does not allow us to disentangle the effects of the baryonic physics from those related to the different DM models. On the other hand, the distribution of the stellar-to-halo mass ratios, the abundance of faint galaxies in the UV luminosity function at z ≳ 6, and the specific star formation and age distribution of local, low-mass galaxies constitute potential probes for the DM scenarios considered. We discuss how future observations with upcoming facilities will enable us to rule out or to strongly support DM models based on sterile neutrinos.

  1. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    NASA Astrophysics Data System (ADS)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.

  2. Heterogeneity of neuroblastoma cell identity defined by transcriptional circuitries.

    PubMed

    Boeva, Valentina; Louis-Brennetot, Caroline; Peltier, Agathe; Durand, Simon; Pierre-Eugène, Cécile; Raynal, Virginie; Etchevers, Heather C; Thomas, Sophie; Lermine, Alban; Daudigeos-Dubus, Estelle; Geoerger, Birgit; Orth, Martin F; Grünewald, Thomas G P; Diaz, Elise; Ducos, Bertrand; Surdez, Didier; Carcaboso, Angel M; Medvedeva, Irina; Deller, Thomas; Combaret, Valérie; Lapouble, Eve; Pierron, Gaelle; Grossetête-Lalami, Sandrine; Baulande, Sylvain; Schleiermacher, Gudrun; Barillot, Emmanuel; Rohrer, Hermann; Delattre, Olivier; Janoueix-Lerosey, Isabelle

    2017-09-01

    Neuroblastoma is a tumor of the peripheral sympathetic nervous system, derived from multipotent neural crest cells (NCCs). To define core regulatory circuitries (CRCs) controlling the gene expression program of neuroblastoma, we established and analyzed the neuroblastoma super-enhancer landscape. We discovered three types of identity in neuroblastoma cell lines: a sympathetic noradrenergic identity, defined by a CRC module including the PHOX2B, HAND2 and GATA3 transcription factors (TFs); an NCC-like identity, driven by a CRC module containing AP-1 TFs; and a mixed type, further deconvoluted at the single-cell level. Treatment of the mixed type with chemotherapeutic agents resulted in enrichment of NCC-like cells. The noradrenergic module was validated by ChIP-seq. Functional studies demonstrated dependency of neuroblastoma with noradrenergic identity on PHOX2B, evocative of lineage addiction. Most neuroblastoma primary tumors express TFs from the noradrenergic and NCC-like modules. Our data demonstrate a previously unknown aspect of tumor heterogeneity relevant for neuroblastoma treatment strategies.

  3. Impact of off-diagonal cross-shell interaction on ¹⁴C

    NASA Astrophysics Data System (ADS)

    Yuan, Cen-Xi

    2017-10-01

    A shell-model investigation is performed to show the impact on the structure of ¹⁴C from the off-diagonal cross-shell interaction, 〈pp|V|sdsd〉, which represents the mixing between the 0ħω and 2ħω configurations in the psd model space. The observed levels of the positive states in ¹⁴C can be nicely described in a 0-4ħω or larger model space through the well defined Hamiltonians, YSOX and WBP, with a reduction of the strength of the 〈pp|V|sdsd〉 interaction in the latter. The observed B(GT) values for ¹⁴C can be generally described by YSOX, while WBP and their modifications of the 〈pp|V|sdsd〉 interaction fail for some values. Further investigation shows the effect of such interactions on the configuration mixing and occupancy. The present work shows examples of how the off-diagonal cross-shell interaction strongly drives the nuclear structure. Supported by National Natural Science Foundation of China (11305272), Special Program for Applied Research on Super Computation of the NSFC Guangdong Joint Fund (the second phase), the Guangdong Natural Science Foundation (2014A030313217), the Pearl River S&T Nova Program of Guangzhou (201506010060), the Tip-top Scientific and Technical Innovative Youth Talents of Guangdong special support program (2016TQ03N575), and the Fundamental Research Funds for the Central Universities (17lgzd34)

  4. Operational limit conditions of the spur gears in lubricated modes

    NASA Astrophysics Data System (ADS)

    Benilha, S.; Belarifi, F.

    2018-01-01

    The calculation of gear tooth resistance relies on a number of coefficients that are determined experimentally and accepted by the various international standards. However, this kind of calculation oversizes the gears and does not take the tribological operating parameters into account. In this work we propose to take these parameters into account in order to determine the limit operating conditions of spur gears, using an equivalent geometry: two cylinders that geometrically model the contact between two gear teeth, whose lubrication is generally in the mixed lubrication regime. The McCool approach is used to determine the distribution of the load and the friction force, which are split between the liquid (elastohydrodynamic) and solid domains and interact with each other. The interaction between the two domains is used to predict the tribological limit conditions of operation. The proposed model is based on the resolution of the elastohydrodynamic equations for the determination of load and friction, as well as the deduction of mixed friction by tracing the Stribeck curve, which is calculated with a model that decomposes the roughness profiles of the surfaces in contact. The results of the non-dimensional calculations allow us to deduce the boundary conditions and can be adapted to any gear pair defined according to pre-established operating conditions.

  5. The mutation-drift balance in spatially structured populations.

    PubMed

    Schneider, David M; Martins, Ayana B; de Aguiar, Marcus A M

    2016-08-07

    In finite populations the action of neutral mutations is balanced by genetic drift, leading to a stationary distribution of alleles that displays a transition between two different behaviors. For small mutation rates most individuals will carry the same allele at equilibrium, whereas for high mutation rates the alleles will be randomly distributed with frequencies close to one half for a biallelic gene. For well-mixed haploid populations the mutation threshold is μc = 1/(2N), where N is the population size. In this paper we study how spatial structure affects this mutation threshold. Specifically, we study the stationary allele distribution for populations placed on regular networks where connected nodes represent potential mating partners. We show that the mutation threshold is sensitive to spatial structure only if the number of potential mates is very small. In this limit, the mutation threshold decreases substantially, increasing the diversity of the population at considerably lower mutation rates. Defining kc as the degree of the network for which the mutation threshold drops to half of its value in well-mixed populations, we show that kc grows slowly as a function of the population size, following a power law. Our calculations and simulations are based on the Moran model and on a mapping between the Moran model with mutations and the voter model with opinion makers. Copyright © 2016 Elsevier Ltd. All rights reserved.
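
    A minimal simulation sketch of the well-mixed biallelic Moran model with mutation is given below; it can be used to see how the stationary allele-frequency distribution changes around the threshold μc = 1/(2N). The parameter values and run lengths are illustrative only.

      # Hedged sketch: biallelic Moran model with symmetric mutation in a
      # well-mixed haploid population; mu is set at the threshold 1/(2N).
      import numpy as np

      rng = np.random.default_rng(2)
      N = 100
      mu = 1.0 / (2 * N)       # try mu / 10 (fixation-like) or 10 * mu (diverse) as well
      k = N // 2               # current number of individuals carrying allele A

      freqs = []
      for step in range(200_000):
          offspring_is_A = rng.random() < k / N      # parent chosen proportionally to frequency
          if rng.random() < mu:                      # mutation may flip the offspring's allele
              offspring_is_A = not offspring_is_A
          dying_is_A = rng.random() < k / N          # individual chosen uniformly to die
          k += int(offspring_is_A) - int(dying_is_A)
          if step > 50_000:                          # discard burn-in
              freqs.append(k / N)

      hist, _ = np.histogram(freqs, bins=10, range=(0, 1), density=True)
      print(np.round(hist, 2))  # U-shaped well below mu_c, peaked near 1/2 well above it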

  6. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  7. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, having attained a mature stage in their development, and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.

  8. Assessing stratospheric transport in the CMAM30 simulations using ACE-FTS measurements

    NASA Astrophysics Data System (ADS)

    Kolonjari, Felicia; Plummer, David A.; Walker, Kaley A.; Boone, Chris D.; Elkins, James W.; Hegglin, Michaela I.; Manney, Gloria L.; Moore, Fred L.; Pendlebury, Diane; Ray, Eric A.; Rosenlof, Karen H.; Stiller, Gabriele P.

    2018-05-01

    Stratospheric transport in global circulation models and chemistry-climate models is an important component in simulating the recovery of the ozone layer as well as changes in the climate system. The Brewer-Dobson circulation is not well constrained by observations and further investigation is required to resolve uncertainties related to the mechanisms driving the circulation. This study has assessed the specified dynamics mode of the Canadian Middle Atmosphere Model (CMAM30) by comparing to the Atmospheric Chemistry Experiment Fourier transform spectrometer (ACE-FTS) profile measurements of CFC-11 (CCl3F), CFC-12 (CCl2F2), and N2O. In the CMAM30 specified dynamics simulation, the meteorological fields are nudged using the ERA-Interim reanalysis and a specified tracer was employed for each species, with hemispherically defined surface measurements used as the boundary condition. A comprehensive sampling technique along the line of sight of the ACE-FTS measurements has been utilized to allow for direct comparisons between the simulated and measured tracer concentrations. The model consistently overpredicts tracer concentrations of CFC-11, CFC-12, and N2O in the lower stratosphere, particularly in the northern hemispheric winter and spring seasons. The three mixing barriers investigated, including the polar vortex, the extratropical tropopause, and the tropical pipe, show that there are significant inconsistencies between the measurements and the simulations. In particular, the CMAM30 simulation underpredicts mixing efficiency in the tropical lower stratosphere during the June-July-August season.

  9. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.

  10. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
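
    A brief sketch of approach (c) above, a random-intercept linear mixed model fitted to longitudinal measurements, is shown below. The file and column names are hypothetical, and statsmodels' MixedLM with a simple per-subject grouping is used only for illustration; it does not reproduce the kinship-based GRAMMAR decorrelation used in the article.

      # Hedged sketch: random-intercept linear mixed model for repeated
      # blood-pressure measurements; variable names are hypothetical.
      import pandas as pd
      import statsmodels.api as sm

      df = pd.read_csv("gaw18_long.csv")    # long format: one row per subject visit

      model = sm.MixedLM.from_formula(
          "dbp ~ snp + age + sex + visit",  # fixed effects: genotype dosage plus covariates
          groups="subject_id",              # random intercept per individual
          data=df,
      )
      print(model.fit().summary())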

  11. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    PubMed

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and WWTP effluent follow a log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the WWTP emission limits calculated would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
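
    The core of the stochastic approach above is a Monte Carlo evaluation of the mass-balance mixing equation downstream of the outfall. The sketch below illustrates this with placeholder log-normal parameters, not the study's catchment data.

      # Hedged sketch: Monte Carlo mixing equation for total phosphorus downstream
      # of a WWTP outfall. All distribution parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000

      def lognormal(mean, cv, size):
          """Sample a log-normal variable parameterised by its mean and coefficient of variation."""
          sigma2 = np.log(1.0 + cv**2)
          return rng.lognormal(np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2), size)

      q_river = lognormal(mean=0.40, cv=0.80, size=n)  # creek discharge, m3/s
      c_river = lognormal(mean=0.10, cv=0.42, size=n)  # upstream P_tot, mg/l
      q_eff   = lognormal(mean=0.05, cv=0.30, size=n)  # effluent discharge, m3/s
      c_eff   = lognormal(mean=1.00, cv=0.50, size=n)  # candidate effluent P_tot, mg/l

      c_mix = (q_river * c_river + q_eff * c_eff) / (q_river + q_eff)
      print(f"90th percentile of downstream P_tot = {np.quantile(c_mix, 0.9):.3f} mg/l")
      # An emission limit would be the largest effluent mean for which this
      # percentile stays below the EQS (e.g. C90 = 0.2 mg/l), found by iteration.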

  12. MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation

    EPA Science Inventory

    Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...

  13. A non-modal analytical method to predict turbulent properties applied to the Hasegawa-Wakatani model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, B., E-mail: friedman11@llnl.gov; Lawrence Livermore National Laboratory, Livermore, California 94550; Carter, T. A.

    2015-01-15

    Linear eigenmode analysis often fails to describe turbulence in model systems that have non-normal linear operators and thus nonorthogonal eigenmodes, which can cause fluctuations to transiently grow faster than expected from eigenmode analysis. When combined with energetically conservative nonlinear mode mixing, transient growth can lead to sustained turbulence even in the absence of eigenmode instability. Since linear operators ultimately provide the turbulent fluctuations with energy, it is useful to define a growth rate that takes into account non-modal effects, allowing for prediction of energy injection, transport levels, and possibly even turbulent onset in the subcritical regime. We define such a non-modal growth rate using a relatively simple model of the statistical effect that the nonlinearities have on cross-phases and amplitude ratios of the system state variables. In particular, we model the nonlinearities as delta-function-like, periodic forces that randomize the state variables once every eddy turnover time. Furthermore, we estimate the eddy turnover time to be the inverse of the least stable eigenmode frequency or growth rate, which allows for prediction without nonlinear numerical simulation. We test this procedure on the 2D and 3D Hasegawa-Wakatani model [A. Hasegawa and M. Wakatani, Phys. Rev. Lett. 50, 682 (1983)] and find that the non-modal growth rate is a good predictor of energy injection rates, especially in the strongly non-normal, fully developed turbulence regime.
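
    As a complement to the discussion of non-normal operators above, the sketch below computes the optimal transient energy amplification G(t) = ||exp(At)||² for a toy linear system whose eigenvalues are all damped. The 2×2 matrix is a textbook-style illustration, not the Hasegawa-Wakatani operator.

      # Hedged sketch: transient growth of a stable but non-normal system dx/dt = A x.
      import numpy as np
      from scipy.linalg import expm

      A = np.array([[-0.1, 5.0],
                    [ 0.0, -0.2]])   # toy non-normal operator (eigenvalues -0.1 and -0.2)

      for t in (0.0, 1.0, 3.0, 10.0, 30.0):
          G = np.linalg.norm(expm(A * t), 2) ** 2   # optimal energy amplification at time t
          print(f"t = {t:5.1f}   G(t) = {G:8.2f}")  # G transiently exceeds 1 despite stability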

  14. A non-modal analytical method to predict turbulent properties applied to the Hasegawa-Wakatani model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, B.; Carter, T. A.

    2015-01-15

    Linear eigenmode analysis often fails to describe turbulence in model systems that have non-normal linear operators and thus nonorthogonal eigenmodes, which can cause fluctuations to transiently grow faster than expected from eigenmode analysis. When combined with energetically conservative nonlinear mode mixing, transient growth can lead to sustained turbulence even in the absence of eigenmode instability. Since linear operators ultimately provide the turbulent fluctuations with energy, it is useful to define a growth rate that takes into account non-modal effects, allowing for prediction of energy injection, transport levels, and possibly even turbulent onset in the subcritical regime. Here, we define such a non-modal growth rate using a relatively simple model of the statistical effect that the nonlinearities have on cross-phases and amplitude ratios of the system state variables. In particular, we model the nonlinearities as delta-function-like, periodic forces that randomize the state variables once every eddy turnover time. Furthermore, we estimate the eddy turnover time to be the inverse of the least stable eigenmode frequency or growth rate, which allows for prediction without nonlinear numerical simulation. Also, we test this procedure on the 2D and 3D Hasegawa-Wakatani model [A. Hasegawa and M. Wakatani, Phys. Rev. Lett. 50, 682 (1983)] and find that the non-modal growth rate is a good predictor of energy injection rates, especially in the strongly non-normal, fully developed turbulence regime.

  15. Data-adaptive harmonic spectra and multilayer Stuart-Landau models

    NASA Astrophysics Data System (ADS)

    Chekroun, Mickaël D.; Kondrashov, Dmitri

    2017-09-01

    Harmonic decompositions of multivariate time series are considered for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes, as far as they are concerned, exhibit a data-adaptive character manifested in their phase which allows us in turn to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, only coupled at different frequencies by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which can be efficiently modeled—provided the decay of temporal correlations is sufficiently well-resolved—within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by a space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.

  16. Application of mixing-controlled combustion models to gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1990-01-01

    Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.
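
    The mixing-controlled assumption above limits the mean reaction rate by the turbulent mixing rate rather than by chemistry. As a representative example of such closures, a minimal eddy-dissipation-style rate expression (Magnussen-Hjertager form, with illustrative constants; not necessarily the exact model used in the study) is sketched below.

      # Hedged sketch: eddy-dissipation (mixing-limited) mean reaction rate for a
      # fuel + s * oxidizer reaction. Constants and state values are illustrative.
      def eddy_dissipation_rate(rho, k, eps, y_fuel, y_ox, s, A=4.0):
          """Mean fuel consumption rate [kg/(m^3 s)] limited by turbulent mixing.

          rho     -- mean density [kg/m^3]
          k, eps  -- turbulence kinetic energy [m^2/s^2] and its dissipation rate [m^2/s^3]
          y_fuel, y_ox -- mean fuel and oxidizer mass fractions
          s       -- stoichiometric oxidizer-to-fuel mass ratio
          """
          return A * rho * (eps / k) * min(y_fuel, y_ox / s)

      # Example: propane/air-like state with illustrative turbulence quantities.
      print(eddy_dissipation_rate(rho=1.0, k=50.0, eps=5000.0, y_fuel=0.02, y_ox=0.15, s=3.6))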

  17. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.

  18. MIXING STUDY FOR JT-71/72 TANKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.

    2013-11-26

    All modeling calculations for the mixing operations of miscible fluids contained in HB-Line tanks, JT-71/72, were performed by taking a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against the literature results and the previous SRNL test results to validate the model. Final performance calculations were performed by using the validated model to quantify the mixing time for the HB-Line tanks. The mixing study results for the JT-71/72 tanks show that, for the cases modeled, the mixing time required for blending of the tank contents is no more than 35 minutes, which is well below the 2.5 hours of recirculation pump operation. Therefore, the results demonstrate that 2.5 hours of mixing by one recirculation pump is adequate to achieve well-mixed tank contents.

  19. Nonreactive mixing study of a scramjet swept-strut fuel injector

    NASA Technical Reports Server (NTRS)

    Mcclinton, C. R.; Torrence, M. G.; Gooderum, P. B.; Young, I. G.

    1975-01-01

    The results are presented of a cold-mixing investigation performed to supply combustor design information and to determine optimum normal fuel-injector configurations for a general scramjet swept-strut fuel injector. The experimental investigation was made with two swept struts in a closed duct at a Mach number of 4.4 and a nominal ratio of jet mass flow to air mass flow of 0.0295, with helium used to simulate hydrogen fuel. Four injector patterns were evaluated; they represented the range of hole spacing and the ratio of jet dynamic pressure to free-stream dynamic pressure. Helium concentration, pitot pressure, and static pressure in the downstream mixing region were measured to generate the contour plots needed to define the mixing-region flow field and the mixing parameters. Experimental results show that the fuel penetration from the struts was less than the predicted values based on flat-plate data; but the mixing rate was faster and produced a mixing length less than one-half that predicted.

  20. NO and NOy in the upper troposphere: Nine years of CARIBIC measurements onboard a passenger aircraft

    NASA Astrophysics Data System (ADS)

    Stratmann, G.; Ziereis, H.; Stock, P.; Brenninkmeijer, C. A. M.; Zahn, A.; Rauthe-Schöch, A.; Velthoven, P. V.; Schlager, H.; Volz-Thomas, A.

    2016-05-01

    Nitrogen oxide (NO and NOy) measurements were performed onboard an in-service aircraft within the framework of CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container). A total of 330 flights were completed from May 2005 through April 2013 between Frankfurt/Germany and destination airports in Canada, the USA, Brazil, Venezuela, Chile, Argentina, Colombia, South Africa, China, South Korea, Japan, India, Thailand, and the Philippines. Different regions show differing NO and NOy mixing ratios. In the mid-latitudes, observed NOy and NO generally shows clear seasonal cycles in the upper troposphere with a maximum in summer and a minimum in winter. Mean NOy mixing ratios vary between 1.36 nmol/mol in summer and 0.27 nmol/mol in winter. Mean NO mixing ratios range between 0.05 nmol/mol and 0.22 nmol/mol. Regions south of 40°N show no consistent seasonal dependence. Based on CO observations, low, median and high CO air masses were defined. According to this classification, more data was obtained in high CO air masses in the regions south of 40°N compared to the midlatitudes. This indicates that boundary layer emissions are more important in these regions. In general, NOy mixing ratios are highest when measured in high CO air masses. This dataset is one of the most comprehensive NO and NOy dataset available today for the upper troposphere and is therefore highly suitable for the validation of atmosphere-chemistry-models.

  1. Structure of multiphoton quantum optics. I. Canonical formalism and homodyne squeezed states

    NASA Astrophysics Data System (ADS)

    dell'Anno, Fabio; de Siena, Silvio; Illuminati, Fabrizio

    2004-03-01

    We introduce a formalism of nonlinear canonical transformations for general systems of multiphoton quantum optics. For single-mode systems the transformations depend on a tunable free parameter, the homodyne local-oscillator angle; for n-mode systems they depend on n heterodyne mixing angles. The canonical formalism realizes nontrivial mixing of pairs of conjugate quadratures of the electromagnetic field in terms of homodyne variables for single-mode systems, and in terms of heterodyne variables for multimode systems. In the first instance the transformations yield nonquadratic model Hamiltonians of degenerate multiphoton processes and define a class of non-Gaussian, nonclassical multiphoton states that exhibit properties of coherence and squeezing. We show that such homodyne multiphoton squeezed states are generated by unitary operators with a nonlinear time evolution that realizes the homodyne mixing of a pair of conjugate quadratures. Tuning of the local-oscillator angle allows us to vary at will the statistical properties of such states. We discuss the relevance of the formalism for the study of degenerate (up-)down-conversion processes. In a companion paper [F. Dell’Anno, S. De Siena, and F. Illuminati, 69, 033813 (2004)], we provide the extension of the nonlinear canonical formalism to multimode systems, we introduce the associated heterodyne multiphoton squeezed states, and we discuss their possible experimental realization.

  2. Structure of multiphoton quantum optics. I. Canonical formalism and homodyne squeezed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio

    2004-03-01

    We introduce a formalism of nonlinear canonical transformations for general systems of multiphoton quantum optics. For single-mode systems the transformations depend on a tunable free parameter, the homodyne local-oscillator angle; for n-mode systems they depend on n heterodyne mixing angles. The canonical formalism realizes nontrivial mixing of pairs of conjugate quadratures of the electromagnetic field in terms of homodyne variables for single-mode systems, and in terms of heterodyne variables for multimode systems. In the first instance the transformations yield nonquadratic model Hamiltonians of degenerate multiphoton processes and define a class of non-Gaussian, nonclassical multiphoton states that exhibit properties of coherence and squeezing. We show that such homodyne multiphoton squeezed states are generated by unitary operators with a nonlinear time evolution that realizes the homodyne mixing of a pair of conjugate quadratures. Tuning of the local-oscillator angle allows us to vary at will the statistical properties of such states. We discuss the relevance of the formalism for the study of degenerate (up-)down-conversion processes. In a companion paper [F. Dell'Anno, S. De Siena, and F. Illuminati, 69, 033813 (2004)], we provide the extension of the nonlinear canonical formalism to multimode systems, we introduce the associated heterodyne multiphoton squeezed states, and we discuss their possible experimental realization.

  3. Development of a wet vapor homogeneous liquid metal MHD power system

    NASA Astrophysics Data System (ADS)

    1989-04-01

    During the period covered by this report (October 1988 to March 1989), the following work was done: the mixing stream condensation process was analyzed, and a theoretical model for simulating this process was modified; a parametric study is being conducted at the present time; the separation processes were analyzed; and the experimental system was specified, with its design currently at an advanced stage. For the parameters defined in the SOW of this project, the mixing stream condensation process was found to be mist-flow direct-contact condensation, where the hot gas mixture consisting of inert gas and vapor is the continuous phase, and the subcooled liquid droplets on which the vapor condenses are the dispersed phase. Two possibilities for creating the mist flow were considered. The first is to inject the cold Liquid Metal (LM) into the Mixing Streams Condenser (MSC) entrance as a jet and to break it into LM fragments, and the fragments into droplets, by a momentum-transfer breakup mechanism. The second is to atomize the cooled LM stream into small droplets (approximately 100 micrometers in diameter) and accelerate them with the gas. The second possibility was preferred due to its much higher heat and mass transfer surface area and coefficients relative to the first one.

  4. Modeling Space Radiation with Radiomimetic Agent Bleomycin

    NASA Technical Reports Server (NTRS)

    Lu, Tao

    2017-01-01

    Space radiation consists of protons and helium from solar particle events (SPE) and high-energy heavy ions from galactic cosmic rays (GCR). This mixture of radiation with particles at different energy levels has different effects on biological systems. Currently, the majority of studies of radiation effects on humans have been based on single-source radiation, due to the limitations of available methods for modeling the effects of space radiation on living organisms. While the NASA Space Radiation Laboratory is working on advanced switches to make mixed-field radiation with particles of different energies possible, the radiation source will be limited. Development of an easily available experimental model for studying effects of mixed-field radiation could greatly speed up progress in our understanding of the molecular mechanisms of damage and responses from exposure to space radiation, and facilitate the discovery of protection and countermeasures against space radiation, which is critical for the mission to Mars. Bleomycin, a radiomimetic agent, has been widely used to study radiation-induced DNA damage and cellular responses. Previously, bleomycin was often compared to low Linear Energy Transfer (LET) gamma radiation without defined characteristics. Our recent work demonstrated that bleomycin could induce complex clustered DNA damage in human fibroblasts that is similar to DNA damage induced by high-LET radiation. This type of DNA damage is difficult to repair and can be visualized by gamma-H2AX staining weeks after the initial insult. The survival ratio between early and late plating of human fibroblasts after bleomycin treatment falls between that of low-LET and high-LET radiation. Our results suggest that bleomycin induces DNA damage and other cellular stresses resembling those resulting from mixed-field radiation with both low- and high-LET particles. We hypothesize that bleomycin could be used to mimic space radiation in biological systems. Potential advantages and limitations of using bleomycin to treat biological specimens as an easily available model to study effects of space radiation on biological systems and to develop countermeasures for space-radiation-associated risks will be discussed.

  5. Theoretical Relationships between Luminescence and Hillslope Soil Vertical Diffusivity: a Numerical Modeling Approach

    NASA Astrophysics Data System (ADS)

    Gray, H. J.; Tucker, G. E.; Mahan, S.

    2017-12-01

    Luminescence is a property of matter that can be used to obtain depositional ages from fine sand. Luminescence accumulates due to exposure to background ionizing radiation and is removed by sunlight exposure in a process known as bleaching. There is evidence to suggest that luminescence can also serve as a sediment tracer in fluvial and hillslope environments. For hillslope environments, it has been suggested that the magnitude of luminescence as a function of soil depth is related to the strength of soil mixing. Hillslope soils with a greater extent of mixing will have previously surficial sand grains moved to greater depths in a soil column. These previously surface-exposed grains will contain a lower luminescence than those which have never seen the surface. To connect luminescence profiles with the soil mixing rate, here defined as the soil vertical diffusivity, I conduct numerical modelling of particles in hillslope soils coupled with equations describing the physics of luminescence. I use recently published equations describing the trajectories of particles under both exponential and uniform soil velocity profiles and modify them to include soil diffusivity. Results from the model demonstrate a strong connection between soil diffusivity and luminescence. Both the depth profiles of luminescence and the total percentage of surface-exposed grains will change drastically based on the magnitude of the diffusivity. This suggests that luminescence could potentially be used to infer the magnitude of soil diffusivity. However, I test other variables such as the soil production rate, e-folding length of soil velocity, background dose rate, and soil thickness, and I find these other variables can also affect the relationship between luminescence and diffusivity. This suggests that these other variables may need to be constrained prior to any inferences of soil diffusivity from luminescence measurements. Further field testing of the model in areas where the soil vertical diffusivity and other parameters are independently known will provide a test of this potential new method.
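
    A minimal sketch of the kind of particle model described above follows: grains random-walk vertically with a prescribed diffusivity, accumulate luminescence in proportion to burial time, and are bleached whenever they enter a thin surface zone. The parameter values and the linear dose-growth assumption are simplifications for illustration.

      # Hedged sketch: vertical random walk of soil grains with surface bleaching.
      import numpy as np

      rng = np.random.default_rng(4)
      n, depth = 5000, 1.0           # number of grains, soil thickness [m]
      D = 1e-4                       # vertical diffusivity [m^2/yr] (illustrative)
      dt, n_steps = 1.0, 20_000      # time step [yr] and number of steps
      bleach_depth = 0.01            # grains shallower than this [m] are bleached

      z = rng.uniform(0.0, depth, n)  # grain depths
      lum = np.zeros(n)               # luminescence in arbitrary dose units

      for _ in range(n_steps):
          z += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
          z = np.abs(z)                     # reflect at the soil surface
          z = depth - np.abs(depth - z)     # reflect at the soil base
          lum += dt                         # dose grows linearly with burial time
          lum[z < bleach_depth] = 0.0       # sunlight resets near-surface grains

      # Depth profile of mean luminescence: stronger mixing (larger D) carries
      # recently bleached grains deeper, lowering the profile at depth.
      bins = np.linspace(0.0, depth, 11)
      profile = [lum[(z >= a) & (z < b)].mean() for a, b in zip(bins[:-1], bins[1:])]
      print(np.round(profile, 0))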

  6. New Particle Formation in the Mid-Latitude Upper Troposphere

    NASA Astrophysics Data System (ADS)

    Axisa, Duncan

    Primary aerosol production due to new particle formation (NPF) in the upper troposphere and the impact that this might have on cloud condensation nuclei (CCN) concentration can be of sufficient magnitude to contribute to the uncertainty in radiative forcing. This uncertainty affects our ability to estimate how sensitive the climate is to greenhouse gas emissions. Therefore, new particle formation must be accurately defined, parametrized and accounted for in models. This research involved the deployment of instruments, data analysis and interpretation of particle formation events during the Mid-latitude Airborne Cirrus Properties Experiment (MACPEX) campaign. The approach combined field measurements and observations with extensive data analysis and modeling to study the process of new particle formation and growth to CCN active sizes. Simultaneous measurements of O3, CO, ultrafine aerosol particles and surface area from a high-altitude research aircraft were used to study tropospheric-stratospheric mixing as well as the frequency and location of NPF. It was found that the upper troposphere was an active region in the production of new particles by gas-to-particle conversion, that nucleation was triggered by convective clouds and mixing processes, and that NPF occurred in regions with high relative humidity and low surface area. In certain cases, mesoscale and synoptic features enhanced mixing and facilitated the formation of new particles in the northern mid-latitudes. A modeling study of particle growth and CCN formation was done based on measured aerosol size distributions and modeled growth. The results indicate that when SO2 is of sufficient concentration, NPF is a significant source of potential CCN in the upper troposphere. In conditions where convective cloud outflows eject high concentrations of SO2, a large number of new particles can form, especially when the preexisting surface area is low. The fast growth of nucleated clusters produces a particle mode that becomes CCN active within 24 hours.

  7. Parametrization of turbulence models using 3DVAR data assimilation in laboratory conditions

    NASA Astrophysics Data System (ADS)

    Olbert, A. I.; Nash, S.; Ragnoli, E.; Hartnett, M.

    2013-12-01

    In this research the 3DVAR data assimilation scheme is implemented in the numerical model DIVAST in order to optimize the performance of the numerical model by selecting an appropriate turbulence scheme and tuning its parameters. Two turbulence closure schemes, the Prandtl mixing length model and the two-equation k-ɛ model, were incorporated into DIVAST and examined with respect to their universality of application, complexity of solutions, computational efficiency and numerical stability. A square harbour with one symmetrical entrance subject to tide-induced flows was selected to investigate the structure of turbulent flows. The experimental part of the research was conducted in a tidal basin. A significant advantage of such a laboratory experiment is the fully controlled environment, in which the domain setup and forcing are user-defined. The research shows that the Prandtl mixing length model and the two-equation k-ɛ model, with default parameterization predefined according to literature recommendations, overestimate eddy viscosity, which in turn results in a significant underestimation of velocity magnitudes in the harbour. The data assimilation of the model-predicted velocity and laboratory observations significantly improves model predictions for both turbulence models by adjusting modelled flows in the harbour to match de-errored observations. Such analysis gives an optimal solution based on which numerical model parameters can be estimated. The process of turbulence model optimization by reparameterization and tuning towards an optimal state led to new constants that may potentially be applied to complex turbulent flows, such as rapidly developing flows or recirculating flows. This research further demonstrates how 3DVAR can be utilized to identify and quantify shortcomings of the numerical model and consequently to improve forecasting by correct parameterization of the turbulence models. Such improvements may greatly benefit physical oceanography, in terms of understanding and monitoring of coastal systems, and the engineering sector, through applications in coastal structure design, marine renewable energy and pollutant transport.

  8. [Concordance between a head circumference growth function and intellectual disability in relation with the cause of microcephaly].

    PubMed

    Coronado, R; Macaya Ruíz, A; Giraldo Arjonilla, J; Roig-Quilis, M

    2015-08-01

    Our aim was to investigate the correlations between patterns of head growth and intellectual disability among distinct aetiological presentations of microcephaly. 3,269 head circumference (HC) charts of patients from a tertiary neuropediatric unit were reviewed and 136 microcephalic participants selected. Using the Z-scores of registered HC measurements we defined the variables HC Minimum, HC Drop and HC Catch-up. We classified patients according to the presence or absence of intellectual disability (IQ below 71) and according to the cause of microcephaly (idiopathic, familial, syndromic, symptomatic and mixed). Using Discriminant Analysis, a C-function was defined as C = HC Minimum + HC Drop, with a cut-off level of C = -4.32 Z-score. In our sample, 95% of patients scoring below this level (severe microcephaly) were classified in the disabled group, while the overall concordance was 66%. In the symptomatic-mixed group the concordance between the HC function and outcome reached 82%, in contrast to only 54% in the idiopathic-syndromic group (P-value = 0.0002). We defined an HC growth function which discriminates intellectual disability in microcephalic patients better than isolated HC measurements, especially for those with secondary and mixed aetiologies. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
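    A minimal sketch of applying the reported discriminant rule, C = HC Minimum + HC Drop with a cut-off of -4.32 Z-score; the patient values below are illustrative, not study data.

      def classify_severe_microcephaly(hc_minimum_z, hc_drop_z, cutoff=-4.32):
          """Apply the reported head-circumference growth function.

          C = HC Minimum + HC Drop (both in Z-scores); values below the cut-off
          flag severe microcephaly, which in the study sample was associated
          with intellectual disability in roughly 95% of cases.
          """
          c = hc_minimum_z + hc_drop_z
          return c, c < cutoff

      # illustrative patient: minimum HC of -3.1 Z, additional drop of -1.8 Z
      c_value, severe = classify_severe_microcephaly(-3.1, -1.8)
      print(c_value, severe)   # -4.9, True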

  9. RFQ (radio-frequency quadrupole) accelerator tuning system

    DOEpatents

    Bolie, V.W.

    1988-04-12

    A cooling system is provided for maintaining a preselected operating temperature in a device, which may be an RFQ accelerator, having a variable heat removal requirement, by circulating a cooling fluid through a cooling system remote from the device. Internal sensors in the device enable an estimated error signal to be generated from parameters which are indicative of the heat removal requirement of the device. Sensors are provided at predetermined locations in the cooling system for outputting operational temperature signals. Analog and digital computers define a control signal functionally related to the temperature signals and the estimated error signal, where the control signal is defined effective to return the device to the preselected operating temperature in a stable manner. The cooling system includes a first heat sink responsive to a first portion of the control signal to remove heat from a major portion of the circulating fluid. A second heat sink is responsive to a second portion of the control signal to remove heat from a minor portion of the circulating fluid. The cooled major and minor portions of the circulating fluid are mixed in response to a mixing portion of the control signal, which is effective to proportion the major and minor portions of the circulating fluid to establish a mixed fluid temperature which defines the preselected operating temperature for the remote device. 3 figs., 2 tabs.

  10. Who mixes with whom among men who have sex with men? Implications for modelling the HIV epidemic in southern India

    PubMed Central

    Mitchell, K.M.; Foss, A.M.; Prudden, H.J.; Mukandavire, Z.; Pickles, M.; Williams, J.R.; Johnson, H.C.; Ramesh, B.M.; Washington, R.; Isac, S.; Rajaram, S.; Phillips, A.E.; Bradley, J.; Alary, M.; Moses, S.; Lowndes, C.M.; Watts, C.H.; Boily, M.-C.; Vickerman, P.

    2014-01-01

    In India, the identity of men who have sex with men (MSM) is closely related to the role taken in anal sex (insertive, receptive or both), but little is known about sexual mixing between identity groups. Both role segregation (taking only the insertive or receptive role) and the extent of assortative (within-group) mixing are known to affect HIV epidemic size in other settings and populations. This study explores how different possible mixing scenarios, consistent with behavioural data collected in Bangalore, south India, affect both the HIV epidemic, and the impact of a targeted intervention. Deterministic models describing HIV transmission between three MSM identity groups (mostly insertive Panthis/Bisexuals, mostly receptive Kothis/Hijras and versatile Double Deckers), were parameterised with behavioural data from Bangalore. We extended previous models of MSM role segregation to allow each of the identity groups to have both insertive and receptive acts, in differing ratios, in line with field data. The models were used to explore four different mixing scenarios ranging from assortative (maximising within-group mixing) to disassortative (minimising within-group mixing). A simple model was used to obtain insights into the relationship between the degree of within-group mixing, R0 and equilibrium HIV prevalence under different mixing scenarios. A more complex, extended version of the model was used to compare the predicted HIV prevalence trends and impact of an HIV intervention when fitted to data from Bangalore. With the simple model, mixing scenarios with increased amounts of assortative (within-group) mixing tended to give rise to a higher R0 and increased the likelihood that an epidemic would occur. When the complex model was fit to HIV prevalence data, large differences in the level of assortative mixing were seen between the fits identified using different mixing scenarios, but little difference was projected in future HIV prevalence trends. An oral pre-exposure prophylaxis (PrEP) intervention was modelled, targeted at the different identity groups. For intervention strategies targeting the receptive or receptive and versatile MSM together, the overall impact was very similar for different mixing patterns. However, for PrEP scenarios targeting insertive or versatile MSM alone, the overall impact varied considerably for different mixing scenarios; more impact was achieved with greater levels of disassortative mixing. PMID:24727187
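    The qualitative link between assortativity and R0 reported above can be illustrated with a simple next-generation-matrix calculation for three activity groups. This is a hedged sketch, not the authors' fitted model: the group sizes, partner change rates, transmission probability and the preferred-mixing form of the matrix are all illustrative assumptions.

      import numpy as np

      groups = ["Panthi/Bisexual", "Kothi/Hijra", "Double Decker"]
      n = np.array([0.35, 0.25, 0.40])   # relative group sizes (illustrative)
      c = np.array([8.0, 20.0, 12.0])    # partner change rates per year (illustrative)
      beta = 0.01                        # per-partnership transmission probability (illustrative)
      duration = 5.0                     # mean duration of infectiousness (years)

      def mixing_matrix(eps):
          """eps = 1 fully assortative (within-group only), eps = 0 proportionate."""
          prop = (n * c) / np.sum(n * c)                      # proportionate mixing weights
          return eps * np.eye(len(n)) + (1.0 - eps) * np.tile(prop, (len(n), 1))

      def r0(eps):
          M = mixing_matrix(eps)
          # K[i, j]: expected infections in group i caused by one infective in group j
          K = beta * duration * c[None, :] * M.T
          return float(max(abs(np.linalg.eigvals(K))))

      for eps in (0.0, 0.5, 1.0):
          print(f"assortativity {eps:.1f}: R0 = {r0(eps):.2f}")   # R0 rises with assortativity here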

  11. Controlling Actinide Hydration in Mixed Solvent Systems: Towards Tunable Solvent Systems to Close the Fuel Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Sue B.

    2016-10-31

    The goal of this project has been to define the extent of hydration of the f-elements and other cations in mixed solvent electrolyte systems. Methanol-water and other mixed solvent systems have been studied, where the solvent dielectric constant was varied systematically. Thermodynamic and spectroscopic studies provide details concerning the energetics of complexation and other reactions of these cations. This information has also been used to advance new understanding of the behavior of these cations in a variety of systems, ranging from environmental studies to chromatographic approaches and ionization processes for mass spectrometry.

  12. Principles of biorefineries.

    PubMed

    Kamm, B; Kamm, M

    2004-04-01

    Sustainable economic growth requires safe, sustainable resources for industrial production. For the future re-arrangement of a substantial economy to biological raw materials, completely new approaches in research and development, production and economy are necessary. Biorefineries combine the necessary technologies between biological raw materials and industrial intermediates and final products. The principal goal in the development of biorefineries is defined by the following: (biomass) feedstock-mix + process-mix --> product-mix. Here, particularly the combination between biotechnological and chemical conversion of substances will play an important role. Currently the "whole-crop biorefinery", "green biorefinery" and "lignocellulose-feedstock biorefinery" systems are favored in research and development.

  13. Texture Mixing via Universal Simulation

    DTIC Science & Technology

    2005-08-01

    classes and universal simulation. Based on the well-known Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one...length that produce the same tree (dictionary) under the Lempel-Ziv (LZ) incremental parsing defined in the well-known LZ78 universal compression ...the well-known Lempel-Ziv parsing algorithm. The goal is not just to synthesize mixed textures, but to understand what texture is. We are currently

  14. Identifying geochemical processes using End Member Mixing Analysis to decouple chemical components for mixing ratio calculations

    NASA Astrophysics Data System (ADS)

    Pelizardi, Flavia; Bea, Sergio A.; Carrera, Jesús; Vives, Luis

    2017-07-01

    Mixing calculations (i.e., the calculation of the proportions in which end-members are mixed in a sample) are essential for hydrological research and water management. However, they typically require the use of conservative species, a condition that may be difficult to meet due to chemical reactions. Mixing calculations also require identifying end-member waters, which is usually achieved through End Member Mixing Analysis (EMMA). We present a methodology to help in the identification of both end-members and such reactions, so as to improve mixing ratio calculations. The proposed approach consists of: (1) identifying the potential chemical reactions with the help of EMMA; (2) defining decoupled conservative chemical components consistent with those reactions; (3) repeating EMMA with the decoupled (i.e., conservative) components, so as to identify end-member waters; and (4) computing mixing ratios using the new set of components and end-members. The approach is illustrated by application to two synthetic mixing examples involving mineral dissolution and cation exchange reactions. Results confirm that the methodology can be successfully used to identify geochemical processes affecting the mixtures, thus improving the accuracy of mixing ratio calculations and relaxing the need for conservative species.
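    Step (4) of the workflow, computing mixing ratios from decoupled conservative components, reduces to a small constrained least-squares problem. The sketch below is illustrative only; the end-member compositions, the sample values and the weighted sum-to-one trick are assumptions, not taken from the paper.

      import numpy as np

      # columns: end-member waters; rows: decoupled conservative components
      # (illustrative compositions, e.g. in meq/L)
      E = np.array([[5.0, 0.5, 2.0],
                    [1.0, 4.0, 0.5],
                    [0.2, 0.1, 3.0]])

      sample = np.array([2.6, 1.8, 1.1])   # a mixed-water sample (illustrative)

      # least squares for mixing fractions f with the constraint sum(f) = 1,
      # appended as an extra, heavily weighted equation
      w = 1e3
      A = np.vstack([E, w * np.ones((1, E.shape[1]))])
      b = np.concatenate([sample, [w]])
      f, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("mixing ratios:", np.round(f, 3), "sum:", round(float(f.sum()), 3))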

  15. Anisotropic transverse mixing and its effect on reaction rates in multi-scale, 3D heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Engdahl, N. B.

    2016-12-01

    Mixing rates in porous media have been a heavily researched topic in recent years, covering analytic, random, and structured fields. However, there are some persistent assumptions and common features in these models that raise questions about the generality of the results. One of these commonalities is the orientation of the flow field with respect to the heterogeneity structure; the two are almost always defined to be parallel to each other if there is an elongated axis of permeability correlation. Given the vastly different tortuosities for flow parallel to bedding and flow transverse to bedding, this assumption of parallel orientation may have significant effects on reaction rates when natural flows deviate from this assumed setting. This study investigates the role of orientation in mixing and reaction rates in multi-scale, 3D heterogeneous porous media with varying degrees of anisotropy in the correlation structure. Ten realizations of a small flow field, with three anisotropy levels, were simulated for flow parallel and transverse to bedding. Transport was simulated in each model with an advective-diffusive random walk and reactions were simulated using the chemical Langevin equation. The reaction system is a vertically segregated, transverse mixing problem between two mobile reactants. The results show that different transport behaviors and reaction rates are obtained by simply rotating the direction of flow relative to bedding, even when the net flux in both directions is the same. This behavior was observed for three different weightings of the initial condition: 1) uniform, 2) flux-based, and 3) travel-time based. The different schemes resulted in 20-50% more mass formation in the transverse direction than in the longitudinal direction. The greatest variability in mass was observed for the flux weights, and this variability was proportionate to the level of anisotropy. The implications of this study are that flux or travel-time weights do not provide any guarantee of a fair comparison in this kind of mixing scenario and that the role of directional tendencies on reaction rates can be significant. Further, it may be necessary to include anisotropy in future upscaled models to create robust methods that give representative reaction rates for any flow direction relative to geologic bedding.

  16. Distinct Contributions of Ice Nucleation, Large-Scale Environment, and Shallow Cumulus Detrainment to Cloud Phase Partitioning With NCAR CAM5

    DOE PAGES

    Wang, Yong; Zhang, Damao; Liu, Xiaohong; ...

    2018-01-06

    Mixed-phase clouds containing both liquid droplets and ice particles occur frequently at high latitudes and in the midlatitude storm track regions. Simulations of the cloud phase partitioning between liquid and ice hydrometeors in state-of-the-art global climate models are still associated with large biases. For this study, the phase partitioning in terms of liquid mass phase ratio (MPR liq, defined as the ratio of liquid mass to total condensed water mass) simulated from the NCAR Community Atmosphere Model version 5 (CAM5) is evaluated against the observational data from A-Train satellite remote sensors. Modeled MPR liq is significantly lower than observations on the global scale, especially in the Southern Hemisphere (e.g., Southern Ocean and the Antarctic). Sensitivity tests with CAM5 are conducted to investigate the distinct contributions of heterogeneous ice nucleation, shallow cumulus detrainment, and large-scale environment (e.g., winds, temperature, and water vapor) to the low MPR liq biases. Our results show that an aerosol-aware ice nucleation parameterization increases the MPR liq especially at temperatures colder than -20°C and significantly improves the model agreements with observations in the Polar regions in summer. The decrease of threshold temperature over which all detrained cloud water is liquid from 268 to 253 K enhances the MPR liq and improves the MPR liq mostly over the Southern Ocean. By constraining water vapor in CAM5 toward reanalysis, modeled low biases in many geographical regions are largely reduced through a significant decrease of cloud ice mass mixing ratio.

  17. Distinct Contributions of Ice Nucleation, Large-Scale Environment, and Shallow Cumulus Detrainment to Cloud Phase Partitioning With NCAR CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yong; Zhang, Damao; Liu, Xiaohong

    Mixed-phase clouds containing both liquid droplets and ice particles occur frequently at high latitudes and in the midlatitude storm track regions. Simulations of the cloud phase partitioning between liquid and ice hydrometeors in state-of-the-art global climate models are still associated with large biases. For this study, the phase partitioning in terms of liquid mass phase ratio (MPR liq, defined as the ratio of liquid mass to total condensed water mass) simulated from the NCAR Community Atmosphere Model version 5 (CAM5) is evaluated against the observational data from A-Train satellite remote sensors. Modeled MPR liq is significantly lower than observations on the global scale, especially in the Southern Hemisphere (e.g., Southern Ocean and the Antarctic). Sensitivity tests with CAM5 are conducted to investigate the distinct contributions of heterogeneous ice nucleation, shallow cumulus detrainment, and large-scale environment (e.g., winds, temperature, and water vapor) to the low MPR liq biases. Our results show that an aerosol-aware ice nucleation parameterization increases the MPR liq especially at temperatures colder than -20°C and significantly improves the model agreements with observations in the Polar regions in summer. The decrease of threshold temperature over which all detrained cloud water is liquid from 268 to 253 K enhances the MPR liq and improves the MPR liq mostly over the Southern Ocean. By constraining water vapor in CAM5 toward reanalysis, modeled low biases in many geographical regions are largely reduced through a significant decrease of cloud ice mass mixing ratio.

  18. Net community production at Ocean Station Papa observed with nitrate and oxygen sensors on profiling floats

    NASA Astrophysics Data System (ADS)

    Plant, Joshua N.; Johnson, Kenneth S.; Sakamoto, Carole M.; Jannasch, Hans W.; Coletti, Luke J.; Riser, Stephen C.; Swift, Dana D.

    2016-06-01

    Six profiling floats equipped with nitrate and oxygen sensors were deployed at Ocean Station P in the Gulf of Alaska. The resulting six calendar years and 10 float years of nitrate and oxygen data were used to determine an average annual cycle for net community production (NCP) in the top 35 m of the water column. NCP became positive in February as soon as the mixing activity in the surface layer began to weaken, but nearly 3 months before the traditionally defined mixed layer began to shoal from its wintertime maximum. NCP displayed two maxima, one toward the end of May and another in August, with a summertime minimum in June corresponding to the historical peak in mesozooplankton biomass. The average annual NCP was determined to be 1.5 ± 0.6 mol C m-2 yr-1 using nitrate and 1.5 ± 0.7 mol C m-2 yr-1 using oxygen. The results from the oxygen data proved to be quite sensitive to the gas exchange model used as well as to the accuracy of the oxygen measurement. Gas exchange models optimized for carbon dioxide flux generally ignore transport due to gas exchange through the injection of bubbles, and these models yield NCP values that are two to three times higher than the nitrate-based estimates. If nitrate and oxygen NCP rates are assumed to be related by the Redfield model, we show that the oxygen gas exchange model can be optimized by tuning the exchange terms to reproduce the nitrate NCP annual cycle.
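    The nitrate-based NCP estimate is essentially a drawdown of the upper-ocean nitrate inventory converted to carbon with a Redfield C:N ratio. A minimal sketch with invented inventory values (not float data):

      import numpy as np

      # illustrative upper-35 m nitrate inventory (mmol N m^-2) over a productive season
      days = np.array([0, 30, 60, 90, 120, 150])
      no3_inventory = np.array([420.0, 400.0, 360.0, 330.0, 310.0, 300.0])

      redfield_c_to_n = 106.0 / 16.0                       # mol C per mol N

      # NCP over the period = total nitrate drawdown converted to carbon
      drawdown = no3_inventory[0] - no3_inventory[-1]      # mmol N m^-2
      ncp_c = drawdown * redfield_c_to_n / 1000.0          # mol C m^-2 over the period
      print(f"NCP over {days[-1]} days: {ncp_c:.2f} mol C m^-2")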

  19. Distinct Contributions of Ice Nucleation, Large-Scale Environment, and Shallow Cumulus Detrainment to Cloud Phase Partitioning With NCAR CAM5

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Zhang, Damao; Liu, Xiaohong; Wang, Zhien

    2018-01-01

    Mixed-phase clouds containing both liquid droplets and ice particles occur frequently at high latitudes and in the midlatitude storm track regions. Simulations of the cloud phase partitioning between liquid and ice hydrometeors in state-of-the-art global climate models are still associated with large biases. In this study, the phase partitioning in terms of liquid mass phase ratio (MPRliq, defined as the ratio of liquid mass to total condensed water mass) simulated from the NCAR Community Atmosphere Model version 5 (CAM5) is evaluated against the observational data from A-Train satellite remote sensors. Modeled MPRliq is significantly lower than observations on the global scale, especially in the Southern Hemisphere (e.g., Southern Ocean and the Antarctic). Sensitivity tests with CAM5 are conducted to investigate the distinct contributions of heterogeneous ice nucleation, shallow cumulus detrainment, and large-scale environment (e.g., winds, temperature, and water vapor) to the low MPRliq biases. Our results show that an aerosol-aware ice nucleation parameterization increases the MPRliq especially at temperatures colder than -20°C and significantly improves the model agreements with observations in the Polar regions in summer. The decrease of threshold temperature over which all detrained cloud water is liquid from 268 to 253 K enhances the MPRliq and improves the MPRliq mostly over the Southern Ocean. By constraining water vapor in CAM5 toward reanalysis, modeled low biases in many geographical regions are largely reduced through a significant decrease of cloud ice mass mixing ratio.

  20. Declines in Outpatient Antimicrobial Use in Canada (1995–2010)

    PubMed Central

    Finley, Rita; Glass-Kaastra, Shiona K.; Hutchinson, Jim; Patrick, David M.; Weiss, Karl; Conly, John

    2013-01-01

    Background With rising reports of antimicrobial resistance in outpatient communities, surveillance of antimicrobial use is imperative for supporting stewardship programs. The primary objective of this article is to assess the levels of antimicrobial use in Canada over time. Methods Canadian antimicrobial use data from 1995 to 2010 were acquired and assessed by four metrics: population-adjusted prescriptions, Defined Daily Doses, spending on antimicrobials (inflation-adjusted), and average Defined Daily Doses per prescription. Linear mixed models were built to assess significant differences among years and antimicrobial groups, and to account for repeated measurements over time. Measures were also compared to published reports from European countries. Results Temporal trends in antimicrobial use in Canada vary by metric and antimicrobial grouping. Overall reductions were seen for inflation-adjusted spending, population-adjusted prescription rates and Defined Daily Doses, and increases were observed for the average number of Defined Daily Doses per prescription. The population-adjusted prescription and Defined Daily Doses values for 2009 were comparable to those reported by many European countries, while the average Defined Daily Dose per prescription for Canada ranked high. A significant reduction in the use of broad spectrum penicillins occurred between 1995 and 2004, coupled with increases in macrolide and quinolone use, suggesting that replacement of antimicrobial drugs may occur as new products arrive on the market. Conclusions There have been modest decreases of antimicrobial use in Canada over the past 15 years. However, continued surveillance of antimicrobial use coupled with data detailing antimicrobial resistance within bacterial pathogens affecting human populations is critical for targeting interventions and maintaining the effectiveness of these products for future generations. PMID:24146863
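    A linear mixed model of the kind described, a fixed temporal trend with a random intercept per antimicrobial class to handle repeated measurements, can be sketched with statsmodels as below. The column names and toy data are illustrative; the actual analysis used many more classes, years and metrics, so this is only a structural sketch.

      import pandas as pd
      import statsmodels.formula.api as smf

      # illustrative long-format data: one row per antimicrobial class and year
      df = pd.DataFrame({
          "year":         [1995, 2000, 2005, 2010] * 3,
          "drug_class":   ["penicillins"] * 4 + ["macrolides"] * 4 + ["quinolones"] * 4,
          "ddd_per_1000": [10.2, 9.1, 8.4, 7.9, 2.1, 2.8, 3.3, 3.6, 1.0, 1.6, 2.0, 2.2],
      })

      # random intercept for antimicrobial class; fixed linear effect of year
      model = smf.mixedlm("ddd_per_1000 ~ year", df, groups=df["drug_class"])
      result = model.fit()
      print(result.summary())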

  1. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. The role of Compensatory Health Beliefs in eating behavior change: A mixed method study.

    PubMed

    Amrein, Melanie A; Rackow, Pamela; Inauen, Jennifer; Radtke, Theda; Scholz, Urte

    2017-09-01

    Compensatory Health Beliefs (CHBs), defined as beliefs that an unhealthy behavior can be compensated for by engaging in another healthy behavior, are assumed to hinder health behavior change. The aim of the present study was to investigate the role of CHBs for two distinct eating behaviors (increased fruit and vegetable consumption and eating fewer unhealthy snacks) with a mixed method approach. Participants (N = 232, mean age = 27.3 years, 76.3% women) were randomly assigned to a fruit and vegetable or an unhealthy snack condition. For the quantitative approach, path models were fitted to analyze the role of CHBs within a social-cognitive theory of health behavior change, the Health Action Process Approach (HAPA). With a content analysis, the qualitative approach investigated the occurrence of CHBs in smartphone chat groups when pursuing an eating goal. Both analyses were conducted for each eating behavior separately. Path models showed that CHBs added predictive value for intention, but not behavior over and above HAPA variables only in the unhealthy snack condition. CHBs were significantly negatively associated with intention and action planning. Content analysis revealed that people generated only a few CHB messages. However, CHBs were more likely to be present and were also more diverse in the unhealthy snack condition compared to the fruit and vegetable condition. Based on a mixed method approach, this study suggests that CHBs play a more important role for eating unhealthy snacks than for fruit and vegetable consumption. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Injury profiles related to mortality in patients with a low Injury Severity Score: a case-mix issue?

    PubMed

    Joosse, Pieter; Schep, Niels W L; Goslings, J Carel

    2012-07-01

    Outcome prediction models are widely used to evaluate trauma care. External benchmarking provides individual institutions with a tool to compare survival with a reference dataset. However, these models do have limitations. In this study, the hypothesis was tested whether specific injuries are associated with increased mortality and whether differences in the case-mix of these injuries influence outcome comparison. A retrospective study was conducted in a Dutch trauma region. Injury profiles, based on the injuries most frequently observed among unexpected deaths, were determined. The association between these injury profiles and mortality was studied in patients with a low Injury Severity Score by logistic regression. The standardized survival of our population (Ws statistic) was compared with North-American and British reference databases, with and without patients suffering from the previously defined injury profiles. In total, 14,811 patients were included. Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic injuries were significantly associated with mortality, corrected for age, sex, and physiologic derangement, in patients with a low injury severity. Odds ratios ranged from 2.42 to 2.92. The Ws statistic for comparison with North-American databases significantly improved after exclusion of patients with these injuries. The Ws statistic for comparison with a British reference database remained unchanged. Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic wall injuries are associated with increased mortality. Comparative outcome analysis of a population against a reference database that differs in case-mix with respect to these injuries should be interpreted cautiously. Prognostic study, level II.

  4. Spatial organization of a model 15-member human gut microbiota established in gnotobiotic mice

    PubMed Central

    Mark Welch, Jessica L.; Hasegawa, Yuko; McNulty, Nathan P.; Gordon, Jeffrey I.; Borisy, Gary G.

    2017-01-01

    Knowledge of the spatial organization of the gut microbiota is important for understanding the physical and molecular interactions among its members. These interactions are thought to influence microbial succession, community stability, syntrophic relationships, and resiliency in the face of perturbations. The complexity and dynamism of the gut microbiota pose considerable challenges for quantitative analysis of its spatial organization. Here, we illustrate an approach for addressing this challenge, using (i) a model, defined 15-member consortium of phylogenetically diverse, sequenced human gut bacterial strains introduced into adult gnotobiotic mice fed a polysaccharide-rich diet, and (ii) in situ hybridization and spectral imaging analysis methods that allow simultaneous detection of multiple bacterial strains at multiple spatial scales. Differences in the binding affinities of strains for substrates such as mucus or food particles, combined with more rapid replication in a preferred microhabitat, could, in principle, lead to localized clonally expanded aggregates composed of one or a few taxa. However, our results reveal a colonic community that is mixed at micrometer scales, with distinct spatial distributions of some taxa relative to one another, notably at the border between the mucosa and the lumen. Our data suggest that lumen and mucosa in the proximal colon should be conceptualized not as stratified compartments but as components of an incompletely mixed bioreactor. Employing the experimental approaches described should allow direct tests of whether and how specified host and microbial factors influence the nature and functional contributions of “microscale” mixing to the dynamic operations of the microbiota in health and disease. PMID:29073107

  5. Approach for estimating the dynamic physical thresholds of phytoplankton production and biomass in the tropical-subtropical Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo

    2017-06-01

    Threshold is defined as the point where small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contribution of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d-1 versus ADT < 60 cm and EkP > 4 cm d-1). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.

  6. Fatigue damage prognosis using affine arithmetic

    NASA Astrophysics Data System (ADS)

    Gbaguidi, Audrey; Kim, Daewon

    2014-02-01

    Among the essential steps to be taken in structural health monitoring systems, damage prognosis is the field that is least investigated, due to the complexity of the uncertainties involved. This paper presents the possibility of using Affine Arithmetic for uncertainty propagation of crack damage in damage prognosis. The structures examined are thin rectangular titanium-alloy plates with central mode I cracks, and a composite plate with an internal delamination caused by mixed mode I and II fracture, under a harmonic uniaxial loading condition. The model-based method for crack growth rates uses the Paris-Erdogan law for the isotropic plates and the delamination growth law proposed by Kardomateas for the composite plate. The parameters of both models are randomly taken and their uncertainties are considered to be defined by an interval instead of a probability distribution. A Monte Carlo method is also applied to check whether Affine Arithmetic (AA) leads to tight bounds on the lifetime of the structure.
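    The flavour of such a calculation can be sketched by propagating interval-valued Paris-Erdogan parameters through a crack-growth integration. The sketch uses plain interval bounds via corner evaluation rather than full affine arithmetic, and every numeric value is an illustrative assumption.

      import numpy as np

      # Paris-Erdogan law: da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a)
      Y, dS = 1.12, 100.0          # geometry factor, stress range (MPa), illustrative
      a0, a_crit = 1e-3, 20e-3     # initial and critical crack lengths (m), illustrative

      # interval-valued (uncertain) material parameters, illustrative bounds
      C_int = (1.0e-11, 2.0e-11)
      m_int = (2.8, 3.2)

      def cycles_to_failure(C, m, n_steps=2000):
          """Integrate the Paris-Erdogan law from a0 to a_crit for fixed C, m."""
          a_grid = np.linspace(a0, a_crit, n_steps)
          dK = Y * dS * np.sqrt(np.pi * a_grid)
          dN_da = 1.0 / (C * dK**m)
          return np.trapz(dN_da, a_grid)

      # da/dN increases monotonically in both C and m here (dK > 1 MPa*sqrt(m)),
      # so the life bounds come from the parameter-interval corners
      N_max = cycles_to_failure(C_int[0], m_int[0])
      N_min = cycles_to_failure(C_int[1], m_int[1])
      print(f"cycles to failure in [{N_min:.3e}, {N_max:.3e}]")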

  7. Modelling disease outbreaks in realistic urban social networks

    NASA Astrophysics Data System (ADS)

    Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan

    2004-05-01

    Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.

  8. Population modelling to describe pharmacokinetics of amiodarone in rats: relevance of plasma protein and tissue depot binding.

    PubMed

    Campos Moreno, Eduardo; Merino Sanjuán, Matilde; Merino, Virginia; Nácher, Amparo; Martín Algarra, Rafael V; Casabó, Vicente G

    2007-02-01

    The objective of this paper was to characterize the disposition phase of amiodarone (AM) in rats after different high doses and modalities of i.v. administration. Three fitting programs, WINNONLIN, ADAPT II and NONMEM, were employed. The two-stage fitting methods led to different results, none of which can adequately explain amiodarone's behaviour, even though a great amount of data per subject is available. The non-linear mixed effect modelling approach allows satisfactory estimation of population pharmacokinetic parameters and their respective variability. The best model to define the AM pharmacokinetic profile is a two-compartment model, with saturable and dynamic plasma protein binding and linear dynamic binding to tissue depots. These results indicate that peripheral tissues act as depots, causing an important fall in AM plasma levels in the first moments after dosing. Later, the return of the drug from these depots causes a slow increase in serum concentration whenever the dose is reduced.

  9. Identification of usual interstitial pneumonia pattern using RNA-Seq and machine learning: challenges and solutions.

    PubMed

    Choi, Yoonha; Liu, Tiffany Ting; Pankratz, Daniel G; Colby, Thomas V; Barth, Neil M; Lynch, David A; Walsh, P Sean; Raghu, Ganesh; Kennedy, Giulia C; Huang, Jing

    2018-05-09

    We developed a classifier using RNA sequencing data that identifies the usual interstitial pneumonia (UIP) pattern for the diagnosis of idiopathic pulmonary fibrosis. We addressed significant challenges, including limited sample size, biological and technical sample heterogeneity, and reagent and assay batch effects. We identified inter- and intra-patient heterogeneity, particularly within the non-UIP group. The models classified UIP on transbronchial biopsy samples with a receiver-operating characteristic area under the curve of ~ 0.9 in cross-validation. Using in silico mixed samples in training, we prospectively defined a decision boundary to optimize specificity at ≥85%. The penalized logistic regression model showed greater reproducibility across technical replicates and was chosen as the final model. The final model showed sensitivity of 70% and specificity of 88% in the test set. We demonstrated that the suggested methodologies appropriately addressed challenges of the sample size, disease heterogeneity and technical batch effects and developed a highly accurate and robust classifier leveraging RNA sequencing for the classification of UIP.
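    The general recipe, a penalized logistic regression trained with in-silico mixed samples and a decision threshold chosen to hit a specificity target, can be sketched with scikit-learn on synthetic data. Nothing below corresponds to the actual genomic classifier; the feature construction, mixing scheme and thresholds are assumptions for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # synthetic "expression" data: 200 samples x 50 genes, binary UIP-like label
      X = rng.normal(size=(200, 50))
      y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=200) > 0).astype(int)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

      # in-silico mixtures: average random pairs of training samples to mimic heterogeneity
      idx = rng.integers(0, len(X_tr), size=(100, 2))
      X_aug = np.vstack([X_tr, X_tr[idx].mean(axis=1)])
      y_aug = np.concatenate([y_tr, (y_tr[idx].mean(axis=1) >= 0.5).astype(int)])

      clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1000).fit(X_aug, y_aug)

      # choose the decision threshold that reaches >= 85% specificity on held-out data
      scores = clf.predict_proba(X_te)[:, 1]
      thresholds = np.linspace(0.0, 1.0, 101)
      spec = np.array([(scores[y_te == 0] < t).mean() for t in thresholds])
      i = int(np.argmax(spec >= 0.85))
      sens = (scores[y_te == 1] >= thresholds[i]).mean()
      print(f"threshold {thresholds[i]:.2f}: specificity {spec[i]:.2f}, sensitivity {sens:.2f}")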

  10. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)

    NASA Astrophysics Data System (ADS)

    Halliwell, George R.

    Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.

  11. On the coalescence-dispersion modeling of turbulent molecular mixing

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Kosaly, George

    1987-01-01

    The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and of Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain with various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite-rate chemistry calculations are compared to further study a recent result of Hsieh and O'Brien, who found that higher concentration moments are not sensitive to chemistry.

  12. Optical space weathering on Vesta: Radiative-transfer models and Dawn observations

    NASA Astrophysics Data System (ADS)

    Blewett, David T.; Denevi, Brett W.; Le Corre, Lucille; Reddy, Vishnu; Schröder, Stefan E.; Pieters, Carle M.; Tosi, Federico; Zambon, Francesca; De Sanctis, Maria Cristina; Ammannito, Eleonora; Roatsch, Thomas; Raymond, Carol A.; Russell, Christopher T.

    2016-02-01

    Exposure to ion and micrometeoroid bombardment in the space environment causes physical and chemical changes in the surface of an airless planetary body. These changes, called space weathering, can strongly influence a surface's optical characteristics, and hence complicate interpretation of composition from reflectance spectroscopy. Prior work using data from the Dawn spacecraft (Pieters, C.M. et al. [2012]. Nature 491, 79-82) found that accumulation of nanophase metallic iron (npFe0), which is a key space-weathering product on the Moon, does not appear to be important on Vesta, and instead regolith evolution is dominated by mixing with carbonaceous chondrite (CC) material delivered by impacts. In order to gain further insight into the nature of space weathering on Vesta, we constructed model reflectance spectra using Hapke's radiative-transfer theory and used them as an aid to understanding multispectral observations obtained by Dawn's Framing Cameras (FC). The model spectra, for a howardite mineral assemblage, include both the effects of npFe0 and that of a mixed CC component. We found that a plot of the 438-nm/555-nm ratio vs. the 555-nm reflectance for the model spectra helps to separate the effects of lunar-style space weathering (LSSW) from those of CC-mixing. We then constructed ratio-reflectance pixel scatterplots using FC images for four areas of contrasting composition: a eucritic area at Vibidia crater, a diogenitic area near Antonia crater, olivine-bearing material within Bellicia crater, and a light mantle unit (referred to as an "orange patch" in some previous studies, based on steep spectral slope in the visible) northeast of Oppia crater. In these four cases the observed spectral trends are those expected from CC-mixing, with no evidence for weathering dominated by production of npFe0. In order to survey a wider range of surfaces, we also defined a spectral parameter that is a function of the change in 438-nm/555-nm ratio and the 555-nm reflectance between fresh and mature surfaces, permitting the spectral change to be classified as LSSW-like or CC-mixing-like. When applied to 21 fresh and mature FC spectral pairs, it was found that none have changes consistent with LSSW. We discuss Vesta's lack of LSSW in relation to the possible agents of space weathering, the effects of physical and compositional differences among asteroid surfaces, and the possible role of magnetic shielding from the solar wind.

  13. Reliability and Validity in Hospital Case-Mix Measurement

    PubMed Central

    Pettengill, Julian; Vertrees, James

    1982-01-01

    There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogeneous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
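    At its core, a case-mix index of this type is a case-weighted average of relative cost weights over the patient categories treated by a hospital. A minimal sketch with invented DRG weights and case counts (not Medicare data):

      # illustrative DRG relative cost weights and one hospital's annual case counts
      drg_weights = {"DRG_127": 1.02, "DRG_089": 1.18, "DRG_209": 2.31, "DRG_014": 1.45}
      hospital_cases = {"DRG_127": 310, "DRG_089": 250, "DRG_209": 120, "DRG_014": 90}

      def case_mix_index(cases, weights):
          """Case-weighted mean of relative cost weights."""
          total = sum(cases.values())
          return sum(weights[d] * n for d, n in cases.items()) / total

      cmi = case_mix_index(hospital_cases, drg_weights)
      # > 1 implies a costlier-than-average mix, if the weights are nationally normalized
      print(f"hospital case-mix index: {cmi:.3f}")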

  14. Pervious concrete mix optimization for sustainable pavement solution

    NASA Astrophysics Data System (ADS)

    Barišić, Ivana; Galić, Mario; Netinger Grubeša, Ivanka

    2017-10-01

    In order to fulfill the requirements of sustainable road construction, new materials for pavement construction are investigated with the main goal of preserving natural resources and achieving energy savings. One such sustainable pavement material is pervious concrete, a new solution for low-volume pavements. To accommodate the required strength and porosity, the latter being the measure of drainage capability, four pervious concrete mixtures are investigated and results of laboratory tests of compressive strength, flexural strength and porosity are presented. To define the optimal pervious concrete mixture in view of aggregate and financial savings, an optimization model is utilized and optimal mixtures are defined according to the required strength and porosity characteristics. The laboratory results showed that, comparing single-sized aggregate pervious concrete mixtures, the coarse aggregate mixture results in increased porosity but reduced strengths. The optimal share of coarse aggregate turned out to be 40.21%, with a fine aggregate share of 49.79%, achieving the required compressive strength of 25 MPa, flexural strength of 4.31 MPa and porosity of 21.66%.
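    The type of mix optimization described, choosing the coarse/fine aggregate balance that satisfies strength and porosity requirements at least cost, can be sketched with an assumed linear blend between two single-sized mixes. The endpoint properties, costs and the linear-blending assumption are illustrative and are not the paper's data; here x is the coarse fraction of the aggregate blend, not of the total mix.

      import numpy as np

      # assumed properties of the two single-sized mixes (illustrative values)
      fine_only   = {"fc": 30.0, "ff": 5.0, "por": 18.0, "cost": 52.0}   # MPa, MPa, %, EUR/m3
      coarse_only = {"fc": 18.0, "ff": 3.5, "por": 28.0, "cost": 46.0}

      req = {"fc": 25.0, "ff": 4.3, "por": 21.0}   # required strengths and minimum porosity

      best = None
      for x in np.linspace(0.0, 1.0, 1001):        # x = coarse-aggregate share of the blend
          blend = {k: (1 - x) * fine_only[k] + x * coarse_only[k] for k in fine_only}
          ok = (blend["fc"] >= req["fc"] and blend["ff"] >= req["ff"]
                and blend["por"] >= req["por"])
          if ok and (best is None or blend["cost"] < best[1]["cost"]):
              best = (x, blend)

      if best:
          x, blend = best
          print(f"coarse share {x:.2%}: fc={blend['fc']:.1f} MPa, "
                f"porosity={blend['por']:.1f}%, cost={blend['cost']:.1f}")
      else:
          print("no feasible blend under the assumed linear models")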

  15. Jet Topics: Disentangling Quarks and Gluons at Colliders

    NASA Astrophysics Data System (ADS)

    Metodiev, Eric M.; Thaler, Jesse

    2018-06-01

    We introduce jet topics: a framework to identify underlying classes of jets from collider data. Because of a close mathematical relationship between distributions of observables in jets and emergent themes in sets of documents, we can apply recent techniques in "topic modeling" to extract jet topics from the data with minimal or no input from simulation or theory. As a proof of concept with parton shower samples, we apply jet topics to determine separate quark and gluon jet distributions for constituent multiplicity. We also determine separate quark and gluon rapidity spectra from a mixed Z -plus-jet sample. While jet topics are defined directly from hadron-level multidifferential cross sections, one can also predict jet topics from first-principles theoretical calculations, with potential implications for how to define quark and gluon jets beyond leading-logarithmic accuracy. These investigations suggest that jet topics will be useful for extracting underlying jet distributions and fractions in a wide range of contexts at the Large Hadron Collider.

  16. Hydrogeochemical variables regionalization--applying cluster analysis for a seasonal evolution model from an estuarine system affected by AMD.

    PubMed

    Grande, J A; Carro, B; Borrego, J; de la Torre, M L; Valente, T; Santisteban, M

    2013-04-15

    This study describes the spatial evolution of the hydrogeochemical parameters which characterise an estuary strongly affected by Acid Mine Drainage (AMD). The studied estuarine system receives AMD from the Iberian Pyrite Belt (SW Spain) and, simultaneously, is affected by the presence of an industrial chemical complex. Water sampling was performed in 2008, comprising four sampling campaigns in order to capture seasonality. The results show how the estuary can be divided into three areas of differing behaviour with respect to the hydrogeochemical variable concentrations that define each sampling station: an area dominated by tidal influence; at the opposite end, a second area comprising the points located in the headwaters of the two rivers, which are not influenced by seawater; and finally an area that can be defined as the mixing zone. These areas shift over the hydrological year due to seasonal chemical variations. Copyright © 2013 Elsevier Ltd. All rights reserved.
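    The regionalization step, grouping sampling stations by their hydrogeochemical variables, is the kind of task handled by hierarchical cluster analysis on standardized data. The sketch below uses SciPy with invented station values; the variables, numbers and the choice of three clusters are assumptions for illustration.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.stats import zscore

      # rows: sampling stations; columns: hydrogeochemical variables
      # (pH, SO4 mg/L, Fe mg/L, electrical conductivity mS/cm), illustrative values
      stations = ["E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8"]
      X = np.array([
          [7.9, 2600, 0.1, 48.0],    # tide-dominated stations
          [7.7, 2500, 0.2, 45.0],
          [6.5, 1900, 3.0, 30.0],    # mixing zone
          [5.8, 1700, 8.0, 25.0],
          [4.9, 1500, 20.0, 18.0],
          [3.1, 1400, 90.0, 4.0],    # AMD-affected river headwaters
          [2.9, 1500, 110.0, 3.5],
          [3.0, 1450, 95.0, 3.8],
      ])

      Z = linkage(zscore(X, axis=0), method="ward")
      labels = fcluster(Z, t=3, criterion="maxclust")   # cut the dendrogram into 3 zones
      for s, lab in zip(stations, labels):
          print(s, "-> zone", lab)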

  17. 24 CFR 960.401 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Population Projects § 960.401 Purpose. This subpart establishes a preference for elderly families and disabled families for admission to mixed population public housing projects, as defined in § 960.405. ...

  18. 24 CFR 960.401 - Purpose.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Population Projects § 960.401 Purpose. This subpart establishes a preference for elderly families and disabled families for admission to mixed population public housing projects, as defined in § 960.405. ...

  19. 24 CFR 960.401 - Purpose.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Population Projects § 960.401 Purpose. This subpart establishes a preference for elderly families and disabled families for admission to mixed population public housing projects, as defined in § 960.405. ...

  20. 24 CFR 960.401 - Purpose.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Population Projects § 960.401 Purpose. This subpart establishes a preference for elderly families and disabled families for admission to mixed population public housing projects, as defined in § 960.405. ...

  1. 24 CFR 960.401 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Population Projects § 960.401 Purpose. This subpart establishes a preference for elderly families and disabled families for admission to mixed population public housing projects, as defined in § 960.405. ...

  2. When the C in CP does not matter: Anatomy of order-4 CP eigenstates and their Yukawa interactions

    NASA Astrophysics Data System (ADS)

    Aranda, Alfredo; Ivanov, Igor P.; Jiménez, Enrique

    2017-03-01

    We explore the origin and Yukawa interactions of the scalars with peculiar CP properties which were recently found in a multi-Higgs model based on an order-4 CP symmetry. We relate the existence of such scalars to the enhanced freedom of defining CP, even beyond the well-known generalized CP symmetries, which arises in models with several zero-charge scalar fields. We also show that despite possessing exotic CP quantum numbers, these scalars do not have to be inert: they can have CP-conserving Yukawa interactions provided the CP acts on fermions by also mixing generations. This paper focuses on formal aspects—exposed in a pedagogical manner—and includes a brief discussion of possible phenomenological consequences.

  3. Development of Kinetics and Mathematical Models for High-Pressure Gasification of Lignite-Switchgrass Blends: Cooperative Research and Development Final Report, CRADA Number CRD-11-447

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iisa, Kristiina

    2016-04-06

    NREL will work with Participant as a subtier partner under DE-FOA-0000240 titled "Co-Production of Power, Fuels, and Chemicals via Coal/Biomass Mixtures." The goal of the project is to determine the gasification characteristics of switchgrass and lignite mixtures and develop kinetic models. NREL will utilize a pressurized thermogravimetric analyzer to measure the reactivity of chars generated in a pressurized entrained-flow reactor at Participant's facilities and to determine the evolution of gaseous species during pyrolysis of switchgrass-lignite mixtures. Mass spectrometry and Fourier-transform infrared analysis will be used to identify and quantify the gaseous species. The results of the project will aid in defining key reactive properties of mixed coal biomass fuels.

  4. Impact Ignition of Low Density Mechanically Activated and Multilayer Foil Ni/Al

    NASA Astrophysics Data System (ADS)

    Beason, Matthew; Mason, B.; Son, Steven; Groven, Lori

    2013-06-01

    Mechanical activation (MA) via milling of reactive materials provides a means of lowering the ignition threshold of shock initiated reactions. This treatment provides a finely mixed microstructure with wide variation in the resulting scales of the intraparticle microstructure that makes model validation difficult. In this work we consider nanofoils produced through vapor deposition with well defined periodicity and a similar degree of fine scale mixing. This allows experiments that may be easier to compare with computational models. To achieve this, both equimolar Ni/Al powder that has undergone MA using high energy ball milling and nanofoils milled into a powder using low energy ball milling were used. The Asay Shear impact experiment was conducted on both MA Ni/Al and Ni/Al nanofoil-based powders at low densities (<60%) to examine their impact response and reaction behavior. Scanning electron microscopy and energy-dispersive x-ray spectroscopy were used to verify the microstructure of the materials. The materials' mechanical properties were evaluated using nano-indentation. Onset temperatures were evaluated using differential thermal analysis/differential scanning calorimetry. Impact ignition thresholds, burning rates, temperature field, and ignition delays are reported. Funding from the Defense Threat Reduction Agency (DTRA) Grant Number HDTRA1-10-1-0119. Counter-WMD basic research program, Dr. Suhithi M. Peiris, program director is gratefully acknowledged.

  5. Computational fluid dynamics study of viscous fingering in supercritical fluid chromatography.

    PubMed

    Subraveti, Sai Gokul; Nikrityuk, Petr; Rajendran, Arvind

    2018-01-26

    Axi-symmetric numerical simulations are carried out to study the dynamics of a plug introduced through a mixed-stream injection in supercritical fluid chromatographic columns. The computational fluid dynamics model developed in this work takes into account both the hydrodynamics and adsorption equilibria to describe the phenomena of viscous fingering and plug effect that contribute to peak distortions in mixed-stream injections. The model was implemented into commercial computational fluid dynamics software using user-defined functions. The simulations describe the propagation of both the solute and modifier highlighting the interplay between the hydrodynamics and plug effect. The simulated peaks showed good agreement with experimental data published in the literature involving different injection volumes (5 μL, 50 μL, 1 mL and 2 mL) of flurbiprofen on Chiralpak AD-H column using a mobile phase of CO 2 and methanol. The study demonstrates that while viscous fingering is the main source of peak distortions for large-volume injections (1 mL and 2 mL) it has negligible impact on small-volume injections (5 μL and 50 μL). Band broadening in small-volume injections arise mainly due to the plug effect. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  6. Beneath the surface: Characteristics of oceanic ecosystems under weak mixing conditions A theoretical investigation

    NASA Astrophysics Data System (ADS)

    Beckmann, Aike; Hense, Inga

    2007-12-01

    This study considers an important biome in aquatic environments, the subsurface ecosystem that evolves under low mixing conditions, from a theoretical point of view. Employing a conceptual model that involves phytoplankton, a limiting nutrient and sinking detritus, we use a set of key characteristics (thickness, depth, biomass amplitude/productivity) to qualitatively and quantitatively describe subsurface biomass maximum layers (SBMLs) of phytoplankton. These SBMLs are defined by the existence of two community compensation depths in the water column, which confine the layer of net community production; their depth coincides with the upper nutricline. Analysing the results of a large ensemble of simulations with a one-dimensional numerical model, we explore the parameter dependencies to obtain fundamental steady-state relationships that connect primary production, mortality and grazing, remineralization, vertical diffusion and detrital sinking. As a main result, we find that we can distinguish between factors that determine the vertically integrated primary production and others that affect only depth and shape (thickness and biomass amplitude) of this subsurface production layer. A simple relationship is derived analytically, which can be used to estimate the steady-state primary productivity in the subsurface oligotrophic ocean. The fundamental nature of the results provides further insight into the dynamics of these “hidden” ecosystems and their role in marine nutrient cycling.

  7. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    ABSTRACT The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
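
    For orientation, a sketch of the linear mixed IOU model in one commonly used parameterization is given below; the notation is ours and is not copied from the paper, so treat it as an illustrative rendering rather than the authors' exact specification.

```latex
% Linear mixed model with an added integrated Ornstein-Uhlenbeck process W_i(t)
% (a commonly used parameterization; notation is ours, not the paper's):
\begin{align}
  Y_i(t) &= X_i(t)\boldsymbol{\beta} + Z_i(t)\mathbf{b}_i + W_i(t) + \varepsilon_i(t),
  \qquad \mathbf{b}_i \sim N(\mathbf{0}, G), \quad \varepsilon_i(t) \sim N(0, \sigma_\varepsilon^2), \\
  \operatorname{Cov}\!\left[W_i(s), W_i(t)\right]
    &= \frac{\sigma^2}{2\alpha^3}
       \left( 2\alpha \min(s,t) + e^{-\alpha s} + e^{-\alpha t} - 1 - e^{-\alpha |s-t|} \right),
\end{align}
% where alpha governs the degree of derivative tracking and sigma^2 the scale of
% the serially correlated within-subject process.
```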

  8. Modelling of upper ocean mixing by wave-induced turbulence

    NASA Astrophysics Data System (ADS)

    Ghantous, Malek; Babanin, Alexander

    2013-04-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been considerable interest in introducing this mechanism into models, and real gains have been made in terms of increased fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing models need refinement and propose an alternative model. We use two of the models to demonstrate the effect of wave-induced turbulence on the mixed layer by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.
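
    To illustrate the kind of one-dimensional experiment described above, the sketch below diffuses a stable temperature profile with a background diffusivity plus a hypothetical wave-induced eddy diffusivity that decays with depth; the magnitude and e-folding depth of the wave term are placeholders, not the parameterization used by the authors.

```python
import numpy as np

# Minimal 1-D mixed-layer experiment (illustrative only): diffuse a stable
# temperature profile using a background diffusivity plus an *assumed*
# wave-induced eddy diffusivity that decays with depth.
nz, dz, dt = 200, 1.0, 30.0                 # 200 m column, 1 m cells, 30 s time step
nsteps = int(24 * 3600 / dt)                # integrate for one day
z = dz * (np.arange(nz) + 0.5)              # cell-centre depths [m]
T = 20.0 - 0.05 * z                         # stable initial profile [deg C]
T_surf0 = T[0]

kappa = 1e-5 + 1e-2 * np.exp(-z / 10.0)     # background + assumed wave-induced diffusivity [m^2/s]

for _ in range(nsteps):                     # explicit conservative diffusion, zero-flux boundaries
    flux = np.zeros(nz + 1)
    flux[1:-1] = -0.5 * (kappa[:-1] + kappa[1:]) * np.diff(T) / dz
    T -= dt * np.diff(flux) / dz

print("surface cooling after one day: %.3f deg C" % (T_surf0 - T[0]))
```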

  9. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

    This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second is based on a three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are solved with closure models to account for the effects of turbulence.
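
    A minimal way to write the first (one-dimensional) approach, in our own notation rather than SAM's, is a 1-D energy equation carrying an added effective-mixing term:

```latex
% One possible 1-D form (our notation, not SAM's): axial advection of temperature
% plus an effective mixing coefficient k_eff lumping the contributions of flow
% circulation and turbulent mixing.
\begin{equation}
  \frac{\partial T}{\partial t} + u\,\frac{\partial T}{\partial z}
  = \frac{\partial}{\partial z}\!\left[(\kappa + k_{\mathrm{eff}})\,\frac{\partial T}{\partial z}\right],
  \qquad k_{\mathrm{eff}} = k_{\mathrm{circ}} + k_{\mathrm{turb}} .
\end{equation}
```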

  10. Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f

    NASA Astrophysics Data System (ADS)

    Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi

    2018-03-01

    We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h of various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the MS-bar scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the MS-bar renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h → 4f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and are ready for application.

  11. Experimental Assessment of the Emissions Control Potential of a Rich/Quench/Lean Combustor for High Speed Civil Transport Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Rosfjord, T. J.; Padget, F. C.; Tacina, Robert R. (Technical Monitor)

    2001-01-01

    In support of Pratt & Whitney efforts to define the Rich burn/Quick mix/Lean burn (RQL) combustor for the High Speed Civil Transport (HSCT) aircraft engine, UTRC conducted a flametube-scale study of the RQL concept. Extensive combustor testing was performed at the Supersonic Cruise (SSC) condition of an HSCT engine cycle. Data obtained from probe traverses near the exit of the mixing section confirmed that the mixing section was the critical component in controlling combustor emissions. Circular-hole configurations, which produced rapidly and highly penetrating jets, were most effective in limiting NOx. The spatial profiles of NOx and CO at the mixer exit were not directly interpretable using a simple flow model based on jet penetration, and a greater understanding of the flow and chemical processes in this section is required to optimize it. Neither the rich-combustor equivalence ratio nor its residence time was a direct contributor to the exit NOx. Based on this study, it was also concluded that: (1) While NOx formation in both the mixing section and the lean combustor contributes to the overall emission, the NOx formation in the mixing section dominates. The gas composition exiting the rich combustor can be reasonably represented by the equilibrium composition corresponding to the rich combustor operating condition. Negligible NOx exits the rich combustor. (2) At the SSC condition, the oxidation processes occurring in the mixing section consume 99 percent of the CO exiting the rich combustor. Soot formed in the rich combustor is also highly oxidized, with combustor exit SAE Smoke Number <3. (3) Mixing section configurations which demonstrated enhanced emissions control at SSC also performed better at part-power conditions. Data from mixer exit traverses reflected the expected mixing behavior for off-design jet-to-crossflow momentum-flux ratios. (4) Low-power operating conditions require that the RQL combustor operate as a lean-lean combustor to achieve low CO and high efficiency. (5) An RQL combustor can achieve the emissions goal of EINOx = 5 at the Supersonic Cruise operating condition for an HSCT engine.

  12. Experimental Assessment of the Emissions Control Potential of a Rich/Quench/Lean Combustor for High Speed Civil Transport Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Tacina, Robert R. (Technical Monitor); Rosfjord, T. J.; Padget, F. C.

    2001-01-01

    In support of Pratt & Whitney efforts to define the Rich burn/Quick mix/Lean burn (RQL) combustor for the High Speed Civil Transport (HSCT) aircraft engine, UTRC conducted a flametube-scale study of the RQL concept. Extensive combustor testing was performed at the Supersonic Cruise (SSC) condition of an HSCT engine cycle. Data obtained from probe traverses near the exit of the mixing section confirmed that the mixing section was the critical component in controlling combustor emissions. Circular-hole configurations, which produced rapidly and highly penetrating jets, were most effective in limiting NO(x). The spatial profiles of NO(x) and CO at the mixer exit were not directly interpretable using a simple flow model based on jet penetration, and a greater understanding of the flow and chemical processes in this section is required to optimize it. Neither the rich-combustor equivalence ratio nor its residence time was a direct contributor to the exit NO(x). Based on this study, it was also concluded that: (1) While NO(x) formation in both the mixing section and the lean combustor contributes to the overall emission, the NO(x) formation in the mixing section dominates. The gas composition exiting the rich combustor can be reasonably represented by the equilibrium composition corresponding to the rich combustor operating condition. Negligible NO(x) exits the rich combustor. (2) At the SSC condition, the oxidation processes occurring in the mixing section consume 99 percent of the CO exiting the rich combustor. Soot formed in the rich combustor is also highly oxidized, with combustor exit SAE Smoke Number <3. (3) Mixing section configurations which demonstrated enhanced emissions control at SSC also performed better at part-power conditions. Data from mixer exit traverses reflected the expected mixing behavior for off-design jet-to-crossflow momentum-flux ratios. (4) Low-power operating conditions require that the RQL combustor operate as a lean-lean combustor to achieve low CO and high efficiency. (5) An RQL combustor can achieve the emissions goal of EINO(x) = 5 at the Supersonic Cruise operating condition for an HSCT engine.

  13. Systemwide Reform in Districts under Pressure: The Role of Social Networks in Defining, Acquiring, Using, and Diffusing Research Evidence

    ERIC Educational Resources Information Center

    Finnigan, Kara S.; Daly, Alan J.; Che, Jing

    2013-01-01

    Purpose: The purpose of this paper is to examine the way in which low-performing schools and their district define, acquire, use, and diffuse research-based evidence. Design/methodology/approach: The mixed methods case study builds upon the prior research on research evidence and social networks, drawing on social network analyses, survey data and…

  14. Mixed material formation and erosion

    NASA Astrophysics Data System (ADS)

    Linsmeier, Ch.; Luthin, J.; Goldstraß, P.

    2001-03-01

    The formation of mixed phases on materials relevant for first-wall components of fusion devices is studied under well-defined conditions in ultra-high vacuum (UHV). This is necessary in order to determine the fundamental parameters governing the basic processes of chemical reaction, material mixing and erosion. We examined the binary systems formed between carbon and the wall materials beryllium, silicon, tungsten and titanium, carbon being both a wall material and a plasma impurity. Experiments were carried out to study the interaction of carbon, in the form of a vapor-deposited component, with clean, well-defined elemental surfaces. The chemical composition and the binding state are measured by X-ray photoelectron spectroscopy (XPS) after annealing treatments. For all materials, limited carbide formation is found at room temperature. Annealing the carbon films on the elemental substrates leads to complete carbidization of the carbon layer. The carbide layers on Be and Si are stable even at very high temperatures, whereas the carbides of Ti and W dissolve. The erosion of these two metals by sputtering is then identical to that of the pure metals, whereas for Be and Si a protective carbide layer can reduce the sputtering yields.

  15. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  16. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  17. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
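
    A minimal sketch of situation-dependent model blending is shown below; it uses synthetic data and a gradient-boosting regressor as a stand-in for whatever learner the authors actually used. The point it illustrates is that the regressor sees both the member forecasts and the atmospheric state parameters, so its corrections can vary with the weather situation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins: true irradiance, three member forecasts whose errors
# depend on the weather situation, and two "situation" features.
cloud = rng.uniform(0.0, 1.0, n)
humidity = rng.uniform(0.2, 1.0, n)
truth = 800.0 * (1.0 - 0.7 * cloud) + rng.normal(0.0, 20.0, n)
members = np.column_stack([
    truth + rng.normal(0.0, 30.0 + 120.0 * cloud, n),    # model A: poor when cloudy
    truth + rng.normal(40.0, 60.0, n),                    # model B: biased
    truth + rng.normal(0.0, 80.0 * humidity, n),          # model C: humidity-sensitive
])

X = np.column_stack([members, cloud, humidity])           # member forecasts + situation features
X_tr, X_te, y_tr, y_te = train_test_split(X, truth, test_size=0.3, random_state=0)

blend = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
blend.fit(X_tr, y_tr)

naive = X_te[:, :3].mean(axis=1)                          # equally weighted ensemble mean
for name, pred in [("equal-weight mean", naive), ("situation-aware blend", blend.predict(X_te))]:
    print(name, "MAE:", round(float(np.mean(np.abs(pred - y_te))), 1))
```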

  18. REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang

    2013-04-30

    Many science domains need to build computationally efficient and accurate representations of high-fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
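
    The sketch below is not REVEAL itself; it only illustrates the generic sample-simulate-regress-validate loop the abstract describes, using Latin hypercube sampling and a Gaussian-process regressor as arbitrary stand-ins for the sampling and regression techniques a user might mix and match, and a toy function in place of the expensive simulator.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    """Stand-in for a high-fidelity simulator (e.g., a CFD run)."""
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1]) + 0.1 * x[:, 1] ** 2

# 1) Sample the input space (Latin hypercube here; other methods could be swapped in).
sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = qmc.scale(sampler.random(n=40), l_bounds=[0.0, 0.0], u_bounds=[1.0, 2.0])
y_train = expensive_simulation(X_train)

# 2) Fit a regression-based reduced-order model.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# 3) Quantify ROM accuracy on held-out samples; iterate (add samples) if the error is too large.
X_test = qmc.scale(sampler.random(n=200), l_bounds=[0.0, 0.0], u_bounds=[1.0, 2.0])
err = np.sqrt(np.mean((gp.predict(X_test) - expensive_simulation(X_test)) ** 2))
print("ROM RMSE on held-out samples:", round(float(err), 4))
```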

  19. Conservative mixing, competitive mixing and their applications

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2010-12-01

    In many of the models applied to simulations of turbulent transport and turbulent combustion, the mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. Stochastic particles with properties and mixing can be used not only for simulating turbulent combustion, but also for modeling a large spectrum of physical phenomena. Traditional mixing, which is commonly used in the modeling of turbulent reacting flows, is conservative: the total amount of scalar is (or should be) preserved during a mixing event. It is worthwhile, however, to consider a more general mixing that does not possess these conservative properties; hence, our consideration lies beyond traditional mixing. In non-conservative mixing, the particle post-mixing average becomes biased towards one of the particles participating in mixing. The extreme form of non-conservative mixing can be called competitive mixing or competition: after a mixing event, the loser particle simply receives the properties of the winner particle. Particles with non-conservative mixing can be used to emulate various phenomena involving competition. In particular, we investigate cyclic behavior that can be attributed to complex competing systems. We show that the localness and intransitivity of competitive mixing are linked to the cyclic behavior.
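
    The distinction is easy to state in code. The sketch below is our own illustration: a linear "move toward the pair mean" rule stands in for conservative mixing, and a simple fitness comparison stands in for competitive mixing, showing one mixing event for a pair of scalar particle properties.

```python
def conservative_mix(a, b, extent=0.5):
    """Both particles move toward the pair mean; the pair total is preserved."""
    mean = 0.5 * (a + b)
    return a + extent * (mean - a), b + extent * (mean - b)

def competitive_mix(a, b, fitness):
    """Extreme non-conservative mixing: the loser copies the winner's property."""
    winner = a if fitness(a) >= fitness(b) else b
    return winner, winner

a, b = 1.0, 3.0
print(conservative_mix(a, b))                                  # (1.5, 2.5): total of 4.0 preserved
print(competitive_mix(a, b, fitness=lambda x: -abs(x - 3.0)))  # (3.0, 3.0): loser copies winner
```

    Replacing the single fitness function with an intransitive pairwise ranking (for example, cyclic rock-paper-scissors comparisons) is the kind of rule the abstract links to cyclic behaviour in complex competing systems.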

  20. A unique approach to quantifying the changing workload and case mix in laparoscopic colorectal surgery.

    PubMed

    Shah, P R; Gupta, V; Haray, P N

    2011-03-01

    Laparoscopic colorectal surgery includes a range of operations with differing technical difficulty, and traditional parameters, such as conversion and complication rates, may not be sensitive enough to assess the complexity of these procedures. This study aims to define a reproducible and reliable tool for quantifying the total workload and the complexity of the case mix. This is a review of a single surgeon's 10-year experience. The intermediate equivalent value scoring system was used to code the complexity of cases. To assess changes in the workload and case mix, the period was divided into five phases. Three hundred and forty-nine laparoscopic operations were performed, of which 264 (75.6%) were resections. The overall conversion rate was 17.8%, with progressive improvement over the phases. Complex major operations (CMO), as defined in the British United Provident Association (BUPA) schedule of procedures, accounted for 35% of the workload. In spite of similar numbers of cases in each phase, there was a steady increase in the workload score, correlating with the increasing complexity of the case mix. There was no significant difference in the conversion and complication rates between CMO and non-CMO cases. The paradoxical increase in the mean operating time with increasing experience corresponded to the progressive increase in the workload score, reflecting the increasing complexity of the case mix. This article establishes a reliable and reproducible tool for quantifying the total laparoscopic colorectal workload of an individual surgeon or of an entire department, while at the same time providing a measure of the complexity of the case mix. © 2011 The Authors. Colorectal Disease © 2011 The Association of Coloproctology of Great Britain and Ireland.

  1. Convection Enhances Mixing in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Sohail, Taimoor; Gayen, Bishakhdatta; Hogg, Andrew McC.

    2018-05-01

    Mixing efficiency is a measure of the energy lost to mixing compared to that lost to viscous dissipation. In a turbulent stratified fluid the mixing efficiency is often assumed constant at η = 0.2, whereas with convection it takes values closer to 1. The value of mixing efficiency when both stratified shear flow and buoyancy-driven convection are active remains uncertain. We use a series of numerical simulations to determine the mixing efficiency in an idealized Southern Ocean model. The model is energetically closed and fully resolves convection and turbulence such that mixing efficiency can be diagnosed. Mixing efficiency decreases with increasing wind stress but is enhanced by turbulent convection and by large thermal gradients in regions with a strongly stratified thermocline. Using scaling theory and the model results, we predict an overall mixing efficiency for the Southern Ocean that is significantly greater than 0.2 while emphasizing that mixing efficiency is not constant.

  2. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    NASA Astrophysics Data System (ADS)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.

  3. Assessing spatial and temporal variability of phytoplankton communities' composition in the Iroise Sea ecosystem (Brittany, France): A 3D modeling approach. Part 2: Linking summer mesoscale distribution of phenotypic diversity to hydrodynamism

    NASA Astrophysics Data System (ADS)

    Cadier, Mathilde; Sourisseau, Marc; Gorgues, Thomas; Edwards, Christopher A.; Memery, Laurent

    2017-05-01

    Tidal front ecosystems are especially dynamic environments usually characterized by high phytoplankton biomass and high primary production. However, the description of functional microbial diversity occurring in these regions remains only partially documented. In this article, we use a numerical model, simulating a large number of phytoplankton phenotypes to explore the three-dimensional spatial patterns of phytoplankton abundance and diversity in the Iroise Sea (western Brittany). Our results suggest that, in boreal summer, a seasonally marked tidal front shapes the phytoplankton species richness. A diversity maximum is found in the surface mixed layer located slightly west of the tidal front (i.e., not strictly co-localized with high biomass concentrations) which separates tidally mixed from stratified waters. Differences in phenotypic composition between sub-regions with distinct hydrodynamic regimes (defined by vertical mixing, nutrients gradients and light penetration) are discussed. Local growth and/or physical transport of phytoplankton phenotypes are shown to explain our simulated diversity distribution. We find that a large fraction (64%) of phenotypes present during the considered period of September are ubiquitous, found in the frontal area and on both sides of the front (i.e., over the full simulated domain). The frontal area does not exhibit significant differences between its community composition and that of either the well-mixed region or an offshore Deep Chlorophyll Maximum (DCM). Only three phenotypes (out of 77) specifically grow locally and are found at substantial concentration only in the surface diversity maximum. Thus, this diversity maximum is composed of a combination of ubiquitous phenotypes with specific picoplankton deriving from offshore, stratified waters (including specific phenotypes from both the surface and the DCM) and imported through physical transport, completed by a few local phenotypes. These results are discussed in light of the three-dimensional general circulation at frontal interfaces. Processes identified by this study are likely to be common in tidal front environments and may be generalized to other shallow, tidally mixed environments worldwide.

  4. Bounded fractional diffusion in geological media: Definition and Lagrangian approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Green, Christopher T.; LaBolle, Eric M.; Neupauer, Roseanna M.; Sun, HongGuang

    2016-11-01

    Spatiotemporal fractional-derivative models (FDMs) have been increasingly used to simulate non-Fickian diffusion, but methods have not been available to define boundary conditions for FDMs in bounded domains. This study defines boundary conditions and then develops a Lagrangian solver to approximate bounded, one-dimensional fractional diffusion. Both the zero-value and nonzero-value Dirichlet, Neumann, and mixed Robin boundary conditions are defined, where the sign of Riemann-Liouville fractional derivative (capturing nonzero-value spatial-nonlocal boundary conditions with directional superdiffusion) remains consistent with the sign of the fractional-diffusive flux term in the FDMs. New Lagrangian schemes are then proposed to track solute particles moving in bounded domains, where the solutions are checked against analytical or Eulerian solutions available for simplified FDMs. Numerical experiments show that the particle-tracking algorithm for non-Fickian diffusion differs from Fickian diffusion in relocating the particle position around the reflective boundary, likely due to the nonlocal and nonsymmetric fractional diffusion. For a nonzero-value Neumann or Robin boundary, a source cell with a reflective face can be applied to define the release rate of random-walking particles at the specified flux boundary. Mathematical definitions of physically meaningful nonlocal boundaries combined with bounded Lagrangian solvers in this study may provide the only viable techniques at present to quantify the impact of boundaries on anomalous diffusion, expanding the applicability of FDMs from infinite domains to those with any size and boundary conditions.
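
    For orientation, a commonly used Lagrangian (random-walk) approximation of one-dimensional space-fractional diffusion in an unbounded domain replaces the Gaussian jump of Fickian particle tracking with an α-stable jump; the bounded-domain boundary treatment developed in the paper is what goes beyond this. The notation below is ours, and the update is not the authors' bounded-domain algorithm.

```latex
% Unbounded-domain particle update for dC/dt = -v dC/dx + D d^a C/dx^a, 1 < a <= 2
% (our notation; the paper's contribution is the handling of the boundaries):
\begin{equation}
  X_{n+1} = X_n + v\,\Delta t + (D\,\Delta t)^{1/\alpha}\, \xi_\alpha ,
\end{equation}
% where xi_alpha is a standard alpha-stable random variable; alpha = 2 recovers
% the classical Gaussian (Fickian) random walk.
```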

  5. Multi-Fluid Interpenetration Mixing in X-ray and Directly Laser driven ICF Capsule Implosions

    NASA Astrophysics Data System (ADS)

    Wilson, Douglas

    2003-10-01

    Mix between a surrounding shell and the fuel leads to degradation in ICF capsule performance. Both indirectly (X-ray) and directly laser-driven implosions provide a wealth of data to test mix models. One model, the multi-fluid interpenetration mix model of Scannapieco and Cheng (Phys. Lett. A 299, 49, 2002), was implemented in an ICF code and applied to a wide variety of experiments (e.g. J. D. Kilkenny et al., Proc. Conf. Plasm. Phys. Contr. Nuc. Fus. Res. 3, 29 (1988); P. Amendt, R. E. Turner, O. L. Landen, Phys. Rev. Lett. 89, 165001 (2002); or Li et al., Phys. Rev. Lett. 89, 165002 (2002)). With its single adjustable parameter fixed, it replicates well the yield degradation with increasing convergence ratio for both directly and indirectly driven capsules. Often, but not always, the ion temperatures with mixing are calculated to be higher than in an unmixed implosion, agreeing with observations. Comparison with measured directly driven implosion yield rates (from the neutron temporal diagnostic, or NTD) shows mixing increases rapidly during the burn. The model also reproduces the decrease of the fuel "rho-r" with fill gas pressure, measured by observing escaping deuterons or secondary neutrons. The mix model assumes fully atomically mixed constituents, but when experiments with deuterated plastic layers and 3He fuel are modeled, less than full atomic mix is appropriate. Applying the mix model to the ablator - solid DT interface in indirectly driven ignition capsules for the NIF or LMJ suggests that the capsules will ignite, but that burn after ignition may be somewhat degraded. Situations in which the Scannapieco and Cheng model fails to agree with experiments can guide us to improvements or the development of other models. Some directly driven symmetric implosions suggest that in highly mixed situations, a higher value of the mix parameter may be needed. Others show the model underestimating the fuel burn temperature. This work was performed by the Los Alamos National Laboratory under DOE contract number W-7405-Eng-36.

  6. Ozone Production from the 2004 North American Boreal Fires

    NASA Technical Reports Server (NTRS)

    Pfister, G. G.; Emmons, L. K.; Hess, P. G.; Honrath, R.; Lamarque, J.-F.; Val Martin, M.; Owen, R. C.; Avery, M. A.; Browell, E. V.; Holloway, J. S.

    2006-01-01

    We examine the ozone production from boreal forest fires based on a case study of wildfires in Alaska and Canada in summer 2004. The model simulations were performed with the chemistry transport model MOZART-4 and were evaluated by comparison with a comprehensive set of aircraft measurements. In the analysis we use measurements and model simulations of carbon monoxide (CO) and ozone (O3) at the PICO-NARE station located in the Azores within the pathway of North American outflow. The modeled mixing ratios were used to test the robustness of the enhancement ratio ΔO3/ΔCO (defined as the excess O3 mixing ratio normalized by the increase in CO) and the feasibility of using this ratio to estimate the O3 production from the wildfires. Modeled and observed enhancement ratios are about 0.25 ppbv/ppbv, which is in the range of values found in the literature, and results in a global net O3 production of 12.9 ± 2 Tg O3 during summer 2004. This matches the net O3 production calculated in the model for a region extending from Alaska to the East Atlantic (9-11 Tg O3), indicating that observations at PICO-NARE, representing photochemically well-aged plumes, provide a good measure of the O3 production of North American boreal fires. However, net chemical loss of fire-related O3 dominates in regions far downwind from the fires (e.g. Europe and Asia), resulting in a global net O3 production of 6 Tg O3 during the same time period. On average, the fires increased the O3 burden (surface to 300 mbar) over Alaska and Canada during summer 2004 by about 7-9%, and over Europe by about 2-3%.

  7. Formaldehyde Production From Isoprene Oxidation Across NOx Regimes

    NASA Technical Reports Server (NTRS)

    Wolfe, G. M.; Kaiser, J.; Hanisco, T. F.; Keutsch, F. N.; de Gouw, J. A.; Gilman, J. B.; Graus, M.; Hatch, C. D.; Holloway, J.; Horowitz, L. W.

    2016-01-01

    The chemical link between isoprene and formaldehyde (HCHO) is a strong, non-linear function of NOx (= NO + NO2). This relationship is a linchpin for top-down isoprene emission inventory verification from orbital HCHO column observations. It is also a benchmark for overall photochemical mechanism performance with regard to VOC oxidation. Using a comprehensive suite of airborne in situ observations over the southeast US, we quantify HCHO production across the urban-rural spectrum. Analysis of isoprene and its major first-generation oxidation products allows us to define both a prompt yield of HCHO (molecules of HCHO produced per molecule of freshly emitted isoprene) and the background HCHO mixing ratio (from oxidation of longer-lived hydrocarbons). Over the range of observed NOx values (roughly 0.1 - 2 ppbv), the prompt yield increases by a factor of 3 (from 0.3 to 0.9 ppbv ppbv^-1), while background HCHO increases by a factor of 2 (from 1.6 to 3.3 ppbv). We apply the same method to evaluate the performance of both a global chemical transport model (AM3) and a measurement-constrained 0-D steady-state box model. Both models reproduce the NOx dependence of the prompt HCHO yield, illustrating that models with updated isoprene oxidation mechanisms can adequately capture the link between HCHO and recent isoprene emissions. On the other hand, both models underestimate background HCHO mixing ratios, suggesting missing HCHO precursors, inadequate representation of later-generation isoprene degradation and/or underestimated hydroxyl radical concentrations. Detailed process rates from the box model simulation demonstrate a 3-fold increase in HCHO production across the range of observed NOx values, driven by a 100% increase in OH and a 40% increase in branching of organic peroxy radical reactions to produce HCHO.

  8. Systematic investigation of non-Boussinesq effects in variable-density groundwater flow simulations.

    PubMed

    Guevara Morel, Carlos R; van Reeuwijk, Maarten; Graf, Thomas

    2015-12-01

    The validity of three mathematical models describing variable-density groundwater flow is systematically evaluated: (i) a model which invokes the Oberbeck-Boussinesq approximation (OB approximation), (ii) a model of intermediate complexity (NOB1) and (iii) a model which solves the full set of equations (NOB2). The NOB1 and NOB2 descriptions have been added to the HydroGeoSphere (HGS) model, which originally contained an implementation of the OB description. We define the Boussinesq parameter ε_ρ = β_ω Δω, where β_ω is the solutal expansivity and Δω is the characteristic difference in solute mass fraction. The Boussinesq parameter ε_ρ is used to systematically investigate three flow scenarios covering a range of free and mixed convection problems: 1) the low Rayleigh number Elder problem (Van Reeuwijk et al., 2009), 2) a convective fingering problem (Xie et al., 2011) and 3) a mixed convective problem (Schincariol et al., 1994). Results indicate that small density differences (ε_ρ ≤ 0.05) produce no apparent changes in the total solute mass in the system, plume penetration depth, center of mass and mass flux, independent of the mathematical model used. Deviations between OB, NOB1 and NOB2 occur for large density differences (ε_ρ > 0.12), where lower description levels will underestimate the vertical plume position and overestimate the mass flux. Based on the cases considered here, we suggest the following guidelines for saline convection: the OB approximation is valid for cases with ε_ρ < 0.05, and the full NOB set of equations needs to be used for cases with ε_ρ > 0.10. Whether NOB effects are important in the intermediate region differs from case to case. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Chemical Evolution of Groundwater Near a Sinkhole Lake, Northern Florida: 2. Chemical Patterns, Mass Transfer Modeling, and Rates of Mass Transfer Reactions

    NASA Astrophysics Data System (ADS)

    Katz, Brian G.; Plummer, L. Niel; Busenberg, Eurybiades; Revesz, Kinga M.; Jones, Blair F.; Lee, Terrie M.

    1995-06-01

    Chemical patterns along evolutionary groundwater flow paths in silicate and carbonate aquifers were interpreted using solute tracers, carbon and sulfur isotopes, and mass balance reaction modeling for a complex hydrologic system involving groundwater inflow to and outflow from a sinkhole lake in northern Florida. Rates of dominant reactions along defined flow paths were estimated from modeled mass transfer and ages obtained from CFC-modeled recharge dates. Groundwater upgradient from Lake Barco remains oxic as it moves downward, reacting with silicate minerals in a system open to carbon dioxide (CO2), producing only small increases in dissolved species. Beneath and downgradient of Lake Barco the oxic groundwater mixes with lake water leakage in a highly reducing, silicate-carbonate mineral environment. A mixing model, developed for anoxic groundwater downgradient from the lake, accounted for the observed chemical and isotopic composition by combining different proportions of lake water leakage and infiltrating meteoric water. The evolution of major ion chemistry and the 13C isotopic composition of dissolved carbon species in groundwater downgradient from the lake can be explained by the aerobic oxidation of organic matter in the lake, anaerobic microbial oxidation of organic carbon, and incongruent dissolution of smectite minerals to kaolinite. The dominant process for the generation of methane was by the CO2 reduction pathway based on the isotopic composition of hydrogen (δ2H(CH4) = -186 to -234‰) and carbon (δ13C(CH4) = -65.7 to -72.3‰). Rates of microbial metabolism of organic matter, estimated from the mass transfer reaction models, ranged from 0.0047 to 0.039 mmol L-1 yr-1 for groundwater downgradient from the lake.

  10. Formaldehyde production from isoprene oxidation across NOx regimes

    PubMed Central

    Wolfe, G. M.; Kaiser, J.; Hanisco, T. F.; Keutsch, F. N.; de Gouw, J. A.; Gilman, J. B.; Graus, M.; Hatch, C. D.; Holloway, J.; Horowitz, L. W.; Lee, B. H.; Lerner, B. M.; Lopez-Hilifiker, F.; Mao, J.; Marvin, M. R.; Peischl, J.; Pollack, I. B.; Roberts, J. M.; Ryerson, T. B.; Thornton, J. A.; Veres, P. R.; Warneke, C.

    2018-01-01

    The chemical link between isoprene and formaldehyde (HCHO) is a strong, non-linear function of NOx (= NO + NO2). This relationship is a linchpin for top-down isoprene emission inventory verification from orbital HCHO column observations. It is also a benchmark for overall photochemical mechanism performance with regard to VOC oxidation. Using a comprehensive suite of airborne in situ observations over the Southeast U.S., we quantify HCHO production across the urban-rural spectrum. Analysis of isoprene and its major first-generation oxidation products allows us to define both a “prompt” yield of HCHO (molecules of HCHO produced per molecule of freshly-emitted isoprene) and the background HCHO mixing ratio (from oxidation of longer-lived hydrocarbons). Over the range of observed NOx values (roughly 0.1 – 2 ppbv), the prompt yield increases by a factor of 3 (from 0.3 to 0.9 ppbv ppbv−1), while background HCHO increases by a factor of 2 (from 1.6 to 3.3 ppbv). We apply the same method to evaluate the performance of both a global chemical transport model (AM3) and a measurement-constrained 0-D steady state box model. Both models reproduce the NOx dependence of the prompt HCHO yield, illustrating that models with updated isoprene oxidation mechanisms can adequately capture the link between HCHO and recent isoprene emissions. On the other hand, both models under-estimate background HCHO mixing ratios, suggesting missing HCHO precursors, inadequate representation of later-generation isoprene degradation and/or under-estimated hydroxyl radical concentrations. Detailed process rates from the box model simulation demonstrate a 3-fold increase in HCHO production across the range of observed NOx values, driven by a 100% increase in OH and a 40% increase in branching of organic peroxy radical reactions to produce HCHO. PMID:29619046

  11. Formaldehyde production from isoprene oxidation across NOx regimes.

    PubMed

    Wolfe, G M; Kaiser, J; Hanisco, T F; Keutsch, F N; de Gouw, J A; Gilman, J B; Graus, M; Hatch, C D; Holloway, J; Horowitz, L W; Lee, B H; Lerner, B M; Lopez-Hilifiker, F; Mao, J; Marvin, M R; Peischl, J; Pollack, I B; Roberts, J M; Ryerson, T B; Thornton, J A; Veres, P R; Warneke, C

    2016-01-01

    The chemical link between isoprene and formaldehyde (HCHO) is a strong, non-linear function of NOx (= NO + NO2). This relationship is a linchpin for top-down isoprene emission inventory verification from orbital HCHO column observations. It is also a benchmark for overall photochemical mechanism performance with regard to VOC oxidation. Using a comprehensive suite of airborne in situ observations over the Southeast U.S., we quantify HCHO production across the urban-rural spectrum. Analysis of isoprene and its major first-generation oxidation products allows us to define both a "prompt" yield of HCHO (molecules of HCHO produced per molecule of freshly-emitted isoprene) and the background HCHO mixing ratio (from oxidation of longer-lived hydrocarbons). Over the range of observed NOx values (roughly 0.1 - 2 ppbv), the prompt yield increases by a factor of 3 (from 0.3 to 0.9 ppbv ppbv-1), while background HCHO increases by a factor of 2 (from 1.6 to 3.3 ppbv). We apply the same method to evaluate the performance of both a global chemical transport model (AM3) and a measurement-constrained 0-D steady state box model. Both models reproduce the NOx dependence of the prompt HCHO yield, illustrating that models with updated isoprene oxidation mechanisms can adequately capture the link between HCHO and recent isoprene emissions. On the other hand, both models under-estimate background HCHO mixing ratios, suggesting missing HCHO precursors, inadequate representation of later-generation isoprene degradation and/or under-estimated hydroxyl radical concentrations. Detailed process rates from the box model simulation demonstrate a 3-fold increase in HCHO production across the range of observed NOx values, driven by a 100% increase in OH and a 40% increase in branching of organic peroxy radical reactions to produce HCHO.

  12. Modeling the diurnal cycle of carbon monoxide: Sensitivity to physics, chemistry, biology, and optics

    NASA Astrophysics Data System (ADS)

    Gnanadesikan, Anand

    1996-05-01

    As carbon monoxide within the oceanic surface layer is produced by solar radiation, diluted by mixing, consumed by biota, and outgassed to the atmosphere, it exhibits a diurnal cycle. The effect of dilution and mixing on this cycle is examined using a simple model for production and consumption, coupled to three different mixed layer models. The magnitude and timing of the peak concentration, the magnitude of the average concentration, and the air-sea flux are considered. The models are run through a range of heating and wind stress and compared to experimental data reported by Kettle [1994]. The key to the dynamics is the relative size of four length scales; Dmix, the depth to which mixing occurs over the consumption time; L, the length scale over which production occurs; Lout, the depth to which the mixed layer is ventilated over the consumption time; and Lcomp, the depth to which the diurnal production can maintain a concentration in equilibrium with the atmosphere. If Dmix ≫ L, the actual model parameterization can be important. If the mixed layer is maintained by turbulent diffusion, Dmix can be substantially less than the mixed layer depth. If the mixed layer is parameterized as a homogeneous slab, Dmix is equivalent to the mixed layer depth. If Dmix > Lout, production is balanced by consumption rather than outgassing. The ratio between Dmix and Lcomp determines whether the ocean is a source or a sink for CO. The main thermocline depth H sets an upper limit for Dmix and hence Dmix/L, Dmix/Lout, and Dmix/Lcomp. The models are run to simulate a single day of observations. The mixing parameterization is shown to be very important, with a model which mixes using small-scale diffusion, producing markedly larger surface concentrations than models which homogenize the mixed layer completely and instantaneously.

  13. Clinical prediction model to identify vulnerable patients in ambulatory surgery: towards optimal medical decision-making.

    PubMed

    Mijderwijk, Herjan; Stolker, Robert Jan; Duivenvoorden, Hugo J; Klimek, Markus; Steyerberg, Ewout W

    2016-09-01

    Ambulatory surgery patients are at risk of adverse psychological outcomes such as anxiety, aggression, fatigue, and depression. We developed and validated a clinical prediction model to identify patients who were vulnerable to these psychological outcome parameters. We prospectively assessed 383 mixed ambulatory surgery patients for psychological vulnerability, defined as the presence of anxiety (state/trait), aggression (state/trait), fatigue, and depression seven days after surgery. Three psychological vulnerability categories were considered, i.e., none, one, or multiple poor scores, defined as a score exceeding one standard deviation above the mean for each single outcome according to normative data. The following determinants were assessed preoperatively: sociodemographic (age, sex, level of education, employment status, marital status, having children, religion, nationality), medical (heart rate and body mass index), and psychological variables (self-esteem and self-efficacy), in addition to anxiety, aggression, fatigue, and depression. A prediction model was constructed using ordinal polytomous logistic regression analysis, and bootstrapping was applied for internal validation. The ordinal c-index (ORC) quantified the discriminative ability of the model, in addition to measures for overall model performance (Nagelkerke's R²). In this population, 137 (36%) patients were identified as being psychologically vulnerable after surgery for at least one of the psychological outcomes. The most parsimonious and optimal prediction model combined sociodemographic variables (level of education, having children, and nationality) with psychological variables (trait anxiety, state/trait aggression, fatigue, and depression). Model performance was promising: R² = 30% and ORC = 0.76 after correction for optimism. This study identified a substantial group of vulnerable patients in ambulatory surgery. The proposed clinical prediction model could allow healthcare professionals the opportunity to identify vulnerable patients in ambulatory surgery, although additional modification and validation are needed. (ClinicalTrials.gov number, NCT01441843).

  14. Realist explanatory theory building method for social epidemiology: a protocol for a mixed method multilevel study of neighbourhood context and postnatal depression.

    PubMed

    Eastwood, John G; Jalaludin, Bin B; Kemp, Lynn A

    2014-01-01

    A recent criticism of social epidemiological studies, and multi-level studies in particular, has been a paucity of theory. We will present here the protocol for a study that aims to build a theory of the social epidemiology of maternal depression. We use a critical realist approach which is trans-disciplinary, encompassing both quantitative and qualitative traditions, and that assumes both ontological and hierarchical stratification of reality. We describe a critical realist Explanatory Theory Building Method comprising: 1) an emergent phase, 2) a construction phase, and 3) a confirmatory phase. A concurrent triangulated mixed method multilevel cross-sectional study design is described. The Emergent Phase uses interviews, focus groups, exploratory data analysis, exploratory factor analysis, regression, and multilevel Bayesian spatial data analysis to detect and describe phenomena. Abductive and retroductive reasoning will be applied to categorical principal component analysis, exploratory factor analysis, regression, coding of concepts and categories, constant comparative analysis, drawing of conceptual networks, and situational analysis to generate theoretical concepts. The Theory Construction Phase will include: 1) defining stratified levels; 2) analytic resolution; 3) abductive reasoning; 4) comparative analysis (triangulation); 5) retroduction; 6) postulate and proposition development; 7) comparison and assessment of theories; and 8) conceptual frameworks and model development. The strength of the critical realist methodology described is the extent to which this paradigm is able to support the epistemological, ontological, axiological, methodological and rhetorical positions of both quantitative and qualitative research in the field of social epidemiology. The extensive multilevel Bayesian studies, intensive qualitative studies, latent variable theory, abductive triangulation, and Inference to Best Explanation provide a strong foundation for Theory Construction. The study will contribute to defining the role that realism and mixed methods can play in explaining the social determinants and developmental origins of health and disease.

  15. Criteria for Evaluating the Performance of Compilers

    DTIC Science & Technology

    1974-10-01

    cannot be made to fit, then an auxiliary mechanism outside the parser might be used. Finally, changing the choice of parsing technique to a...was not useful in providing a basis for compiler evaluation. The study of the first question established criteria and methods for assigning four...program. The study of the second question established criteria for defining a "compiler Gibson mix", and established methods for using this "mix" to

  16. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
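
    Schematically, the two linked parts can be written as follows; the notation is ours and simplified relative to the paper's specification (in particular, Part I is shown with an ordinary logit link rather than the generalized logistic form).

```latex
% Two-part mixed-effects model for semicontinuous y_ij (our simplified notation):
\begin{align}
  \text{Part I:}\quad
    & \operatorname{logit} \Pr(y_{ij} > 0)
      = \mathbf{x}_{1ij}^{\top}\boldsymbol{\beta}_1 + \mathbf{z}_{1ij}^{\top}\mathbf{b}_{1i}, \\
  \text{Part II:}\quad
    & y_{ij} \mid (y_{ij} > 0)
      = \mathbf{x}_{2ij}^{\top}\boldsymbol{\beta}_2 + \mathbf{z}_{2ij}^{\top}\mathbf{b}_{2i} + e_{ij},
      \qquad e_{ij} \sim \text{skew-}t \ \text{or skew-normal}, \\
    & (\mathbf{b}_{1i}, \mathbf{b}_{2i}) \sim N(\mathbf{0}, \boldsymbol{\Sigma}_b)
      \quad \text{(correlated random effects linking the two parts).}
\end{align}
```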

  17. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air jet nonpremixed flames, and has been used to predict NO(x) production in the mixing region. Comparison with available experimental data shows good agreement, thereby providing validation of the mixing model. With this demonstration, the mixing model is ready to be implemented in conjunction with steady-state prediction methods to provide an improved engineering design analysis tool.

  18. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, Richard P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end‐members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end‐members, an extension of the mathematics of mixing models is presented that assesses the “fit” of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end‐members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end‐members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
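
    A minimal numerical rendering of this idea, assuming a matrix of stream water solute concentrations (samples × solutes) and using principal component analysis as the dimension-reduction step, is sketched below; the approximate rank is judged from how much variance a low-dimensional subspace captures and from the structure of the residuals. This is our simplified illustration, not a reimplementation of the paper's diagnostics.

```python
import numpy as np

def mixing_subspace_diagnostics(C, rank):
    """Project standardized solute data onto a rank-k subspace and return residuals,
    in the spirit of PCA-based mixing-model diagnostics (simplified sketch)."""
    Z = (C - C.mean(axis=0)) / C.std(axis=0, ddof=1)      # standardize each solute
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    Z_hat = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]   # rank-k approximation
    residuals = Z - Z_hat                                 # structure here suggests lack of fit
    explained = (s[:rank] ** 2).sum() / (s ** 2).sum()
    return residuals, explained

# Hypothetical data: 100 samples of 6 solutes generated from 3 end-members, so a
# two-dimensional mixing subspace (rank 2) should fit well after centering.
rng = np.random.default_rng(0)
end_members = rng.uniform(0.1, 1.0, size=(3, 6))
fractions = rng.dirichlet(np.ones(3), size=100)
C = fractions @ end_members + rng.normal(0.0, 0.01, size=(100, 6))

res, frac_var = mixing_subspace_diagnostics(C, rank=2)
print("variance explained by 2-D subspace: %.3f" % frac_var)
print("RMS residual:", round(float(np.sqrt((res ** 2).mean())), 3))
```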

  19. Epidemiology of hypertension in Northern Tanzania: a community-based mixed-methods study.

    PubMed

    Galson, Sophie W; Staton, Catherine A; Karia, Francis; Kilonzo, Kajiru; Lunyera, Joseph; Patel, Uptal D; Hertz, Julian T; Stanifer, John W

    2017-11-09

    Sub-Saharan Africa is particularly vulnerable to the growing global burden of hypertension, but epidemiological studies are limited and barriers to optimal management are poorly understood. Therefore, we undertook a community-based mixed-methods study in Tanzania to investigate the epidemiology of hypertension and barriers to care. In Northern Tanzania, between December 2013 and June 2015, we conducted a mixed-methods study, including a cross-sectional household epidemiological survey and qualitative sessions of focus groups and in-depth interviews. For the survey, we assessed for hypertension, defined as a single blood pressure ≥160/100 mm Hg, a two-time average of ≥140/90 mm Hg or current use of antihypertensive medications. To investigate relationships with potential risk factors, we used adjusted generalised linear models. Uncontrolled hypertension was defined as a two-time average measurement of ≥160/100 mm Hg irrespective of treatment status. Hypertension awareness was defined as a self-reported disease history in a participant with confirmed hypertension. To explore barriers to care, we identified emerging themes using an inductive approach within the framework method. We enrolled 481 adults (median age 45 years) from 346 households, including 123 men (25.6%) and 358 women (74.4%). Overall, the prevalence of hypertension was 28.0% (95% CI 19.4% to 38.7%), which was independently associated with age >60 years (prevalence risk ratio (PRR) 4.68; 95% CI 2.25 to 9.74) and alcohol use (PRR 1.72; 95% CI 1.15 to 2.58). Traditional medicine use was inversely associated with hypertension (PRR 0.37; 95% CI 0.26 to 0.54). Nearly half (48.3%) of the participants were aware of their disease, but almost all (95.3%) had uncontrolled hypertension. In the qualitative sessions, we identified barriers to optimal care, including poor point-of-care communication, poor understanding of hypertension and structural barriers such as long wait times and undertrained providers. In Northern Tanzania, the burden of hypertensive disease is substantial, and optimal hypertension control is rare. Transdisciplinary strategies sensitive to local practices should be explored to facilitate early diagnosis and sustained care delivery. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Quantifying spatial distribution of spurious mixing in ocean models.

    PubMed

    Ilıcak, Mehmet

    2016-12-01

    Numerical mixing is inevitable in ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of the spurious diapycnal mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic-eddies test cases. We can quantify the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also test the new method to quantify the numerical mixing under different horizontal momentum closures. We conclude that the Smagorinsky viscosity produces less numerical mixing than the Leith viscosity for the same non-dimensional constant.
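
    The paper's contribution is a spatially resolved APE-density diagnostic; as a simpler point of reference, the sketch below computes the global reference (background) potential energy by adiabatically resorting the density field, whose spurious increase in an otherwise adiabatic run is the usual bulk measure of numerical mixing. This is our illustration of the global diagnostic, not the new spatially resolved method itself.

```python
import numpy as np

def reference_potential_energy(rho, z, dV):
    """Global background PE: resort densities so the heaviest fluid sits deepest, then
    integrate g * rho_sorted * z over a flat-bottomed domain with uniform cell volumes.
    A spurious increase of this quantity in an adiabatic run indicates numerical mixing."""
    g = 9.81
    order = np.argsort(z)            # cell indices from deepest (most negative z) upward
    rho_sorted = np.sort(rho)[::-1]  # densest values first, assigned to the deepest cells
    rho_ref = np.empty_like(rho)
    rho_ref[order] = rho_sorted
    return g * np.sum(rho_ref * z * dV)

# Hypothetical lock-exchange-like snapshot: z negative downward, uniform cells,
# dense and light water separated by a vertical interface.
z = np.repeat(np.linspace(-0.5, -9.5, 10), 20)                # 10 levels x 20 columns
rho = np.where(np.arange(z.size) % 20 < 10, 1025.0, 1020.0)   # dense / light halves
print("reference PE of snapshot [J]:", round(float(reference_potential_energy(rho, z, dV=1.0)), 1))
```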

  1. On the Choice of Variable for Atmospheric Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; DaSilva, Arlindo M.; Atlas, Robert (Technical Monitor)

    2002-01-01

    The implications of using different control variables for the analysis of moisture observations in a global atmospheric data assimilation system are investigated. A moisture analysis based on either mixing ratio or specific humidity is prone to large extrapolation errors, due to the high variability in space and time of these parameters and to the difficulties in modeling their error covariances. Using the logarithm of specific humidity does not alleviate these problems, and has the further disadvantage that very dry background estimates cannot be effectively corrected by observations. Relative humidity is a better choice from a statistical point of view, because this field is spatially and temporally more coherent and error statistics are therefore easier to obtain. If, however, the analysis is designed to preserve relative humidity in the absence of moisture observations, then the analyzed specific humidity field depends entirely on analyzed temperature changes. If the model has a cool bias in the stratosphere, this will lead to an unstable accumulation of excess moisture there. A pseudo-relative humidity can be defined by scaling the mixing ratio by the background saturation mixing ratio. A univariate pseudo-relative humidity analysis will preserve the specific humidity field in the absence of moisture observations. A pseudo-relative humidity analysis is shown to be equivalent to a mixing ratio analysis with flow-dependent covariances. In the presence of multivariate (temperature-moisture) observations it produces analyzed relative humidity values that are nearly identical to those produced by a relative humidity analysis. Based on a time series analysis of radiosonde observed-minus-background differences it appears to be more justifiable to neglect specific humidity-temperature correlations (in a univariate pseudo-relative humidity analysis) than to neglect relative humidity-temperature correlations (in a univariate relative humidity analysis). A pseudo-relative humidity analysis is easily implemented in an existing moisture analysis system, by simply scaling observed-minus-background moisture residuals prior to solving the analysis equation, and rescaling the analyzed increments afterward.
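
    As a rough illustration of the scaling just described, the sketch below converts observed-minus-background specific-humidity residuals into pseudo-relative-humidity residuals and rescales the analysed increments afterwards. The Tetens-type saturation formula, the function names, and the sample numbers are assumptions for illustration, not the operational implementation described in the paper.

    ```python
    import numpy as np

    def q_sat(T, p):
        """Approximate saturation specific humidity (kg/kg).

        Tetens-type formula over liquid water; T in K, p in Pa.
        Illustrative approximation only, not the operational formula.
        """
        T_c = T - 273.15
        e_s = 610.94 * np.exp(17.625 * T_c / (T_c + 243.04))  # saturation vapour pressure (Pa)
        return 0.622 * e_s / (p - 0.378 * e_s)

    def to_pseudo_rh_residual(q_obs, q_bkg, T_bkg, p):
        """Scale observed-minus-background moisture residuals by the background q_sat."""
        return (q_obs - q_bkg) / q_sat(T_bkg, p)

    def from_pseudo_rh_increment(d_pseudo_rh, T_bkg, p):
        """Rescale analysed pseudo-relative-humidity increments back to specific humidity."""
        return d_pseudo_rh * q_sat(T_bkg, p)

    # Tiny usage example with made-up numbers
    q_obs, q_bkg, T_bkg, p = 0.0075, 0.0070, 288.0, 85000.0
    r = to_pseudo_rh_residual(q_obs, q_bkg, T_bkg, p)
    dq = from_pseudo_rh_increment(0.8 * r, T_bkg, p)  # pretend the analysis retains 80% of the residual
    print(f"pseudo-RH residual = {r:.3f}, analysed dq = {dq:.2e} kg/kg")
    ```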

  2. Software engineering the mixed model for genome-wide association studies on large samples.

    PubMed

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
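
    For reference, the mixed linear model that such packages implement can be summarized as follows. This is the standard textbook formulation (the review's own notation may differ), with the kinship matrix K estimated from markers and the variance components obtained by (restricted) maximum likelihood.

    ```latex
    % Mixed linear model commonly used in GWAS:
    %   y       n x 1 phenotype vector
    %   X\beta  fixed effects (covariates and the tested marker)
    %   u       polygenic random effect structured by the kinship matrix K
    \begin{align*}
      y &= X\beta + Zu + \varepsilon, \\
      u &\sim \mathcal{N}\!\left(0,\ \sigma_g^{2} K\right), \qquad
         \varepsilon \sim \mathcal{N}\!\left(0,\ \sigma_e^{2} I\right), \\
      \operatorname{Var}(y) &= \sigma_g^{2} Z K Z^{\top} + \sigma_e^{2} I .
    \end{align*}
    ```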

  3. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

    Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity, and some very practical recommendations help to conquer that complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights the need for additional covariance and inference tools for mixed models, and for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
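
    As a small illustration of the "centering, scaling, and full-rank coding" advice, the sketch below standardises continuous predictors and reference-codes categorical ones before model fitting. The column names, the made-up data, and the use of pandas dummy coding are assumptions for illustration, not part of the paper.

    ```python
    import numpy as np
    import pandas as pd

    def prepare_design(df, continuous, categorical):
        """Center/scale continuous predictors and reference-code categorical ones
        so the resulting design matrix is full rank and well conditioned."""
        X = pd.DataFrame(index=df.index)
        for col in continuous:
            x = df[col].astype(float)
            X[col + "_z"] = (x - x.mean()) / x.std(ddof=0)  # centering and scaling
        for col in categorical:
            # drop_first=True gives reference coding, avoiding a rank-deficient design
            X = X.join(pd.get_dummies(df[col], prefix=col, drop_first=True).astype(float))
        X.insert(0, "intercept", 1.0)
        return X

    # Made-up longitudinal-style data for demonstration only
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "age": rng.normal(13, 1.5, 200),
        "time": np.tile(np.arange(5), 40),
        "group": rng.choice(["control", "intervention"], 200),
    })
    X = prepare_design(df, continuous=["age", "time"], categorical=["group"])
    print(X.head())
    print("condition number:", np.linalg.cond(X.to_numpy()))
    ```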

  4. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to the data, and makes the argument for moving to vine copula random effects models especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three-dimensionality.

  5. Quantitative MR assessment of longitudinal parenchymal changes in children treated for medulloblastoma

    NASA Astrophysics Data System (ADS)

    Reddick, Wilburn E.; Glass, John O.; Wu, Shingjie; Palmer, Shawna L.; Mulhern, Raymond K.; Gajjar, Amar

    2002-05-01

    Our research builds on the hypothesis that white matter damage, in children treated for cancer with cranial spinal irradiation, spans a continuum of severity that can be reliably probed using non-invasive MR technology and results in potentially debilitating neurological and neuropsychological problems. This longitudinal project focuses on 341 quantitative volumetric MR examinations from 58 children treated for medulloblastoma (MB) with cranial irradiation (CRT) of 35-40 Gy. Quadratic mixed effects models were used to fit changes in tissue volumes (white matter, gray matter, CSF, and cerebral) with time since CRT and age at CRT as covariates. We successfully defined algorithms that are useful in the prediction of brain development among children treated for MB.
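
    A quadratic mixed effects model of the kind described above can be sketched as follows. The synthetic data, column names (wm_volume, months_since_crt, age_at_crt, subject), effect sizes, and the random intercept-plus-slope structure are illustrative assumptions, not the authors' exact specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic longitudinal data standing in for repeated MR exams per child
    rng = np.random.default_rng(42)
    n_subj, n_visits = 58, 6
    subject = np.repeat(np.arange(n_subj), n_visits)
    months = np.tile(np.linspace(0, 60, n_visits), n_subj)
    age_at_crt = np.repeat(rng.uniform(3, 15, n_subj), n_visits)
    subj_intercept = np.repeat(rng.normal(0, 20, n_subj), n_visits)
    wm_volume = (400 + subj_intercept + 1.5 * age_at_crt
                 - 0.8 * months + 0.005 * months ** 2
                 + rng.normal(0, 5, n_subj * n_visits))
    df = pd.DataFrame(dict(subject=subject, months_since_crt=months,
                           age_at_crt=age_at_crt, wm_volume=wm_volume))

    # White matter volume as a quadratic function of time since CRT, with age at CRT
    # as a covariate and a random intercept and slope for each child.
    model = smf.mixedlm(
        "wm_volume ~ months_since_crt + I(months_since_crt ** 2) + age_at_crt",
        data=df, groups=df["subject"], re_formula="~months_since_crt",
    )
    print(model.fit(method="lbfgs").summary())
    ```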

  6. Geochemical modeling of magma mixing and magma reservoir volumes during early episodes of Kīlauea Volcano's Pu`u `Ō`ō eruption

    NASA Astrophysics Data System (ADS)

    Shamberger, Patrick J.; Garcia, Michael O.

    2007-02-01

    Geochemical modeling of magma mixing allows for evaluation of volumes of magma storage reservoirs and magma plumbing configurations. A new analytical expression is derived for a simple two-component box-mixing model describing the proportions of mixing components in erupted lavas as a function of time. Four versions of this model are applied to a mixing trend spanning episodes 3-31 of Kīlauea Volcano's Pu`u `Ō`ō eruption, each testing different constraints on magma reservoir input and output fluxes. Unknown parameters (e.g., magma reservoir influx rate, initial reservoir volume) are optimized for each model using a non-linear least squares technique to fit model trends to geochemical time-series data. The modeled mixing trend closely reproduces the observed compositional trend. The two models that match measured lava effusion rates have constant magma input and output fluxes and suggest a large pre-mixing magma reservoir (46±2 and 49±1 million m3), with little or no volume change over time. This volume is much larger than a previous estimate for the shallow, dike-shaped magma reservoir under the Pu`u `Ō`ō vent, which grew from ~3 to ~10-12 million m3. These volumetric differences are interpreted as indicating that mixing occurred first in a larger, deeper reservoir before the magma was injected into the overlying smaller reservoir.
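
    The two-component box-mixing idea lends itself to a compact fitting sketch. Below, the fraction of the intruding component in erupted lava is modeled as an exponential approach toward the new end-member for a constant-volume, constant-flux reservoir, and the reservoir volume is recovered by non-linear least squares. The functional form, parameter values, and synthetic data are assumptions for illustration, not the paper's derived expression or the Pu`u `Ō`ō data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mixing_fraction(t, V, Q):
        """Fraction of new magma in a well-mixed reservoir of constant volume V
        fed (and drained) at constant flux Q, starting from a fraction of 0."""
        return 1.0 - np.exp(-Q * t / V)

    # Synthetic "observed" mixing fractions at eruption episode times (days)
    t_obs = np.linspace(0, 600, 25)
    true_V, true_Q = 48e6, 0.2e6  # m^3 and m^3/day, made-up values
    f_obs = mixing_fraction(t_obs, true_V, true_Q) \
            + np.random.default_rng(1).normal(0, 0.02, t_obs.size)

    popt, pcov = curve_fit(mixing_fraction, t_obs, f_obs, p0=[10e6, 0.1e6])
    V_fit, Q_fit = popt
    print(f"fitted reservoir volume ~ {V_fit/1e6:.1f} million m^3, influx ~ {Q_fit:.2e} m^3/day")
    ```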

  7. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.

    2018-01-01

    Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation, reconciliation, and parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix, resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data, employing newly introduced graphical and coherence metrics.

  8. Remedying excessive numerical diapycnal mixing in a global 0.25° NEMO configuration

    NASA Astrophysics Data System (ADS)

    Megann, Alex; Nurser, George; Storkey, Dave

    2016-04-01

    If numerical ocean models are to simulate faithfully the upwelling branches of the global overturning circulation, they need to have a good representation of the diapycnal mixing processes which contribute to conversion of the bottom and deep waters produced in high latitudes into less dense water masses. It is known that the default class of depth-coordinate ocean models such as NEMO and MOM5, as used in many state-of-the-art coupled climate models and Earth System Models, have excessive numerical diapycnal mixing, resulting from irreversible advection across coordinate surfaces. The GO5.0 configuration of the NEMO ocean model, on an "eddy-permitting" 0.25° global grid, is used in the current UK GC1 and GC2 coupled models. Megann and Nurser (2016) have shown, using the isopycnal water-mass analysis of Lee et al. (2002), that spurious numerical mixing is substantially larger than the explicit mixing prescribed by the mixing scheme used by the model. It will be shown that increasing the biharmonic viscosity by a factor of three tends to suppress small-scale noise in the vertical velocity in the model. This significantly reduces the numerical mixing in GO5.0, and we shall show that it also leads to large-scale improvements in model biases.

  9. Normal injection of helium from swept struts into ducted supersonic flow

    NASA Technical Reports Server (NTRS)

    Mcclinton, C. R.; Torrence, M. G.

    1975-01-01

    Recent design studies have shown that airframe-integrated scramjets should include instream mounted, swept-back strut fuel injectors to obtain short combustors. Because there was no data in the literature on mixing characteristics of swept strut fuel injectors, the present investigation was undertaken to provide such data. This investigation was made with two swept struts in a closed duct at Mach number of 4.4 and nominal jet-to-air mass flow ratio of 0.029 with helium used to simulate hydrogen fuel. The data is compared with flat plate mounted normal injector data to obtain the effect of swept struts on mixing. Three injector patterns were evaluated representing the range of hole spacing and jet-to-freestream dynamic pressure ratio of interest. Measured helium concentration, pitot pressure, and static pressure in the downstream mixing region are used to generate contour plots necessary to define the mixing region flow field and the mixing parameters.

  10. Effect of electrode positions on the mixing characteristics of an electroosmotic micromixer.

    PubMed

    Seo, H S; Kim, Y J

    2014-08-01

    In this study, an electrokinetic microchannel with a ring-type mixing chamber is introduced for fast mixing. The modeled micromixer that is used for the study of the electroosmotic effect takes two fluids from different inlets and combines them in a ring-type mixing chamber and, then, they are mixed by the electric fields at the electrodes. In order to compare the mixing performance in the modeled micromixer, we numerically investigated the flow characteristics with different positions of the electrodes in the mixing chamber using the commercial code, COMSOL. In addition, we discussed the concentration distributions of the dissolved substances in the flow fields and compared the mixing efficiency in the modeled micromixer with different electrode positions and operating conditions, such as the frequencies and electric potentials at the electrodes.

  11. Changes in Case-Mix and Health Outcomes of Medicare Fee-for-Service Beneficiaries and Managed Care Enrollees During the Years 1992-2011.

    PubMed

    Koroukian, Siran M; Basu, Jayasree; Schiltz, Nicholas K; Navale, Suparna; Bakaki, Paul M; Warner, David F; Dor, Avi; Given, Charles W; Stange, Kurt C

    2018-01-01

    Recent studies suggest that managed care enrollees (MCEs) and fee-for-service beneficiaries (FFSBs) have become similar in case-mix over time; but comparisons of health outcomes have yielded mixed results. To examine changes in differentials between MCEs and FFSBs both in case-mix and health outcomes over time. Temporal study of the linked Health and Retirement Study (HRS) and Medicare data, comparing case-mix and health outcomes between MCEs and FFSBs across 3 time periods: 1992-1998, 1999-2004, and 2005-2011. We used multivariable analysis, stratified by, and pooled across the study periods. The unit of analysis was the person-wave (n=167,204). HRS participants who were also enrolled in Medicare. Outcome measures included self-reported fair/poor health, 2-year self-rated worse health, and 2-year mortality. Our main covariate was a composite measure of multimorbidity (MM), MM0-MM3, defined as the co-occurrence of chronic conditions, functional limitations, and/or geriatric syndromes. The case-mix differential between MCEs and FFSBs persisted over time. Results from multivariable models on the pooled data and incorporating interaction terms between managed care status and study period indicated that MCEs and FFSBs were as likely to die within 2 years from the HRS interview (P=0.073). This likelihood remained unchanged across the study periods. However, MCEs were more likely than FFSBs to report fair/poor health in the third study period (change in probability for the interaction term: 0.024, P=0.008), but less likely to rate their health worse in the last 2 years, albeit at borderline significance (change in probability: -0.021, P=0.059). Despite the persistence of selection bias, the differential in self-reported fair/poor status between MCEs and FFSBs seems to be closing over time.

  12. 40 CFR 437.40 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... portion of wastewater discharges from a centralized waste treatment facility that results from mixing any... standards) and establishes that it provides equivalent treatment as defined in § 437.2(h). (b) In order to...

  13. Dual Formulations of Mixed Finite Element Methods with Applications

    PubMed Central

    Gillette, Andrew; Bajaj, Chandrajit

    2011-01-01

    Mixed finite element methods solve a PDE using two or more variables. The theory of Discrete Exterior Calculus explains why the degrees of freedom associated to the different variables should be stored on both primal and dual domain meshes with a discrete Hodge star used to transfer information between the meshes. We show through analysis and examples that the choice of discrete Hodge star is essential to the numerical stability of the method. Additionally, we define interpolation functions and discrete Hodge stars on dual meshes which can be used to create previously unconsidered mixed methods. Examples from magnetostatics and Darcy flow are examined in detail. PMID:21984841
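
    As a toy illustration of the primal/dual pairing and the role of a diagonal Hodge star, the sketch below solves the 1D mixed (saddle-point) Poisson problem -p'' = f with p(0) = p(1) = 0, keeping the pointwise flux and the vertex pressures as separate unknowns coupled by diagonal Hodge stars built from primal edge lengths and dual cell lengths. This is only a 1D analogue under simplifying assumptions (flat metric, uniform material), not the 2D/3D magnetostatics or Darcy examples analyzed in the paper.

    ```python
    import numpy as np

    # Primal mesh: vertices x_0..x_N on [0, 1]; edges e_j = [x_j, x_{j+1}]
    N = 64
    x = np.linspace(0.0, 1.0, N + 1)
    h = np.diff(x)                                  # primal edge lengths

    # Exterior derivative d0: vertex 0-cochains -> edge 1-cochains, (d0 p)_j = p_{j+1} - p_j,
    # restricted to interior vertices because p = 0 on the boundary.
    d0 = np.zeros((N, N - 1))
    for j in range(N):
        if j - 1 >= 0:
            d0[j, j - 1] = -1.0                     # left endpoint x_j of edge e_j
        if j <= N - 2:
            d0[j, j] = 1.0                          # right endpoint x_{j+1} of edge e_j

    star1_inv = np.diag(h)                          # inverse diagonal Hodge star on 1-cochains
    dual_len = 0.5 * (h[:-1] + h[1:])               # dual cell lengths around interior vertices
    f = np.pi ** 2 * np.sin(np.pi * x[1:-1])        # source sampled at interior vertices

    # Saddle-point system for (w, p):  diag(h) w - d0 p = 0   and   d0^T w = dual_len * f,
    # where w is the pointwise flux p' living on the dual vertices (edge midpoints).
    A = np.block([[star1_inv, -d0],
                  [d0.T, np.zeros((N - 1, N - 1))]])
    b = np.concatenate([np.zeros(N), dual_len * f])
    w, p = np.split(np.linalg.solve(A, b), [N])

    print("max pressure error:", np.abs(p - np.sin(np.pi * x[1:-1])).max())
    print("max flux error:    ", np.abs(w - np.pi * np.cos(np.pi * 0.5 * (x[:-1] + x[1:]))).max())
    ```

    Eliminating the flux recovers the usual second-order finite-difference Laplacian, which is the 1D version of the statement that the choice of (diagonal) Hodge star fixes the resulting discrete operator.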

  14. Inside out: Speed-dependent barriers to reactive mixing

    NASA Astrophysics Data System (ADS)

    Kelley, Douglas; Nevins, Thomas

    2015-11-01

    Reactive mixing occurs wherever fluid flow and chemical or biological growth interact over time and space. Those interactions often lead to steep gradients in reactant and product concentration, arranged in complex spatial structures that can cause wide variation in the global reaction rate and concentrations. By simultaneously measuring fluid velocity and reaction front locations in laboratory experiments with the Belousov-Zhabotinsky reaction, we find that the barriers defining those structures vary dramatically with speed. In particular, we find that increasing flow speed causes reacted regions to move from vortex edges to vortex cores, thus turning the barriers "inside out". This observation has implications for reactive mixing of phytoplankton in global oceans.

  15. Increased Attentional Focus Modulates Eye Movements in a Mixed Antisaccade Task for Younger and Older Adults

    PubMed Central

    Wang, Jingxin; Tian, Jing; Wang, Rong; Benson, Valerie

    2013-01-01

    We examined performance in the antisaccade task for younger and older adults by comparing latencies and errors in what we defined as high attentional focus (mixed antisaccades and prosaccades in the same block) and low attentional focus (antisaccades and prosaccades in separate blocks) conditions. Shorter saccade latencies for correctly executed eye movements were observed for both groups in mixed, compared to blocked, antisaccade tasks, but antisaccade error rates were higher for older participants across both conditions. The results are discussed in relation to the inhibitory hypothesis, the goal neglect theory and attentional control theory. PMID:23620767

  16. Rayleigh-Taylor and Richtmyer-Meshkov instability induced flow, turbulence, and mixing. II

    NASA Astrophysics Data System (ADS)

    Zhou, Ye

    2017-12-01

    Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instabilities are well-known pathways towards turbulent mixing layers, in many cases characterized by significant mass and species exchange across the mixing layers (Zhou, 2017, Physics Reports, 720-722, 1-136). Mathematically, the pathway to turbulent mixing requires that the initial interface be multimodal, to permit cross-mode coupling leading to turbulence. Practically speaking, it is difficult to experimentally produce a non-multimode initial interface. Numerous methods and approaches have been developed to describe the late, multimodal, turbulent stages of RT and RM mixing layers. This paper first presents the initial condition dependence of RT mixing layers and introduces parameters that are used to evaluate the level of "mixedness" and "mixed mass" within the layers, as well as the dependence on density differences and the characteristic anisotropy of this acceleration-driven flow, emphasizing some of the key differences between two-dimensional and three-dimensional RT mixing layers. Next, the RM mixing layers are discussed, and differences with the RT mixing layer are elucidated, including the RM mixing layer's dependence on the Mach number of the initiating shock. Another key feature of RM-induced flows is their response to a reshock event, as frequently seen in shock-tube experiments as well as inertial confinement events. A number of approaches to modeling the evolution of these mixing layers are then described, in order of increasing complexity. These include simple buoyancy-drag models and Reynolds-averaged Navier-Stokes models of increasing complexity (including K-ε, K-L, and K-L-a models), up to full Reynolds-stress models with more than one length scale. Multifield models and multiphase models have also been implemented. Additional complexities of these flows are examined, as well as modifications to the models needed to capture their effects. These complexities include the presence of magnetic fields, compressibility, rotation, stratification and additional instabilities. The complications induced by the presence of converging geometries are also considered. Finally, the unique problems of astrophysical and high-energy-density applications, and efforts to model these, are discussed.
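
    To make the "simple buoyancy-drag models" mentioned above concrete, the sketch below integrates a generic buoyancy-drag equation for the bubble-front amplitude of an RT mixing layer. The drag coefficient, Atwood number, and initial conditions are illustrative assumptions; published models differ in their exact coefficients and added-mass terms.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    A = 0.5     # Atwood number (rho2 - rho1)/(rho2 + rho1), assumed
    g = 9.81    # acceleration (m/s^2)
    C_d = 2.5   # drag coefficient, illustrative value only

    def buoyancy_drag(t, y):
        """Generic buoyancy-drag model: dh/dt = v, dv/dt = A*g - C_d*v|v|/h."""
        h, v = y
        return [v, A * g - C_d * v * abs(v) / max(h, 1e-6)]

    sol = solve_ivp(buoyancy_drag, (0.0, 2.0), [1e-3, 0.0], max_step=1e-3)
    t, h = sol.t, sol.y[0]

    # At late times h ~ alpha * A * g * t^2; estimate alpha from the end of the run
    alpha = np.polyfit(A * g * t[-200:] ** 2, h[-200:], 1)[0]
    print(f"late-time growth coefficient alpha ~ {alpha:.3f}")
    ```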

  17. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    NASA Astrophysics Data System (ADS)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient A_Redi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m2/s, the amplitude of the variability varies significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1°C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. This suppression is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.

  18. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
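
    A rough sketch of the particle-pair mixing idea behind the (modified) Curl sub-model is given below for a single scalar. The pair-selection rate constant and the uniform mixing-extent distribution follow the commonly cited form of the model, but the constants and the coupling to the joint-scalar PDF transport equation used in the paper are not reproduced here, so treat this as an assumption-laden illustration.

    ```python
    import numpy as np

    def modified_curl_step(phi, dt, tau_mix, c_phi=2.0, rng=np.random.default_rng(0)):
        """Advance an ensemble of notional-particle scalar values by one mixing step.

        Randomly selected particle pairs move partially toward their pair mean,
        with the extent of mixing drawn uniformly in [0, 1] (modified Curl).
        """
        n = phi.size
        n_pairs = int(round(c_phi * n * dt / tau_mix))  # illustrative pair-selection rate
        for _ in range(n_pairs):
            p, q = rng.choice(n, size=2, replace=False)
            a = rng.uniform()                           # extent of mixing for this pair
            mean = 0.5 * (phi[p] + phi[q])
            phi[p] += a * (mean - phi[p])
            phi[q] += a * (mean - phi[q])
        return phi

    # Usage: the variance of a bimodal scalar field decays under repeated mixing steps
    phi = np.concatenate([np.zeros(500), np.ones(500)])
    for step in range(50):
        phi = modified_curl_step(phi, dt=0.01, tau_mix=0.1)
    print("scalar variance after mixing:", phi.var())
    ```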

  19. CFD simulation of gas and non-Newtonian fluid two-phase flow in anaerobic digesters.

    PubMed

    Wu, Binxin

    2010-07-01

    This paper presents an Eulerian multiphase flow model that characterizes gas mixing in anaerobic digesters. In the model development, liquid manure is assumed to be water or a non-Newtonian fluid whose properties depend on total solids (TS) concentration. To establish the appropriate models for different TS levels, twelve turbulence models are evaluated by comparing the frictional pressure drops of gas and non-Newtonian fluid two-phase flow in a horizontal pipe obtained from computational fluid dynamics (CFD) with those from a correlation analysis. The commercial CFD software Fluent 12.0 is employed to simulate the multiphase flow in the digesters. The simulation results in a small-sized digester are validated against experimental data from the literature. Comparison of two gas mixing designs in a medium-sized digester demonstrates that mixing intensity is insensitive to the TS in confined gas mixing, whereas it decreases significantly with increasing TS in unconfined gas mixing. Moreover, comparison of three mixing methods indicates that gas mixing is more efficient than mixing by pumped circulation but less efficient than mechanical mixing.

  20. Testing mixing models of old and young groundwater in a tropical lowland rain forest with environmental tracers

    NASA Astrophysics Data System (ADS)

    Solomon, D. Kip; Genereux, David P.; Plummer, L. Niel; Busenberg, Eurybiades

    2010-04-01

    We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC-11, CFC-12, CFC-113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near-zero concentrations of the transient tracers, and (3) mixtures of (1) and (2). We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model in which young (0 to 10 year), locally derived groundwater mixes with old (>1000 year) groundwater recharged beyond the surface water boundary of the system.
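
    The binary and exponential mixing models can be written down in a few lines. The sketch below computes a modeled tracer concentration for (1) a BMM with a piston-flow young fraction and tracer-free old (IGF) water and (2) an EMM that convolves the atmospheric input history with an exponential travel-time distribution. The atmospheric history used here is synthetic, radioactive decay (e.g., for 3H) is ignored, and all parameter values are made up, so the numbers are purely illustrative.

    ```python
    import numpy as np

    # Synthetic atmospheric input history (a CFC-like curve), yearly values
    years = np.arange(1940, 2011)
    c_atm = np.clip((years - 1950) * 4.0, 0, 260)                   # made-up ramp, pptv
    c_atm[years > 1995] = 260 - (years[years > 1995] - 1995) * 1.5  # slow decline

    def bmm(sample_year, f_young, tau_young):
        """Binary mixing: piston-flow young fraction plus tracer-free old (IGF) water."""
        c_young = np.interp(sample_year - tau_young, years, c_atm)
        return f_young * c_young  # old end-member contributes ~0 transient tracer

    def emm(sample_year, tau_mean):
        """Exponential mixing: convolve the input history with exp(-t/tau)/tau."""
        lag = sample_year - years                                   # travel time of each input year
        w = np.where(lag >= 0, np.exp(-lag / tau_mean) / tau_mean, 0.0)
        return float(np.sum(w * c_atm))  # 1-year spacing, so the sum approximates the integral

    print("BMM (40% young, 5-yr piston flow):", bmm(2008, 0.4, 5.0))
    print("EMM (mean age 20 yr):             ", emm(2008, 20.0))
    ```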

  1. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    NASA Astrophysics Data System (ADS)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.

  2. A Mixing Length Scale of Unlike Impinging Jets

    NASA Astrophysics Data System (ADS)

    Inoue, Chihiro; Fujii, Go; Daimon, Yu

    2017-11-01

    Bi-propellant thrusters in space propulsion systems often utilize unlike-doublet or triplet injectors. The impingement of hypergolic liquid jet streams of fuel and oxidizer involves the expanding sheet, droplet fragmentation, mixing, evaporation, and chemical reactions in liquid and gas phases, in which the rate controlling phenomenon is the mixing step. In this study, a defined length scale demonstrates the distribution of fuel and oxidizer, and therefore, represents their mixing states, allowing for providing a physical meaning of widely accepted practical indicator, so called Rupe factor, over half a century of injector design history. We concisely formulate the characteristic velocity in a consistent manner for doublet and triplet injectors as a function of propellant injection conditions. The validity of the present formulation is convinced by comparing with hot firing tests.

  3. Attribution of horizontal and vertical contributions to spurious mixing in an Arbitrary Lagrangian-Eulerian ocean model

    NASA Astrophysics Data System (ADS)

    Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair

    2017-11-01

    We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITGCM and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically-unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
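
    The reference potential energy diagnostic can be sketched compactly: adiabatically re-sort all cells by density into a stably stratified column of the same total volume and integrate g*rho*z. In a closed, adiabatic experiment, any increase of this quantity between two snapshots is attributable to mixing between density classes. The flat-bottomed, uniform-area geometry and the two-layer example below are simplifying assumptions relative to the MOM6 diagnostic described in the paper.

    ```python
    import numpy as np

    def reference_pe(rho, vol, area, g=9.81):
        """Reference potential energy per unit horizontal area of a closed domain.

        rho, vol: 1-D arrays of cell densities (kg/m^3) and volumes (m^3).
        area: horizontal area (m^2) of the (assumed flat-bottomed) domain.
        Cells are adiabatically re-sorted, densest at the bottom (z = 0 at the floor).
        """
        order = np.argsort(rho)[::-1]          # densest first -> placed deepest
        rho_s, vol_s = rho[order], vol[order]
        dz = vol_s / area                      # thickness of each re-sorted layer
        z_top = np.cumsum(dz)
        z_mid = z_top - 0.5 * dz               # height of each layer's centre above the floor
        return g * np.sum(rho_s * z_mid * dz)  # J per m^2 of horizontal area

    # Usage: RPE(t1) - RPE(t0) > 0 in an adiabatic run indicates spurious mixing
    rho0 = np.repeat([1025.0, 1027.0], 500)    # unmixed two-layer state
    rho1 = np.full(1000, 1026.0)               # fully mixed state
    vol = np.full(1000, 1.0e6)
    print("RPE change due to mixing:",
          reference_pe(rho1, vol, 1.0e6) - reference_pe(rho0, vol, 1.0e6))
    ```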

  4. Predictors of switch from depression to mania in bipolar disorder.

    PubMed

    Niitsu, Tomihisa; Fabbri, Chiara; Serretti, Alessandro

    2015-01-01

    Manic switch is a relevant issue when treating bipolar depression. Some risk factors have been suggested, but unequivocal findings are lacking. We therefore investigated predictors of switch from depression to mania in the Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) sample. Manic switch was defined as a depressive episode followed by a (hypo)manic or mixed episode within the following 12 weeks. We assessed possible predictors of switch using generalized linear mixed models (GLMM). A total of 8403 episodes without switch and 512 episodes with switch (1720 subjects) were included in the analysis. Several baseline variables were associated with a higher risk of switch. These included younger age and a previous history of rapid cycling, severe manic symptoms, suicide attempts, amphetamine use, and certain pharmacological and psychotherapeutic treatments. During the current depressive episode, the identified risk factors were any possible mood elevation, multiple mania-associated symptoms with at least moderate severity, and comorbid panic attacks. In conclusion, our study suggests that both characteristics of the disease history and clinical features of the current depressive episode may be risk factors for manic switch. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation

    DOE PAGES

    Barker, Andrew T.; Lee, Chak S.; Vassilevski, Panayot S.

    2017-10-26

    Here, we consider coarsening procedures for graph Laplacian problems written in a mixed saddle-point form. In that form, in addition to the original (vertex) degrees of freedom (dofs), we also have edge degrees of freedom. We extend previously developed aggregation-based coarsening procedures applied to both sets of dofs to now allow more than one coarse vertex dof per aggregate. Those dofs are selected as certain eigenvectors of local graph Laplacians associated with each aggregate. Additionally, we coarsen the edge dofs by using traces of the discrete gradients of the already constructed coarse vertex dofs. These traces are defined on the interface edges that connect any two adjacent aggregates. The overall procedure is a modification of the spectral upscaling procedure previously developed for the mixed finite element discretization of diffusion-type PDEs, which has the important property of maintaining inf-sup stability on coarse levels and having provable approximation properties. We consider applications to partitioning a general graph and to a finite volume discretization interpreted as a graph Laplacian, developing consistent and accurate coarse-scale models of a fine-scale problem.

  6. Reading speed and phonological awareness deficits among Arabic-speaking children with dyslexia.

    PubMed

    Layes, Smail; Lalonde, Robert; Rebaï, Mohamed

    2015-02-01

    Although reading accuracy of isolated words and phonological awareness represent the main criteria of subtyping developmental dyslexia, there is increasing evidence that reduced reading speed also represents a defining characteristic. In the present study, reading speed and accuracy were measured in Arabic-speaking phonological and mixed dyslexic children matched with controls of the same age. Participants in third and fourth grades, aged from 9-10 to 9-8 years, were given single frequent and infrequent word and pseudo-word reading and phonological awareness tasks. Results showed that the group with dyslexia scored significantly lower than controls in accuracy and speed in reading tasks. Phonological and mixed dyslexic subgroups differed in infrequent and frequent word reading accuracy, the latter being worse. In contrast, the subgroups were comparable in pseudo-word identification and phonological awareness. Delayed phonological and recognition processes of infrequent and frequent words, respectively, were placed in the context of the dual route model of reading and the specific orthographic features of the Arabic language. Copyright © 2014 John Wiley & Sons, Ltd.

  7. 7 CFR 810.107 - Special grades and special grade requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... injurious to stored grain. (b) Infested barley, canola, corn, oats, sorghum, soybeans, sunflower seed, and..., soybeans, sunflower seed, and mixed grain are defined according to sampling designations as follows: (1...

  8. 7 CFR 810.107 - Special grades and special grade requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... injurious to stored grain. (b) Infested barley, canola, corn, oats, sorghum, soybeans, sunflower seed, and..., soybeans, sunflower seed, and mixed grain are defined according to sampling designations as follows: (1...

  9. 40 CFR 355.61 - How are key words in this part defined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... waste when mixed or commingled with bedding, compost, feed, soil and other typical materials found with... aqueous or organic solutions, slurries, viscous solutions, suspensions, emulsions, or pastes. State means...

  10. 40 CFR 355.61 - How are key words in this part defined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... waste when mixed or commingled with bedding, compost, feed, soil and other typical materials found with... aqueous or organic solutions, slurries, viscous solutions, suspensions, emulsions, or pastes. State means...

  11. Modeling Broadband X-Ray Absorption of Massive Star Winds

    NASA Technical Reports Server (NTRS)

    Leutenegger, Maurice A.; Cohen, David H.; Zsargo, Janos; Martell, Erin M.; MacArthur, James P.; Owocki, Stanley P.; Gagne, Marc; Hillier, D. John

    2010-01-01

    We present a method for computing the net transmission of X-rays emitted by shock-heated plasma distributed throughout a partially optically thick stellar wind from a massive star. We find the transmission by an exact integration of the formal solution, assuming the emitting plasma and absorbing plasma are mixed at a constant mass ratio above some minimum radius, below which there is assumed to be no emission. This model is more realistic than either the slab absorption associated with a corona at the base of the wind or the exospheric approximation that assumes all observed X-rays are emitted without attenuation from above the radius of optical depth unity. Our model is implemented in XSPEC as a pre-calculated table that can be coupled to a user-defined table of the wavelength-dependent wind opacity. We provide a default wind opacity model that is more representative of real wind opacities than the commonly used neutral ISM tabulation. Preliminary modeling of Chandra grating data indicates that the X-ray hardness trend of OB stars with spectral subtype can largely be understood as a wind absorption effect.

  12. Aquifer development planning to supply a seaside resort: a case study in Goa, India

    NASA Astrophysics Data System (ADS)

    Lobo Ferreira, J. P. Cárcomo; da Conceição Cunha, Maria; Chachadi, A. G.; Nagel, Kai; Diamantino, Catarina; Oliveira, Manuel Mendes

    2007-09-01

    Using the hydrogeological and socio-economic data derived from a European Commission research project on the measurement, monitoring and sustainability of the coastal environment, two optimization models have been applied to satisfy the future water resources needs of the coastal zone of Bardez in Goa, India. The number of tourists visiting Goa since the 1970s has risen considerably, and roughly a third of them go to Bardez taluka, prompting growth in the tourist-related infrastructure in the region. The optimization models are non-linear mixed integer models that have been solved using GAMS/DICOPT++ commercial software. Optimization models were used, firstly, to indicate the most suitable zones for building seaside resorts and wells to supply the tourist industry with an adequate amount of water, and secondly, to indicate the best location for wells to adequately supply pre-existing hotels. The models presented will help to define the optimal locations for the wells and the hydraulic infrastructures needed to satisfy demand at minimum cost, taking into account environmental constraints such as the risk of saline intrusion.

  13. Quantitative stress measurement of elastic deformation using mechanoluminescent sensor: An intensity ratio model

    NASA Astrophysics Data System (ADS)

    Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng

    2018-04-01

    The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and its ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to be dependent on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provided a physical explanation for the relationship between ML intensity and its stress condition. The proposed model was applicable in various SrAl2O4:Eu2+, Dy3+-based ML measurement in elastic deformation, and could provide a useful reference for quantitative stress measurement using the ML sensor in general.

  14. Mechanisms underlying transfer of task-defined rules across feature dimensions.

    PubMed

    Baroni, Giulia; Yamaguchi, Motonori; Chen, Jing; Proctor, Robert W

    2013-01-01

    The Simon effect can be reversed, favoring spatially noncorresponding responses, when people respond to stimulus colors (e.g., green) by pressing a key labeled with the alternative color (i.e., red). This Hedge and Marsh reversal is most often attributed to transfer of logical recoding rules from the color dimension to the location dimension. A recent study showed that this transfer of logical recoding rules can occur not only within a single task but also across two separate tasks that are intermixed. The present study investigated the conditions that determine the transfer of logical recoding rules across tasks. Experiment 1 examined whether it occurs in a transfer paradigm, that is when the two tasks are performed separately, but provided little support for this possibility. Experiment 2 investigated the role of task-set readiness, using a mixed-task paradigm with a predictable trials sequence, which indicated that there is no transfer of task-defined rules across tasks even when they are highly active during the Simon task. Finally, Experiments 3 and 4 used a mixed-task paradigm, where trials of the two tasks were mixed randomly and unpredictably, and manipulated the amount of feature overlap between tasks. Results indicated that task similarity is a determining factor for transfer of task-defined rules to occur. Overall, the study provides evidence that transfer of logical recoding rules tends to occur across two tasks when tasks are unpredictably intermixed and use stimuli that are highly similar and confusable.

  15. RFQ accelerator tuning system

    DOEpatents

    Bolie, V.W.

    1990-07-03

    A cooling system is provided for maintaining a preselected operating temperature in a device, which may be an RFQ accelerator, having a variable heat removal requirement, by circulating a cooling fluid through a cooling system remote from the device. Internal sensors in the device enable an estimated error signal to be generated from parameters which are indicative of the heat removal requirement from the device. Sensors are provided at predetermined locations in the cooling system for outputting operational temperature signals. Analog and digital computers define a control signal functionally related to the temperature signals and the estimated error signal, where the control signal is defined effective to return the device to the preselected operating temperature in a stable manner. The cooling system includes a first heat sink responsive to a first portion of the control signal to remove heat from a major portion of the circulating fluid. A second heat sink is responsive to a second portion of the control signal to remove heat from a minor portion of the circulating fluid. The cooled major and minor portions of the circulating fluid are mixed in response to a mixing portion of the control signal, which is effective to proportion the major and minor portions of the circulating fluid to establish a mixed fluid temperature which is effective to define the preselected operating temperature for the remote device. In an RFQ environment the stable temperature control enables the resonant frequency of the device to be maintained at substantially a predetermined value during transient operations. 3 figs.

  16. RFQ accelerator tuning system

    DOEpatents

    Bolie, Victor W.

    1990-01-01

    A cooling system is provided for maintaining a preselected operating temperature in a device, which may be an RFQ accelerator, having a variable heat removal requirement, by circulating a cooling fluid through a cooling system remote from the device. Internal sensors in the device enable an estimated error signal to be generated from parameters which are indicative of the heat removal requirement from the device. Sensors are provided at predetermined locations in the cooling system for outputting operational temperature signals. Analog and digital computers define a control signal functionally related to the temperature signals and the estimated error signal, where the control signal is defined effective to return the device to the preselected operating temperature in a stable manner. The cooling system includes a first heat sink responsive to a first portion of the control signal to remove heat from a major portion of the circulating fluid. A second heat sink is responsive to a second portion of the control signal to remove heat from a minor portion of the circulating fluid. The cooled major and minor portions of the circulating fluid are mixed in response to a mixing portion of the control signal, which is effective to proportion the major and minor portions of the circulating fluid to establish a mixed fluid temperature which is effective to define the preselected operating temperature for the remote device. In an RFQ environment the stable temperature control enables the resonant frequency of the device to be maintained at substantially a predetermined value during transient operations.

  17. Entrainment Zone Characteristics and Entrainment Rates in Cloud-Topped Boundary Layers from DYCOMS-II

    DTIC Science & Technology

    2012-03-01

    water and ozone across the EIL. The scalar variables from this flight (not shown) suggest significant horizontal variation in the free troposphere ... near the cloud top where mixing occurs between dry free-troposphere air and moist turbulent air. Although the concept of the entrainment zone is clear, defining the top and

  18. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T^0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×10^6 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...

  19. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T^0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×10^6 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...

  20. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T^0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×10^6 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
